Let me tell you the story of a company that had problems with their code quality, and solved it the Corporate way ... well, ehum ... "solved."
Poor Code Quality
When I met that organization, their development process basically looked like this:
Developers took their specifications, completed their coding and passed the code on for testing. Simple as that.
After a major initiative failed, the Lessons Learned revealed that the root cause of the failure was poor code quality, which basically made the code unmaintainable and hard to read, so that defects could not be fixed without adding more defects elsewhere.
A problem that would require rules and regulations to fix. So it was decided that Code Reviews should be done. And, of course, people complained that code reviews took time, and that if developers did the Reviews themselves, they would be even slower in getting their work done.
Stage 1: Introducing the Code Review Department
What would be the Corporate way, if not to introduce a new department of specialists dedicated to that specific task? So they opened the Code Review Department. Let's call them the CRD.
To keep Code Reviews as neutral and objective as possible, the Review Department was strictly separated from the other departments: physically, functionally and logically. They were not involved in the actual development process and only reviewed the code that was actually delivered.
With the backing of senior management, the CRD adopted a very strict policy to enforce better code quality and defined their departmental KPIs, most notably:
- Remarks made per reviewer: How many review remarks each reviewer made, per commit and per month. This allowed CRD management to see which reviewers were the keenest and best at spotting problems.
Stage 2: Code Quality improves
Since this data is easy to measure objectively, the reviewers got busy and lots of review remarks were made. Proudly, CRD management presented statistics to IT leadership, including their own KPIs as well as the obvious metric they could offer for measuring the software delivered by the Software Development Department (SDD):
- Code Quality per developer: How many review remarks were made, per commit and per month, per developer. This allowed the SDD to put a KPI on the quality of code provided by developers. And, conveniently, it also justified the existence of the CRD.
With the blessing of IT leadership, SDD management duly adopted the KPI.
So, things were going well ...
Stage 3: Code Review has a problem
Now, developers aren't dumb people. They adopted linting and basically got their Review remarks down to Zero in pretty short order. Now, the CRD should be happy, shouldn't they?
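What that looked like in practice was presumably some gate of this kind; a minimal sketch, assuming a Python shop checking submissions with flake8 (the story never names a language or a tool, so both are my assumptions):

```python
#!/usr/bin/env python3
"""Hypothetical pre-submit gate: refuse to hand code over to the CRD
while the linter still has anything to say."""

import subprocess
import sys


def lint_is_clean(paths: list[str]) -> bool:
    # flake8 exits with a non-zero code when it finds any violation.
    result = subprocess.run(["flake8", *paths], capture_output=True, text=True)
    if result.stdout:
        print(result.stdout, end="")
    return result.returncode == 0


if __name__ == "__main__":
    files = sys.argv[1:] or ["."]
    if not lint_is_clean(files):
        print("Lint remarks found. Fix them before submitting for Review.")
        sys.exit(1)
    print("Zero remarks. Off to the CRD it goes.")
```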
Turns out, they weren't. And here was the problem: the Remarks made per reviewer metric tanked. Not only did reviewers suddenly miss their quota, CRD management also figured they probably weren't looking in the right places.
So what did the CRD reviewers, not being stupid either, do? Whenever they looked at code and spotted a pattern they considered problematic, they introduced a new rule for it.
The numbers for Remarks made per reviewer rose again, and management was doubly happy: not only were the CRD's numbers fine again, the reviewers were also continuously improving their own craft.
Great! Things are even getting better!
Stage 4: Developers have a problem
Developers started to get frustrated. Their lint rules were no longer getting them 100% passed reviews. What was worse, they found that whenever the reviewers introduced a new rule, code was rejected again, and they had to reverse-engineer that rule and add it to their lint configuration. Not only did this process consume a lot of time, it also distracted from actual development work.
Well, developers caught on that merely following the rules wasn't going to get them 100% passed reviews any more, so they began introducing honeypot infringements: they planted obvious mistakes in their code so that reviewers would remark on them, and they'd have the fix ready the moment the review came in.
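A honeypot could be as banal as this; a hypothetical illustration (the story shows no actual code), sticking with the Python assumption from above:

```python
# The version handed to the CRD, with bait deliberately left in.
def calculate_invoice_total(line_items):
    total = 0
    temp = 0  # honeypot: an obviously unused variable for the reviewer to find
    for item in line_items:
        total += item["price"] * item["quantity"]
    return total


# The pre-written "fix", sent back one minute after the review remark
# arrives: the same function, minus the bait.
def calculate_invoice_total(line_items):
    return sum(item["price"] * item["quantity"] for item in line_items)
```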
Everyone was happy: the CRD met their KPIs and developers were no longer forced to constantly adopt new rules. Except ...
Stage 5: Management catches on
CRD reviewers were content, because they had plenty of review remarks again. Until CRD management started to measure policy violations by type and figured out that developers had stopped improving and were making beginner mistakes again. Of course, they reported their findings to higher management. And thus, a new KPI was born:
- Obvious mistakes per developer: How many obvious review infringements each developer made, with a target of Zero, published transparently throughout the SDD.
Well, again, developers aren't stupid people. So, obviously, they would meet their KPI. How?
You might have guessed it: they would hide their, ehum, "mistakes" in the code so they were no longer obvious, and then place bets on who could get the most of them past Review without being caught.
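For flavor, here's the kind of "no longer obvious" mistake that wins such a bet; again my own hypothetical illustration, not code from the story. Plain flake8 won't flag it, and a hurried reviewer scanning for checklist violations can easily miss it:

```python
# A classic subtle Python bug: the default list is created once at function
# definition time and then shared across every subsequent call.
def collect_remarks(remark, remarks=[]):
    remarks.append(remark)
    return remarks


print(collect_remarks("first"))   # ['first']
print(collect_remarks("second"))  # ['first', 'second'] -- surprise!
```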
Guess who won the game?
Stage 6: Code quality deteriorates
The CRD reviewers got stuck in a game of whack-a-mole with developers, who kept refining their tricks for hiding ever more insidious coding errors, while updating their lint rules right away whenever reviewers added a new one.
Until the day a developer hit the jackpot by slipping a Zero-Day exploit past Review.
CRD management no longer trusted their own Reviewers, so they added reviews of the reviews and another KPI:
- Issues slipped past Inspection: Reviews became a staged process in which, after review by a Junior Reviewer, a Senior Reviewer would review the code again to find whatever the first reviewer had missed. Every Reviewer got a Second-Review Score, and that score had to be Zero. So, they started looking deeper.
You can guess where this is leading, right?
Stage 7: Code quality goes to hell
Now, with four-eye reviews and a whole set of KPIs, nothing could go wrong any more, right?
Code Reviewers were doing splendidly. They always had remarks, and the department's numbers truly validated that a separate, neutral Code Review Department was absolutely essential.
So the problem was fixed now.
Well, except for one small fly in the ointment. Let's summarize the process from a development perspective:
1. When developers make mistakes, they are reprimanded for doing a poor job.
2. When developers make no mistakes, new rules are introduced, returning to (1).
Developers now felt like they were on a sinking ship. It was easier to simply keep making mistakes against the existing rules than to adopt new ones. They came to accept that they couldn't meet their KPI anyway.
Since they could no longer win, they stopped caring. Eventually, the review department was relabeled the complaints department, and nobody took its remarks seriously any more. Developers now simply added buffer time to their estimates and called it the "Review buffer".
By now, the CRD was firmly established, yet fighting a losing battle: try as they might, they got more and more overloaded, because genuinely necessary review remarks, prompted by ever more horrible code, became ever more common. They needed to add staff, and eventually outnumbered the developers.
The Code Review Department became the last bastion against poor code quality. A bulwark defying the stormy seas of bad code.
So, what's your take ...
is a Code Review Department a good idea?
How would you do it?
This is hilarious, and just barely believable based on what similarly stupid enterprise clients do. Well done, sir. I only say "barely" because while I've seen similar bonehead moves, I've never seen them taken quite this far.
Elisabeth Hendrickson reveals in a Gene Kim interview her realization that, years earlier, as head of QA, she was part of the problem. https://itrevolution.com/the-idealcast-episode-3/