Code review, user stories and sustainable software

The purpose of this short post is to identify the building blocks of an effective code review. The suggestions you will find here come from our experience and are especially meaningful for our code review team.

We believe that the use of code review is one way to create sustainable software that’s open to extension but closed to modification. We also believe that by linking each code review to a story, it’s possible to communicate the effort needed for each piece of the software to all the stakeholders of a project.

We strongly believe that code review has to be informal: it’s not a way to check whether a developer is good or bad at his job, it’s the only way in which an organization can keep reasonable code quality in place and improve the knowledge of all of its team members.
In an ideal scenario, code review is performed sitting together in front of a monitor, with the author driving the review: sitting at the keyboard and mouse, opening the various files, pointing out the changes since the previous release and explaining why each was done that way. This approach is great and can bring enormous benefits because it is simple to do, anyone can do it without any training, and it doesn’t involve an impersonal interaction over e-mail or instant messaging systems.
We have some concerns regarding this approach, however, because we have identified some recurring pitfalls:

•    The author often forgets to present something to the reviewer
•    The author can explain something to the reviewer, but without an output from the conversation the next person who looks at the code won’t have the benefit of the explanation
•    Some change requests made by the reviewer can be forgotten
•    There is no way to track the issues in order to create a checklist for all team members to follow as a “Best practices” guide

In our opinion, the major drawback of this approach is that the code review is sometimes postponed because two people have to arrange a meeting (not always so easy), and there is no way to check that the changes will actually be made by the author.

After some failures in our code review practice, we have outlined our procedure…

First of all, we ask the whole team to prepare a code review for each piece of code they commit into our source control system, so that everyone is responsible for readability without any additional requests.
A code review is connected to a user story, so every code review is defined by a meaningful title and a note that points the reviewer to the requirements the author worked on. The great advantage of this approach is that the reviewer knows which feature has been developed and can therefore approach the inspection of that specific piece of code with the right mindset.
Every author has to attach a file to the code review (the format is up to the author) that summarizes the workflow he followed to develop that piece of software: this way the reviewing team’s job will be faster and, just as importantly, the author will recap what he did and will probably find and/or fix some issues before even submitting the review.

The building blocks of a code review can be summarized as follows:

  • Title: Usually made up of an ID and a short, meaningful summary
  • Description: A very short description of the feature developed
  • Stories involved: A list of backlog items involved in the review
  • Workflow: A file that summarizes the workflow of this part of the software
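
As a minimal sketch (the class and field names, IDs and file paths here are ours, not taken from any specific tool), these building blocks could be modelled roughly like this:

    from dataclasses import dataclass, field
    from pathlib import Path

    @dataclass
    class CodeReview:
        """One review request, mirroring the building blocks above."""
        title: str                      # an ID plus a short, meaningful summary
        description: str                # very short description of the feature
        stories: list[str] = field(default_factory=list)  # backlog items involved
        workflow_file: Path | None = None                  # author's workflow recap

    review = CodeReview(
        title="CR-142: export invoices to CSV",
        description="Adds a CSV export for monthly invoices",
        stories=["STORY-87", "STORY-91"],
        workflow_file=Path("docs/cr-142-workflow.md"),
    )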

OK, it sounds like a wonderful scenario, but how can a developer put a code review in place quickly and easily? And how do we avoid having meetings postponed or time wasted on both sides? How do we keep track of the workflow of the review?

We tried several different approaches and eventually identified an online tool that can help us with this process. With this tool, each developer can attach files either through a direct upload or through the source control system, so that, depending on the kind of feature he is working on, he can commit the code and then start the code review process, or upload the source code without using source control at all.
The advantage of reviewing a piece of code that is already in the source control system is that if the stories the code relates to have dependencies on other requirements, the team is not blocked and development can continue iteratively. Conversely, if a feature is likely to change dramatically, keeping its code outside the source control system until it is fully reviewed and refined makes life easier for the other team members, who then have to change their code only once during the development cycle.
In both scenarios, the time the review team spends on a review doesn’t impact the overall speed of the team.
Another benefit of using an online tool is that we can keep track of the defects opened by the review team and periodically create checklists for all team members.
In this scenario, each developer can perform the following steps before submitting a code review:

1.    Identify the code needed to create a code review (usually no more than 600 lines of code)
2.    Iterate through the checklists, making an initial review to avoid common mistakes
3.    Make a recap of the workflow of this feature in order to double-check the logic and gradually produce the documentation of the software
4.    Choose a meaningful and appropriate title for the review
5.    Identify the stories to which the review is related
6.    Create the review
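
A rough sketch of what an author’s pre-submission helper could look like, assuming the team checklist lives in a plain-text file and using the 600-line limit from step 1 (all names and paths here are hypothetical):

    from pathlib import Path

    MAX_REVIEW_LINES = 600  # step 1: keep each review small enough to read carefully

    def prepare_review(changed_files, checklist_path, title, stories):
        """Run the author's own pre-checks and assemble the review request."""
        total = sum(len(Path(f).read_text().splitlines()) for f in changed_files)
        if total > MAX_REVIEW_LINES:
            raise ValueError(f"{total} changed lines; split the review ({MAX_REVIEW_LINES} max)")

        # step 2: print the team checklist so the author can confirm each item by hand
        for item in Path(checklist_path).read_text().splitlines():
            if item.strip():
                print(f"[ ] {item.strip()}")

        # steps 3-6: recap the workflow, choose a title, link the stories, create the review
        return {
            "title": title,
            "stories": list(stories),
            "files": list(changed_files),
            "workflow_recap": "docs/workflow-recap.md",  # file written by the author in step 3
        }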

As for the reviewer, his job is not only to ask for changes but also to describe each finding in enough detail that anyone can

•    understand the vulnerability
•    understand possible attack scenarios
•    know the key factors driving likelihood and impact

There is value both in assigning a qualitative rating to each finding and in discussing why that rating was assigned. Some possible risk ratings are

•    Critical
•    High
•    Moderate
•    Low

Justifying the assigned risk ratings is very important. This will allow stakeholders (especially non-technical ones) to gain a better understanding of the issue at hand. Two key points to identify are

•    Likelihood (ease of discovery and execution)
•    Business/Technical impact
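
For illustration only, a single finding with its rating and justification could be recorded along these lines (the field names and the sample defect are invented):

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        CRITICAL = "Critical"
        HIGH = "High"
        MODERATE = "Moderate"
        LOW = "Low"

    @dataclass
    class Finding:
        description: str    # the vulnerability and possible attack scenarios
        severity: Severity  # qualitative risk rating
        likelihood: str     # ease of discovery and execution
        impact: str         # business/technical impact
        justification: str  # why this rating was assigned

    finding = Finding(
        description="SQL query built by string concatenation in the report filter",
        severity=Severity.HIGH,
        likelihood="Easy to discover and exploit from the public search form",
        impact="Read access to another customer's data",
        justification="Reachable without authentication, but limited to one table",
    )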

Each code reviewer has to suggest remedies for a defect, so that the alternatives are clear enough for the author to provide (when requested) a reliable estimate of the effort required.

Another practice we follow is to provide authors with a reference that helps them understand how to avoid this kind of defect in the future: we call this the “source”, and the value assigned to it can be

•    Check list
•    Systematic
•    Use case

This usually helps prevent code reviews from being resubmitted with the same common mistakes.
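
A small sketch of how the “source” of a suggested remedy could be encoded, again purely as an example and continuing the hypothetical finding above:

    from enum import Enum

    class DefectSource(Enum):
        """The three "source" values mentioned above."""
        CHECK_LIST = "Check list"
        SYSTEMATIC = "Systematic"
        USE_CASE = "Use case"

    # the reviewer's suggested remedy carries a reference to its source
    remedy = {
        "finding": "SQL query built by string concatenation in the report filter",
        "suggestion": "Use parameterized queries instead of string concatenation",
        "source": DefectSource.CHECK_LIST,
    }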

In order to make code review a practice that helps our team grow, we periodically create reports that show not the number of defects or their authors (the way formal reviews usually work), but the defects themselves, and we ask the team which ones should become part of our checklist.

This post describes only part of our process, but we believe that these building blocks can be the right way to start improving the quality of the software that an organization is working on.
