A Code Review Starter Kit

If your organization uses code reviews, you're probably familiar with the tendency of some engineers to simply rubber-stamp them and trust that the author knows what they're doing. That's all well and good for a time, but it severely diminishes the value of code reviews as a mechanism for discovering programmer errors.

While unit tests and the use of static typing can catch a large class of programmer errors early and cheaply, they are fundamentally limited in scope. Compilers for most mainstream languages are generally only equipped to detect syntax and type errors, not flaws in program logic. Meanwhile, unit testing allows the programmer to spot-check the codebase and ensure that certain contracts are upheld by the implementation, but these tests are at times incomplete, missing, or misleading. Even when test coverage nears 100%, there is no guarantee that all the important scenarios have been covered.

Because our tools are not perfect and we often overlook our own mistakes, it's important to have other people look at your code and interrogate it for correctness. This lets errors be caught early, when they are still cheap to fix. Over the past few months that I've been working at Google, I've jotted down a set of questions that I wish I or the reviewer had asked before my code was checked in; feel free to use these to examine your own or others' code and (hopefully) make it better.

At First Glance

  • Do any lines look out of place?

Sometimes you can spot errors immediately with just a cursory read-through of the code. Take the opportunity to let the code trigger any "wat"s you may have at first glance; these are usually worth inquiring about.

  • Are hardcoded values pulled out into constants? Should they be made configurable or converted to command line flags?

It's tempting to write magic numbers or hardcoded strings in our code and just leave them there. After all, what's the point of making this a constant if it's only used once in this piece of code? Unfortunately, these constants usually wind up being replicated down the line; if the next programmer isn't careful about their modifications, they may introduce subtle bugs. As well, inline constants are hard to test and usually provide little in the way of compile-time safety. Having inline magic numbers with comments describing them is generally a code smell.

// Calculate the price based on the tax rate - currently 5.9%.
double price = getTotal() + (getTotal() * 0.059);

// Now send it to partner 1: BFB Bank.
transmitPrice(price, 1);

// Update the user's info in our database...
db.update(user, "price", price);

Becomes:

private static final double TAX_RATE = 0.059;
private enum Partners {
  BFB_BANK,
  ABC_BANK,
  XYZ_BANK
}

... 

double price = getTotal() + (getTotal() * TAX_RATE);
transmitPrice(price, Partners.BFB_BANK);
db.update(user, Database.PRICE_FIELD, price);

  • Is it possible to disable experimental/dangerous functionality without a new binary? Are the default values for new command line flags sane?

For web applications, it's important that new functionality rolling out to production can be disabled easily. If you deploy a new binary and notice a sudden spike in 500 errors, you want to address the issue as quickly as possible. Sometimes that means rolling back the deployment, but for some teams that can be a long process. Gating each piece of functionality behind a command line flag that can be toggled without a new deployment allows a much faster response. You may want to have the feature disabled by default, and only turn it on in a subset of your servers to act as a canary. Make sure that the default value for any new flag has been considered and makes sense.
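As a minimal sketch of this pattern, here a system property stands in for a real command line flag (the `FeatureFlags` helper and the `new_checkout` flag name are hypothetical; a real service would use its flag-parsing library, but the shape is the same):

```java
// Hypothetical helper: read a boolean flag at runtime, defaulting to off.
// System properties stand in for command line flags; e.g. restart the
// server with -Dflags.new_checkout=true to enable a feature without
// shipping a new binary.
final class FeatureFlags {
  private FeatureFlags() {}

  static boolean isEnabled(String name, boolean defaultValue) {
    String value = System.getProperty("flags." + name);
    return value == null ? defaultValue : Boolean.parseBoolean(value);
  }
}
```

A call site then guards the new path with `if (FeatureFlags.isEnabled("new_checkout", false)) { ... }`; with the default set to false, a bad rollout can be contained by simply not enabling the flag, rather than by rolling back the deployment.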

  • Are the names of variables, methods, functions, and classes unambiguous?

It's usually a good thing when you can read through the code and understand just what it's doing by the names of the functions, variables, and other elements that are in use. If a code block requires a comment explaining what it does, it may be best to extract it into an appropriately-named method. For example:

public double calcSomething(double r, double h) {
  double foo = 3.14 * r * r;
  return foo * h;
}

can become

public double getVolume(double radius, double height) {
  return getArea(radius) * height;
}

public double getArea(double radius) {
  return Math.PI * radius * radius;
}

The line count has increased, yes, but the code is much more readable and self-explanatory. (This smell has been discussed over and over again, but many programmers still write cryptic code. Code review is generally a good time to point it out.)

  • Does this commit include unit tests? Do the unit tests cover the important functionality and any edge cases? Are there methods with no/few unit tests? Are the existing unit tests passing?

    Unit tests are a very helpful tool, but a test is only useful if it actually exists. Make sure that new code is thoroughly tested and examine the tests for code smells just like the code they're testing. When you see an edge case that the code should handle, make a note of it; if there isn't a test for that, make sure it gets written. This is especially important for bug fixes - these should always have a test accompanying them. When integrating with external systems, it doesn't hurt to have unit tests checking your assumptions about those systems. If your code review tool doesn't show the test status, make sure that you run the tests yourself.
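As a sketch, edge-case tests for the `getVolume` example above might look like the following; plain assertions stand in for whatever test framework your team uses, and the `Geometry` class name is made up for illustration:

```java
// The code under test, from the earlier example.
class Geometry {
  double getVolume(double radius, double height) {
    return getArea(radius) * height;
  }

  double getArea(double radius) {
    return Math.PI * radius * radius;
  }
}

class GeometryTest {
  public static void main(String[] args) {
    Geometry g = new Geometry();
    // Happy path: a cylinder of radius 2 and height 5.
    assert Math.abs(g.getVolume(2.0, 5.0) - Math.PI * 4.0 * 5.0) < 1e-9;
    // Edge cases a reviewer should ask about: degenerate inputs.
    assert g.getVolume(0.0, 5.0) == 0.0;
    assert g.getVolume(2.0, 0.0) == 0.0;
  }
}
```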

  • Are methods set to appropriate visibility?

    The smaller the surface area of your code, the easier it is to make assertions about its behavior and to avoid breaking clients when the implementation changes. To facilitate this, hide your methods liberally. If you have helper methods in your class, hide them. If you have methods used only by clients in the same package, restrict visibility to just that package. In general, try to hide as much of your code as possible. You can always make it public later; going the other way is much harder.
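In Java, that advice translates into a deliberate choice of access modifier for every method. A sketch (the class and method names are made up for illustration):

```java
class ReportGenerator {
  // Public: the single entry point clients should depend on.
  public String generate(String userName) {
    return header(userName) + body();
  }

  // Package-private (no modifier): visible only within this package,
  // e.g. to tests or sibling collaborators.
  String body() {
    return "...report body...";
  }

  // Private: a helper nothing outside this class should call.
  private String header(String userName) {
    return "Report for " + userName + "\n";
  }
}
```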

Down in the Details

  • What does the code look like it's doing? Trace the execution path. Does it match the commit message?

    Sometimes a commit starts out as one thing and morphs into something else. It's easy for messages to get out of date, especially when you're using a version control system such as Perforce that allows easy modification of commits. Making sure the code is doing what the message says it's doing ensures you can make sense of the commit logs later and find problems faster.

  • What happens when bad data is passed in? Consider null values, the empty string, large or negative numbers, closed file handles, etc.

    It's easy to write a piece of code that performs well in the happy path. What's harder is to ensure that the code will not propagate errors or fail dramatically when bad input is provided. Look for possible ways to destroy the code and make sure that tests have been written to cover those cases.
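As an illustration of the kind of defensive handling a reviewer should look for, here is a hypothetical parser that deals explicitly with null, empty, and malformed input rather than letting the error propagate (the `Discounts` class and its policy of returning 0.0 for bad input are assumptions for the example):

```java
final class Discounts {
  private Discounts() {}

  // Parses a discount like "15%" into a fraction. Returns 0.0 for
  // null, empty, out-of-range, or malformed input instead of throwing.
  static double parse(String text) {
    if (text == null || text.isEmpty() || !text.endsWith("%")) {
      return 0.0;
    }
    try {
      double percent = Double.parseDouble(text.substring(0, text.length() - 1));
      return (percent < 0 || percent > 100) ? 0.0 : percent / 100.0;
    } catch (NumberFormatException e) {
      return 0.0;
    }
  }
}
```

Each of those branches (null, empty string, "150%", "abc%") is exactly the kind of case that deserves its own unit test.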

  • What happens when a dependency throws an exception? Is there handling code in this method, or somewhere farther up the stack? Does the handler do something reasonable? Does this method catch more exceptions than are reasonable?

    Handling exceptions at the right level of abstraction is an important part of having robust code. If your dependencies encounter errors your code should be able to handle them gracefully. In the case where an error causes inconsistent state that is unrecoverable, ensure that your code protects the integrity of your data stores.
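One common shape for this is to catch only the dependency failures you can speak for and translate them into a domain-level exception, letting everything else propagate. A sketch, where `PaymentGateway`, `CheckoutService`, and the exception type are hypothetical stand-ins:

```java
class PaymentFailedException extends Exception {
  PaymentFailedException(String message, Throwable cause) {
    super(message, cause);
  }
}

interface PaymentGateway {
  void charge(String accountId, double amount) throws java.io.IOException;
}

class CheckoutService {
  private final PaymentGateway gateway;

  CheckoutService(PaymentGateway gateway) {
    this.gateway = gateway;
  }

  // Catch only the failure this layer understands (network I/O) and
  // rethrow it at the level of abstraction the caller cares about.
  void checkout(String accountId, double amount) throws PaymentFailedException {
    try {
      gateway.charge(accountId, amount);
    } catch (java.io.IOException e) {
      throw new PaymentFailedException("Charge failed for " + accountId, e);
    }
  }
}
```

The inverse smell, `catch (Exception e)` around the whole method body, is worth flagging in review: it silently swallows failures the method has no business handling.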

  • Is there appropriate logging? How much information will you need to diagnose problems when this is running on a server somewhere? What metrics can you collect from this code to let you know when things are working and when they break?

    Of course, no code is perfect. When this change makes it out to production, there will be problems; it's important that you are aware of them and able to fix them without too much effort and investigation. Logs can help with this if you put an appropriate amount of information into them. As well, consider other monitoring options that let you count important events in your code and send alerts when the code begins operating outside of normal conditions.
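A sketch of what that might look like, using `java.util.logging` and simple counters (the class name is illustrative, and in a real service the counters would feed an actual metrics/alerting system):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Level;
import java.util.logging.Logger;

class PriceTransmitter {
  private static final Logger logger =
      Logger.getLogger(PriceTransmitter.class.getName());

  // Counters a monitoring system could scrape and alert on.
  static final AtomicLong successes = new AtomicLong();
  static final AtomicLong failures = new AtomicLong();

  void transmit(String partner, double price) {
    try {
      send(partner, price);
      successes.incrementAndGet();
      logger.fine(() -> "Sent price " + price + " to " + partner);
    } catch (RuntimeException e) {
      failures.incrementAndGet();
      // Log the inputs you'll need to reproduce the failure later.
      logger.log(Level.WARNING,
          "Failed to send price " + price + " to " + partner, e);
    }
  }

  // Placeholder for the real transmission logic.
  void send(String partner, double price) {
    if (partner == null) {
      throw new IllegalArgumentException("partner is null");
    }
  }
}
```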

Moving Forward

Of course, these questions are just a starting point. Adjusting these questions to reflect the common points of failure on your team is important if you're going to catch problems early. The most crucial part of any code review (and programming in general) is to make sure that someone thinks critically about the code and that past mistakes are repeated as seldom as possible.
