
Eliminate These 7 Bad Habits for More Effective Peer Code Reviews

Arthur Hicken, Evangelist at Parasoft
October 12, 2023
6 min read

A good peer code review technique has been proven to increase software quality. Here are some tips to prevent harmful practices that might render your code reviews useless.

Introduction to Peer Code Review

Software, in its many forms, powers our modern world, from the apps on our smartphones to the complex systems running behind critical infrastructure across industries. For these software systems to remain dependable, the code running in them must be free of bugs, logically correct, and compliant with industry safety and security standards. One way to achieve this is through peer code reviews.

Peer code reviews are a fundamental practice in software development: the systematic examination of code changes by fellow team members. They are an essential part of the quality assurance process and improve the overall software development life cycle. With peer code reviews, organizations can facilitate collaboration and knowledge sharing among developers, maintain codebase integrity, improve code quality, and ensure adherence to coding standards.

Peer code reviews play a vital role in enhancing software quality on multiple fronts. They act as a proactive quality control measure, identifying and rectifying issues before they escalate and reducing the likelihood of defects reaching production. The earlier an issue is discovered, the less it costs to fix. Beyond quality control, peer code reviews promote code readability, reliability, portability, consistency, and adherence to coding standards, resulting in a more maintainable and comprehensible codebase. They also encourage developers to document their code and add meaningful comments, which makes the codebase more self-explanatory and accessible to other developers.

We encourage peer code reviews combined with traditional or automated testing methods such as unit testing, functional testing, security testing, API testing, performance testing, and system testing. Peer code reviews complement these testing techniques and offer a different perspective on code quality. For instance, automated testing is excellent at covering functional aspects and specific test cases outlined in test scripts and achieving high test coverage. Peer code reviews, on the other hand, leverage human expertise and intuition to identify design flaws, maintainability concerns, and adherence to coding standards that might be challenging to capture through automated means.

A strong peer code review practice is known to improve software quality, so here are some ways to avoid the bad habits threatening to make your code reviews ineffective.

It has been well-established that peer review provides more value than one might expect. As stated in Steve McConnell’s excellent book, Code Complete, the average defect detection rate is only:

  • 25% for unit testing
  • 35% for functional testing
  • 45% for integration testing

In contrast, the average effectiveness of design and code inspections is 55% and 60%, respectively: an impressive statistic indeed. Of course, it only pans out if what you’re doing in peer review is effective and efficient.

Over the years, I’ve witnessed the many pitfalls that lead to ineffective peer reviews. Avoiding these bad habits may be just as effective as adopting good new ones!

7 Bad Habits to Avoid for Successful Peer Code Reviews

Despite the critical role of peer code reviews in software development, they only deliver value when conducted effectively and efficiently. A badly run review wastes time and produces no meaningful feedback for improving the code. To set your reviews up for success, here are the key habits to avoid.

1. Underutilizing Tools in Peer Code Reviews

To start, you shouldn’t be reviewing or looking for anything that static analysis can find for you. This could include branding issues, style issues like curly brace placement (don’t get me started), or the use of a specific encryption algorithm. If a tool can find it for you, let it. Free yourself to look deeper into the algorithm, security, and performance characteristics of the code. Doing clever work rather than tedious work also has the side benefit of making the review more interesting to participate in, which in turn makes it more engaging and effective.

2. Reviewing Unfinished Code

This is a classic problem that especially pervades calendar-centric organizations. Chances are if you release based on a date, you also review based on a date. The logic goes like this: “I’m not done yet, but we’ve already scheduled a review so let’s at least look at what we have.” You know it as well as I do—it isn’t a great way to do an effective review, so stop doing it. Make sure the code author is finished, and if they’re not ready yet, postpone until they are.

3. Overextending Peer Code Review Sessions

Lengthy code review sessions can be counterproductive. It’s important to set reasonable limits on the scope and duration of each review to maintain focus and productivity. Long, exhaustive reviews can lead to fatigue, reduced attention to detail, and a slower development process.

If the review is taking too much time, you need to rethink something. Too much in this instance means either sessions that run over an hour or reviews that consume too much of the overall development schedule. If reviews take more than an hour, you’re probably trying to review too much at once, or the author wasn’t ready for the review. After an hour of review, potential effectiveness declines fast, especially for the code authors. Even if the commentary wasn’t personal to begin with, in an unnecessarily long review the critique can compound and feel more painful.

4. Personalizing Peer Code Review Feedback

A peer review is about the code, not the people. Make sure you’re talking about the code and not the developer. A statement like, “This code won’t scale as well as we need” is less likely to offend than, “You wrote this badly.” Conversely, when you’re on the receiving end of critique, be a good sport. Recognize that everyone’s code can be improved and that you can learn from the feedback you’re getting. No matter which end of the review you’re on, you can probably be more graceful in facilitating a smooth, pleasant, and speedy review. Think of reviewing as a great way to get a free mentor. Everyone wants a mentor, but we rarely think of the review process as mentoring. Valuing the mentorship may help you resist taking it personally.

5. Procrastinating Peer Code Reviews

It’s not just a New Year’s resolution metaphor. I really am a big believer that most software quality practices should be treated like exercise: if you try to binge them at the last moment, they won’t be effective. You can’t train for a marathon on a treadmill the night before, and you can’t do your peer review the night before your release.

6. Inconsistent Follow-Up in Peer Code Review Processes

How a review is followed up can greatly affect the value of the review. If you find items during a review and don’t check that they’re fixed, you’re probably wasting your time. The best benefit is found in a consistent process that includes accountability. Make sure everyone knows what is expected of them and follow up to make sure fixes are being made.

If nothing is found in the review, be critical. While this can happen from time to time, it should make you suspicious. It may be a sign of quid-pro-quo reviews between developers, or a sign that your development team doesn’t understand the value behind code reviews or how to do them properly.

7. Inconsistent Criteria for Peer Code Reviews

If each reviewer isn’t looking for the same things, you have no idea whether your reviews are effective. The scope of peer review must be based on an unambiguous policy that’s clearly written down and can be referenced. Having a checklist may feel constricting at first, but it will keep the review on track and serve double duty in a compliance industry like automotive or medical, where you need to prove that you’ve done an effective review.

Eliminating ambiguity takes more discipline than just writing down the policy, although that’s a great first step. Ideally, you’d flesh out a couple of scenarios based on your policy and ask different people how they’d handle them. It’s not uncommon to find companies accepting a status quo where different groups do things differently, each thinks the other is doing it wrong, and both are permitted under the ambiguous policy. You can do better.

Get everyone on the same page. It will improve your quality, provide the consistency necessary for assessment and improvement, and protect you if something goes wrong. If you’re in a compliance environment like ISO 26262 or FDA, having consistent criteria will streamline your audits.

I’ve witnessed peer reviews at a wide variety of organizations and seen them be unbelievably worthwhile as well as a complete waste of time. Building the right tools into the right processes helps ensure you gain value from the practice. Support your bad-habit-free peer review system with a static analysis tool you can depend on to enforce standards and best practices, and focus on the interesting defects that will make you a better engineer.

Conclusion: Maximizing the Value of Peer Code Reviews

Peer code reviews hold immense potential for enhancing the software development process and ensuring the delivery of high-quality software. For developers, effective peer code reviews offer several key benefits, such as a valuable learning opportunity that exposes them to different coding styles, best practices, and diverse problem-solving approaches within their team. This knowledge sharing fosters professional growth and skill development. Additionally, peer code reviews help developers catch and address defects early and contribute to codebase consistency and maintainability.

To support successful code review processes, leveraging static analysis tools can be invaluable. These tools automatically analyze code for potential issues such as coding standard violations, security vulnerabilities, and code smells. For C, C++, Java, C#, and VB.NET, Parasoft offers static code analysis tools like Parasoft C/C++test, Parasoft Jtest, and Parasoft dotTEST, all of which have AI and ML capabilities and are designed for varying development environments. By integrating static analysis tools into the code review workflow, teams can enhance the efficiency and effectiveness of their reviews. Combining the human expertise of peer reviewers with the automation provided by static analysis tools optimizes the value and impact of peer code reviews in the software development life cycle.
