Category Archives: Code Quality

Seven Steps to Quality Code – Step 7

Automated Tests

This is step 7, the final step in a series of articles, starting with Seven Steps To Quality Code – Introduction.

The biggest change in my development practices over the last few years has been the introduction of Test Driven Development (TDD).  I remember the first project in which I used it and the feeling it gave me.  A feeling of confidence.  Confidence that the code I had written worked and confidence that I could make changes without worrying that something else might break without my knowing.

I used to be an assembly language programmer.  In that world, there were no helpful debuggers other than the ability to dump the contents of the registers to the screen or maybe post a “got here” message.  When I  moved to Visual Basic, I was blown away by the fact that I could step through code and inspect the values of variables.  This was simply amazing and of course changed the way I developed software.

Discovering Test Driven Development felt just the same.  A revolution in how I would code.

So now, a decade after those first steps, I am a big fan of TDD.  It suits the way I work (mostly) and really helps to lock in quality.

However, this step in the “seven steps” series is not really about TDD.  The series is more about improving what you already have.  TDD is great for a new project, but it’s too late for a project that already exists.

I’m going to suggest a few very simple actions that will allow you to get some of the benefits of TDD.  If you haven’t tried TDD or any automated testing before, these actions will let you get a feel for it in a comfortable environment (i.e. your existing code) without the pressure of going all out on a drive to introduce TDD.

Perhaps at this point an overview of some terminology will be useful.  The definitions below explain a few terms.

Automated Testing – A broad term that covers any form of testing where test cases are automatically run against the code and verified for correctness.

Unit Testing – Traditionally, unit testing meant a developer performing their own manual tests on a module/component or other "unit" of software, whether formally or informally.  In more modern parlance, it is often used interchangeably with "Automated Testing" above but, strictly speaking, the two are different.  At one extreme, unit testing means automated testing of a single method – perhaps passing in a series of inputs and verifying that the outputs or the behaviour are correct.  Often this will be through the use of "Mocks" (see below) to isolate the unit from any dependencies.  Purists will insist that this is the only true definition.  Alternatively, unit testing may mean automated testing of a single class and its behaviour, or perhaps a complete software component, though it is more likely that this would be minimal testing of a third party product.

Mocks – Martin Fowler has written an interesting article called "Mocks Aren't Stubs" which describes, far better than I ever could, what a mock is.  For our purposes here though, we will consider a mock as some sort of software 'thing' used in place of some other software 'thing' in order to isolate the unit being tested and optionally verify behaviour.

Test Driven Development (TDD) – This is where tests are written before the code and the code is written purely to make a test pass.

Red-Green-Refactor – This is a phrase that describes the process of TDD.  'Red' refers to the usual indicator against a test showing that it has failed (a red cross perhaps), 'Green' refers to the usual indicator to show that a test has passed (a green tick perhaps) and 'Refactor' refers to the task performed after a test passes, which is to revisit the code and improve it without changing its external behaviour.  So, TDD is a repeating cycle of: write a failing test, make it pass, improve the code.

Integration Test – An integration test is a test that checks that several "units" work together.  Sometimes referred to as an "end to end" test because of the desire to verify that the system under test functions as a whole from the beginning of a process to the end.

Test Prompted Development – A phrase I use that describes a dangerous situation that can often occur in Test Driven Development.  I will blog about this in the near future.
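
To make the "Mocks" idea above a little more concrete, here is a minimal hand-rolled example.  Everything in it – the IEmailSender interface, the OrderProcessor class and the use of MSTest – is invented purely for illustration; any class with a dependency and any test framework would do just as well.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical dependency that we want to keep out of the test.
public interface IEmailSender
{
    void Send(string to, string message);
}

// Hypothetical unit under test.
public class OrderProcessor
{
    private readonly IEmailSender emailSender;

    public OrderProcessor(IEmailSender emailSender)
    {
        this.emailSender = emailSender;
    }

    public void Complete(string customerEmail)
    {
        // Real order processing would happen here before the confirmation is sent.
        this.emailSender.Send(customerEmail, "Your order is complete.");
    }
}

// Hand-rolled mock: stands in for the real sender and records what happened
// so that the test can verify behaviour without any emails being sent.
public class MockEmailSender : IEmailSender
{
    public int SendCount { get; private set; }

    public void Send(string to, string message)
    {
        this.SendCount++;
    }
}

[TestClass]
public class OrderProcessorTests
{
    [TestMethod]
    public void Complete_SendsExactlyOneConfirmationEmail()
    {
        var mockSender = new MockEmailSender();
        var processor = new OrderProcessor(mockSender);

        processor.Complete("customer@example.com");

        Assert.AreEqual(1, mockSender.SendCount);
    }
}
```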

What To Do

The suggestions I have are very simple and should start to show benefits very quickly.

Test Bugs

Whenever a bug is discovered in code that does not have associated tests, write a test (or tests) that reproduces that bug and fix the bug by making the test pass.  Leave the test in the project permanently so that it is possible to verify whenever necessary that the bug has been fixed and has not returned.

This is really no different to how a developer often fixes bugs without automated testing. The really powerful part of it is retaining the test and incorporating it into the project.
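
As a sketch of what this looks like in practice – the bug, the InvoiceCalculator class and the MSTest attributes below are all invented for illustration:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, shown after the fix.
public class InvoiceCalculator
{
    public decimal ApplyCredit(decimal invoiceTotal, decimal credit)
    {
        decimal total = invoiceTotal - credit;

        // The fix: a credit larger than the invoice must not produce a negative total.
        return total < 0m ? 0m : total;
    }
}

[TestClass]
public class InvoiceCalculatorTests
{
    // Reproduces the reported bug: the total went negative when a credit larger
    // than the invoice value was applied.  The test failed before the fix, passes
    // now, and stays in the project to prove that the bug never returns.
    [TestMethod]
    public void ApplyCredit_CreditLargerThanTotal_ReturnsZeroNotNegative()
    {
        var calculator = new InvoiceCalculator();

        decimal total = calculator.ApplyCredit(50m, 80m);

        Assert.AreEqual(0m, total);
    }
}
```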

Test New Helper Functions

The easiest kind of code to test is often the little helper functions.  If you are writing a new function to do something like string handling (perhaps concatenating first name, middle name and surname, or something like that), write some tests.  Verify that the result is correct when the input values are null or empty, and when boundaries are reached or breached.  For many functions of the kind I am talking about, you can produce a really solid set of tests that will give you absolute confidence that the code is correct.
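
For example, a set of tests for the sort of name-concatenation helper mentioned above might look something like this – the NameHelper class and its behaviour are assumptions made purely for the sake of the example:

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical helper of the kind described above.
public static class NameHelper
{
    public static string FullName(string firstName, string middleName, string surname)
    {
        // Ignore null, empty or whitespace-only parts so the result never
        // contains leading, trailing or doubled spaces.
        var parts = new[] { firstName, middleName, surname }
            .Where(part => !string.IsNullOrWhiteSpace(part));

        return string.Join(" ", parts);
    }
}

[TestClass]
public class NameHelperTests
{
    [TestMethod]
    public void FullName_AllPartsPresent_JoinsWithSingleSpaces()
    {
        Assert.AreEqual("John Paul Smith", NameHelper.FullName("John", "Paul", "Smith"));
    }

    [TestMethod]
    public void FullName_MiddleNameMissing_OmitsMiddleName()
    {
        Assert.AreEqual("John Smith", NameHelper.FullName("John", null, "Smith"));
    }

    [TestMethod]
    public void FullName_AllPartsNullOrEmpty_ReturnsEmptyString()
    {
        Assert.AreEqual(string.Empty, NameHelper.FullName(null, string.Empty, null));
    }
}
```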

Again, make sure the tests are retained as part of the project.

Incorporate Tests into Your Build Process

If you have any form of automated build, include your tests in that process.  Any mainstream build automation system supports this and it is straightforward to set up.  Ensure that test failures result in build failures and ensure that those failures are addressed as a priority.

Conclusion

As I said at the start of this article, automated testing provides amazing benefits and allows you to be really confident that your code performs as you expect.  A full TDD approach is pretty hard, but if you dip your toe in the water as I have suggested, you can start with little risk and learn the techniques gradually.

Well, that is the seven steps finished.  If you are working in a development environment that needs this information then I hope that it has been useful.  Please let me know your experiences of putting this stuff into practice.  If you need any assistance, get in touch – I’ll be happy to help.

Seven Steps to Quality Code – Step 6

Code Reviews (Peer Reviews)

This is step 6 in a series of articles, starting with Seven Steps To Quality Code – Introduction.

Code reviews seem to me to be the one aspect of software development that every organisation incorporates into its “standard process” yet very few organisations appear to actually do.  Unfortunately, when deadlines are tight and the pressure’s on, the code review is the first thing to get dropped.  All too often, when deadlines aren’t so tight and the pressure is less, it is still the first thing to get dropped.

In my view, this is a great shame as it is a real opportunity to reap massive benefits.  Benefits such as:

  • Encouraging better code
  • Catching bugs early
  • Spreading knowledge about code
  • Improving code reuse
  • Improving maintainability
  • Validating that requirements are met
  • Reinforcing, creating or evolving team standards

These lead to the more general benefits that we are trying to achieve by increasing code quality:

  • Reduced costs
  • More reliable software
  • Happier customers

I’ll discuss each of the direct benefits individually.

Encouraging Better Code

If you are writing some code that you know will never be seen by anyone else, there will always be the temptation to take a few shortcuts – it doesn’t matter that I spelt that variable name wrong, it’s OK that I just copied and pasted that bit of code.  However, if you are coding in a situation where you know that someone else will be looking at your code, you will take more care to do it right.  No one wants to look like the lazy guy that couldn’t be bothered.

Catching Bugs Early

As we already know, the earlier a bug is caught the cheaper and easier it is to fix.  At code review time, fixing a bug is an awful lot easier and cheaper than when the software is live and in use by multiple customers.

Spreading Knowledge About Code

Keeping code hidden and secret is a “Bad Thing”.  Code should be exposed and available to other members of the team so that several people understand it and therefore can maintain and/or extend it.

The term “Bus Factor” is used to indicate the extent to which knowledge is spread within a team – it is the number of people who would need to be hit by a bus before the project could not continue.  A high bus factor is therefore something to aim for, and code reviews will help in this regard, reducing the risk in the project.

Improving Code Reuse

Spreading knowledge also leads to improved code reuse.  When a reviewer sees the same pattern of code that has been used elsewhere, this can be raised as a review point and refactored to make use of a shared method or component or whatever.

More subtly, during a code review the participants will become aware of code that may be reused in the future.  For example, when Sally reviews John’s code she sees that he has created a method to remove the tax from a sales price.  Next time she wants to do the same thing, she’ll know that the feature has already been written.
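
To picture it, the shared helper in that scenario might be nothing more than this (the class, method and tax handling are invented for illustration):

```csharp
// Hypothetical shared helper that John wrote and that Sally can now reuse,
// rather than writing her own version elsewhere in the code base.
public static class PriceCalculator
{
    public static decimal RemoveTax(decimal grossPrice, decimal taxRate)
    {
        // taxRate is a fraction, e.g. 0.20m for a 20% tax rate.
        return grossPrice / (1m + taxRate);
    }
}
```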

Improving Maintainability

For code to be maintainable, it has to be understandable.  A code review is an excellent way to determine if this is the case.  If another developer can’t make sense of the code that has been written, then it should be refactored to be more understandable.

Validating Requirements Are Met

The code review is an ideal opportunity to check that the code under review actually does what it is supposed to do.  Just like bugs, the earlier this kind of defect is caught, the easier and cheaper it is to fix.

Reinforcing, Creating or Evolving Team Standards

The more that code is shared and discussed, the more consensus will form around the standards that the team expects.  When code doesn’t adhere to the standards, it can be discovered and corrected, or perhaps it can prompt a change in the standard.  This cannot happen without reviews.

Relationship to the Other Steps in the “Seven Steps”

This is step 6 in the Seven Steps and is heavily dependent on the previous steps.  Traditionally, code reviews are often peppered with lots of points about stylistic differences, casing issues and so on.  Because these sorts of issues will have been addressed automatically by StyleCop, and other detectable issues will have been addressed by Code Analysis, a review can focus on more important things like efficiency, understandability and whether or not the code actually satisfies the requirements.

OK, So How Do We Do It?

A critical thing to remember is that code reviews are an area where diplomacy and sensitivity need to play a big part.  A lot of the time, there will be a deep emotional attachment to the code that has been written.  Recognise that this is the case – you know how it feels when someone criticises the piece of work that you’ve been lovingly crafting for the last two weeks.

So, in light of this, I would suggest that it isn’t made into a big event.  It shouldn’t be the whole team doing the review.  That is a recipe for stress, humiliation and unhappy developers.  Make it a conversation.  A conversation between two people about the code in front of you.  Whenever possible, do it in person, sharing a workstation.

It is better to view it as working together to produce better software, rather than viewing it as a superior performing an inspection on a subordinate’s work.

It’s also important to remember that when reviewing someone else’s code, the fact that you “would have done it differently” doesn’t necessarily mean the code is wrong.  In fact, exposure to  different ways of thinking or of solving the problem is one of the extra benefits of code reviews.

Conclusion

We’re nearly there.  We have one last step to discuss – automated tests.

Seven Steps to Quality Code – Step 5

Code Analysis – Further Rules

This is step 5 in a series of articles, starting with Seven Steps To Quality Code – Introduction.

This step is simply a case of turning the ratchet a little more and locking in further quality gains.

Previously, my suggestion for existing projects has been to set the Code Analysis rule set to “Minimum recommended rules” (or in Visual Studio versions after 2010, “Managed Recommended Rules”) in order to keep the number of rule violations to a minimum.  Now I am going to suggest that the rules are tightened by using a rule set or combination of rule sets that check for further violations.

Ultimately, our goal is to have the “All Rules” rule set enabled on all code, but in practice this may not be achievable for legacy code.  What we can do is work towards it so that we can catch the more important issues in our code.  For example, the effort of implementing the globalisation rules in legacy code is not going to give you much bang for buck (unless of course globalisation has become a required feature!).
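
For context, a typical globalisation rule (CA1305, “Specify IFormatProvider”, for instance) simply asks for an explicit culture wherever values are turned into strings.  A minimal sketch of that kind of fix, using an invented class for illustration:

```csharp
using System.Globalization;

// Invented example of the kind of change a globalisation rule demands.
public class InvoiceFormatter
{
    public string FormatTotal(decimal total)
    {
        // Before the fix: total.ToString("C") - flagged because no culture is specified.
        // After the fix: state explicitly which culture the output is intended for.
        return total.ToString("C", CultureInfo.CurrentCulture);
    }
}
```

Multiply that by every string conversion in a large legacy code base and it is easy to see why these particular rules are rarely the first ones to tackle.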

A great feature of Code Analysis is that we can progressively add further rules in order to increase the range of issues that are checked.  The rule sets that are available, however, do not give us a sequential order that we can progressively move through because particular rule sets focus on particular issues.  We can achieve the same effect though by using the option of progressively applying multiple rule sets.  You can do this by selecting the “Choose multiple rule sets” option on the Code Analysis tab of a project’s properties screen, as shown in the screenshot below:

[Screenshot: Selecting the Multiple Rule Sets Option]

A comprehensive list of the rule sets available can be found on the Microsoft site:

Visual Studio 2010 Code Analysis Rule Sets
Visual Studio 2013 Code Analysis Rule Sets

To avoid getting swamped by too many violations and to allow a “small bite at a time” approach, I suggest considering one project (i.e. one csproj file) at a time and performing the following procedure:

  1. Run the “All Rules” rule set on the project.  If you consider the number of violations to be manageable, use the “All Rules” rule set.  This is the perfect situation to be in; ignore the further steps in this list and proceed to fix the violations.
  2. If “All Rules” is looking like a step too far, progressively add in the following rule sets one at a time, fixing the violations and checking in as you go (the first name in each pair is the Visual Studio 2010 rule set, the second is its Visual Studio 2013 equivalent):
    Microsoft Basic Correctness Rules / Basic Correctness Rules rule set for managed code
    Microsoft Basic Design Guideline Rules / Basic Design Guideline Rules rule set for managed code
    Microsoft Extended Correctness Rules / Extended Correctness Rules rule set for managed code
    Microsoft Extended Design Guideline Rules / Extended Design Guidelines Rules rule set for managed code
  3. At this point, you may be in a position to apply the “All Rules” rule set and have a manageable number of violations.  Alternatively, if that still produces too many violations, you may have a particular need for one of the remaining focused rule sets (the security or globalisation rules) and wish to apply one of those.  (In fact, by the time you get to this stage, if you still have a lot of violations under “All Rules” they are likely to be globalisation related.)

Conclusion

Once this step is followed for a solution, you will be in a great place for continuing development in a high quality environment with automatic guards in place to keep it there.

In step 6 we’ll look at peer reviews – something that everyone knows about, but which is all too often the easiest thing to discard from a development process.

Seven Steps to Quality Code – Step 4

StyleCop

This is step 4 in a series of articles, starting with Seven Steps To Quality Code – Introduction.

How often have we heard things like “Steve wrote that – I’m not touching it!” or “That came from the offshore team – it’s a mess”?  Often, this is down to a question of personal style.  The code in question may be perfectly readable and maintainable to the developer who wrote it, but not to anybody else.

If we want robust and maintainable (i.e. quality) code then a standard style, consistent across the whole team or organisation, is what is required.  If the code is always written in the same style, then developers can focus on functionality and correctness rather than being distracted by inconsistencies in coding styles.

As with the other steps, this isn’t a new idea.  It has long been recognised that some sort of corporate “Coding Standard” is useful.  This would perhaps specify common standards for all code – things like layout, formatting, casing, naming conventions and so on.  Traditionally, if an organisation had a view on these things, lots of effort would be spent devising the standard and creating a “coding standards” document that then sat hidden away on a server somewhere.  New developers typically have this document on a list of “documents to read” as part of their induction.  In my experience, that’s usually as far as it goes.  The practicalities mean that enforcing the corporate standard is simply too difficult and so the code rarely, if ever, actually conforms to it.  Each developer uses their own view of “standard practice” and these views often differ widely.

In this fourth step, we are going to address these issues.  The approach is to discard the Coding Standards document and instead introduce our next tool – StyleCop.  This tool was originally developed to enforce standards on internally created code in Microsoft but was then released for public use.  In 2010, it became open source and is now available on the StyleCop CodePlex page.   In a similar way to Code Analysis/FxCop introduced in step 3, it analyses code against a set of predetermined rules.  The difference between the two tools is that Code Analysis looks at the compiled assemblies, whereas StyleCop looks at the contents of your source files.  There are some overlapping areas, but StyleCop will focus on what your code looks like, rather than how it behaves.

A sample of the things that StyleCop will report are as follows:

  • Incorrect casing of variables, method names, property names etc.
  • Incorrect naming conventions (e.g. use of underscores)
  • Missing XML comments
  • Non-standard XML comments (for example, read/write properties must start with “Gets or sets…”)
  • Incorrect spacing around operators, brackets, commas etc.
  • Missing curly brackets
  • Missing and superfluous blank lines
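
To make that concrete, here is a small invented class written in a style that would attract several of the warnings above – the comments indicate the kind of issue StyleCop would flag (it also has no XML comments, which would be reported as well):

```csharp
using System;                                   // StyleCop prefers using directives inside the namespace

namespace BillingExample
{
    public class invoiceFormatter               // incorrect casing of the class name
    {
        private string _currencySymbol;         // underscore prefix on a field name

        public string Format( decimal total )   // incorrect spacing inside the brackets
        {
            if (total < 0)
                throw new ArgumentException("total");   // missing curly brackets

            return _currencySymbol+total;       // missing spacing around the operator
        }
    }
}
```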

Like Code Analysis, rule violations are reported in the Visual Studio Error window, as shown below:

[Screenshot: StyleCop warnings in the Visual Studio Error window]

Also like Code Analysis, detailed explanations can be found by right-clicking and selecting “Show Error Help”.  Unlike Code Analysis, individual rule violations cannot be suppressed.  This is not important with StyleCop as the rules are more objective and not prone to exceptions in the way that Code Analysis rules are.

It is possible to switch off a rule entirely, but I would strongly recommend not doing this.  My approach is to leave the rules in their default state.  This leads to a much simpler situation for ongoing configuration management as there is no configuration required per project, solution, developer or machine.  Additionally, there are rules that complement each other and work together “as a set”.  For example, the common practice of prefixing class level fields with an underscore is outlawed under StyleCop, but that ban is complemented by a rule that insists that references to these fields are prefixed with “this”.

Some of the rules may seem a little strange at first.  The aforementioned ban on underscore prefixes is a good example if you have been used to them previously; using statements having to sit inside the namespace is another.  However, it takes a surprisingly short period of time before the standard StyleCop format becomes “the norm” and non-compliant code looks jarringly wrong.
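
Reworked into the standard StyleCop shape, the same invented class looks something like this – a sketch that illustrates the rules mentioned above rather than every rule StyleCop enforces:

```csharp
namespace BillingExample
{
    using System;   // using directives sit inside the namespace

    /// <summary>
    /// Formats invoice totals for display.
    /// </summary>
    public class InvoiceFormatter
    {
        private string currencySymbol;   // no underscore prefix on the field...

        /// <summary>
        /// Gets or sets the currency symbol used when formatting totals.
        /// </summary>
        public string CurrencySymbol
        {
            get { return this.currencySymbol; }
            set { this.currencySymbol = value; }
        }

        /// <summary>
        /// Formats the supplied total for display.
        /// </summary>
        /// <param name="total">The total to format.</param>
        /// <returns>The formatted total, prefixed with the currency symbol.</returns>
        public string Format(decimal total)
        {
            if (total < 0)
            {
                throw new ArgumentException("The total must not be negative.", "total");
            }

            return this.currencySymbol + total;   // ...but access to it is prefixed with "this"
        }
    }
}
```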

The real beauty of using StyleCop is that it quickly becomes possible to see past what the code looks like (because it’s always the same) and see what it is doing.  Errors in logic or function jump out of the page in a way that is simply not possible when everyone codes to their own style.

Rules/Guidelines for Using StyleCop

These are similar to the guidelines for Code Analysis.

  • Stick with the default rule settings.
  • Incorporate StyleCop into your process. Configure it to run each time the solution is built. If you have an automated build, incorporate it there.
  • Always have the Visual Studio Error window visible.  You get immediate feedback on violations following a build (as well as build errors, compiler warnings and Code Analysis violations).
  • Fix violations frequently.  Fix them as they occur.  Don’t leave this as a task to be performed just before check in.
  • Always use the error help if the rule isn’t understood. There is often interesting commentary on the rationale behind the rule.  In the early days, it’s worth looking at the help even when you think you understand the rule.
  • Use GhostDoc.  StyleCop is hard work without it; GhostDoc quickly becomes absolutely essential.
  • Persevere.  It can be daunting to see a single class with 600 violations, but this means there is massive scope for improvement in such a class and so the benefits and the feeling of accomplishment once it’s done will be worth it.

Once the use of StyleCop is embedded, you will be well placed to move on to step 5...