Failing the Pipeline When NuGet Packages are Out of Date

Letting your dependencies get out of date is an all too common problem. On the face of it, you might think it’s not such a big issue – “our code works, why do we need to worry?”.

However, keeping on top of your updates on a regular basis has several advantages:

  • It is one of the easiest ways to minimise the risk of security vulnerabilities. Vulnerabilities are often discovered in popular (or not so popular) packages. New releases are produced to fix these vulnerabilities.
  • You reap the benefits of any bug fixes or optimisations that have been introduced.
  • Frequent, small changes are easier to manage and carry less risk than allowing changes to back up and become big changes.

With a little preparation, keeping on top of this is pretty easy. The dotnet command has built-in support for checking for outdated packages. Simply execute the following from your solution or project directory:

dotnet list package --outdated

This will check the package versions requested in your project(s) and output a list of any package for which a later version is available.

One thing worth noting is that you need to have done a 'restore' on the project(s) before running this check, because it requires the project.assets.json file to have been generated and to be up to date.

There are a few ways that the behaviour of this check can be modified – for example, restricting the check to minor version updates. You can find full details in the Microsoft dotnet list package docs.
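For example (both switches are documented in the same place), you might restrict the check to updates within the current major version, or include pre-release versions in the results:

```
# Only report updates that stay within the current major version
dotnet list package --outdated --highest-minor

# Include pre-release versions when looking for later packages
dotnet list package --outdated --include-prerelease
```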

So far so good, but what if we want to enforce updates in the CI/CD pipelines? We can achieve this by wrapping the command in a Bash script that will return a non-zero exit code if outdated dependencies are detected. I’ve recently used this in Azure DevOps pipelines, but the same technique will apply to other pipeline tools.

The script I used is here:

#!/bin/bash

# Run the check once and capture the output, so the (relatively slow)
# dotnet command is not executed twice.
output=$(dotnet list package --outdated)

if echo "$output" | grep -q 'has the following updates'; then
    echo "$output"
    echo "##[error]Outdated NuGet packages were detected"
    exit 1
else
    echo "No outdated NuGet packages were detected"
    exit 0
fi

It simply looks for the text “has the following updates” in the output of dotnet list package --outdated and, if that phrase is present, exits with a non-zero return code. The ##[error] tag is an Azure DevOps Pipelines feature: the echoed message will be formatted as an error (i.e. appear in red) in the pipeline log details.
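For completeness, here is a sketch of how this might be wired into an Azure DevOps pipeline; the script path and display names are illustrative, not from a real project. Note the restore step before the check, for the reason discussed above:

```yaml
steps:
  - script: dotnet restore
    displayName: 'Restore NuGet packages'

  - task: Bash@3
    displayName: 'Fail on outdated NuGet packages'
    inputs:
      targetType: filePath
      filePath: 'build/check-outdated-packages.sh'
```

Because the script exits non-zero when outdated packages are found, the Bash task fails and the pipeline run is marked as failed.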

Validating Enums in .Net WebAPI

Recently, I needed to implement some validation rules on a .Net WebAPI. The API in question accepted a payload that, amongst other things, included a currency code (e.g. “GBP”, “EUR”, “USD”). On the face of it, this sounds pretty simple but there are a few things to watch out for in order to do this well.

We might start with something like the following code:

using System.Text.Json;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("[controller]")]
public class ExampleController : ControllerBase
{
    private readonly ILogger<ExampleController> _logger;

    public ExampleController(ILogger<ExampleController> logger)
    {
        _logger = logger;
    }

    [HttpPost("payments")]
    public IActionResult Payments(Payment payment)
    {
        _logger.LogInformation(JsonSerializer.Serialize(payment));
        return this.Ok();
    }
}

/// <summary>
/// This is the type we accept as the payload into our API.
/// </summary>
public class Payment
{
    public Currency Currency { get; set; }

    public decimal Amount { get; set; }
}

/// <summary>
/// This is an example enumeration.
/// We want our API consumers to use currencies as strings.
/// </summary>
public enum Currency
{
    GBP,
    EUR,
    USD
}

The first problem we have here is that the Currency value in our JSON payload has to be a number. Out of the box, the API won’t accept a string such as “USD” and we’ll get a standard “400 Bad Request” validation failure response.
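To illustrate (the property-name casing here assumes ASP.NET Core’s default camelCase JSON handling), a request body like this is accepted, with 2 mapping to the third enum member, USD:

```json
{
  "currency": 2,
  "amount": 100.00
}
```

Whereas `{ "currency": "USD", "amount": 100.00 }` fails model binding and produces the 400 response.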

Accepting Enum String Values

We can easily fix this issue by using the JsonStringEnumConverter class from the System.Text.Json.Serialization namespace. To do this we decorate the enum declaration with the JsonConverter attribute. Optionally, we could have added the attribute to the individual property, but if we decorate the enum declaration, strings will be acceptable wherever it is used.

/// <summary>
/// This is an example enumeration.
/// We want our API consumers to use currencies as strings.
/// </summary>
[JsonConverter(typeof(JsonStringEnumConverter))]
public enum Currency
{
    GBP,
    EUR,
    USD,
}

We can now pass in Currency as a string value. Great! If we pass an invalid string, we’ll get a 400 Bad Request. Even better!

Limiting to Valid Enum Values

What we have now is a situation where, if we provide a string value, the validation insists that it is a valid member of the Currency enum. However, because this is an enum, we can also pass in a number. This number does not get validated, so in our controller the payment object can contain an invalid value. We can limit the field to valid values by adding the EnumDataType attribute (from the System.ComponentModel.DataAnnotations namespace) to the property, like so:

    [EnumDataType(typeof(Currency))]
    public Currency Currency { get; set; }

Making Enum Mandatory

What if we want this to be a mandatory field that must be supplied by our caller? We might assume that this is easy – simply add the Required attribute to our Currency property. Just like this:

    [Required, EnumDataType(typeof(Currency))]
    public Currency Currency { get; set; }

Unfortunately, that doesn’t give us the behaviour we want. The field will be marked as mandatory in Swagger/OpenAPI if we are using that, but if we omit the field entirely from our payload, the POST is accepted.

Why would this be? Enums are value types, and the underlying type for all enums is numeric – int unless specified otherwise – so an uninitialised enum defaults to 0. As far as the .Net validation process is concerned, the value is always present, so the Required attribute doesn’t have the effect we need.
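A quick sketch of what is happening, using the Payment and Currency types from above:

```csharp
// With 'public enum Currency { GBP, EUR, USD }' the first member, GBP,
// has the underlying value 0, so default(Currency) is GBP.
var payment = new Payment { Amount = 10m };    // caller never supplied Currency

Console.WriteLine(payment.Currency);           // GBP
Console.WriteLine((int)payment.Currency);      // 0
```

By the time validation runs, the property already holds GBP – a perfectly valid value – so Required is satisfied.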

Addressing this is also a simple modification, provided that you can treat 0 as an invalid value. Simply set the first enum value to a non-zero value:

/// <summary>
/// This is an example enumeration.
/// We want our API consumers to use currencies as strings.
/// We also set the first value to 1 so that 0 is invalid.
/// </summary>
[JsonConverter(typeof(JsonStringEnumConverter))]
public enum Currency
{
    GBP = 1,
    EUR,
    USD,
}

We now have an enum property that:

  • can be specified as a string
  • is mandatory so must be present in the payload
  • must be a valid enum value
  • is documented appropriately in Swagger/OpenAPI

Seven Steps to Quality Code – Step 7

Automated Tests

This is step 7, the final step in a series of articles, starting with Seven Steps To Quality Code – Introduction.

The biggest change in my development practices over the last few years has been the introduction of Test Driven Development (TDD).  I remember the first project in which I used it and the feeling it gave me.  A feeling of confidence.  Confidence that the code I had written worked, and confidence that I could make changes without worrying about unknowingly breaking something else.

I used to be an assembly language programmer.  In that world, there were no helpful debuggers other than the ability to dump the contents of the registers to the screen or maybe post a “got here” message.  When I  moved to Visual Basic, I was blown away by the fact that I could step through code and inspect the values of variables.  This was simply amazing and of course changed the way I developed software.

Discovering Test Driven Development felt just the same.  A revolution in how I would code.

So now, a decade after those first steps, I am a big fan of TDD.  It suits the way I work (mostly) and really helps to lock in quality.

However, this step in the “seven steps” series is not really about TDD.  The series is more about improving what you already have.  TDD is great for a new project, but it’s too late for a project that’s already in existence.

I’m going to suggest a few very simple actions to take that will allow you to get some of the benefits of TDD.  If you haven’t tried TDD or any automated testing before, it will allow you to get a feel for it in a comfortable environment (i.e. your existing code) without the pressure of going all out on a drive to introduce TDD.

Perhaps at this point an overview of some terminology will be useful.  The list below explains a few terms.

  • Automated Testing – A broad term that covers any form of testing where test cases are automatically run against the code and verified for correctness.
  • Unit Testing – Traditionally, unit testing meant a developer performing their own manual tests on a module/component or other "unit" of software, whether formally or informally. In more modern parlance, it is often used interchangeably with "Automated Testing" above but, strictly speaking, the two are different. At one extreme, unit testing means automated testing of a single method – perhaps passing in a series of inputs and verifying that the outputs or the behaviour are correct, often through the use of "Mocks" (see below) to isolate the unit from any dependencies. Purists will insist that this is the only true definition. Alternatively, unit testing may mean automated testing of a single class and its behaviour, or perhaps a complete software component, though the latter is more likely to be minimal testing of a third-party product.
  • Mocks – Martin Fowler has written an interesting article called "Mocks Aren't Stubs" which describes, far better than I ever could, what a mock is. For our purposes here, though, we will consider a mock to be some sort of software 'thing' used in place of some other software 'thing' in order to isolate the unit being tested and, optionally, verify behaviour.
  • Test Driven Development (TDD) – This is where tests are written before the code, and the code is written purely to make a test pass.
  • Red-Green-Refactor – A phrase that describes the process of TDD. 'Red' refers to the usual indicator showing that a test has failed (a red cross, perhaps), 'Green' refers to the usual indicator showing that a test has passed (a green tick, perhaps) and 'Refactor' refers to the task performed after a test passes, which is to revisit the code and improve it without changing its external behaviour. So, TDD is a repeating cycle of: write a failing test – make it pass – improve the code.
  • Integration Test – A test that checks that several "units" work together. Sometimes referred to as an "end to end" test because of the desire to verify that the system under test functions as a whole, from the beginning of a process to the end.
  • Test Prompted Development – A phrase I use to describe a dangerous situation that can often occur in Test Driven Development. I will blog about this in the near future.

What To Do

The suggestions I have are very simple and should start to show benefits very quickly.

Test Bugs

Whenever a bug is discovered in code that does not have associated tests, write a test (or tests) that reproduces that bug and fix the bug by making the test pass.  Leave the test in the project permanently so that it is possible to verify whenever necessary that the bug has been fixed and has not returned.

This is really no different to how a developer often fixes bugs without automated testing. The really powerful part of it is retaining the test and incorporating it into the project.

Test New Helper Functions

The easiest kind of code to test is often the little helper functions.  If you are writing a new function to do something like string handling (perhaps concatenating first name, middle name and surname or something like that), write some tests.  Verify that the result is correct when the input values are null or empty, when boundaries are reached or breached.  For many functions of the kind I am talking about, you can produce a really solid set of tests that will give you absolute confidence that the code is correct.

Again, make sure the tests are retained as part of the project.
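As a sketch of the kind of test I mean – using xUnit, with a hypothetical FormatFullName helper (the names here are illustrative, not from a real project):

```csharp
using System.Linq;
using Xunit;

public static class NameHelper
{
    // Hypothetical helper: joins the supplied name parts, skipping any that are missing.
    public static string FormatFullName(string? first, string? middle, string? last) =>
        string.Join(" ", new[] { first, middle, last }
                             .Where(p => !string.IsNullOrWhiteSpace(p)));
}

public class NameHelperTests
{
    [Theory]
    [InlineData("John", "Paul", "Smith", "John Paul Smith")]  // all parts present
    [InlineData("John", null, "Smith", "John Smith")]         // null middle name
    [InlineData("John", "", "Smith", "John Smith")]           // empty middle name
    [InlineData(null, null, null, "")]                        // everything missing
    public void FormatFullName_HandlesMissingParts(
        string? first, string? middle, string? last, string expected)
    {
        Assert.Equal(expected, NameHelper.FormatFullName(first, middle, last));
    }
}
```

A handful of cases like these covers the null, empty and happy paths, and the tests stay in the project as living documentation of the helper's behaviour.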

Incorporate Tests into Your Build Process

If you have any form of automated build, include your tests in that process.  All build automation systems will support this in an easily implemented way.  Ensure that test failures result in build failures and ensure that the failures are addressed as a priority.

Conclusion

As I said at the start of this article, automated testing provides amazing benefits and allows you to be really confident that your code performs as you expect.  A full TDD approach is a big step, but if you start by dipping your toe in the water as I have suggested, you can begin with little risk and learn the techniques gradually.

Well, that is the seven steps finished.  If you are working in a development environment that needs this information then I hope that it has been useful.  Please let me know your experiences of putting this stuff into practice.  If you need any assistance, get in touch – I’ll be happy to help.

Seven Steps to Quality Code – Step 6

Code Reviews (Peer Reviews)

This is step 6 in a series of articles, starting with Seven Steps To Quality Code – Introduction.

Code reviews seem to me to be the aspect of software development that every organisation incorporates into their “standard process”.  Very few organisations appear to actually do it.  Unfortunately, when deadlines are tight and the pressure’s on, the code review is the first thing to get dropped.  All too often, when deadlines aren’t so tight and the pressure is less, it is still the first thing to get dropped.

In my view, this is a great shame as it is a real opportunity to reap massive benefits.  Benefits such as:

  • Encourages better code
  • Catching bugs early
  • Spreading knowledge about code
  • Improving code reuse
  • Improving maintainability
  • Validating requirements are met
  • Reinforcing, creating or evolving team standards

These lead to the more general benefits that we are trying to achieve by increasing code quality:

  • Reduced costs
  • More reliable software
  • Happier customers

I’ll discuss each of the direct benefits individually.

Encourages Better Code

If you are writing some code that you know will never be seen by anyone else, there will always be the temptation to take a few shortcuts – it doesn’t matter that I spelt that variable name wrong, it’s OK that I just copied and pasted that bit of code.  However, if you are coding in a situation where you know that someone else will be looking at your code, you will take more care to do it right.  No one wants to look like the lazy guy that couldn’t be bothered.

Catching Bugs Early

As we already know, the earlier a bug is caught the cheaper and easier it is to fix.  At code review time, fixing a bug is an awful lot easier and cheaper than when the software is live and in use by multiple customers.

Spreading Knowledge About Code

Keeping code hidden and secret is a “Bad Thing”.  Code should be exposed and available to other members of the team so that several people understand it and therefore can maintain and/or extend it.

The term “Bus Factor” is used to indicate the extent to which knowledge is spread in a team – it is the number of people that would need to be hit by a bus before a project could not continue.  A high bus factor is therefore something to aim for, and code reviews will help in this regard, reducing the risk in the project.

Improving Code Reuse

Spreading knowledge also leads to improved code reuse.  When a reviewer sees the same pattern of code that has been used elsewhere, this can be raised as a review point and refactored to make use of a shared method or component or whatever.

More subtly, during a code review the participants will become aware of code that may be reused in the future.  For example, when Sally reviews John’s code she sees that he has created a method to remove the tax from a sales price.  Next time she wants to do the same thing, she’ll know that the feature has already been written.

Improving maintainability

For code to be maintainable, it has to be understandable.  A code review is an excellent way to determine if this is the case.  If another developer can’t make sense of the code that has been written, then it should be refactored to be more understandable.

Validating Requirements are Met

The code review is an ideal opportunity to check that the code under review actually does what it is supposed to do.  Just like catching bugs early, if this kind of defect is caught early, it is easier and cheaper to fix.

Reinforcing, Creating or Evolving Team Standards

The more that code is shared and discussed, the more consensus will form around the standards that the team expects.  When code doesn’t adhere to the standards, it can be discovered and corrected – or perhaps it can prompt a change in the standard.  This cannot happen without reviews.

Relationship to the Other Steps in the “Seven Steps”

This is step 6 in the Seven Steps and is heavily dependent on the previous steps.  Traditionally, code reviews are often peppered with lots of points about stylistic differences, casing issues etc.  Because these sort of issues will have been addressed automatically by StyleCop and other detectable issues will have been addressed by Code Analysis, a review can focus on more important things like efficiency and understandability and whether or not the code actually satisfies the requirements.

OK, So How Do We Do It?

A critical thing to remember is that it’s an area where diplomacy and sensitivity need to play a big part.  A lot of the time, there will be deep emotional attachment to the code that has been written.  Recognise that this is the case – you know how it feels when someone criticises that piece of work that you’ve been lovingly crafting for the last 2 weeks.

So, in light of this, I would suggest that it isn’t made into a big event.  It shouldn’t be the whole team doing the review.  This is a recipe for stress, humiliation and unhappy developers.  Make it a conversation.  A conversation between 2 people about the code in front of you.  Whenever possible, make it in person, sharing a workstation.

It is better to view it as working together to produce better software, rather than viewing it as a superior performing an inspection on a subordinate’s work.

It’s also important to remember that when reviewing someone else’s code, the fact that you “would have done it differently” doesn’t necessarily mean the code is wrong.  In fact, exposure to different ways of thinking and of solving problems is one of the extra benefits of code reviews.

Conclusion

We’re nearly there.  We have one last step to discuss – automated tests.