Validating Enums in .Net WebAPI

Recently, I needed to implement some validation rules on a .Net WebAPI. The API in question accepted a payload that, amongst other things, included a currency code (e.g. “GBP”, “EUR”, “USD”). On the face of it, this sounds pretty simple but there are a few things to watch out for in order to do this well.

We might start with something like the following code:

using System.Text.Json;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("[controller]")]
public class ExampleController : ControllerBase
{
    private readonly ILogger<ExampleController> _logger;

    public ExampleController(ILogger<ExampleController> logger)
    {
        _logger = logger;
    }

    [HttpPost]
    public IActionResult Payments(Payment payment)
    {
        return this.Ok();
    }
}

/// <summary>
/// This is the type we accept as the payload into our API.
/// </summary>
public class Payment
{
    public Currency Currency { get; set; }

    public decimal Amount { get; set; }
}

/// <summary>
/// This is an example enumeration.
/// We want our API consumers to use currencies as strings.
/// </summary>
public enum Currency
{
    GBP,
    EUR,
    USD,
}

The first problem we have here is that the Currency value in our JSON payload has to be a number. Out of the box, the API won’t accept a string such as “USD” and we’ll get a standard “400 Bad Request” validation failure response.
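For example, with the code above only the numeric form of the enum is accepted (the property casing below is illustrative; ASP.NET Core’s System.Text.Json defaults bind property names case-insensitively):

```
// Accepted out of the box: Currency supplied as its underlying number
{ "currency": 2, "amount": 100.00 }

// Rejected with 400 Bad Request: Currency supplied as a string
{ "currency": "USD", "amount": 100.00 }
```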

Accepting Enum String Values

We can easily fix this issue by using the JsonStringEnumConverter class that is part of System.Text.Json (in the System.Text.Json.Serialization namespace). To do this we decorate the enum declaration with the JsonConverter attribute. Optionally, we could have added the attribute to the individual property, but if we decorate the enum declaration, strings will be acceptable wherever it is used.

using System.Text.Json.Serialization;

/// <summary>
/// This is an example enumeration.
/// We want our API consumers to use currencies as strings.
/// </summary>
[JsonConverter(typeof(JsonStringEnumConverter))]
public enum Currency
{
    GBP,
    EUR,
    USD,
}

We can now pass in Currency as a string value. Great! If we pass an invalid string, we’ll get a 400 Bad Request. Even better!

Limiting to Valid Enum Values

What we have now is a situation where, if we provide a string value, the validation insists that it is a valid member of the Currency enum. However, because this is an enum, we can also pass in a number. This number does not get validated, so in our controller our payment object contains an invalid value. We can limit the field to valid values by adding an attribute to the property, like so:

    [EnumDataType(typeof(Currency))]
    public Currency Currency { get; set; }

Making Enum Mandatory

What if we want this to be a mandatory field that must be supplied by our caller? We might assume that this is easy – simply add the Required attribute to our Currency property. Just like this:

    [Required, EnumDataType(typeof(Currency))]
    public Currency Currency { get; set; }

Unfortunately, that doesn’t give us the behaviour we want. The field will be marked as mandatory in Swagger/OpenAPI if we are using that, but if we omit the field entirely from our payload, the POST is accepted.

Why would this be? The reason for this is that the underlying type for all enums is a numeric type. Unless specified otherwise, this will be int. That means that enums are value types and when initialised will default to 0. As far as the .Net validation process is concerned, the value is always present, so the Required attribute doesn’t have the effect we need.
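This is easy to verify in isolation. As a minimal sketch (a stand-alone console program, not part of the API code):

```csharp
public enum Currency
{
    GBP, // implicitly 0, so a missing Currency is indistinguishable from GBP
    EUR,
    USD,
}

public static class Program
{
    public static void Main()
    {
        // An uninitialised enum takes its underlying type's default: 0.
        Currency currency = default;

        System.Console.WriteLine(currency);           // prints "GBP"
        System.Console.WriteLine((int)currency == 0); // prints "True"
    }
}
```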

Addressing this is also a simple modification, provided that you can treat 0 as an invalid value. Simply set the first enum value to a non-zero value:

using System.Text.Json.Serialization;

/// <summary>
/// This is an example enumeration.
/// We want our API consumers to use currencies as strings.
/// We also set the first value to 1 so that 0 is invalid.
/// </summary>
[JsonConverter(typeof(JsonStringEnumConverter))]
public enum Currency
{
    GBP = 1,
    EUR,
    USD,
}

We now have an enum property that

  • can be specified as a string
  • is mandatory so must be present in the payload
  • must be a valid enum value
  • is documented appropriately in Swagger/OpenAPI
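Putting all three fixes from this article together, the payload types end up as follows:

```csharp
using System.ComponentModel.DataAnnotations;
using System.Text.Json.Serialization;

/// <summary>
/// This is the type we accept as the payload into our API.
/// </summary>
public class Payment
{
    // EnumDataType rejects any value not defined on the enum,
    // including the default 0 now that the members start at 1.
    [Required, EnumDataType(typeof(Currency))]
    public Currency Currency { get; set; }

    public decimal Amount { get; set; }
}

/// <summary>
/// This is an example enumeration.
/// We want our API consumers to use currencies as strings.
/// We also set the first value to 1 so that 0 is invalid.
/// </summary>
[JsonConverter(typeof(JsonStringEnumConverter))]
public enum Currency
{
    GBP = 1,
    EUR,
    USD,
}
```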

Seven Steps to Quality Code – Step 7

Automated Tests

This is step 7, the final step in a series of articles, starting with Seven Steps To Quality Code – Introduction.

The biggest change in my development practices over the last few years has been the introduction of Test Driven Development (TDD).  I remember the first project in which I used it and the feeling it gave me.  A feeling of confidence.  Confidence that the code I had written worked and confidence that I could make changes without worrying about something else breaking unknowingly.

I used to be an assembly language programmer.  In that world, there were no helpful debuggers other than the ability to dump the contents of the registers to the screen or maybe post a “got here” message.  When I  moved to Visual Basic, I was blown away by the fact that I could step through code and inspect the values of variables.  This was simply amazing and of course changed the way I developed software.

Discovering Test Driven Development felt just the same.  A revolution in how I would code.

So now, a decade after those first steps, I am a big fan of TDD.  It suits the way I work (mostly) and really helps to lock in quality.

However, this step in the “seven steps” series is not really about TDD.  The series is more about improving what you already have.  TDD is great for a new project, but it’s too late for a project that’s already in existence.

I’m going to suggest a few very simple actions to take that will allow you to get some of the benefits of TDD.  If you haven’t tried TDD or any automated testing before, it will allow you to get a feel for it in a comfortable environment (i.e. your existing code) without the pressure of going all out on a drive to introduce TDD.

Perhaps at this point an overview of some terminology will be useful.  The table below explains a few terms.

Automated Testing – A broad term that covers any form of testing where test cases are automatically run against the code and verified for correctness.

Unit Testing – Traditionally, unit testing meant a developer performing their own manual tests on a module/component or other "unit" of software, whether formally or informally. In more modern parlance, it is often used interchangeably with "Automated Testing" above but strictly speaking, the two are different. On one extreme, unit testing means automated testing of a single method - perhaps passing in a series of inputs and verifying that the outputs or the behaviour are correct. Often this will be through the use of "Mocks" (see below) to isolate the unit from any dependencies. Purists will insist that this is the only true definition. Alternatively, unit testing may mean automated testing of a single class and its behaviour or perhaps a complete software component, though it is more likely that this would be minimal testing of a third party product.

Mocks – Martin Fowler has written an interesting article called "Mocks Aren't Stubs" which describes, far better than I ever could, what a mock is. For our purposes here though, we will consider a mock as some sort of software 'thing' used in place of some other software 'thing' in order to isolate the unit being tested and optionally verify behaviour.

Test Driven Development (TDD) – This is where tests are written before the code and the code is written purely to make a test pass.

Red-Green-Refactor – This is a phrase that describes the process of TDD. 'Red' refers to the usual indicator against a test showing that it has failed (a red cross perhaps), 'Green' refers to the usual indicator to show that a test has passed (a green tick perhaps) and 'Refactor' refers to the task performed after a test passes, which is to revisit the code and improve it without changing its external behaviour. So, TDD is a repeating cycle of - write a failing test - make it pass - improve the code.

Integration Test – An integration test is a test that checks that several "units" work together. Sometimes referred to as "end to end" tests because of the desire to verify that the system under test functions as a whole from the beginning of a process to the end.

Test Prompted Development – A phrase I use that describes a dangerous situation that can often occur in Test Driven Development. I will blog about this in the near future.

What To Do

The suggestions I have are very simple and should start to show benefits very quickly.

Test Bugs

Whenever a bug is discovered in code that does not have associated tests, write a test (or tests) that reproduces that bug and fix the bug by making the test pass.  Leave the test in the project permanently so that it is possible to verify whenever necessary that the bug has been fixed and has not returned.

This is really no different to how a developer often fixes bugs without automated testing. The really powerful part of it is retaining the test and incorporating it into the project.
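As a sketch of what such a test might look like, suppose a bug report says a price calculator mishandles zero-rated items. The PriceCalculator type and the xUnit usage here are hypothetical - any test framework and naming convention will do:

```csharp
using Xunit;

public class PriceCalculatorTests
{
    // Reproduces a reported bug (hypothetical example): removing tax
    // from a zero-rated item should leave the price unchanged.
    // This test stays in the project permanently, guarding the fix.
    [Fact]
    public void RemoveTax_ZeroRatedItem_ReturnsOriginalPrice()
    {
        var calculator = new PriceCalculator();

        var result = calculator.RemoveTax(grossPrice: 100m, taxRate: 0m);

        Assert.Equal(100m, result);
    }
}
```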

Test New Helper Functions

The easiest kind of code to test is often the little helper functions.  If you are writing a new function to do something like string handling (perhaps concatenating first name, middle name and surname or something like that), write some tests.  Verify that the result is correct when the input values are null or empty, when boundaries are reached or breached.  For many functions of the kind I am talking about, you can produce a really solid set of tests that will give you absolute confidence that the code is correct.

Again, make sure the tests are retained as part of the project.
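For instance, a hypothetical NameHelper.FullName method that concatenates first, middle and last names might be pinned down like this (the helper and its expected behaviour are illustrative, and xUnit's data-driven [Theory] style keeps the edge cases compact):

```csharp
using Xunit;

public class NameHelperTests
{
    // NameHelper.FullName is assumed to join the non-empty parts
    // of a name with single spaces, treating null and "" the same.
    [Theory]
    [InlineData("John", "Paul", "Smith", "John Paul Smith")]
    [InlineData("John", null, "Smith", "John Smith")]
    [InlineData("John", "", "Smith", "John Smith")]
    [InlineData(null, null, null, "")]
    public void FullName_CombinesNameParts(
        string first, string middle, string last, string expected)
    {
        Assert.Equal(expected, NameHelper.FullName(first, middle, last));
    }
}
```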

Incorporate Tests into Your Build Process

If you have any form of automated build, include your tests in that process.  All build automation systems will support this in an easily implemented way.  Ensure that test failures result in build failures and ensure that the failures are addressed as a priority.


As I said at the start of this article, automated testing provides amazing benefits and allows you to be really confident that your code performs as you expect.  A full TDD approach is pretty hard, but if you start by dipping your toe in the water as I have suggested, you can start with little risk and learn the techniques slowly.

Well, that is the seven steps finished.  If you are working in a development environment that needs this information then I hope that it has been useful.  Please let me know your experiences of putting this stuff into practice.  If you need any assistance, get in touch – I’ll be happy to help.

Seven Steps to Quality Code – Step 6

Code Reviews (Peer Reviews)

This is step 6 in a series of articles, starting with Seven Steps To Quality Code – Introduction.

Code reviews seem to me to be the aspect of software development that every organisation incorporates into their “standard process”.  Very few organisations appear to actually do it.  Unfortunately, when deadlines are tight and the pressure’s on, the code review is the first thing to get dropped.  All too often, when deadlines aren’t so tight and the pressure is less, it is still the first thing to get dropped.

In my view, this is a great shame as it is a real opportunity to reap massive benefits.  Benefits such as:

  • Encourages better code
  • Catching bugs early
  • Spreading knowledge about code
  • Improving code reuse
  • Improving maintainability
  • Validating requirements are met
  • Reinforcing, creating or evolving team standards

These lead to the more general benefits that we are trying to achieve by increasing code quality:

  • Reduced costs
  • More reliable software
  • Happier customers

I’ll discuss each of the direct benefits individually.

Encourages Better Code

If you are writing some code that you know will never be seen by anyone else, there will always be the temptation to take a few shortcuts – it doesn’t matter that I spelt that variable name wrong, it’s OK that I just copied and pasted that bit of code.  However, if you are coding in a situation where you know that someone else will be looking at your code, you will take more care to do it right.  No one wants to look like the lazy guy that couldn’t be bothered.

Catching Bugs Early

As we already know, the earlier a bug is caught the cheaper and easier it is to fix.  At code review time, fixing a bug is an awful lot easier and cheaper than when the software is live and in use by multiple customers.

Spreading Knowledge About Code

Keeping code hidden and secret is a “Bad Thing”.  Code should be exposed and available to other members of the team so that several people understand it and therefore can maintain and/or extend it.

The term “Bus Factor” is used to indicate the extent to which knowledge is spread in a team – it is the number of people that would need to be hit by a bus before a project could not continue.  A high bus factor is therefore something to aim for and code reviews will help in this regard, reducing the risk in the project.

Improving Code Reuse

Spreading knowledge also leads to improved code reuse.  When a reviewer sees the same pattern of code that has been used elsewhere, this can be raised as a review point and refactored to make use of a shared method or component or whatever.

More subtly, during a code review the participants will become aware of code that may be reused in the future.  For example, when Sally reviews John’s code she sees that he has created a method to remove the tax from a sales price.  Next time she wants to do the same thing, she’ll know that the feature has already been written.

Improving maintainability

For code to be maintainable, it has to be understandable.  A code review is an excellent way to determine if this is the case.  If another developer can’t make sense of the code that has been written, then it should be refactored to be more understandable.

Validating Requirements are Met

The code review is an ideal opportunity to check that the code under review actually does what it is supposed to do.  Just like catching bugs earlier, if this kind of defect can be caught earlier, they are easier and cheaper to fix.

Reinforcing, Creating or Evolving Team Standards

The more that code is shared and discussed, the more consensus will form around the standards that the team expects.  When code doesn’t adhere to the standards, it can be discovered and corrected or perhaps it can prompt a change in the standard.  This cannot happen without reviews.

Relationship to the Other Steps in the “Seven Steps”

This is step 6 in the Seven Steps and is heavily dependent on the previous steps.  Traditionally, code reviews are often peppered with lots of points about stylistic differences, casing issues etc.  Because these sort of issues will have been addressed automatically by StyleCop and other detectable issues will have been addressed by Code Analysis, a review can focus on more important things like efficiency and understandability and whether or not the code actually satisfies the requirements.

OK, So How Do We Do It?

A critical thing to remember is that it’s an area where diplomacy and sensitivity need to play a big part.  A lot of the time, there will be deep emotional attachment to the code that has been written.  Recognise that this is the case – you know how it feels when someone criticises that piece of work that you’ve been lovingly crafting for the last 2 weeks.

So, in light of this, I would suggest that it isn’t made into a big event.  It shouldn’t be the whole team doing the review.  This is a recipe for stress, humiliation and unhappy developers.  Make it a conversation.  A conversation between 2 people about the code in front of you.  Whenever possible, make it in person, sharing a workstation.

It is better to view it as working together to produce better software, rather than viewing it as a superior performing an inspection on a subordinate’s work.

It’s also important to remember that when reviewing someone else’s code, the fact that you “would have done it differently” doesn’t necessarily mean the code is wrong.  In fact, exposure to different ways of thinking or of solving the problem is one of the extra benefits of code reviews.


We’re nearly there.  We have one last step to discuss – automated tests.

Seven Steps to Quality Code – Step 5

Code Analysis – Further Rules

This is step 5 in a series of articles, starting with Seven Steps To Quality Code – Introduction.

This step is simply a case of turning the ratchet a little more and locking in further quality gains.

Previously, my suggestion for existing projects has been to set the Code Analysis rule set to “Minimum recommended rules” (or in Visual Studio versions after 2010, “Managed Recommended Rules”) in order to keep the number of rule violations to a minimum.  Now I am going to suggest that the rules are tightened by using a rule set or combination of rule sets that check for further violations.

Ultimately, our goal is to have the “All Rules” rule set enabled on all code, but in practice this may not be achievable for legacy code.  What we can do, is work towards this so that we can catch the more important issues in our code.  For example, the effort of implementing the globalisation rules in legacy code is not going to give you much bang for buck (unless of course globalisation has become a required feature!).

A great feature of Code Analysis is that we can progressively add further rules in order to increase the range of issues that are checked.  The rule sets that are available, however, do not give us a sequential order that we can progressively move through because particular rule sets focus on particular issues.  We can achieve the same effect though by using the option of progressively applying multiple rule sets.  You can do this by selecting the “Choose multiple rule sets” option on the Code Analysis tab of a project’s properties screen, as shown in the screenshot below:

[Screenshot: Selecting the Multiple Rule Sets Option on the Code Analysis tab]

A comprehensive list of the rule sets available can be found on the Microsoft site:

Visual Studio 2010 Code Analysis Rule Sets
Visual Studio 2013 Code Analysis Rule Sets

To avoid getting swamped by too many violations and to allow a “small bite at a time” approach, I suggest considering 1 project (i.e. csproj file) at a time and performing the following procedure:

  1. Run the “All Rules” rule set on the project.  If you consider the number of violations to be manageable, use the “All Rules” rule set.  This is the perfect situation to be in, ignore the further steps in this list and proceed to fix the violations.
  2. If “All Rules” is looking like a step too far, progressively add in the following rule sets one at a time, fixing the violations and checking in as you go (the Visual Studio 2010 name is given first, followed by its Visual Studio 2013 equivalent):
    • Microsoft Basic Correctness Rules / Basic Correctness Rules rule set for managed code
    • Microsoft Basic Design Guideline Rules / Basic Design Guideline Rules rule set for managed code
    • Microsoft Extended Correctness Rules / Extended Correctness Rules rule set for managed code
    • Microsoft Extended Design Guideline Rules / Extended Design Guidelines Rules rule set for managed code
  3. At this point, you may be in a position to apply the “All Rules” rule set and have a manageable number of violations.  Alternatively, if that still produces too many violations, you may have a particular need for one of the remaining focused rule sets (security or globalization rules) and wish to apply one of those. (Actually, by the time you get to this stage the likelihood is that if you have a lot of violations under “All Rules” they will all be globalization related).
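The progressive layering described above can also be captured in a custom rule set file and checked into source control. A sketch, assuming the standard Microsoft rule set file names (verify these against your Visual Studio installation):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Progressive Rules"
         Description="Recommended rules plus basic correctness and design guideline rules."
         ToolsVersion="10.0">
  <!-- Each Include layers another built-in rule set on top. -->
  <Include Path="minimumrecommendedrules.ruleset" Action="Default" />
  <Include Path="basiccorrectnessrules.ruleset" Action="Default" />
  <Include Path="basicdesignguidelinerules.ruleset" Action="Default" />
</RuleSet>
```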


Once this step is followed for a solution, you will be in a great place for continuing development in a high quality environment with automatic guards in place to keep it there.

In step 6 we’ll look at peer reviews – something that everyone knows about but all too often they are the easiest thing to discard in a development process.