
Effectively stubbing remote HTTP service dependencies with HttpClient Interception

Posted on Thursday, 21 May 2020

Recently I've been reading the new Software Engineering at Google book where I've found the chapters on testing to be particularly interesting. One paragraph that especially resonated with my own personal experiences with testing was the following excerpt:

Excerpt from the Software Engineering at Google book

This chapter got me thinking a lot about my preference for testing units of behaviour as opposed to implementation details, and as someone who works with a lot of services that communicate via HTTP, there's one particular library that's helped me time and time again - so much so that I feel it deserves its own post.

Introducing HttpClient Interception

From a high level, HttpClient Interception is a .NET Standard library designed to intercept server-side HTTP dependencies, such as the calls your application makes to external APIs.

Prior to learning about it, the two patterns I most frequently used to stub responses from an upstream API were either to create a custom HttpMessageHandler (which would return the desired response, and/or assert against a given input), or to create an interface over the HttpClient. Both approaches tend to be painful: the former is awkward because HttpMessageHandler has no interface to mock, and the latter suffers in much the same way.

HttpClient Interception, the brainchild of Martin Costello, aims to alleviate these pains, making black-box testing of code that depends on one or more external HTTP APIs much less intrusive.

Let's take a look

HttpClient Interception offers a convenient builder pattern for returning a desired response for a given input. Let's take a look at what a simple test might look like:
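
Something along these lines - a minimal sketch using the library's fluent API, where the host and path are illustrative:

// using JustEat.HttpClientInterception;
// using System.Net;

var builder = new HttpRequestInterceptionBuilder()
    .Requests()
    .ForGet()
    .ForHttps()
    .ForHost("api.example.com")
    .ForPath("orders")
    .Responds()
    .WithStatus(HttpStatusCode.InternalServerError);

var options = new HttpClientInterceptorOptions();
builder.RegisterWith(options);

var client = options.CreateHttpClient();

// This request never leaves the client; the stubbed 500 is returned.
var response = await client.GetAsync("https://api.example.com/orders");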

In this example we create an instance of HttpRequestInterceptionBuilder, then specify the request we wish to match against and the response we'd like to return. We then register the builder against our HttpClientInterceptorOptions type and use it to create an instance of an HttpClient.

Now any requests that match our builder's parameters will return the response defined (a 500 response code) without the request leaving the HttpClient.

Missing Registrations

At this point it's worth calling out the ThrowOnMissingRegistration property.

Setting ThrowOnMissingRegistration to true ensures no request leaves an HttpClient created or modified by HttpClient Interception; any unmatched request throws instead. This gives you visibility over unmatched requests - leaving ThrowOnMissingRegistration as false could mean a request gets handled by the real upstream service, causing a false positive or quietly affecting your test without you being aware of it.
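
In code this is simply a property on the options type:

var options = new HttpClientInterceptorOptions
{
    ThrowOnMissingRegistration = true
};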

Naturally it all depends on your use case; in some instances letting unmatched requests through may actually be the desired behaviour, so some consideration is required.

Matching multiple requests

If you want to stub the response to more than one HTTP request you can register multiple builders like so:
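
A sketch of what that might look like, again with illustrative hosts and paths:

var notFound = new HttpRequestInterceptionBuilder()
    .Requests()
    .ForGet()
    .ForHttps()
    .ForHost("api.example.com")
    .ForPath("customers/123")
    .Responds()
    .WithStatus(HttpStatusCode.NotFound);

var serverError = new HttpRequestInterceptionBuilder()
    .Requests()
    .ForGet()
    .ForHttps()
    .ForHost("api.example.com")
    .ForPath("orders")
    .Responds()
    .WithStatus(HttpStatusCode.InternalServerError);

var options = new HttpClientInterceptorOptions();
notFound.RegisterWith(options);
serverError.RegisterWith(options);

var client = options.CreateHttpClient();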

Here I've registered two separate endpoints on the same HttpClientInterceptorOptions object, which is then used to create our HttpClient. Requests to either of the two registered endpoints will now result in either a Not Found (404) or an Internal Server Error (500) response.

Creating an HttpMessageHandler

Not all libraries will let you supply a custom HttpClient; some may only offer the ability to add a new HttpMessageHandler. Thankfully HttpClient Interception can also create an HttpMessageHandler that can be passed to an HttpClient.

For example:
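
A rough sketch, reusing an options instance configured as before:

var options = new HttpClientInterceptorOptions();

// ...register builders as before...

var handler = options.CreateHttpMessageHandler();
var client = new HttpClient(handler);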

Using HttpClient Interception with ASP.NET Core and IHttpClientFactory

Now we've seen what HttpClient Interception can offer, let's take a look at how we could use it to test an ASP.NET Core application that depends on an upstream API. This example is taken straight from the sample application within the HttpClient Interception repository.

Imagine we have an application that makes a remote call to GitHub’s API via Refit. This is what the controller action might look like:
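
The shape of it is something like the following sketch (the Refit interface, model and route here are illustrative, not the sample's exact code):

public interface IGitHub
{
    [Get("/users/{username}")]
    Task<GitHubUser> GetUserAsync(string username);
}

[ApiController]
public class GitHubController : ControllerBase
{
    private readonly IGitHub _gitHub;

    public GitHubController(IGitHub gitHub)
    {
        _gitHub = gitHub;
    }

    [HttpGet("api/github/users/{username}")]
    public async Task<ActionResult<GitHubUser>> GetUser(string username)
        => Ok(await _gitHub.GetUserAsync(username));
}

public class GitHubUser
{
    public string Login { get; set; }

    public string Name { get; set; }
}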

To test this we can utilise HttpClient Interception to stub the response from GitHub's API with the following fixture (based on the common WebApplicationFactory approach documented here).
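
A condensed sketch of that fixture (the sample's real fixture does a little more):

public class HttpServerFixture : WebApplicationFactory<Startup>, IHttpMessageHandlerBuilderFilter
{
    public HttpClientInterceptorOptions Interceptor { get; } =
        new HttpClientInterceptorOptions { ThrowOnMissingRegistration = true };

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        // Register this fixture as a message handler builder filter so that
        // every HttpClient produced by IHttpClientFactory passes through
        // the interceptor.
        builder.ConfigureServices(services =>
            services.AddSingleton<IHttpMessageHandlerBuilderFilter>(this));
    }

    public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
    {
        return builder =>
        {
            next(builder);
            builder.AdditionalHandlers.Add(Interceptor.CreateHttpMessageHandler());
        };
    }
}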

Notice that when overriding ConfigureWebHost we're taking advantage of the IHttpMessageHandlerBuilderFilter interface to add an additional HttpMessageHandler (created by HttpClient Interception) to any HttpClient created via the IHttpClientFactory API.

It's worth noting that you can also register the HttpMessageHandler using the ConfigurePrimaryHttpMessageHandler method (see the docs here), which lives on the IHttpClientBuilder interface.

Now we can easily control the responses from the GitHub API from within our tests, without having to mock any internal implementation details of our application.
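
For instance, a test might look something like this (the stubbed payload and route are illustrative):

public class GitHubApiTests : IClassFixture<HttpServerFixture>
{
    private readonly HttpServerFixture _fixture;

    public GitHubApiTests(HttpServerFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public async Task ReturnsUserFromGitHub()
    {
        // Arrange - stub the upstream GitHub API call
        new HttpRequestInterceptionBuilder()
            .Requests()
            .ForHttps()
            .ForHost("api.github.com")
            .ForPath("users/octocat")
            .Responds()
            .WithJsonContent(new { login = "octocat", name = "The Octocat" })
            .RegisterWith(_fixture.Interceptor);

        var client = _fixture.CreateClient();

        // Act
        var response = await client.GetAsync("/api/github/users/octocat");

        // Assert
        response.EnsureSuccessStatusCode();
    }
}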

On Closing

Hopefully this post has demonstrated just how helpful and versatile HttpClient Interception can be, and how it can help you create maintainable tests that don't break the moment you change an implementation detail. I'd highly recommend you check it out, as this post only scratches the surface of what it can do.

I'd like to close this post with a quote from Kent Beck:

"I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence."

Consider Chesterton's Fence Principle Before Changing That Code

Posted on Thursday, 02 Apr 2020

Let's begin with a short story.

The sun is shining and you're walking through a wood when you happen upon a quiet, desolate road. As you walk along the road you encounter a rather peculiarly placed fence running from one side of the road to the other. "This fence has no use" you mutter under your breath as you remove it.

Removing the fence could very well be the correct thing to do, nonetheless you have fallen victim to the principle of Chesterton's Fence. Before removing something we should first seek to understand the reasoning and rationale for why it was there in the first place:

"In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."

As software developers we love writing code. Coupled with our tendency to want to prove ourselves, we are often quick to identify code that we hastily deem "bad" or "wrong", especially when diving into an old codebase.

What we sometimes forget is that software isn't designed and built in a bubble; it's a by-product of a series of complex interactions of events and factors.

Instead of jumping to conclusions we should first control our impulse to simplify the things around us and be more cognizant of possible context or history, trying to understand why the 'fence is positioned across the road' in the first place. This is especially true for code that looks unusual or out of place. You may discover that the unusual approach has a valid reason for being designed the way it is.

Similarly, in Systems Thinking, the Local Rationality Principle, when summarised, states:

"People do things that make sense to them given their goals, understanding of the situation and focus of attention at that time. Work needs to be understood from the local perspectives of those doing the work"

By going the extra mile you can gain a much deeper understanding of why the code or system you're working on is built the way it is, what led the developer to do something a particular way, or what conditions led them to choose a particular approach - because learning is not just about understanding new things, but also understanding the design decisions and trade-offs made in existing things.

A couple of nice Tuple use cases

Posted on Monday, 16 Mar 2020

C# has had a Tuple type (System.Tuple) for some time now; however, a recurring theme was that it generally lacked the utility of tuples in languages with native support. This was partly due to the fact that it was really just a class utilising generics, with no real language or runtime support, resulting in rather limited adoption.

Since C# 7 and the introduction of the ValueTuple type (a value type, so it spares the GC some work) and tuple deconstruction, things seem to have changed, and I've noticed more and more libraries and developers utilising them, myself included.

With that in mind I'd like to highlight a couple of creative use cases I've seen people using that I'd never considered before:

Tuple constructors

I first learned about this pattern at a recent Jon Skeet talk I attended at the .NET South West meet up I co-organise. Jon was demoing some new C# 8 language features and stopped to talk about how he'd adopted this particular pattern.

Given a constructor along the lines of the following (a hypothetical Person class), we can make it a little shorter, without losing much readability, by using tuples:
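
// The typical assignment-per-field constructor:
public Person(string name, int age)
{
    _name = name;
    _age = age;
}

// The tuple version of the same constructor (a sketch of the pattern Jon showed):
public Person(string name, int age) => (_name, _age) = (name, age);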

When we look at the lowered version of the code using a tool such as SharpLab, we can see that the tupled version is lowered to exactly the same code as the version before, which means there's no cost to using this pattern.

Tuple Methods

As you might expect, you can also use the same approach for methods. Again this incurs no additional cost:
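
// An illustrative example: assigning two fields in one expression-bodied method.
public void Rename(string firstName, string lastName) =>
    (_firstName, _lastName) = (firstName, lastName);

// Tuples also make light work of returning multiple values:
public (string Name, int Age) GetNameAndAge() => (_name, _age);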

Tuple / Deconstructor loops

Another pattern I stumbled upon is using tuples within a loop. For instance, take the following dictionary of people (the hypothetical Person class again); thanks to KeyValuePair now featuring a Deconstruct method, we can unpack its contents in a few ways:
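
var people = new Dictionary<string, Person>
{
    ["alice"] = new Person { Name = "Alice", Age = 35 },
    ["bob"] = new Person { Name = "Bob", Age = 40 }
};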

Traditionally we'd have done this:
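
foreach (var pair in people)
{
    Console.WriteLine($"{pair.Key}: {pair.Value.Name} is {pair.Value.Age}");
}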

With the cost efficiency of the ValueTuple type we can start to get creative:
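
// KeyValuePair<TKey, TValue>.Deconstruct (.NET Core 2.0 onwards) lets us
// unpack the key and value straight into named variables:
foreach (var (key, person) in people)
{
    Console.WriteLine($"{key}: {person.Name} is {person.Age}");
}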

At this point we can go one further by adding a Deconstruct method in our Person class, allowing us to deconstruct the properties into a tuple:
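
// A sketch of what that Person class might look like:
public class Person
{
    public string Name { get; set; }

    public int Age { get; set; }

    public void Deconstruct(out string name, out int age) =>
        (name, age) = (Name, Age);
}

// Now the value can be deconstructed too:
foreach (var (key, (name, age)) in people)
{
    Console.WriteLine($"{key}: {name} is {age}");
}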

Benchmarking the above patterns

For peace of mind, let's benchmark the aforementioned patterns with BenchmarkDotNet to see what impact they have:

|                Method |     Mean |   Error |  StdDev |  Gen 0 | Gen 1 | Gen 2 | Allocated |
|---------------------- |---------:|--------:|--------:|-------:|------:|------:|----------:|
|                  List | 142.9 ns | 2.92 ns | 6.17 ns | 0.0439 |     - |     - |     184 B |
| DeconstructionToTuple | 138.8 ns | 2.21 ns | 2.07 ns | 0.0439 |     - |     - |     184 B |

As you can see they're pretty much equal in terms of cost, with the added benefit of a more expressive syntax.

Conclusion

Hopefully the patterns highlighted above are as new to you as they were to me, and have sparked some more interest in the utility of the ValueTuple type, especially when used in conjunction with KeyValuePair and/or the Deconstruct method. I can only assume we'll start to see more patterns like this emerge in time as Record types make their way into C#.

Chaos Engineering your .NET applications using Simmy

Posted on Friday, 03 Jan 2020

One package I've been using with great success recently is Simmy, so much so that I feel it deserves its very own blog post.

What is Simmy and why should you use it?

Simmy is a fault-injection library that integrates with Polly, the popular .NET transient-fault-handling library. Its name comes from the Simian Army toolset, a suite of tools created by engineers at Netflix who recognised that designing a fault tolerant architecture wasn't enough - you have to exercise it, normalising failure to ensure your system can handle it when it inevitably happens.

This idea isn't new. Experts in the resiliency engineering field such as Dr. Richard Cook have published multiple papers around this topic whilst studying high-risk sectors (such as traffic control and health care). He summarises failure quite nicely in his Working at the Center of the Cyclone talk when he said:

"You build things differently when you expect them to fail. Failure is normal, the failed state is the normal state".

Dr. Richard Cook (amongst others in the resiliency engineering industry) proposes that in a complex system failure is the normal state, and it's humans that build mechanisms (sociological, organisational or technical) to ensure systems continue to operate. Without human intervention, failure will happen.

Dr. Richard Cook's paper How Complex Systems Fail goes further by saying:

"The high consequences of failure lead over time to the construction of multiple layers of defence against failure. These defences include obvious technical components (e.g. backup systems, ‘safety’ features of equipment) and human components (e.g. training, knowledge) but also a variety of organisational, institutional, and regulatory defences (e.g. policies and procedures, certification, work rules, team training). The effect of these measures is to provide a series of shields that normally divert operations away from accidents."

This shift in perspective is one of the tenets of Chaos Engineering and why Netflix built the Simian Army - to regularly introduce failure in a safe, controlled manner, forcing their engineers to consider and handle those failures as part of regular, everyday work. So when those failures do happen, they won't even notice.

With that in mind let's take a look at how we can use Simmy to regularly test our transient fault-handling mechanisms such as timeouts, circuit breakers and graceful degradations.

Simmy in Action

If you're familiar with Polly then Simmy won't take long to pick up as Simmy's fault injection behaviours are Polly policies, so everything fits together nicely and instantly feels familiar.

Let's take a look at Simmy's current failure modes.

Simmy's Failure Modes

At the time of writing Simmy offers the following types of failure injection policies:

Fault Policy

A fault policy can inject exceptions, or substitute results. This gives you the ability to control the type of result that can be returned.

For instance, the following example causes the chaos policy to throw a SocketException with a probability of 5% when enabled:

var fault = new SocketException(errorCode: 10013);

var policy = MonkeyPolicy.InjectFault(
	fault, 
	injectionRate: 0.05, 
	enabled: () => isEnabled()  
	);

Latency Policy

Like Netflix's Latency Monkey, a latency policy enables you to inject latency into executions such as remote calls before the calls are made.

var policy = MonkeyPolicy.InjectLatency(
	latency: TimeSpan.FromSeconds(5),
	injectionRate: 0.1,
	enabled: () => isEnabled()
	);

Behaviour Policy

Simmy's Behaviour Policy enables you to invoke any custom behaviour within your system (such as restarting a VM, or executing a custom call or script) before a call is placed.

For instance:

var policy = MonkeyPolicy.InjectBehaviour(
	behaviour: () => KillDockerContainer(), 
	injectionRate: 0.05,
	enabled: () => isEnabled()
	);

A trivial example

Once we've defined our chaos policy we can use the standard Polly APIs to wrap our existing Polly policies.

var latencyPolicy = MonkeyPolicy.InjectLatency(
	latency: TimeSpan.FromSeconds(5),
	injectionRate: 0.5,
	enabled: () => isEnabled()
	);

...

PolicyWrap policies = Policy.WrapAsync(timeoutPolicy, latencyPolicy);

In the example above we introduce 5 seconds of latency into 50% of our calls which, depending on our timeout policy, will force timeouts within our application, making us more aware of how our application handles them.

Registering the Simmy policy as the innermost policy means it'll be invoked just before the outbound call.

Using Polly registries

When implementing Simmy's chaos policies you can use the aforementioned WrapAsync method, but you'll probably want to introduce them with as little change to your existing policy code as possible. This is why I'd recommend using Polly's PolicyRegistry type: you can configure your policies as normal, then easily wrap them with your Simmy policies:

// Startup.cs

var policyRegistry = services.AddPolicyRegistry();
policyRegistry["LatencyPolicy"] = GetGetLatencyPolicy();

...

if (env.IsDevelopment())
{
    // Wrap every policy in the policy registry in Simmy chaos injectors.
    var registry = app.ApplicationServices.GetRequiredService<IPolicyRegistry<string>>();
    registry?.AddChaosInjectors();
}
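
AddChaosInjectors isn't something Simmy ships; it's an extension method you write yourself over the registry. A minimal sketch, assuming a single known policy key and hard-coded chaos settings, might look like this:

public static class ChaosInjectorExtensions
{
    public static void AddChaosInjectors(this IPolicyRegistry<string> registry)
    {
        // Fetch the existing policy and wrap it so the Simmy policy sits
        // innermost, running just before the outbound call.
        var existing = registry.Get<IAsyncPolicy>("LatencyPolicy");

        var chaosLatency = MonkeyPolicy.InjectLatencyAsync(
            latency: TimeSpan.FromSeconds(5),
            injectionRate: 0.1,
            enabled: () => true);

        registry["LatencyPolicy"] = existing.WrapAsync(chaosLatency);
    }
}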

Testing failure modes within integration or end-to-end style tests

Prior to learning about Simmy I used to test Polly policies via unit tests, and in a lot of cases this is fine. However, when you want to exercise these policies as part of an integration or end-to-end style test, to understand how your application handles slow-running requests or timeouts, things can get a little trickier, as such conditions are hard to replicate in an automated test.

Using Simmy's chaos policies we're now able to invoke those behaviours from the outside, which is where Simmy really shines.

Taking it further

Regularly and reliably testing some of these fault-tolerance patterns in a real-world environment (such as staging, or dare I say it - production!) can be challenging. If you're on a platform such as Kubernetes then introducing failure into your system can be trivial, as there are plenty of tools that will do it for you; for everyone else it can be a challenge. Simmy has opened the door to making this much easier - so much so that at one point we had it enabled in our staging environment to introduce a healthy amount of failure into our application.

As Adrian Cockcroft said in his Managing Failure Modes in Microservice Architectures talk:

"If we change the name from chaos engineering to continuous resilience, will you let us do it all the time in production?"

Wrapping it up

Hopefully this post has demonstrated the value in Simmy. Chaos Engineering is a really interesting area and it's great to see tools like this appearing to support running experiments in your .NET applications. Hopefully over the coming years we'll see more tooling emerge, enabling us to introduce failure scenarios into our applications as part of regular work.

Managing your .NET Core SDK versions with the .NET Install SDK Global Tool

Posted on Tuesday, 03 Sep 2019

During my day-to-day programming activities I regularly find myself switching between various .NET Core solutions. At work we have a multitude of .NET Core microservices/microsites, and at home I enjoy contributing to OSS projects and reading other people's code. As I switch between these solutions I regularly come across a project that uses a global.json file to define the version of the .NET Core SDK required.


What is the global.json file?

For those that might not be familiar with the global.json file, here's an excerpt from the docs:

The global.json file allows you to define which .NET Core SDK version is used when you run .NET Core CLI commands. Selecting the .NET Core SDK is independent from specifying the runtime your project targets. The .NET Core SDK version indicates which versions of the .NET Core CLI tools are used. In general, you want to use the latest version of the tools, so no global.json file is needed.
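
For reference, a typical global.json pins the SDK version like so (the version number here is illustrative):

{
  "sdk": {
    "version": "2.2.402"
  }
}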

You can read more about it here, so let's continue.


Upon stumbling on a project with a global.json file I'd go through the manual process of locating the correct version of the SDK to download then installing it. After a number of times doing this, as most developers would, I decided to remove the friction by creating a .NET Core global tool to automate this process.

The result is the .NET Core Install SDK Global Tool, which you can also find on NuGet here.

Note: Before I continue, a huge thanks to Stuart Lang who was running into the same frustrations, noticed I'd started this tool and contributed a tonne towards it.

.NET Core Install SDK Global Tool

If you want to give it a try you can install the global tool on your machine by running the following command (assuming you've got .NET Core installed).

$ dotnet tool install -g installsdkglobaltool

Once installed the global tool has the following features:

Install .NET Core SDK based on global.json file

If you navigate to a folder with a global.json file in it and run the following command:

$ dotnet install-sdk

The global tool will check the contents of the global.json file, then download and start the installation of the defined version of the .NET Core SDK.

Install the latest preview of the .NET Core SDK

It's always fun playing with the latest preview releases of the .NET Core SDK, so to save you the time of hunting down the latest version you can simply run:

$ dotnet install-sdk --latest-preview

This will download and start the installation of the latest preview version of the .NET Core SDK.

Is this suitable for build/CI environments?

No, certainly not at this moment in time. This global tool has been built with a consumer focus, so it does not install the SDK in a headless fashion. Instead it launches the installer and still gives you the same control you're used to (such as choosing the install location).

If you're interested in what the whole experience looks like then check out the video below:

Until next time!

Approval Testing your Open API/Swagger Documents

Posted on Wednesday, 28 Aug 2019

The team I work in at Just Eat have a number of HTTP-based APIs which are consumed by other components, some internally, others externally. Like a lot of people building HTTP-based APIs we use Swagger for both developer experimentation and documentation of the endpoints exposed. This is done via the Swagger UI, which uses an Open API document (formerly Swagger docs) that describes the endpoints and resources in JSON form.

As our APIs evolve it's imperative that we don't unintentionally break any of these public contracts as this would cause headaches for consumers of our service.

What we need is visibility of intentional or unintentional changes to this contract; this is where approval testing has helped.

What is approval testing?

Approval testing is a method of testing that not many are familiar with. If I were to hypothesise why, I'd say it's because of its narrow application, especially when contrasted with other, more common forms of testing.

Assertion testing

We're all familiar with assertion-based testing - you structure your test in an arrange, act, assert flow: you first 'arrange' the test, then execute the action ('act'), then 'assert' the output.

For instance:

// Arrange
var child1 = 13;
var child2 = 22;

// Act
var age = CalculateAge(child1, child2);

// Assert
age.ShouldBe(35);

Approval testing

Approval testing follows the same pattern but with one difference: instead of asserting the expected return value given your set of input parameters, you 'approve' the output.

This slight shift changes your perspective on what a failing test means. In other words, with approval testing a failing test doesn't prove something has broken, but instead flags that the given output differs from the previously approved output and needs approving.

Actions speak louder than words so let's take a look at how we can apply this method of testing to our Open API documents in order to gain visibility of changes to the contract.

Test Setup

In this example I'll use a simple .NET Core based API that has Swagger set up with Swagger UI. The test will use the Microsoft.AspNetCore.Mvc.Testing package to run our API in-memory. If you're not familiar with testing this way then be sure to check out the docs if you want to try this yourself.

First let's take a look at our application:

Our ASP.NET Core Application

// ProductController.cs 

[Route("api/[controller]")]
[ApiController]
public class ProductController : ControllerBase
{
    [HttpGet]
    public ActionResult<IEnumerable<ProductViewModel>> Get()
    {
        return new List<ProductViewModel>
        {
            new ProductViewModel {Id = 1, Name = "Product 1"},
            new ProductViewModel {Id = 2, Name = "Product 2"},
            new ProductViewModel {Id = 3, Name = "Product 3"}
        };
    }
}
// Program.cs

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }
    
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();
}

// ProductViewModel.cs

public class ProductViewModel
{
    public string Name { get; set; }

    public int Id { get; set; }

    public string Description { get; set; }
}
// Startup.cs

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
        });
        
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        ...
        
        app.UseSwagger();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
        });
    }
}

If we were to launch our API and navigate to /swagger/v1/swagger.json we should see a JSON document that includes information about the resources our API exposes.

Now we have a working API, let's look at our test setup.

Our Test Setup

As mentioned above, our test will use the Microsoft.AspNetCore.Mvc.Testing package for running our API in-memory. Why do we need to run it in-memory? Hold that thought and read on.

We'll use a simple xUnit fixture class that uses WebApplicationFactory<T> to run our API in memory.

// ApiFixture.cs

public class ApiFixture
{
    private readonly WebApplicationFactory<Startup> _fixture;

    public ApiFixture()
    {
        _fixture = new WebApplicationFactory<Startup>();
    }

    public HttpClient Create()
    {
        return _fixture.CreateClient();
    }
}

Now we've got our API and test infrastructure set up, this is where the magic happens. Below is a simple test that will perform the following steps:

  1. Run our ASP.NET Core API in memory and allow the tests to call it via an HttpClient.
  2. Make a GET request to the Open API document (Swagger docs) and parse the content into a string.
  3. Call .ShouldMatchApproved() on the string content.

What is ShouldMatchApproved()? Read on...

// OpenApiDocumentTests.cs

public class OpenApiDocumentTests : IClassFixture<ApiFixture>
{
    private readonly ApiFixture _apiFixture;

    public OpenApiDocumentTests(ApiFixture apiFixture)
    {
        _apiFixture = apiFixture;
    }
    
    [Fact]
    public async Task DocumentHasChanged()
    {
        // Arrange
        var client = _apiFixture.Create();

        // Act
        var response = await client.GetAsync("/swagger/v1/swagger.json");
        var content = await response.Content.ReadAsStringAsync();
        
        // Assert
        content.ShouldMatchApproved();
    }
}

Shouldly and .ShouldMatchApproved()

Shouldly is an open-source assertion framework that simplifies test assertions; it's one I frequently use and, as a result, contribute to.

One assertion method Shouldly provides that sets it apart from others is the .ShouldMatchApproved() extension, which hangs off .NET's string type. This extension method enables us to easily apply approval-based testing to any string. There are other libraries, such as ApprovalTests.Net, that support more complex use cases, but for this example Shouldly will suffice.

How does .ShouldMatchApproved() work?

Upon executing your test .ShouldMatchApproved() performs the following steps:

  1. First it checks to see if an approved text file (a *.approved.txt file) lives in the expected location on disk.

  2. If the aforementioned file exists, Shouldly will diff the contents of the file (containing the expected output) against the input (the actual output) of the test.

  3. If ShouldMatchApproved detects a difference between the expected and actual values then it will scan common directories for a supported diff tool (such as Beyond Compare, WinMerge or even VS Code).

  4. If it finds a supported diff tool it will automatically launch it, prompting you to approve the diff and save the approved copy to the location described in step 1.

  5. Once approved and saved you can rerun the test and it will pass.

From this point on the test will pass so long as no difference is detected between your test output and the approval file stored locally. If a change to your API does introduce a difference then the above process starts again.

One of the powerful aspects of this method of testing is that the approval file is committed to source control, meaning those diffs to the contract are visible to anyone reviewing your changes, whilst also keeping a history of changes to the public contract.

Demo

Alongside the source code for this post, I've published a quick demonstration video on YouTube which you can take a look at. The video starts by first creating our approved document; I then go on to make a change to a model exposed via the Open API document, approve that change, then rerun the test.

You can stop mocking ILogger

Posted on Saturday, 01 Jun 2019

Just a quick post - nothing technically challenging, but hopefully valuable to some nonetheless.

It all started a couple of days ago when I found myself browsing through the source code of Microsoft's Logging Abstractions library. For those that aren't familiar with this library, its aim (as the name suggests) is to provide a set of abstractions over the top of common logging APIs that have emerged over time (take ILogger, for instance).

As I was reading the source code I noticed the library contains an implementation of the common ILogger interface called NullLogger. "Great!" I thought, "I really don't like mocking and now I don't need to mock ILogger anymore!"

Initially I thought this was old news and I'd somehow missed the memo, so I swiftly moved on. But given the number of likes and retweets a tweet of the discovery received, I thought it would be worth writing a short post as a nod to its existence for anyone who hasn't seen it yet.

NullLogger.Instance

As you can see, the implementation of the class is very straightforward - it does nothing at all...

    public class NullLogger : ILogger
    {
        public static NullLogger Instance { get; } = new NullLogger();

        private NullLogger()
        {
        }

        public IDisposable BeginScope<TState>(TState state)
        {
            return NullScope.Instance;
        }

        public bool IsEnabled(LogLevel logLevel)
        {
            return false;
        }

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
        {
        }
    }

However it does mean that if the project I'm working on depends on this library (which most .NET Core projects will) I can use instances of NullLogger instead of mocking ILogger.

It doesn't stop at ILogger either; there's also an implementation of ILogger<T> (unsurprisingly called NullLogger<T>) and a NullLoggerFactory.
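
Usage is as simple as passing the relevant instance wherever the dependency is expected. For example, given a hypothetical MyService that takes an ILogger<MyService>:

var service = new MyService(NullLogger<MyService>.Instance);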

Great, thanks Microsoft!

Subcutaneous Testing in ASP.NET Core

Posted on Thursday, 14 Mar 2019

Having successfully applied subcutaneous testing to a number of projects, I was interested to see what options were available in ASP.NET Core. In this post we'll touch on what subcutaneous testing is, its trade-offs, and how we can perform such tests in ASP.NET Core.

What is Subcutaneous Testing?

I first learnt about Subcutaneous testing from Matt Davies and Rob Moore's excellent Microtesting - How We Set Fire To The Testing Pyramid While Ensuring Confidence talk at NDC Sydney. The talk introduces its viewers to various testing techniques and libraries you can use to ensure speed and confidence in your testing strategy whilst avoiding the crippling test brittleness that often ensues.

The term subcutaneous means "situated or applied under the skin", which translated to an application would mean just under the UI layer.

Why would you want to test under the UI later?

As Martin Fowler highlights in his post on Subcutaneous testing, such a means of testing is especially useful when trying to perform functional tests where you want to exercise end-to-end behaviour whilst avoiding some of the difficulties associated with testing via the UI itself.

Martin goes on to say (emphasis is my own, which we'll touch on later):

Subcutaneous testing can avoid difficulties with hard-to-test presentation technologies and usually is much faster than testing through the UI. The big danger is that, unless you are a firm follower of keeping all useful logic out of your UI, subcutaneous testing will leave important behaviour out of its test.

Let's take a look at an example...

Subcutaneous testing an ASP.NET Core application

In previous, non-Core versions of ASP.NET MVC I used to use a tool called FluentMvcTesting; however, with the advent of .NET Core I was keen to see what options were available for creating subcutaneous tests whilst leaning on some of the primitives that exist for bootstrapping one's application.

This investigation ultimately led me to the solution we'll discuss shortly, via the new .AddControllersAsServices() extension method that can be called on the IMvcBuilder interface.

It's always nice to understand what these APIs are doing so let's take a moment to look under the covers of the AddControllersAsServices method:

public static IMvcBuilder AddControllersAsServices(this IMvcBuilder builder)
{
    var feature = new ControllerFeature();
    builder.PartManager.PopulateFeature(feature);
    foreach (Type type in feature.Controllers.Select(c => c.AsType()))
    {
        builder.Services.TryAddTransient(type, type);
    }
    
    builder.Services.Replace(
        ServiceDescriptor.Transient<IControllerActivator, ServiceBasedControllerActivator>());
    
    return builder;
}

Looking at the source code, it appears that AddControllersAsServices() populates the ControllerFeature.Controllers collection with a list of controllers via the ControllerFeatureProvider (which is indirectly invoked via the PopulateFeature call). The ControllerFeatureProvider then loops through all the "parts" (classes within your solution) looking for classes that are controllers. It does this - among a few other things, such as checking the type is public - by looking for anything ending with the string "Controller".

Once the controllers in your application are added to the ControllerFeature.Controllers collection, they're registered as transient services within .NET Core's IoC container (IServiceCollection).

What does this mean for us and subcutaneous testing? Ultimately it means we can resolve our chosen controller from the IoC container, and in doing so it will resolve any of the controller's dependencies that are also registered in the container, such as services, repositories etc.

Putting it together

First we'll need to call the AddControllersAsServices() extension method as part of the AddMvc method chain in Startup.cs:

// Startup.cs
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        services.AddMvc().AddNewtonsoftJson().AddControllersAsServices();
    }
}

Alternatively, if you've no reason to resolve controllers directly from your IoC container other than for testing, you might prefer to configure it as part of your test infrastructure. A common pattern is to move the MVC builder related method calls into a virtual method so we can override it and call the base method like so:

// Startup.cs
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        ConfigureMvcBuilder(services);
    }

    public virtual IMvcBuilder ConfigureMvcBuilder(IServiceCollection services)
        => services.AddMvc().AddNewtonsoftJson();
}

Now all that's left to do is create a derived version of Startup.cs, which we'll call TestStartup, then call AddControllersAsServices() after setting up MVC.

// TestStartup.cs
public class TestStartup : Startup
{
    public TestStartup(IConfiguration configuration) : base(configuration)
    {
    }

    public override IMvcBuilder ConfigureMvcBuilder(IServiceCollection serviceCollection)
    {
        var services = base.ConfigureMvcBuilder(serviceCollection);
        return services.AddControllersAsServices();
    }
}

We can now start to resolve controllers from the container using the .Host.Services.GetService<T>() method. Putting it in a simple test fixture would look like this:

// SubcutaneousTestFixture.cs
public class SubcutaneousTestFixture
{
    private TestServer _server;

    public T ResolveController<T>() where T : Controller
        => _server.Host.Services.GetService<T>();

    public SubcutaneousTestFixture Run<T>() where T : Startup
    {
        var webHostBuilder = WebHost.CreateDefaultBuilder();
        webHostBuilder.UseStartup<T>();
        webHostBuilder.UseContentRoot(Directory.GetCurrentDirectory());

        _server = new TestServer(webHostBuilder);

        return this;
    }
}

Which can be invoked like so within our test:

// NewsDeletionTests.cs
public class NewsDeletionTests : IClassFixture<SubcutaneousTestFixture>
{
    private readonly SubcutaneousTestFixture _fixture;

    public NewsDeletionTests(SubcutaneousTestFixture fixture)
    {
        _fixture = fixture.Run<TestStartup>();
    }

    ...
}

Now if we want to test our controller actions (using a trivial example here), we can do so without going through the effort associated with end-to-end testing, such as using Selenium:

//DeleteNewsController.cs
[HttpPost]
public async Task<IActionResult> Delete(DeleteNews.Command command)
{
    try
    {
        await _mediator.Send(command);
    }
    catch (ValidationException e)
    {
        return View(new ManageNewsViewModel
        {
            Errors = e.Errors.ToList()
        });
    }

    return RedirectToRoute(RouteNames.NewsManage);
}
// DeleteNewsTests.cs
[Fact]
public async Task UserIsRedirectedAfterSuccessfulDeletion()
{
    // Arrange
    var controller = _fixture.ResolveController<DeleteNewsController>();

    // Act
    var result = (RedirectToRouteResult) await controller.Delete(new DeleteNews.Command { Id = 1 });

    // Assert
    result.RouteName.ShouldBe(RouteNames.NewsManage);
}

[Fact]
public async Task ErrorsAreReturnedToUser()
{
    // Arrange
    var controller = _fixture.ResolveController<DeleteNewsController>();

    // Act
    var result = (ViewResult) await controller.Delete(new DeleteNews.Command { Id = 1 });
    var viewModel = result.Model as ManageNewsViewModel;

    // Assert
    viewModel?.Errors.Count.ShouldBe(3);
}

At this point we've managed to exercise end-to-end behaviour in our application whilst avoiding the difficulties often associated with hard-to-test UI technologies. At the same time we've avoided mocking out dependencies, so our tests won't start breaking the moment we modify implementation details.

There are no silver bullets

Testing is an imperfect science and there's rarely a one-size-fits-all solution. As Fred Brooks put it, "there are no silver bullets", and this approach is no exception - it has limitations which, depending on how you architect your application, could affect its viability. Let's see what they are:

  • No HTTP Requests
    As you may have noticed, there's no HTTP request, which means anything that depends on HttpContext will not be available.

  • Request filters
    No HTTP request also means any global filters you may have will not be invoked.

  • ModelState is not set
    As we're not generating an HTTP request, ModelState validation is not triggered and would require mocking. To some this may or may not be a problem. Personally, as someone that's a fan of MediatR or Magneto coupled with FluentValidation, my validation gets pushed down into my domain layer. Doing this also means I don't have to lean on validating my input models using yucky attributes.

Conclusion

Hopefully this post has given you a brief insight into what subcutaneous testing is, the trade-offs one has to make, and an approach you could potentially use to test your applications in a similar manner. There are a few approaches out there, but on occasion there are times when I wish to test a behaviour inside my controller, and this one can do the trick.

Ultimately subcutaneous testing will enable you to test large parts of your application, but on its own it may still leave you short of the confidence you'd require to push code into production; this is where you could fill in the gaps with tests that exercise the UI.

The myth of the right tool for the job

Posted on Friday, 18 Jan 2019

The phrase "the right tool for the job" is one we've all heard in software development and we've all most likely said it at some point. However when you stop and think about what such a phrase actually means you begin to realise it's actually quite a problematic one, it makes too many assumptions. One could also go as far to say it has the potential to be quite a detrimental way to justify the use of a tool over other alternatives.

This post takes a look at what those five seemingly innocent words really mean; hopefully by the end of it you'll reconsider using the phrase in future, or at the very least be a little more conscious about its use.

Making assumptions in a world of variability

Often when you hear the aforementioned phrase, it's in the context of asserting or justifying the most suitable framework, language or service for a given project. The problem is that this makes too many assumptions. It's rare anyone truly knows the full scope and nature of a project upfront, until it's done. You may have read the ten-page project specification, or know the domain well, but ask yourself this: how many times have you been working on a project only to have the scope or specification change underneath you? No one can predict the future, so why use language that implies unquestionable certainty?

Ultimately, building software is about making the right trade-offs given various constraints and conditions (risks, costs and time, to name a few). There are no "right" or "wrong" solutions, just ones that make more appropriate trade-offs - and this applies to our tooling too.

I'll leave you with this great tweet from John Allspaw:

Skipping XUnit tests based on runtime conditions

Posted on Wednesday, 02 Jan 2019

Have you ever needed to skip a test under a certain condition? Say, the presence or absence of an environment variable, or some conditional runtime check? No, me neither - that was up until recently.

Skipping tests usually isn't a good practice to get into, but I use the word "usually" here because, as with all things in software, there are sometimes certain constraints or parameters that may well justify doing so. In my case I needed to skip a certain test when running on AppVeyor in Linux.

Having done a bit of digging I found two options that would work:

Option 1 - Using the #if preprocessor directive

Depending on your scenario it might be possible to simply use an #if preprocessor directive to conditionally compile the test. If you're wishing to exclude a test based on the operating system it's running on then this may be a solution.

#if IS_LINUX
[Fact]
public void ShouldlyApi()
{
    ...
}
#endif
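
Note that IS_LINUX isn't a built-in symbol; you'd need to define it yourself, for example in your test project's csproj (a sketch using an MSBuild property function):

<PropertyGroup Condition="$([MSBuild]::IsOsPlatform('Linux'))">
  <DefineConstants>$(DefineConstants);IS_LINUX</DefineConstants>
</PropertyGroup>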

My scenario also involved checking the presence of environment variables, which I'd rather do through code. This led me to the next approach, which I felt was more suitable.

Preferred Option - Leveraging existing XUnit functionality

xUnit already has the ability to skip a test, by providing a reason for skipping it via the Skip property that exists on the Fact attribute:

[Fact(Skip = "Doesn't work at the moment")]
public void ClassScenarioShouldFail()
{
    ...
}

So all we have to do is extend the FactAttribute class and set that property:

public sealed class IgnoreOnAppVeyorLinuxFact : FactAttribute
{
    public IgnoreOnAppVeyorLinuxFact() {
        if(RuntimeInformation.IsOSPlatform(OSPlatform.Linux) && IsAppVeyor()) {
            Skip = "Ignore on Linux when run via AppVeyor";
        }
    }
    
    private static bool IsAppVeyor()
        => Environment.GetEnvironmentVariable("APPVEYOR") != null;
}

Now instead of using the traditional [Fact] attribute, we can use our new IgnoreOnAppVeyorLinuxFact attribute:

[IgnoreOnAppVeyorLinuxFact]
public void ShouldlyApi()
{
    ...
}