Latest Blog Posts

Managing your .NET Core SDK versions with the .NET Install SDK Global Tool

Posted on Tuesday, 03 Sep 2019

During my day to day programming activities I regularly find myself switching between various .NET Core solutions. At work we have a multitude of .NET Core microservices/microsites and at home I enjoy contributing to OSS projects/reading other people's code. As I'm switching between these solutions I regularly come across a project that uses a global.json file to define the version of the .NET Core SDK required.


What is the global.json file?

For those that might not be familiar with the global.json file, here's an excerpt from the docs:

The global.json file allows you to define which .NET Core SDK version is used when you run .NET Core CLI commands. Selecting the .NET Core SDK is independent from specifying the runtime your project targets. The .NET Core SDK version indicates which versions of the .NET Core CLI tools are used. In general, you want to use the latest version of the tools, so no global.json file is needed.
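
For reference, a minimal global.json pinning the SDK looks something like this (the version number below is purely an example):

{
  "sdk": {
    "version": "2.2.401"
  }
}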

You can read more about it here, so let's continue.


Upon stumbling on a project with a global.json file I'd go through the manual process of locating the correct version of the SDK, downloading it and then installing it. After a number of times doing this, as most developers would, I decided to remove the friction by creating a .NET Core global tool to automate the process.

The result is the .NET Core Install SDK global tool. You can also find it on NuGet here.

Note: Before I continue, a huge thanks to Stuart Lang who was running into the same frustrations, noticed I'd started this tool and contributed a tonne towards it.

.NET Core Install SDK Global Tool

If you want to give it a try you can install the global tool on your machine by running the following command (assuming you've got .NET Core installed).

$ dotnet tool install -g installsdkglobaltool

Once installed the global tool has the following features:

Install .NET Core SDK based on global.json file

If you navigate to a folder with a global.json file in it and run the following command:

$ dotnet install-sdk

The global tool will check the contents of the global.json file, download the defined version of the .NET Core SDK and then start its installation.

Install the latest preview of the .NET Core SDK

It's always fun playing with the latest preview releases of the .NET Core SDK, so to save you the time of finding the latest version you can simply run:

$ dotnet install-sdk --latest-preview

This will download and start the installation of the latest preview version of the .NET Core SDK.

Is this suitable for build/CI environments?

No, certainly not at this moment in time. This global tool has been built with a consumer focus, so it does not install the SDK in a headless fashion. Instead it launches the installer and still gives you the same control you're used to (such as choosing the install location).

If you're interested in what the whole experience looks like then check out the video below:

Until next time!

Approval Testing your Open API/Swagger Documents

Posted on Wednesday, 28 Aug 2019

The team I work in at Just Eat have a number of HTTP based APIs which are consumed by other components, some internally, others externally. Like a lot of people building HTTP based APIs we use Swagger for both developer experimentation and documentation of the endpoints exposed. This is done via the Swagger UI, which uses an Open API document (formerly known as Swagger docs) that describes the endpoints and resources in JSON form.

As our APIs evolve it's imperative that we don't unintentionally break any of these public contracts as this would cause headaches for consumers of our service.

What we need is visibility of intentional or unintentional changes to this contract, and this is where approval testing has helped.

What is approval testing?

Approval testing is a method of testing that not many are familiar with. If I were to hypothesise why this is, I'd say it's because of its narrow application, especially when contrasted with other, more common forms of testing.

Assertion testing

We're all familiar with assertion based testing - you structure your test in an arrange, act, assert flow where you first 'arrange' the test, then execute the action ('act'), then 'assert' the output.

For instance:

// Arrange
var child1 = 13;
var child2 = 22;

// Act
var age = calculateAge(child1, child2);

// Assert
age.ShouldBe(35);

Approval testing

Approval testing follows the same pattern but with one difference; instead of asserting the expected return value given your set of input parameters, you 'approve' the output instead.

This slight shift changes your perspective on what a failing test means. In other words, with Approval Testing the failed test doesn't prove something has broken but instead flags that the given output differs from the previous approved output and needs approving.
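
To make that concrete, here's the earlier example reworked as a rough sketch in an approval style, using Shouldly's ShouldMatchApproved() extension (which we'll cover properly later in this post):

// Arrange
var child1 = 13;
var child2 = 22;

// Act
var age = calculateAge(child1, child2);

// Assert - compare the output against a previously approved file rather than a hard-coded value
age.ToString().ShouldMatchApproved();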

Actions speak louder than words so let's take a look at how we can apply this method of testing to our Open API documents in order to gain visibility of changes to the contract.

Test Setup

In this example I'll use a simple .NET Core based API that has Swagger set up with Swagger UI. The test will use the Microsoft.AspNetCore.Mvc.Testing package for running our API in-memory. If you're not familiar with testing this way then be sure to check out the docs if you want to try this yourself.

First let's take a look at our application:

Our ASP.NET Core Application

// ProductController.cs 

[Route("api/[controller]")]
[ApiController]
public class ProductController : ControllerBase
{
    [HttpGet]
    public ActionResult<IEnumerable<ProductViewModel>> Get()
    {
        return new List<ProductViewModel>
        {
            new ProductViewModel {Id = 1, Name = "Product 1"},
            new ProductViewModel {Id = 2, Name = "Product 2"},
            new ProductViewModel {Id = 3, Name = "Product 3"}
        };
    }
}
// Program.cs

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }
    
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();
}

// ProductViewModel.cs

public class ProductViewModel
{
    public string Name { get; set; }

    public int Id { get; set; }

    public string Description { get; set; }
}
// Startup.cs

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
        });
        
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        ...
        
        app.UseSwagger();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
        });
    }
}

If we were to launch our API and go to /swagger/v1/swagger.json we should see a JSON document that includes information about the resources our API exposes.

Now we have a working API, let's look at our test setup.

Our Test Setup

As mentioned above, our test will use the Microsoft.AspNetCore.Mvc.Testing package for running our API in-memory. Why do we need to run it in-memory? Hold that thought and read on.

We'll use a simple xUnit fixture class that uses WebApplicationFactory<T> to run our API in memory.

// ApiFixture.cs

public class ApiFixture
{
    private readonly WebApplicationFactory<Startup> _fixture;

    public ApiFixture()
    {
        _fixture = new WebApplicationFactory<Startup>();
    }

    public HttpClient Create()
    {
        return _fixture.CreateClient();
    }
}

Now that we've got our API and test infrastructure set up, this is where the magic happens. Below is a simple test that will perform the following:

  1. Run our ASP.NET Core API in memory and allow the tests to call it via an HttpClient.
  2. Make a GET request to the Open API document (Swagger docs) and parse the content into a string.
  3. Call .ShouldMatchApproved() on the string content.

What is ShouldMatchApproved()? Read on...

// OpenApiDocumentTests.cs

public class OpenApiDocumentTests : IClassFixture<ApiFixture>
{
    private readonly ApiFixture _apiFixture;

    public OpenApiDocumentTests(ApiFixture apiFixture)
    {
        _apiFixture = apiFixture;
    }
    
    [Fact]
    public async Task DocumentHasChanged()
    {
        // Arrange
        var client = _apiFixture.Create();

        // Act
        var response = await client.GetAsync("/swagger/v1/swagger.json");
        var content = await response.Content.ReadAsStringAsync();
        
        // Assert
        content.ShouldMatchApproved();
    }
}

Shouldly and .ShouldMatchApproved()

Shouldly is an open-source assertion framework that simplifies test assertions, and one which I frequently use (and, as a result, contribute to).

One assertion method Shouldly provides that sets it apart from others is the .ShouldMatchApproved() extension which hangs off of .NET's string type. This extension method enables us to easily apply approval based testing to any string. There are other libraries such as ApprovalTests.Net that support more complex use cases, but for this example Shouldly will suffice.

How does .ShouldMatchApproved() work?

Upon executing your test .ShouldMatchApproved() performs the following steps:

  1. First it checks to see if an approved text file lives in the expected location on disk.

  2. If the aforementioned file exists Shouldly will diff the contents of the file (containing the expected) against the input (the actual) of the test.

  3. If ShouldMatchApproved detects a difference between the expected and actual values then it will scan common directories for a supported diff tool (such as Beyond Compare, WinMerge or even VS Code).

  4. If it finds a supported difftool it will automatically launch it, prompting you to approve the diffs and save the approved copy to the location described in Step 1.

  5. Once approved and saved you can rerun the test and it will pass.

From this point on the test will pass, provided no difference is detected between the document generated by your test and the approval file stored locally. If a difference is detected then the above process starts again.

One of the powerful aspects of this method of testing is that the approval file is committed to source control, meaning those diffs to the contract are visible to anyone reviewing your changes, whilst also keeping a history of changes to the public contract.

Demo

Alongside the source code for the demonstration in this post, I've published a quick demonstration video on YouTube which you can take a look at below. The video starts by first creating our approved document; I then go on to make a change to a model exposed via the Open API document, approve that change, then rerun the test again.

You can stop mocking ILogger

Posted on Saturday, 01 Jun 2019

Just a quick post, nothing technically challenging but hopefully valuable to some nonetheless.

It all started a couple of days ago when I found myself browsing through Microsoft's Logging Abstractions library source code. For those that aren't familiar with this library, the aim of it (as the name suggests) is to provide a set of abstractions over the top of common APIs that have emerged around logging over time (take ILogger for instance).

As I was reading the source code I noticed the library contains an implementation of the common ILogger interface called NullLogger. "Great!" I thought, "I really don't like mocking and now I don't need to mock ILogger anymore!"

Initially I thought this was old news and I'd somehow missed the memo, so I swiftly moved on. But given the number of likes and retweets that a tweet about the discovery received, I thought it would be valuable to write a short post as a nod to its existence for anyone who hasn't seen it yet.

NullLogger.Instance

As you can see, the implementation of the class is very straightforward - it does nothing at all...

    public class NullLogger : ILogger
    {
        public static NullLogger Instance { get; } = new NullLogger();

        private NullLogger()
        {
        }

        public IDisposable BeginScope<TState>(TState state)
        {
            return NullScope.Instance;
        }

        public bool IsEnabled(LogLevel logLevel)
        {
            return false;
        }

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
        {
        }
    }

However it does mean that if the project I'm working on depends on this library (which most on .NET Core will be doing) I can start to use instances of NullLogger instead of mocking ILogger.

It doesn't stop at ILogger either; there's also an implementation of ILogger<T> in there (unsurprisingly called NullLogger<T>) and a NullLoggerFactory.
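
As a quick sketch of what this looks like in a test (the OrderService and Worker types below are purely illustrative - NullLogger and NullLogger<T> live in Microsoft.Extensions.Logging.Abstractions):

// Instead of newing up a mock of ILogger<OrderService>, hand it the null logger
var service = new OrderService(NullLogger<OrderService>.Instance);

// And where a non-generic ILogger is required:
var worker = new Worker(NullLogger.Instance);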

Great, thanks Microsoft!

Subcutaneous Testing in ASP.NET Core

Posted on Thursday, 14 Mar 2019

Having successfully applied Subcutaneous testing to a number of projects I was interested to see what options were available in ASP.NET Core. In this post we'll touch on what Subcutaneous testing is, its trade offs and how we can perform such tests in ASP.NET Core.

What is Subcutaneous Testing?

I first learnt about Subcutaneous testing from Matt Davies and Rob Moore's excellent Microtesting - How We Set Fire To The Testing Pyramid While Ensuring Confidence talk at NDC Sydney. The talk introduces its viewers to various testing techniques and libraries you can use to ensure speed and confidence in your testing strategy whilst avoiding the crippling test brittleness that often ensues.

The term subcutaneous means "situated or applied under the skin", which translated to an application would mean just under the UI layer.

Why would you want to test under the UI layer?

As Martin Fowler highlights in his post on Subcutaneous testing, such a means of testing is especially useful when trying to perform functional tests where you want to exercise end-to-end behaviour whilst avoiding some of the difficulties associated with testing via the UI itself.

Martin goes on to say (emphasis is my own, which we'll touch on later):

Subcutaneous testing can avoid difficulties with hard-to-test presentation technologies and usually is much faster than testing through the UI. The big danger is that, unless you are a firm follower of keeping all useful logic out of your UI, subcutaneous testing will leave important behaviour out of its test.

Let's take a look at an example...

Subcutaneous testing an ASP.NET Core application

In previous non-core versions of ASP.NET MVC I used to use a tool called FluentMvcTesting, however with the advent of .NET Core I was keen to see what options were available to create subcutaneous tests whilst leaning on some of the primitives that exist in bootstrapping one's application.

This investigation ultimately led me to the solution we'll discuss shortly via the new .AddControllersAsServices() extension method that can be called via the IMvcBuilder interface.

It's always nice to understand what these APIs are doing so let's take a moment to look under the covers of the AddControllersAsServices method:

public static IMvcBuilder AddControllersAsServices(this IMvcBuilder builder)
{
    var feature = new ControllerFeature();
    builder.PartManager.PopulateFeature(feature);
    foreach (Type type in feature.Controllers.Select(c => c.AsType()))
    {
        builder.Services.TryAddTransient(type, type);
    }
    
    builder.Services.Replace(
        ServiceDescriptor.Transient<IControllerActivator, ServiceBasedControllerActivator>());
    
    return builder;
}

Looking at the source code it appears that the AddControllersAsServices() method populates the ControllerFeature.Controllers collection with a list of controllers via the ControllerFeatureProvider (which is indirectly invoked via the PopulateFeature call). The ControllerFeatureProvider then loops through all the "parts" (classes within your solution) looking for classes that are controllers. It does this, among a few other things such as checking to see if the type is public, by looking for anything ending with the string "Controller".

Once the controllers in your application are added to the collection within the ControllerFeature.Controllers collection, they're then registered as transient services within .NET Core's IOC container (IServiceCollection).

What does this mean for us and Subcutaneous testing? Ultimately it means we can resolve our chosen controller from the IOC container, and in doing so the container will also resolve any of the controller's dependencies registered with it, such as services, repositories etc.

Putting it together

First we'll need to call the AddControllersAsServices() extension method as part of the AddMvc method chains in Startup.cs:

// Startup.cs
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        services.AddMvc().AddNewtonsoftJson().AddControllersAsServices();
    }
}

Alternatively if you've no reason to resolve controllers directly from your IOC container other than for testing you might prefer to configure it as part of your test infrastructure. A common pattern to do this is to move the MVC builder related method calls into a virtual method so we can override it and call the base method like so:

// Startup.cs
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        ConfigureMvcBuilder(services);
    }

    public virtual IMvcBuilder ConfigureMvcBuilder(IServiceCollection services)
        => services.AddMvc().AddNewtonsoftJson();
}

Now all that's left to do is create a derived version of Startup.cs which we'll call TestStartup then call AddControllersAsServices() after setting up MVC.

// TestStartup.cs
public class TestStartup : Startup
{
    public TestStartup(IConfiguration configuration) : base(configuration)
    {
    }

    public override IMvcBuilder ConfigureMvcBuilder(IServiceCollection serviceCollection)
    {
        var services = base.ConfigureMvcBuilder(serviceCollection);
        return services.AddControllersAsServices();
    }
}

We can now start to resolve controllers from the container using the .Host.Services.GetService<T>() method. Putting it in a simple test fixture would look like this:

// SubcutaneousTestFixture.cs
public class SubcutaneousTestFixture
{
    private TestServer _server;

    public T ResolveController<T>() where T : Controller
        => _server.Host.Services.GetService<T>();

    public SubcutaneousTestFixture Run<T>() where T : Startup
    {
        var webHostBuilder = WebHost.CreateDefaultBuilder();
        webHostBuilder.UseStartup<T>();
        webHostBuilder.UseContentRoot(Directory.GetCurrentDirectory());

        _server = new TestServer(webHostBuilder);

        return this;
    }
}

Which can be invoked like so within our test:

// NewsDeletionTests.cs
public class NewsDeletionTests : IClassFixture<SubcutaneousTestFixture>
{
    private readonly SubcutaneousTestFixture _fixture;

    public NewsDeletionTests(SubcutaneousTestFixture fixture)
    {
        _fixture = fixture.Run<TestStartup>();
    }

    ...
}

Now if we want to test our controller actions (using a trivial example here) we can do so without the effort associated with end-to-end testing via tools such as Selenium:

//DeleteNewsController.cs
[HttpPost]
public async Task<IActionResult> Delete(DeleteNews.Command command)
{
    try
    {
        await _mediator.Send(command);
    }
    catch (ValidationException e)
    {
        return View(new ManageNewsViewModel
        {
            Errors = e.Errors.ToList()
        });
    }

    return RedirectToRoute(RouteNames.NewsManage);
}
// DeleteNewsTests.cs
[Fact]
public async Task UserIsRedirectedAfterSuccessfulDeletion()
{
    // Arrange
    var controller = _fixture.ResolveController<DeleteNewsController>();

    // Act
    var result = (RedirectToRouteResult) await controller.Delete(new DeleteNews.Command { Id = 1 });

    // Assert
    result.RouteName.ShouldBe(RouteNames.NewsManage);
}

[Fact]
public async Task ErrorsAreReturnedToUser()
{
    // Arrange
    var controller = _fixture.ResolveController<DeleteNewsController>();

    // Act
    var result = (ViewResult) await controller.Delete(new DeleteNews.Command { Id = 1 });
    var viewModel = result.Model as ManageNewsViewModel;

    // Assert
    viewModel?.Errors.Count.ShouldBe(3);
}

At this point we've managed to successfully exercise end-to-end behaviour in our application whilst avoiding the difficulties often associated with hard-to-test UI technologies. At the same time we've avoided mocking out dependencies, so our tests won't start breaking when we modify implementation details.

There are no silver bullets

Testing is an imperfect science and there's rarely a one-size-fits-all solution. As Fred Brooks put it, "there are no silver bullets", and this approach is no exception - it has its limitations which, depending on how you architect your application, could affect its viability. Let's see what they are:

  • No HTTP Requests
    As you may have noticed, there's no HTTP request, which means anything that relies on HttpContext won't be populated.

  • Request filters
    No HTTP request also means any global filters you may have will not be invoked.

  • ModelState is not set
    As we're not generating an HTTP request, you'll notice ModelState validation is not set and would require mocking. To some this may or may not be a problem. Personally, as someone who's a fan of MediatR or Magneto coupled with FluentValidation, my validation gets pushed down into my domain layer (see the sketch below). Doing this also means I don't have to lean on validating my input models using yucky attributes.
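
To illustrate that last point, here's a rough sketch of the kind of validator I mean - a FluentValidation validator for the DeleteNews.Command used earlier, living in the domain layer rather than as attributes on an input model (illustrative only, not taken from a real project):

// DeleteNewsCommandValidator.cs (illustrative only)
public class DeleteNewsCommandValidator : AbstractValidator<DeleteNews.Command>
{
    public DeleteNewsCommandValidator()
    {
        // Validation lives alongside the domain handler rather than as attributes on the input model
        RuleFor(x => x.Id).GreaterThan(0);
    }
}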

Conclusion

Hopefully this post has given you a brief insight into what Subcutaneous Testing is, the trade offs one has to make and an approach you could potentially use to test your applications in a similar manner. There are a few approaches out there, but on occasion I wish to test a behaviour inside one of my controllers, and this approach does the trick.

Ultimately Subcutaneous Testing will enable you to test large parts of your application but will still leave you lacking the confidence you'd require in order to push code into production; this is where you could fill in the gaps with tests that exercise the UI.

The myth of the right tool for the job

Posted on Friday, 18 Jan 2019

The phrase "the right tool for the job" is one we've all heard in software development and we've all most likely said it at some point. However when you stop and think about what such a phrase actually means you begin to realise it's actually quite a problematic one, it makes too many assumptions. One could also go as far to say it has the potential to be quite a detrimental way to justify the use of a tool over other alternatives.

This post aims to take a look at what those five seemingly innocent words really mean, and hopefully by the end of it you'll reconsider using the phrase in future, or at the very least be a little more conscious about its use.

Making assumptions in a world of variability

Often when you hear the aforementioned phrase used, it's usually in the context of making an assertion or justification for the most suitable framework, language or service to be used in a given project. The problem with this is that it makes too many assumptions. It's rare anyone truly knows the full scope and nature of a project until it's done. You may have read the ten page project specification, or know the domain well, but ask yourself this - how many times have you been working on a project only to have the scope or specification change underneath you? No one can predict the future, so why use language that implies unquestionable certainty?

Ultimately, building software is about making the right trade offs given the various constraints and conditions (risks, costs and time, to name a few). There are no "right" or "wrong" solutions, just ones that make the appropriate trade offs - and this applies to our tooling too.

I'll leave you with this great tweet from John Allspaw:

Skipping XUnit tests based on runtime conditions

Posted on Wednesday, 02 Jan 2019

Have you ever needed to skip a test under a certain condition? Say, the presence or absence of an environment variable, or some conditional runtime check? No, me neither - that was up until recently.

Skipping tests isn't usually a good habit to get into, but I say "usually" because, as with all things in software, there are sometimes certain constraints or parameters that may well justify doing so. In my case I needed to skip a certain test if I was running on AppVeyor in Linux.

Having done a bit of digging I found two options that would work:

Option 1 - Using the #if preprocessor directive

Depending on your scenario it might be possible to simply use an #if preprocessor directive to include the test. If you're wishing to exclude a test based on the operating system the test is running on then this may be a solution.

#if IS_LINUX
[Fact]
public void ShouldlyApi()
{
    ...
}
#endif
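
Note that the IS_LINUX symbol isn't defined for you. One way you could define it - a sketch rather than something lifted from a real project - is a conditional property group in the test project's csproj using MSBuild's IsOSPlatform property function:

<!-- Sketch: defines IS_LINUX when the build itself runs on Linux -->
<PropertyGroup Condition="$([MSBuild]::IsOSPlatform('Linux'))">
  <DefineConstants>$(DefineConstants);IS_LINUX</DefineConstants>
</PropertyGroup>

Bear in mind this is evaluated at build time rather than at run time, which is another reason the runtime check in the next section can be preferable.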

My scenario also involved checking for the presence of environment variables, which I'd rather do through code. This led me to the next approach, which I felt was more suitable.

Preferred Option - Leveraging existing XUnit functionality

xUnit already has the ability to skip a test, provided you give a reason for skipping the test in question, via the Skip property that exists on the Fact attribute:

[Fact(Skip = "Doesn't work at the moment")]
public void ClassScenarioShouldFail()
{
    ...
}

So all we have to do is extend the FactAttribute class and set that property:

public sealed class IgnoreOnAppVeyorLinuxFact : FactAttribute
{
    public IgnoreOnAppVeyorLinuxFact() {
        if(RuntimeInformation.IsOSPlatform(OSPlatform.Linux) && IsAppVeyor()) {
            Skip = "Ignore on Linux when run via AppVeyor";
        }
    }
    
    private static bool IsAppVeyor()
        => Environment.GetEnvironmentVariable("APPVEYOR") != null;
}

Now instead of using the traditional [Fact] attribute, we can use our new IgnoreOnAppVeyorLinuxFact attribute:

[IgnoreOnAppVeyorLinuxFact]
public void ShouldlyApi()
{
    ...
}

Aaaand, we're back...

Posted on Thursday, 08 Mar 2018

Hello! Sorry for the lack of posts recently - I haven't died or lost interest in blogging, I've just been super busy with multiple things, one of which broke my site. For those curious, here's what I've been up to:

Building the new DDD South West website

Since getting involved with organising the DDD South West event last year, one thing the team of organisers were unanimous on was that we needed a new website. The previous website was a free SharePoint hosted solution that had a number of problems (slow, design could be improved, lots of manual processes, speaker submissions by sending Word documents etc), so after last year's conference I started working on the new DDD South West website, and with the 2018 conference approaching it was a bit of a panic to get it finished in time.

Luckily it was all sorted by the time we opened for speaker submissions (with a bit of JIT development). Surprisingly, considering there are very few tests, everything has worked without any problems - and what's more, it's all open source!

Here's a screenshot of the before and after, hopefully you'll agree it's a large improvement!

Before:

After:

Feel free to check the new website out over at https://dddsouthwest.com - it's built with all of the usual stuff:

  • .NET Core (MVC with MediatR)
  • PostgreSQL
  • Identity Server 4
  • Docker
  • Hosted on Linux behind nginx

Migrating my blog from .NET Core 1.x to 2.x.

I've been meaning to do this for a while now, and migrating a blog across a major version number when you know there are lots of breaking changes isn't something you charge at with gusto. Suffice to say it was painful, and I ran into some weird issues where I quickly realised it was easier to just create a new project and start copying things over than to figure out what was going wrong.

I managed to get the website back up and running but still had a lot to fix on the login/admin side of things, which uses OAuth and Google sign-in. So whilst the website was functional, the admin area (and thus my ability to post) was not. Coupled with finishing the DDD South West website, it all took a bit of time.

Anyway, it's all working now so that's a relief and I've got a few posts lined up.

.NET Oxford Talk - ASP.NET Core 2.0 Razor Deep Dive

A couple of weeks ago I had the pleasure of being invited to the .NET Oxford user group to talk about all of the new features in Razor for ASP.NET Core 2.0. It's a talk I've given a few times before but this time around I was keen to update it with a bit more focus on the newer features in 2.0, such as the ITagHelperComponent interface and Razor Pages.

As a .NET meet up organiser myself, I always enjoy the opportunity to visit other .NET focused meet ups - it's interesting to see how they operate, and to meet other .NET developers in the wider UK community. It's also a great chance to borrow ideas to potentially incorporate into .NET South West.

The talk

Overall I was really happy with the way the talk went (though I do wonder if I harped on about how great Rider is a little too much), with a good number of questions throughout and afterwards. With each talk I give I try to focus on one habit I've noticed I've picked up, or wish to improve on when presenting - whether that be talking slower, bringing more or less energy, and so on. On this occasion it was to spend less time looking at the screen behind me and more time focused on, and looking at, the audience. Whilst the session wasn't recorded I was quite conscious of it throughout and feel I did much better than in previous sessions.

Closing

All in all it's been a crazy couple of months and now everything is settling down I'm looking forward to getting back into regularly blogging.

Oh, and on one last note - I've been selected to speak at DDD Wales (their first one!) which I'm really looking forward to.

GlobalExceptionHandler.NET version 2 released

Posted on Friday, 08 Dec 2017

Anyone that regularly reads this blog will remember that I recently developed a convention-based ASP.NET Core exception handling library named GlobalExceptionHandler.NET (if you missed the post you can read about it here).

GlobalExceptionHandler.NET in a nutshell
GlobalExceptionHandler.NET hangs off of ASP.NET Core's .UseExceptionHandler() endpoint and enables developers to configure HTTP responses (including the status codes) per exception type.

For instance, the following configuration:

app.UseExceptionHandler().WithConventions(x => {
  x.ContentType = "application/json";
  x.ForException<RecordNotFoundException>().ReturnStatusCode(HttpStatusCode.NotFound)
      .UsingMessageFormatter((ex, context) => JsonSerializer(new {
          Message = ex.Message
      }));
});

app.Map("/error", x => x.Run(y => throw new RecordNotFoundException("Record not be found")));

Will result in the following output if a RecordNotFoundException is thrown.

HTTP/1.1 404 Not Found
Date: Sat, 25 Nov 2017 01:47:51 GMT
Content-Type: application/json
Server: Kestrel
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Expires: -1
{
  "Message": "Record not be found"
}

Improvements in Version 2

Whilst the initial version of GlobalExceptionHandler.NET was a good start, there were a few features and internal details that I was keen to flesh out and improve, so Version 2 was a pretty big overhaul. Let's take a look at what's changed.

It now extends the UseExceptionHandler() API

The first version of GlobalExceptionHandler.NET had its own implementation of UseExceptionHandler which would catch any exceptions thrown further down the ASP.NET Core middleware stack. There was no real motivation for creating a separate implementation other than that I didn't realise how extensible the UseExceptionHandler API was.

As soon as I realised I could offload some of the work to ASP.NET Core I was keen to do so.

Now invoked with WithConventions

Now that I was using the UseExceptionHandler() endpoint, I was keen to have a more meaningful, fluent approach to integrating with ASP.NET Core, so I ultimately went with a WithConventions() approach as the name felt a lot more natural.

Supports polymorphic types

One problem the previous version of GlobalExceptionHandler.NET had was that it could only match an exception against its exact type. Version 2 will now look down the inheritance tree for the first matching type. To give an example:

Given the following exception type:

public class ExceptionA : BaseException {}

// startup.cs

app.UseExceptionHandler().WithConventions(x => {
  x.ContentType = "application/json";
  x.ForException<BaseException>().ReturnStatusCode(HttpStatusCode.BadRequest)
      .UsingMessageFormatter((e, c) => JsonSerializer(new {
          Message = "Base Exception response"
      }));
});

app.Map("/error", x => x.Run(y => throw new ExceptionA()));
HTTP/1.1 400 Bad Request
Date: Sat, 25 Nov 2017 01:47:51 GMT
Content-Type: application/json
Server: Kestrel
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Expires: -1
{
  "Message": "Base Exception response"
}

As the above example hopefully illustrates, because there is no response configured for ExceptionA, GlobalExceptionHandler.NET goes to the next type in the inheritance tree to see if a formatter is specified for that type.

Content Negotiation

GlobalExceptionHandler.NET version 2 now supports content negotiation via the optional GlobalExceptionHandler.ContentNegotiation.Mvc package.

Once included you no longer need to specify the content type or response serialisation type:

//Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore().AddXmlSerializerFormatters();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseExceptionHandler().WithConventions(x =>
    {
        x.ForException<RecordNotFoundException>().ReturnStatusCode(HttpStatusCode.NotFound)
            .UsingMessageFormatter(e => new ErrorResponse
            {
                Message = e.Message
            });
    });

    app.Map("/error", x => x.Run(y => throw new RecordNotFoundException("Record could not be found")));
}

Note how we had to include the AddMvcCore service registration; this is because ASP.NET Core MVC is required to take care of content negotiation, which is a real shame as it would have been great to enable it without requiring a dependency on MVC.

Now when an exception is thrown and the consumer has provided the Accept header:

GET /api/demo HTTP/1.1
Host: localhost:5000
Accept: text/xml

The response will be formatted according to the Accept header value:

HTTP/1.1 404 Not Found
Date: Tue, 05 Dec 2017 08:49:07 GMT
Content-Type: text/xml; charset=utf-8
Server: Kestrel
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Expires: -1

<ErrorResponse 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Message>Record could not be found</Message>
</ErrorResponse>

Wrapping up

Hopefully this has given you a good idea of what you'll find in version 2 of the GlobalExceptionHandler.NET library. Moving forward there are a few further improvements I'd like to make around organising configuration on a domain by domain basis. And as always, the code is up on GitHub so feel free to take a look.

Going serverless with .NET Core, AWS Lambda and the Serverless framework

Posted on Wednesday, 08 Nov 2017

Recently I gave a talk titled 'Going serverless with AWS Lambda' where I briefly went through what serverless is and the architectural advantages it gives you, along with the trade-offs to consider. Halfway through the talk I went on to demonstrate the Serverless framework and was surprised by the number of people currently experimenting with AWS Lambda or Azure Functions that had never heard of it - so much so that I thought I'd write a post demonstrating its value.

What is the Serverless framework and what problem does it aim to solve?

Serverless, which I'll refer to as the Serverless framework to avoid confusion, is a cloud provider agnostic toolkit designed to aid operations around building, managing and deploying serverless components, whether full-blown serverless architectures or disparate functions (or FaaS).

To give a more concrete example, the Serverless framework aims to provide developers with an interface that abstracts away the vendor's cloud-specific APIs and configuration whilst simultaneously providing additional tooling to test and deploy functions with ease - perfect for rapid feedback or for integrating into your CI/CD pipelines.

Let's take a look.

Getting started with .NET Core and the Serverless Framework

First of all we're going to need to install the Serverless framework:

$ npm install serverless -g

Next let's see what Serverless framework templates are currently available:

$ serverless create --help

Note: In addition to the serverless command, sls is a nice shorthand equivalent, producing the same results:

$ sls create --help
Template for the service. Available templates:
"aws-nodejs",
"aws-python",
"aws-python3",
"aws-groovy-gradle",
"aws-java-maven",
"aws-java-gradle",
"aws-scala-sbt",
"aws-csharp",
"aws-fsharp",
"azure-nodejs",
"openwhisk-nodejs",
"openwhisk-python",
"openwhisk-swift",
"google-nodejs"

To create a project from the .NET Core template we use the --template argument:

$ serverless create --template aws-csharp --name demo

Let's take a moment to look at the files created by the Serverless framework and go through the more noteworthy ones:

$ ls -la
.
..
.gitignore
Handler.cs
aws-csharp.csproj
build.cmd
build.sh
global.json
serverless.yml

Handler.cs
Opening Handler.cs reveals the function that will be invoked in response to events such as notifications, S3 updates and so forth.

//Handler.cs

[assembly:LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
namespace AwsDotnetCsharp
{
    public class Handler
    {
       public Response Hello(Request request)
       {
           return new Response("Go Serverless v1.0! Your function executed successfully!", request);
       }
    }
    ...
}

serverless.yml
This is where the magic happens. The serverless.yml file is your schema which defines the configuration of your Lambda(s) (or Azure Functions) and how they interact with your wider architecture. Once configured, the Serverless framework generates a CloudFormation template off the back of this, which AWS uses to provision the appropriate infrastructure.
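
To give a rough idea of its shape (the exact contents will vary depending on the template version you're using), the top of the generated file looks something along these lines:

service: demo

provider:
  name: aws
  runtime: dotnetcore1.0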

global.json
Open global.json and you'll notice it's pinned to version 1.0.4 of the .NET Core SDK; this is because, at the time of writing, .NET Core 2.0 isn't supported, though Amazon have promised support is on its way.

{
  "sdk": {
    "version": "1.0.4"
  }
}

Now, let's go ahead and create our Lambda.

Creating our .NET Core Lambda

For the purpose of this demonstration we're going to create a Lambda that's reachable via HTTP. In order to do this we're going to need to stand up an API Gateway in front of it. Normally doing this would require logging into the AWS Console and manually configuring an API Gateway, so it's a perfect example to demonstrate how the Serverless framework can take care of a lot of the heavy lifting.

Let's head over to our serverless.yml file and scroll down to the following section:

# serverless.yml
functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
#    events:
#      - http:
#          path: users/create
#          method: get
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill
#      - iot:
#          sql: "SELECT * FROM 'some_topic'"
#      - cloudwatchEvent:
#          event:
#            source:
#              - "aws.ec2"
#            detail-type:
#              - "EC2 Instance State-change Notification"
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp

#    Define function environment variables here
#    environment:
#      variable2: value2
...

This part of the serverless.yml file describes the various events that our Lambda should respond to. As we're going to be using API Gateway as our method of invocation we can remove a large portion of this for clarity, then uncomment the event and its properties pertaining to http:

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
    events:
      - http:
          path: users/create
          method: get

#    Define function environment variables here
#    environment:
#      variable2: value2
...

Creating our .NET Core C# Lambda

Because we're using HTTP as our method of invocation we need to add the Amazon.Lambda.APIGatewayEvents NuGet package to our Lambda and reference the correct request and return types. We can add the package using the following .NET Core CLI command:

$ dotnet add package Amazon.Lambda.APIGatewayEvents

Now let's open our Handler.cs file and update our Lambda to return the correct response type:

public APIGatewayProxyResponse Hello(APIGatewayProxyRequest request, ILambdaContext context)
{
    // Log entries show up in CloudWatch
    context.Logger.LogLine("Example log entry\n");

    var response = new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.OK,
        Body = "{ \"Message\": \"Hello World\" }",
        Headers = new Dictionary<string, string> {{ "Content-Type", "application/json" }}
    };

    return response;
}
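
For completeness, the handler above assumes the following using directives at the top of Handler.cs:

using System.Collections.Generic;
using System.Net;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;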

Now we're set. Let's move on to deploying our Lambda.

Registering an account on AWS Lambda

If you're reading this then I assume you already have an account with AWS; if not, you're going to need to head over to their registration page and sign up.

Setting AWS credentials in Serverless framework

In order to enable the Serverless framework to create Lambdas and the accompanying infrastructure around them, we're going to need to set up our AWS credentials. The Serverless framework documentation does a good job of explaining how to do this, but for those that know how to generate keys in AWS you can set your credentials via the following command:

serverless config credentials --provider aws --key <Your Key> --secret <Your Secret>

Build and deploy our .NET Core Lambda

Now we're good to go!

Let's verify our setup by deploying our Lambda; this will give us an opportunity to see just how rapid the feedback cycle can be when using the Serverless framework.

At this point if we weren't using the Serverless framework we'd have to manually package our Lambda up into a .zip file in a certain structure, then manually log into AWS to upload our zip and create the infrastructure (in this instance an API Gateway in front of our Lambda). But as we're using the Serverless framework, it'll take care of all of the heavy lifting.

First let's build our .NET Core Lambda:

$ sh build.sh

or if you're on Windows:

$ build.cmd

Next we'll deploy it.

Deploying Lambdas using the Serverless framework is performed using the deploy argument. In this instance we'll set the output to be verbose using the -v flag so we can see what Serverless framework is doing:

$ serverless deploy -v

Once completed you should see output similar to the following:

$ serverless deploy -v

Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
CloudFormation - UPDATE_IN_PROGRESS - 
...
CloudFormation - UPDATE_COMPLETE - AWS::CloudFormation::Stack - demo-dev
Serverless: Stack update finished...
Service Information
service: demo
stage: dev
region: us-east-1
api keys:
  None
endpoints:
  GET - https://b2353kdlcc.execute-api.us-east-1.amazonaws.com/dev/users/create
functions:
  hello: demo-dev-hello

Stack Outputs
HelloLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:082958828786:function:demo-dev-hello:2
ServiceEndpoint: https://b2353kdlcc.execute-api.us-east-1.amazonaws.com/dev
ServerlessDeploymentBucketName: demo-dev-serverlessdeploymentbucket-1o4sd9lppvgfv

Now if we were to log into our AWS account and navigate to the CloudFormation page in the region us-east-1 (see the console output) we'd see that the Serverless framework has taken care of all of the heavy lifting in spinning our stack up.

Let's navigate to the endpoint address returned in the console output which is where our Lambda can be reached.

If all went as expected we should be greeted with a successful response, awesome!

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 28
Connection: keep-alive
Date: Wed, 08 Nov 2017 02:51:33 GMT
x-amzn-RequestId: bb076390-c42f-11e7-89fc-6fcb7a11f609
X-Amzn-Trace-Id: sampled=0;root=1-5a027135-bc15ce531d1ef45e3eed7a9b
X-Cache: Miss from cloudfront
Via: 1.1 3943e81340bd903a74d536bc9599c3f3.cloudfront.net (CloudFront)
X-Amz-Cf-Id: ZDHCvVSR1DAPVUfrL8bU_IuWk3aMoAotdRKBjUIor16VcBPkIiNjNw==

{
  "Message": "Hello World"
}

In addition to invoking our Lambda manually via an HTTP request, we could also invoke it using the following Serverless framework command, where the -l flag will return any log output:

$ serverless invoke -f hello -l

{
    "statusCode": 200,
    "headers": {
        "Content-Type": "application/json"
    },
    "body": "{ \"Message\": \"Hello World\" }",
    "isBase64Encoded": false
}
--------------------------------------------------------------------
START RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3 Version: $LATEST
Example log entry

END RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3
REPORT RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3	Duration: 17.02 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 34 MB

Making modifications to our Lambda

At this point if we were to make any further code changes we'd have to re-run the build script (build.sh or build.cmd depending on your platform) followed by the Serverless framework deploy function command:

$ serverless deploy function -f hello

However if we needed to modify the serverless.yml file then we'd have to deploy our infrastructure changes via the deploy command:

$ serverless deploy -v

The difference being that the former is far faster as it only deploys the source code, whereas the latter will tear down your CloudFormation stack and stand it back up again, reflecting the changes made in your serverless.yml configuration.

Command recap

So, let's recap on the more important commands we've used:

Create our Lambda:

$ serverless create --template aws-csharp --name demo

Deploy our infrastructure and code (you must have built your Lambda beforehand using one of the build scripts)

$ serverless deploy -v

Or just deploy the changes to our hello function (again, we need to have built our Lambda as above)

$ serverless deploy function -f hello

At this point we can invoke our Lambda, where the -l flag indicates whether we want to include log output.

$ serverless invoke -f hello -l

If our functions were written in Python or Node then we could optionally use the invoke local command; however this isn't available for .NET Core.

$ serverless invoke local -f hello

Once finished with our demo function we can clean up after ourselves using the remove command:

$ serverless remove

Adding more Lambdas

Before wrapping up, imagine we wanted to add more Lambdas to our project. To do this we can simply add another function to the functions section of the serverless.yml configuration file:

From this:

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello
    events:
     - http:
         path: users/create
         method: get

To this:

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello
    events:
     - http:
         path: users/create
         method: get
  world:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler2::World
    events:
     - http:
         path: users/
         method: get

At this point all we'd need to do is create a new Handler class (for the sake of this demonstration I called it Handler2.cs) and make sure we set the handler property in our serverless.yml configuration appropriately. We'd also need to ensure we name our new Handler2 function World so as to match the handler address in our serverless configuration.

As the additional function will require its own set of infrastructure we would need to run the build script and then use the following command to regenerate our stack:

$ serverless deploy -v

Once deployed we're able to navigate to our second function as we did our first.

We can also deploy our functions independently of one another by supplying the appropriate function name when executing the deploy function command:

$ serverless deploy function -f world

Conclusion

Hopefully this post has given you an idea as to how the Serverless framework can help you develop and manage your functions, whether using Azure, AWS, Google or any of the other providers supported.

If you're interested in learning more about the Serverless framework then I'd highly recommend checking out their documentation (which is plentiful and very well written).

REST Client for VS Code, an elegant alternative to Postman

Posted on Wednesday, 18 Oct 2017

For some time now I've been a huge proponent of Postman; working in an environment that has a large number of remote services meant Postman's ease of generating requests, and its ability to manage collections, view historic requests and so forth, made it my go-to tool for hand-crafted HTTP requests. However, there have always been features I felt were missing. One such feature was the ability to copy and paste a raw RFC 2616 compliant HTTP request (including request method, headers and body) directly into Postman and fire it off without the need to manually tweak it. This led me to a discussion on Twitter where Darrel Miller recommended I check out the REST Client extension for Visual Studio Code.

REST Client for Visual Studio Code

After installing REST Client the first thing I noticed was how elegant it is. Simply create a new tab, paste in your raw HTTP request (ensuring the tab's Language Mode is set to either HTTP or Plaintext, more on this later) and in no time at all you'll see a "Send Request" button appear above your HTTP request, allowing you to execute the request as is - no further modifications are required to tell REST Client how to parse or format it.

Features

To give you a firm grasp of why you should consider adding REST Client to your tool chain, here are a few of the features that particularly stood out to me, organised in an easily consumable list format, because we all like lists:

No BS request building

The easiest form of an HTTP request you can send is to paste in a normal HTTP GET URL like so:

https://example.com/comments/1

Note: You can either paste your requests into a Plaintext window, where you'll need to highlight the request and press the Send Request keyboard shortcut (Ctrl+Alt+R for Windows, or Cmd+Alt+R for macOS), or set the tab's Language Mode to HTTP, where a "Send Request" button will appear above the HTTP request.

If you want more control over your request then a raw HTTP request will do:

POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    "name": "sample",
    "time": "Wed, 21 Oct 2015 18:27:50 GMT"
}

Once loaded you'll see the response appear in a separate pane. A nice detail that I really liked is the ability to hover my cursor over the request timer and get a breakdown of duration details, including times surrounding Socket, DNS, TCP, First Byte and Download.

Saving requests as collections for later use is as simple as a plain text .http file

Following the theme of low-friction elegance, it's nice that requests saved for later use (or for checking into your component's source control) are simple plain text documents with an .http file extension.

Breakdown of requests

One of the gripes I had with Postman was requests being separated by tabs. If you had a number of requests you were working with, they'd quickly get lost amongst the many other tabs I tend to have open.

REST Client doesn't suffer the same fate, as requests can be grouped within your documents and separated by comments - lines of three or more hash characters (#) are treated by REST Client as delimiters between requests.
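
For example, a single .http file can hold several requests delimited like so (reusing the requests from earlier):

GET https://example.com/comments/1 HTTP/1.1

###

POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    "name": "sample",
    "time": "Wed, 21 Oct 2015 18:27:50 GMT"
}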

Environments and Variables

REST Client has a concept of Environments and Variables, meaning if you work with different environments (i.e. QA, Staging and Production), you can easily switch between environments configured in the REST Client settings (see below), changing the set of variables used without having to modify the requests.

Environments

"rest-client.environmentVariables": {
    "local": {
        "host": "localhost",
        "token": "test token"
    },
    "production": {
        "host": "example.com",
        "token": "product token"
    }
}

Variables

Variables, on the other hand, allow you to simply define values in your document and reference them throughout.

@host = localhost:5000
@token = Bearer e975b15aa477ee440417ea069e8ef728a22933f0

GET https://{{host}}/api/comments/1 HTTP/1.1
Authorization: {{token}}

It's not Electron

I have nothing against Electron, but it's known to be a bit of a resource hog, so much so that I rarely leave Postman open between sessions, whereas I've always got VS Code open (one Electron process is enough), meaning it's far easier to dip into to test a few requests.

Conclusion

This post is just a brief overview of some of the features in REST Client. If you're open to trying an alternative to your current HTTP request tool then I'd highly recommend you check it out - you can read more about it on the REST Client GitHub page.