Latest Blog Posts

C# 7 ValueTuple types and their limitations

Posted on Thursday, 20 Apr 2017

Having been experimenting with and reading lots about C# 7's tuples recently, I thought I'd summarise some of the more interesting aspects of this new C# language feature whilst highlighting some areas and limitations that may not be apparent when first using them.

Hopefully by the end of this post you'll have a firm understanding of the new feature and how it differs from the Tuple type introduced in .NET 4.

A quick look at tuple types as a language feature

Prior to C# 7, .NET's tuples were an awkward, somewhat retrofitted approach to what's a powerful language feature. As a result you don't see them used as much as they are in other languages such as Python or, to some extent, Go (which doesn't support tuples, but has many features they provide, such as multiple return values). With this in mind it behoves me to briefly explain what tuples are and why you'd use them, for those that may not have touched them before.

So what are tuples and where would you use them?

The tuple type's main strength lies in allowing you to group values into a closely related data structure (much like creating a class to represent more than one value). This makes tuples particularly useful in cases such as returning more than one value from a method, for instance:

public class ValidationResult {
    public string Message { get; set; }
    public bool IsValid { get; set; }
}

var result = ValidateX(x);
if (!result.IsValid)
{
    Logger.Log(Error, result.Message);
}

Whilst there's nothing wrong with this example, sometimes we don't want to have to create a type just to represent a set of data - we want types to work for us, not against us; this is where the tuple type's utility lies.

In fact, in a language such as Go (which allows multiple return values from a function) we see this pattern used extensively throughout the standard library.

bytes, err := ioutil.ReadFile("file.json")
if err != nil {
    log.Fatal(err)
}

Multiple return values can also save you from needing to use the somewhat convoluted TryParse method pattern with out parameters.
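For example, instead of the familiar out parameter dance you could return the success flag and the parsed value together. Here's a quick sketch using the C# 7 syntax covered later in this post (TryParseNumber is just a hypothetical helper for illustration):

var input = "42";

// The out parameter approach: the caller has to declare a variable up front
if (int.TryParse(input, out var parsed))
{
    Console.WriteLine(parsed);
}

// A tuple-returning alternative
(bool Success, int Value) TryParseNumber(string text)
{
    var success = int.TryParse(text, out var value);
    return (success, value);
}

var result = TryParseNumber(input);
if (result.Success)
{
    Console.WriteLine(result.Value); // 42
}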

Now we've got that covered and we're all on the same page, let's continue.

In the beginning there was System.Tuple<T1, T2, .., T7>

Verbosity
Back in .NET 4 we saw the appearance of the System.Tuple<T1, T2, ...> classes, which introduced a verbose and somewhat awkward API:

var person = new Tuple<string, int>("John Smith", 43);
Console.WriteLine($"Name: {person.Item1}, Age {person.Item2}");
// Output: Name: John Smith, Age: 43

Alternatively there's a static factory method that cleared things up a bit:

var person = Tuple.Create("John Smith", 43);

But there was still room for improvement such as:

No named elements
One of the weaknesses of the System.Tuple type is that you have to refer to your elements as Item1, Item2 and so on, instead of by name - unlike in Python, for example, where you can unpack a tuple and reference its values directly:

name, age = person_tuple
print(name)

Garbage collection pressure
In addition the System.Tuple type is a reference type, meaning you pay the penalty of a heap allocation, thus increasing pressure on the garbage collector.

public class Tuple<T1> : IStructuralEquatable, IStructuralComparable, IComparable

Nonetheless the System.Tuple type scratched an itch and solved a problem, especially if you owned the API.

C# 7 Tuples to the rescue

With the introduction of the System.ValueTuple type in C# 7, a lot of these problems have been solved (it's worth mentioning that if you want to use or play with the new tuple type you're going to need to install the System.ValueTuple NuGet package).

Now in C# 7 you can do such things as:

Tuple literals

// This is awesome and really clears things up; we can even directly reference the named value!

var person = (Name: "John Smith", Age: 43);
Console.WriteLine(person.Name); // John Smith

Tuple (multiple) return types

(string, int) GetPerson()
{
    var name = "John Smith";
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Item1); // John Smith

Even named Tuple return types!

(string Name, int Age) GetPerson()
{
    var name = "John Smith";
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Name); // John Smith

If that wasn't enough you can also deconstruct types:

public class Person
{
    public string Name => "John Smith";
    public int Age => 43;

    public void Deconstruct(out string name, out int age)
    {
        name = Name;
        age = Age;
    }
}

...

var person = new Person();
var (name, age) = person;

Console.WriteLine(name); // John Smith

As you can see the System.ValueTuple greatly improves on the older version, allowing you to write far more declarative and succinct code.

It's a value type, baby!
In addition (if the name hadn't given it away!) C# 7's tuple type is now a value type, meaning there's no heap allocation and one less de-allocation to worry about when compacting the GC heap. This means the ValueTuple can be used in more performance-critical code.

Now going back to our original example where we created a type to represent the return value of our validation method, we can delete that type (because deleting code is always a great feeling) and clean things up a bit:

var (message, isValid) = ValidateX(x);
if (!isValid)
{
    Logger.Log(Log.Error, message);
}

Much better! We've now got the same code without the need to create a separate type just to represent our return value.

C# 7 Tuple's limitations

So far we've looked at what makes the ValueTuple special, but in order to know the full story we should look at what limitations exist, so we can make an educated decision on when and where to use them.

Let's take the same person tuple and serialise it to a JSON object. With our named elements we should expect to see an object that resembles our tuple.

var person = (Name: "John", Last: "Smith");
var result = JsonConvert.SerializeObject(person);

Console.WriteLine(result);
// {"Item1":"John","Item2":"Smith"}

Wait, what? Where have our keys gone?

To understand what's going on here we need to take a look at how ValueTuples work.

How the C# 7 ValueTuple type works

Let's take our GetPerson method example that returns a named tuple and check out the de-compiled source. There's no need to install a de-compiler for this; a really handy website called tryroslyn.azurewebsites.net will do everything we need.

// Our code
using System;
public class C {
    public void M() {
        var person = GetPerson();
        Console.WriteLine(person.Name + " is " + person.Age);
    }
    
    (string Name, int Age) GetPerson()
    {
        var name = "John Smith";
        var age = 32;
        return (name, age);
    }
}

You'll see that when de-compiled, the GetPerson method is simply syntactic sugar for the following:

// Our code de-compiled
public class C
{
    public void M()
    {
        ValueTuple<string, int> person = this.GetPerson();
        Console.WriteLine(person.Item1 + " is " + person.Item2);
    }
    [return: TupleElementNames(new string[] {
        "Name",
        "Age"
    })]
    private ValueTuple<string, int> GetPerson()
    {
        string item = "John Smith";
        int item2 = 32;
        return new ValueTuple<string, int>(item, item2);
    }
}

If you take a moment to look over the de-compiled source you'll see two areas of particular interest to us:

First of all, the references to our named elements in the Console.WriteLine() call have gone, replaced with Item1 and Item2. So what's happened to our named elements? Looking further down the code you'll see they've actually been pulled out and added via the TupleElementNames attribute.

...
[return: TupleElementNames(new string[] {
    "Name",
    "Age"
})]
...

This is because the ValueTuple type's named elements are erased at compile time, meaning there's no runtime representation of them. In fact, if we were to view the IL (within the TryRoslyn website, switch the Decompiled dropdown to IL), you'll see that any mention of our named elements has completely vanished!

IL_0000: nop // Do nothing (No operation)
IL_0001: ldarg.0 // Load argument 0 onto the stack
IL_0002: call instance valuetype [System.ValueTuple]System.ValueTuple`2<string, int32> C::GetPerson() // Call method indicated on the stack with arguments
IL_0007: stloc.0 // Pop a value from stack into local variable 0
IL_0008: ldloc.0 // Load local variable 0 onto stack
IL_0009: ldfld !0 valuetype [System.ValueTuple]System.ValueTuple`2<string, int32>::Item1 // Push the value of field of object (or value type) obj, onto the stack
IL_000e: ldstr " is " // Push a string object for the literal string
IL_0013: ldloc.0 // Load local variable 0 onto stack
IL_0014: ldfld !1 valuetype [System.ValueTuple]System.ValueTuple`2<string, int32>::Item2 // Push the value of field of object (or value type) obj, onto the stack
IL_0019: box [mscorlib]System.Int32 // Convert a boxable value to its boxed form
IL_001e: call string [mscorlib]System.String::Concat(object, object, object) // Call method indicated on the stack with arguments
IL_0023: call void [mscorlib]System.Console::WriteLine(string) // Call method indicated on the stack with arguments
IL_0028: nop  // Do nothing (No operation)
IL_0029: ret  // Return from method, possibly with a value

So what does that mean for us as developers?

No reflection on named elements

The absence of named elements in the compiled output means that it's not possible to retrieve those names via reflection, which limits the ValueTuple's utility.

This is because, under the bonnet, the compiler erases the named elements and reverts to the Item1 and Item2 fields, meaning our serialiser never has access to the names.
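One simple workaround, assuming you control the serialisation call and don't mind mapping the values by hand, is to project the tuple onto an anonymous type, whose property names do survive compilation:

var person = (Name: "John", Last: "Smith");
var result = JsonConvert.SerializeObject(new { person.Name, person.Last });

Console.WriteLine(result);
// {"Name":"John","Last":"Smith"}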

I would highly recommend reading Marc Gravell's Exploring tuples as a library author post where he discusses a similar hurdle when trying to use tuples within Dapper.

No dynamic access to named elements

This also means that casting your tuple to a dynamic object results in the loss of the named elements, as can be witnessed by running the following example:

var person = (Name: "John", Last: "Smith");
var dynamicPerson = (dynamic)person;
Console.WriteLine(dynamicPerson.Name);

Running this results in the following RuntimeBinderException:

Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException: 'System.ValueTuple<string,string>' does not contain a definition for 'Name'
   at CallSite.Target(Closure , CallSite , Object )
   at CallSite.Target(Closure , CallSite , Object )
   at TupleDemo.Program.Main(String[] args) in /Users/josephwoodward/Dev/TupleDemo/Program.cs:line 16

Thanks to Daniel Crabtree's post for highlighting this!
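It's only the friendly names that are erased though; the underlying Item1 and Item2 fields are still there at runtime, so dynamic access via those names works as you'd expect:

var person = (Name: "John", Last: "Smith");
var dynamicPerson = (dynamic)person;
Console.WriteLine(dynamicPerson.Item1); // John
Console.WriteLine(dynamicPerson.Item2); // Smith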

No using named Tuples in Razor views either (unless they're declared in your view)

Naturally the name erasure in C# 7 tuples also means that you cannot use the names exposed by your view models within your views. For instance:

public class ExampleViewModel {

    public (string Name, int Age) Person => ("John Smith", 30);

}
public class HomeController : Controller
{
    ...
    public IActionResult About()
    {
        var model = new ExampleViewModel();

        return View(model);
    }
}
// About.cshtml
@model TupleDemo3.Models.ExampleViewModel

<h1>Hello @Model.Person.Name</h1>

Results in the following error:

'ValueTuple<string, int>' does not contain a definition for 'Name' and no extension method 'Name' accepting a first argument of type 'ValueTuple<string, int>' could be found (are you missing a using directive or an assembly reference?)

Though switching the print statement to @Model.Person.Item1 outputs the result you'd expect.
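If you do need friendly names in a Razor view, one workaround (just a sketch of one option, not the only approach) is to keep the tuple as an implementation detail and expose the values as ordinary properties on the view model, which Razor can bind to by name:

public class ExampleViewModel
{
    private (string Name, int Age) Person => ("John Smith", 30);

    public string PersonName => Person.Name;
    public int PersonAge => Person.Age;
}

// About.cshtml
// <h1>Hello @Model.PersonName</h1>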

Conclusion

That's enough about tuples for now. Some of the examples used in this post aren't approaches you'd use in real life, but hopefully they go some way to demonstrating the limitations of the new type and where you can and can't use C# 7's new ValueTuple.

Setting up a local Selenium Grid using Docker and .NET Core

Posted on Monday, 20 Mar 2017

Since jumping on the Docker bandwagon I've found its utility spans beyond the repeatable deployments and consistent runtime environments that come with the use of containers. There's a whole host of tooling and use cases emerging which take advantage of containerisation technology; one use case that I recently discovered, after a conversation with Phil Jones on Twitter, is the ability to quickly set up a Selenium Grid.

Setting up and configuring a Selenium Grid has never been a simple process, but thanks to Docker it's suddenly got a whole lot easier. In addition, you're now able to run your own Selenium Grid locally and greatly speed up your tests' execution. If that isn't enough, another benefit is that because the tests execute inside a Docker container, your browser is no longer tied up navigating the website you're testing!

Let's take a look at how this can be done.

Note: For the impatient, I've put together a working example of the following post in a GitHub repository you can clone and run.

Selenium Grid Docker Compose file

For those that haven't touched Docker Compose (or Docker for that matter), a Docker Compose file is a YAML-based configuration document (often named docker-compose.yml) that allows you to configure your application's Docker environment.

Without Docker Compose you'd need to manually build and run your individual Dockerfiles, specifying their network connections and configuration parameters along the way. With Docker Compose you can configure everything in a single file and start your environment with a simple docker-compose up command.

Below is the Selenium Grid Docker Compose configuration you can copy and paste:

# docker-compose.yml

version: '2'
services:
    selenium_hub:
        image: selenium/hub:3.0.1-aluminum
        container_name: selenium_hub
        privileged: true
        ports:
            - 4444:4444
        environment:
            - GRID_TIMEOUT=120000
            - GRID_BROWSER_TIMEOUT=120000
        networks:
            - selenium_grid_internal

    nodechrome1:
        image: selenium/node-chrome-debug:3.0.1-aluminum
        privileged: true
        depends_on:
            - selenium_hub
        ports:
            - 5900
        environment:
            - no_proxy=localhost
            - TZ=Europe/London
            - HUB_PORT_4444_TCP_ADDR=selenium_hub
            - HUB_PORT_4444_TCP_PORT=4444
        networks:
            - selenium_grid_internal

    nodechrome2:
        image: selenium/node-chrome-debug:3.0.1-aluminum
        privileged: true
        depends_on:
            - selenium_hub
        ports:
            - 5900
        environment:
            - no_proxy=localhost
            - TZ=Europe/London
            - HUB_PORT_4444_TCP_ADDR=selenium_hub
            - HUB_PORT_4444_TCP_PORT=4444
        networks:
            - selenium_grid_internal

networks:
    selenium_grid_internal:

In the above Docker Compose file we've defined our Selenium Hub (selenium_hub) service, exposing it on port 4444 and attaching it to a custom network named selenium_grid_internal (which you'll see all of our nodes are on).

selenium_hub:
    image: selenium/hub:3.0.1-aluminum
    container_name: selenium_hub
    privileged: true
    ports:
        - 4444:4444
    environment:
        - GRID_TIMEOUT=120000
        - GRID_BROWSER_TIMEOUT=120000
    networks:
        - selenium_grid_internal

All that's remaining at this point is to add our individual nodes. In this instance I've added two Chrome based nodes, named nodechrome1 and nodechrome2:

nodechrome1:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    privileged: true
    depends_on:
        - selenium_hub
    ports:
        - 5900
    environment:
        - no_proxy=localhost
        - TZ=Europe/London
        - HUB_PORT_4444_TCP_ADDR=selenium_hub
        - HUB_PORT_4444_TCP_PORT=4444
    networks:
        - selenium_grid_internal

nodechrome2:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    ...

Note: If you wanted to add Firefox to the mix then you can replace the image: value with the following Docker image:

nodefirefox1:
    image: selenium/node-firefox-debug:3.0.1-aluminum
    ...

Now if we run docker-compose up, our Selenium Grid environment will spring into action.

To verify everything is working correctly we can navigate to http://0.0.0.0:4444 in our browser, where we should be greeted with the Selenium Grid console.

Connecting to Selenium Grid from .NET Core

At the time of writing this post the official Selenium NuGet package does not support .NET Standard; however, there's a pending pull request which adds support (the pull request has been on hold for a while as the Selenium team wanted to wait for the tooling to stabilise). In the meantime the developer that added support has released it as a separate NuGet package which can be downloaded here.

Alternatively just create the following .csproj file and run the dotnet restore CLI command.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0-preview-20170123-02" />
    <PackageReference Include="xunit" Version="2.2.0-beta5-build3474" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0-beta5-build1225" />
    <PackageReference Include="CoreCompat.Selenium.WebDriver" Version="3.2.0-beta003" />
  </ItemGroup>

</Project>

Next we'll create the following base class that will create a remote connection to our Selenium Grid:

public abstract class BaseTest
{
    private IWebDriver _driver;

    public IWebDriver GetDriver()
    {
        var capability = DesiredCapabilities.Chrome();
        if (_driver == null){
            _driver = new RemoteWebDriver(new Uri("http://0.0.0.0:4444/wd/hub/"), capability, TimeSpan.FromSeconds(600));
        }

        return _driver;
    }
}

After that we'll create a very simple (and trivial) test that checks for the existence of an ID on google.co.uk.

public class UnitTest1 : BaseTest 
{

    [Fact]
    public void TestForId()
    {
        using (var driver = GetDriver())
        {
            driver.Navigate().GoToUrl("http://www.google.co.uk");
            var element = driver.FindElement(By.Id("lst-ib"));
            Assert.True(element != null);
        }
    }

    ...

}

Now if we run our test (either via the dotnet test CLI command or from the editor of your choice) we should see our Docker terminal console show the Selenium Grid containers jump into action as the test executes on one of the registered Selenium Grid nodes.

At the moment we're only executing the one test so you'll only see one node running it, but as you start to add more tests across multiple classes the Selenium Grid hub will start to distribute those tests across its cluster of nodes, dramatically reducing your overall test execution time.
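It's worth noting that xUnit 2 runs test classes in parallel by default (each class forms its own test collection), so simply spreading your tests across classes is enough for the hub to start farming them out to different nodes. A hypothetical second test class, purely for illustration, could look like this:

public class UnitTest2 : BaseTest
{
    [Fact]
    public void TestForSearchBox()
    {
        using (var driver = GetDriver())
        {
            driver.Navigate().GoToUrl("http://www.google.co.uk");

            // The Google homepage search box has the name "q"
            var element = driver.FindElement(By.Name("q"));
            Assert.True(element != null);
        }
    }
}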

If you'd like to give this a try then I've added all of the source code and Docker Compose file in a GitHub repository that you can clone and run.

The drawbacks

Before closing there are a few drawbacks to this method of running tests, especially if you're planning on doing it locally (instead of setting a grid up on a Virtual Machine via Docker).

Debugging is made harder
If you're planning on using Selenium Grid locally then you'll lose visibility of what's happening in the browser, as the tests run within a Docker container. This means that in order to see the state of the web page during a failing test you'll need to switch to local execution using the Chrome, Firefox or Internet Explorer driver.

Reaching localhost from within a container
In this example we're executing the tests against an external domain (google.co.uk) that our container can resolve. However, if you're planning on running tests against a local development environment then some additional Docker configuration is required to allow the container to reach the Docker host's IP address.

Conclusion

Hopefully this post has broadened your options around Selenium based testing and demonstrated how pervasive Docker is becoming. I'm confident that as Docker (and other container technology for that matter) matures, we'll see it used for more and more use cases like the one covered in this post.

An in-depth look at the various ways of specifying the IP or host ASP.NET Core listens on

Posted on Friday, 10 Feb 2017

Recently I've been working on an ASP.NET Core application where I've needed to configure, at runtime, the host address and port the web host will listen on. For those that have built an ASP.NET Core app, you'll know that the default approach generated by the .NET CLI is less than ideal in this case as it's hard-coded.

After a bit of digging I quickly realised there weren't any places that summarised all of the options available, so I thought I'd summarise it all in a post.

Enough talk, let's begin.

Don't set an IP

The first approach is to not specify any IP address at all (this means removing the .NET Core CLI template convention of using the .UseUrls() method). Without it the web host will listen on localhost:5000 by default.

Whilst this approach is far from ideal, it is an option so deserves a place here.
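For completeness, a minimal Program.cs using this approach might look something like the following - essentially the CLI template with the .UseUrls() call removed (a sketch, assuming the same Startup class used throughout this post):

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    // No UseUrls() call, so the web host falls back to listening on localhost:5000
    host.Run();
}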

Hard-coded approach via .UseUrls()

As I alluded to earlier, the default approach that .NET Core's CLI uses is to hard-code the IP address in your application's Program.cs file via the UseUrls(...) extension method that's available on the IWebHostBuilder interface.

If you take a look at the UseUrls extension method's signature you'll see the argument is a params string array, allowing you to specify more than one address for the web host to listen on. Depending on your development machine, network configuration or preference this can be handy, as specifying more than one host address saves people running into issues between localhost, 0.0.0.0 and 127.0.0.1.

public static IWebHostBuilder UseUrls(this IWebHostBuilder hostBuilder, params string[] urls);

Multiple addresses can be supplied either as separate string arguments, or as a single string with the addresses separated by a semi-colon; both result in the same configuration.

var host = new WebHostBuilder()
    .UseConfiguration(config)
    .UseKestrel()
    .UseUrls("http://0.0.0.0:5000", "http://localhost:5000")
    // .UseUrls("http://0.0.0.0:5000;http://localhost:5000") also works
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

If you'd rather not explicitly list every IP address to listen on, you can use a wildcard instead, resulting in the web host binding to all IPv4 and IPv6 addresses on port 5000:

.UseUrls("http://*:5000")

A bit about wildcards: the wildcard is not special in any way; in fact anything not recognised as an IP address will be bound to all IPv4 or IPv6 addresses, so http://@£$%^&*:5000 is considered the same as "http://*:5000" and vice versa.

Whilst this hard-coded approach makes it easy to get your application up and running, the very fact that it's hard-coded does make it difficult to configure externally via automation such as a continuous integration/deployment pipeline.

Note: It's worth mentioning that setting a binding address directly on the WebHost, as we are in this approach, always takes precedence over any of the other approaches listed in this post - but we'll go into this later.

Environment variables

You can also specify the IP address your application listens on via an environment variable. To do this you'll first need to add the Microsoft.Extensions.Configuration.EnvironmentVariables package from NuGet, then call the AddEnvironmentVariables() extension method on your ConfigurationBuilder object like so:

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .AddEnvironmentVariables()
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Now if you set the following environment variable and run your application, it will listen on the address specified:

ASPNETCORE_URLS=https://*:5123

Command line argument

Another option available is to supply the host name and port via a command line argument when your application is executed (notice how you can specify one or more addresses, just as we did above):

dotnet run --urls "http://*:5000;http://*:6000"

or

dotnet YourApp.dll --urls "http://*:5000;http://*:6000"

Before you can use command line arguments, you're going to need the Microsoft.Extensions.Configuration.CommandLine package and update your Program.cs bootstrap configuration accordingly:

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Notice how I've removed the .UseUrls() call; this prevents UseUrls() from overwriting the address provided via the command line.

hosting.json approach

Another popular approach to specifying your host and port address is to read the address from a .json file during application start-up. Whilst you can name your configuration file anything, the common approach appears to be hosting.json, with the contents of the file containing the address you want your application to listen on:

{
  "urls": "http://*:5000"
}

In order to use this approach you're first going to need to include the Microsoft.Extensions.Configuration.Json package, allowing you to load configurations via .json documents.

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("hosting.json", optional: true)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Now when you run the dotnet run or dotnet YourApp.dll command you'll notice the output reflects the address specified within your hosting.json document.

Just a reminder that before publishing your application you should be sure to include your hosting file in your publish options (in either your project.json or your .csproj file):

// project.json

"publishOptions": {
    "include": [
      "wwwroot",
      "Views",
      "appsettings.json",
      "web.config",
      "hosting.json"
    ]
}
// YourApp.csproj

<ItemGroup>
  <Content Update="wwwroot;Views;appsettings.json;web.config;hosting.json">
    <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
  </Content>
</ItemGroup>

Out of all of the approaches available this has to be my preferred option. It's simple enough to overwrite or modify within your test/release pipeline, whilst removing the hurdles co-workers may otherwise need to jump through when they want to download your source code and run the application (as opposed to the command line approach).

Order of preference

When it comes to the order in which the addresses are applied, I would recommend you check out the documentation here, especially this snippet:

You can override any of these environment variable values by specifying configuration (using UseConfiguration) or by setting the value explicitly (using UseUrls for instance). The host will use whichever option sets the value last. For this reason, UseIISIntegration must appear after UseUrls, because it replaces the URL with one dynamically provided by IIS. If you want to programmatically set the default URL to one value, but allow it to be overridden with configuration, you could configure the host as follows:

var config = new ConfigurationBuilder()
   .AddCommandLine(args)
   .Build();

var host = new WebHostBuilder()
   .UseUrls("http://*:1000") // default URL
   .UseConfiguration(config) // override from command line
   .UseKestrel()
   .Build();

Conclusion

Hopefully this post has helped you gain a better understanding of the many options available when configuring which address you want your application to listen on; writing it has certainly helped me cement them in my mind!

C# IL Viewer for Visual Studio Code using Roslyn side project

Posted on Monday, 30 Jan 2017

For the past couple of weeks I've been working on an IL (Intermediate Language) viewer for Visual Studio Code. As someone that develops on a Mac, I spend a lot of time writing C# in VS Code or JetBrains' Rider editor - however neither of them has the ability to view the generated IL (I know JetBrains are working on this for Rider), so I set out to fix this problem as a side project.

As someone that's never written a Visual Studio Code extension before, it was a bit of an ambitious first extension, but enjoyable none the less.

Today I released the first version of the IL Viewer (0.0.1) to the Visual Studio Code Marketplace so it's available to download and try via the link below:

Download IL Viewer for Visual Studio Code

Download C# IL Viewer for Visual Studio Code or install it directly within Visual Studio Code by launching Quick Open (CMD+P on Mac or CTRL+P on Windows), pasting in the following command and pressing enter:

ext install vscodeilviewer

The source code is all up on GitHub so feel free to take a look, but be warned - it's a little messy right now as it was hacked together to get it working.

VS Code IL Viewer

For those interested in how it works, continue reading.

How does it work?

Visual Studio Code and HTTP Service

At its heart, Visual Studio Code is a glorified text editor. C# support is added via the hard work of the OmniSharp developers, which itself is backed by Roslyn. This means that in order to add any IL inspection capabilities I needed to either hook into OmniSharp or build my own external service that gets bundled within the extension. In the end I decided to go with the latter.

When Visual Studio Code loads and detects that the language is C# and that a project.json file exists, it starts an external HTTP service (built with Web API) which is bundled within the extension.

Moving forward I intend to switch this out for a console application communicating over stdin and stdout. This should speed up the overall responsiveness of the extension whilst reducing the resources required, and more importantly reduce the start-up time of the IL viewer.

Inspecting the Intermediate Language

Initially I planned on making the Visual Studio Code IL Viewer extract the desired file's IL directly from its project's built DLL, however after a little experimentation this proved not to be ideal, as it required the solution to be built in order to view the IL - and built again for any inspections after changes, no matter how minor. It would also block the user from doing any work whilst the project was building.

In the end I settled on an approach that builds just the .cs file you wish to inspect into an in memory assembly then extracts the IL and displays it to the user.

Including external dependencies

One problem with compiling just the source code in memory is that it doesn't include any external dependencies. As you'd expect, as soon as Roslyn encounters a reference to an external binary you get a compilation error. Luckily Roslyn has the ability to automatically include external dependencies via its workspace API.

public static Compilation LoadWorkspace(string filePath)
{
    var projectWorkspace = new ProjectJsonWorkspace(filePath);

    var project = projectWorkspace.CurrentSolution.Projects.FirstOrDefault();
    var compilation = project.GetCompilationAsync().Result;

    return compilation;
}

After that the rest was relatively straightforward: I grab the syntax tree from the project's compilation unit, load in any additional dependencies, then use Mono's Cecil library to extract the IL (as Cecil supports .NET Standard it does not require the Mono runtime).

Once I have the IL I return the contents as an HTTP response and display it to the user in Visual Studio Code's split pane window.

Below is a simplified diagram of how it's all tied together:

IL Viewer

As I mentioned above, the source code is all available on GitHub so feel free to take a look. Contributions are also very welcome!

Year in review and looking at 2017

Posted on Tuesday, 10 Jan 2017

It's that time of the year again where I take a moment to reflect on the previous year and set some high level goals as to what I want to achieve or aim towards in the next.

I find these types of posts really helpful for tracking progress and ensuring I stay on a positive trajectory as a software developer/engineer - a hobby that I'm lucky enough to have as a career.

Review of 2016

Last year was an amazing year for a couple of reasons, some planned and some not so. So, without further ado let's look at last year's personal targets and goals to see how I've done.

Goal 1: Speak at more conferences and meet-ups

I'm inclined to say I've smashed this. At the time of writing last year's post (February 2016) my speaking activities included relatively small user groups and meet ups. Since that post I've spoken at a few more meet ups but also at some rather large conferences which has been amazing.

The talks include:

  • Why software engineering is a great career choice
    TechSpark Graduates Conference 2016 - 1st December 2016

  • Going cross-platform with ASP.NET Core
    BrisTech 2016 - 3rd November 2016

  • Going cross-platform with ASP.NET Core
    Tech Exeter Conference - 8th October 2016

  • Building rich client-side applications using Angular 2
    DDD 11 (Reading) - 3rd September 2016

  • .NET Rocks Podcast - Angular 2 CLI (Command Line Interface)
    24th August 2016

  • Angular 2
    BristolJS - May 25th, 2016

  • Angular 2
    Taunton Developers' Meetup - June 9th, 2016

It's a really great feeling reflecting on my very first talk (a lightning talk on TypeScript) and how nervous I was in comparison to now. Don't get me wrong, the nerves are still there, but as many will know - you're just able to cope with them better.

All in all I'm extremely satisfied with how far I've progressed in this area and how much my confidence speaking in public has grown. This is certainly something I wish to continue doing in 2017.

Goal 2: Start an Exeter based .NET User Group

At the time of setting this goal I was working in Exeter, where there was no .NET focused user group - something that surprised me given the number of .NET specific jobs in the city.

To cut a long story short, I'd started an Exeter .NET user group and was in the process of organising the first meet up when I got a job opportunity at Just Eat in Bristol, and ended up taking over the Bristol based .NET South West user group, as the organiser was stepping down and closing the rather large, well-established group. Having been to a couple of the group's meet ups it was a shame to see it end, and given that I was now working in Bristol I decided to step forward and take it over along with a couple of the other members.

Since then we (the other organisers and I) have been really active in keeping .NET South West alive and well, organising a range of speakers on a variety of .NET related topics.

Goal 3: Continue contributing to the .NET OSS ecosystem

This year I've created a number of small open-source libraries and projects (an Angular 2 piano sight reading game being the one that's received the most attention). However, whilst I've been contributing on and off to other libraries, I don't feel my contributions have been at a level that I'm happy with, so this will be a goal for 2017.

Bonus Goal - F#

Making a start on learning F# was one of the bonus goals I was hoping to achieve during 2016; however, other than making a few changes to a couple of lines of F#, this was a no-go.

In all honesty, I'm still on the fence as to whether I want to learn F# - my only motivation being to learn a functional language (growing my knowledge and thinking in a different paradigm); whereas there are other languages I'm interested in learning that aren't necessarily tied to the .NET ecosystem.

Other notable events and achievements in 2016

In addition to the aforementioned goals and achievements in 2016, there have also been others.

  • New job as a .NET software engineer at Just Eat in Bristol - a seriously awesome company that has a lot of really talented developers and interesting problems to work on.
  • Co-organiser of DDD South West conference
  • Heavy investment of time in learning .NET Core and Docker
  • Became co-organiser of the .NET South West user group

Goals for 2017

With a review of 2016 out of the way let's take a quick look at plans for 2017.

Goal 1: Continue to grow as a speaker, speaking at larger events

I've really loved speaking at meet ups and conferences; it's a truly rewarding experience on both a personal and professional development level. There's very little more satisfying in life than pushing yourself outside of your comfort zone, so in 2017 I'm really keen to continue to pursue this by talking at larger conferences and events.

Goal 2: More focus on contributing to open-source projects

Whilst I'm satisfied with my contributions to the open-source world, through personal projects and contributions to other projects, it's definitely an area I would like to continue to pursue. So in 2017 I'm looking for larger projects I can invest in and contribute to on a long-term basis.

Goal 3: Learn another language

Whereas I previously set myself the 2016 goal of learning F#, this time around I'm going to keep my options open. I've recently been learning a little Go, but I'm also really interested in Rust, so this year I'm simply going to set a goal of learning a new language. As it stands it looks like it's between Go and Rust, with F# still being a possibility.

Conclusion

Overall it's been a great year. I'm really keen to keep the pace up on public speaking as it's far too easy to rest on one's laurels, so here's to 2017 and the challenges it brings!

In-memory C# compilation (and .dll generation) using Roslyn

Posted on Wednesday, 28 Dec 2016

Recently I've been hard at work on my first Visual Studio Code extension, one of the requirements of which is to extract IL from a .dll binary. This introduced a question though: do I build the solution (blocking the user whilst their project is building), read the .dll from disk and then extract the IL, or do I compile the project in memory behind the scenes and extract the IL from the resulting in-memory assembly? Ultimately I went with the latter approach and was pleasantly surprised at how easy Roslyn makes this - surprised enough that I thought it deserved its own blog post.

Before we continue, let me take a moment to explain what Roslyn is for those that may not fully understand what it is.

What is Roslyn?

Roslyn is an open-source C# and VB compiler as a service platform.

The key words to take away with you here are "compiler as a service"; let me explain.

Traditionally compilers have been a black box of secrets that are hard to extend or harness, especially for tooling or code analysis purposes. Take ReSharper for instance; ReSharper has a lot of code analysis running under the bonnet that allows it to offer refactoring advice. In order to provide this, the ReSharper team had to build their own analysis tools that would manually parse your solution's C# in line with the .NET runtime - the .NET platform provided no assistance with this, essentially meaning they had to duplicate a lot of the work the compiler was doing.

This has since changed with the introduction of Roslyn. For the past couple of years Microsoft have been rewriting the C# compiler in C# (I know, it's like compiler Inception, right?) and opening it up via a whole host of APIs that are easy to prod, poke and interrogate. This opening up of the C# compiler has resulted in a whole array of code analysis tooling, such as better StyleCop integration and debugging tools like OzCode. What's more, you can also harness Roslyn for other purposes, such as tests that fail as soon as common code smells are introduced into a project.

Let's start

So now we all know what Roslyn is, let's take a look at how we can use it to compile a project in memory. In this post we will take some C# code written in plain text, turn it into a syntax tree that the compiler can understand, then use Roslyn to compile it, resulting in an in-memory assembly.

Create our project

In this instance I'm using .NET Core on a Mac but this will also work on Windows, so let's begin by creating a new console application by using the .NET Core CLI.

dotnet new -t console

Now, add the following dependencies to your project.json file:

"dependencies": {
    "Microsoft.CodeAnalysis.CSharp.Workspaces": "1.3.2",
    "Mono.Cecil": "0.10.0-beta1-v2",
    "System.ValueTuple": "4.3.0-preview1-24530-04"
},

For those interested, here is a copy of the project.json file in its entirety:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.CodeAnalysis.CSharp.Workspaces": "1.3.2",
    "Mono.Cecil": "0.10.0-beta1-v2",
    "System.ValueTuple": "4.3.0-preview1-24530-04"
  },
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        }
      },
      "imports": "portable-net45+win8+wp8+wpa81"
    }
  }
}

Once we've restored our project using the dotnet restore command, the next step is to write a simple class to represent our source code. This code could be read from a web form, a database or a file on disk. In this instance I'm hard-coding it into the application itself for simplicity.

public class Program {

    public static void Main(string[] args)
    {

        var code = @"
        using System;
        public class ExampleClass {
            
            private readonly string _message;

            public ExampleClass()
            {
                _message = ""Hello World"";
            }

            public string getMessage()
            {
                return _message;
            }

        }";

        CreateAssemblyDefinition(code);
    }

    public static void CreateAssemblyDefinition(string code)
    {
        var sourceLanguage = new CSharpLanguage();
        SyntaxTree syntaxTree = sourceLanguage.ParseText(code, SourceCodeKind.Regular);

        ...
    }

}

Getting stuck into Roslyn

Now we've got the base of our project sorted, let's dive into some of the Roslyn API.

First we're going to want to create an interface we'll use to define the language we want to use. In this instance it'll be C#, but Roslyn also supports VB.

public interface ILanguageService
{
    SyntaxTree ParseText(string code, SourceCodeKind kind);

    Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations);
}

Next we're going to need to parse our plain text C#, so we'll begin by working on the implementation of the ParseText method.

public class CSharpLanguage : ILanguageService
{
    private static readonly LanguageVersion MaxLanguageVersion = Enum
        .GetValues(typeof(LanguageVersion))
        .Cast<LanguageVersion>()
        .Max();

    public SyntaxTree ParseText(string sourceCode, SourceCodeKind kind) {
        var options = new CSharpParseOptions(kind: kind, languageVersion: MaxLanguageVersion);

        // Return a syntax tree of our source code
        return CSharpSyntaxTree.ParseText(sourceCode, options);
    }

    public Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations) {
        throw new NotImplementedException();
    }
}

As you'll see, the implementation is rather straightforward and simply involves setting a few parse options, such as the language features we expect to see being parsed (specified via the languageVersion parameter) along with the SourceCodeKind enum.

Looking further into Roslyn's SyntaxTree

At this point I feel it's worth mentioning that if you're interested in learning more about Roslyn then I would recommend spending a bit of time looking into Roslyn's Syntax Tree API. Josh Varty's posts on this subject are a great resource I would recommend.

I would also recommend taking a look at LINQPad, which amongst other great features has the ability to show you the syntax tree Roslyn generates for your code. For instance, here is the generated syntax tree of the ExampleClass code we're using in this post:

http://assets.josephwoodward.co.uk/blog/linqpad_tree2.png

Now our C# has been parsed and turned into a data structure the C# compiler can understand, let's look at using Roslyn to compile it.

Compiling our Syntax Tree

Continuing with the CreateAssemblyDefinition method, let's compile our syntax tree:

public static void CreateAssemblyDefinition(string code)
{
    var sourceLanguage = new CSharpLanguage();
    SyntaxTree syntaxTree = sourceLanguage.ParseText(code, SourceCodeKind.Regular);

    Compilation compilation = sourceLanguage
      .CreateLibraryCompilation(assemblyName: "InMemoryAssembly", enableOptimisations: false)
      .AddReferences(_references)
      .AddSyntaxTrees(syntaxTree);

    ...
}

At this point we're going to want to fill in the implementation of our CreateLibraryCompilation method within our CSharpLanguage class. We'll start by passing the appropriate arguments into an instance of CSharpCompilationOptions. These include:

  • outputKind - We're outputting a dynamically linked library (.dll)
  • optimizationLevel - Whether we want our C# output to be optimised
  • allowUnsafe - Whether we want our C# code to allow the use of unsafe code or not

public class CSharpLanguage : ILanguageService
{
    private readonly IReadOnlyCollection<MetadataReference> _references = new[] {
          MetadataReference.CreateFromFile(typeof(Binder).GetTypeInfo().Assembly.Location),
          MetadataReference.CreateFromFile(typeof(ValueTuple<>).GetTypeInfo().Assembly.Location)
      };

    ...

    public Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations) {
      var options = new CSharpCompilationOptions(
          OutputKind.DynamicallyLinkedLibrary,
          optimizationLevel: enableOptimisations ? OptimizationLevel.Release : OptimizationLevel.Debug,
          allowUnsafe: true);

      return CSharpCompilation.Create(assemblyName, options: options, references: _references);
  }
}

Now we've specified our compiler options, we invoke the Create factory method, where we also need to specify the name we want our in-memory assembly to have (InMemoryAssembly in our case, passed in when calling our CreateLibraryCompilation method), along with any additional references required to compile our source code. In this instance, as we're targeting C# 7, we need to supply the compilation with the ValueTuple struct's implementation. If we were targeting an older version of C# then this would not be required.

All that's left to do now is to call Roslyn's Emit(Stream stream) method, which takes a Stream parameter, and we're sorted!

public static void CreateAssemblyDefinition(string code)
{
    ...

    Compilation compilation = sourceLanguage
        .CreateLibraryCompilation(assemblyName: "InMemoryAssembly", enableOptimisations: false)
        .AddReferences(_references)
        .AddSyntaxTrees(syntaxTree);

    var stream = new MemoryStream();
    var emitResult = compilation.Emit(stream);
    
    if (emitResult.Success){
        stream.Seek(0, SeekOrigin.Begin);
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(stream);
    }
}

From here I'm then able to pass my AssemblyDefinition to a method that extracts the IL and I'm good to go!
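For anyone curious what that extraction step looks like, here's a rough sketch using Cecil's object model (printing to the console rather than doing the extension's real formatting):

public static void PrintIl(AssemblyDefinition assembly)
{
    foreach (var type in assembly.MainModule.Types)
    {
        foreach (var method in type.Methods)
        {
            Console.WriteLine(method.FullName);

            // Abstract/extern methods have no body to inspect
            if (!method.HasBody)
                continue;

            foreach (var instruction in method.Body.Instructions)
            {
                // Each instruction renders as e.g. "IL_0001: ldstr "Hello World""
                Console.WriteLine(instruction.ToString());
            }
        }
    }
}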

Conclusion

Whilst this post is quite narrow in its focus (I can't imagine everyone is looking to compile C# in memory!), hopefully it's served as a primer to spark your interest in Roslyn and what it's capable of doing. Roslyn is a truly powerful platform that I wish more languages offered. As mentioned before, there are some great resources available that go into much more depth - I would especially recommend Josh Varty's posts on the subject.

In-memory testing using ASP.NET Core

Posted on Tuesday, 06 Dec 2016

A fundamental problem with integration testing, compared to finer-grained tests such as unit tests, is that in order to integration test your component or application you need to spin up a running instance of your application so you can reach it over HTTP, run your tests and then spin it down afterwards.

Spinning up instances of your application can lead to a lot of additional work when it comes to running your tests within any type of continuous deployment or delivery pipeline. This has certainly become easier with the introduction of the cloud, but it still requires a reasonable investment of time and effort to set up, as well as slowing down your deployment/delivery pipeline.

An alternative approach to running your integration or end-to-end tests is to utilise in-memory testing, where your application is spun up in memory via an in-memory server and has the tests run against it. An additional benefit of running your tests this way is that you're no longer testing any of your host OS's network stack either (which in most cases will be configured differently to your production server's stack anyway).

TestServer package

Thankfully in-memory testing can be performed easily in ASP.NET Core thanks to the Microsoft.AspNetCore.TestHost NuGet package.

Let's take a moment to look at the TestServer API exposed by the TestHost library:

public class TestServer : IServer, IDisposable
{
    public TestServer(IWebHostBuilder builder);

    public Uri BaseAddress { get; set; }
    public IWebHost Host { get; }

    public HttpClient CreateClient();
    public HttpMessageHandler CreateHandler();
    
    public RequestBuilder CreateRequest(string path);
    public WebSocketClient CreateWebSocketClient();
    public void Dispose();
}

As you'll see, the API has all the necessary endpoints we'll need to spin our application up in memory.

For those that are regular readers of my blog, you'll remember we used the same TestHost package to run integration tests on middleware back in July. This time we'll be using it to run our Web API application in memory and run our tests against it, then assert that the response received is what we expected.

Enough talk, let's get started.

Running Web API in memory

Setting up Web API

In this instance I'm going to be using ASP.NET Core Web API. In my case I've created a small Web API project using the ASP.NET Core Yeoman project template. You'll also note that I've stripped a few things out of the application for the sake of making the post easier to follow. Here are the few files that really matter:

Startup.cs (nothing out of the ordinary here)

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        app.UseMvc();
    }
}

ValuesController.cs

[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpGet]
    public string Get()
    {
        return "Hello World!";
    }
}

All we've got here is a simple Web API application that returns a single "Hello World!" value from the ValuesController when you fire a GET request at /api/values/.

Running Web API in memory

At this point I've created a test project alongside my Web API one and added the Microsoft.AspNetCore.TestHost package to my test project's project.json file.

project.json

...
"dependencies": {
    "dotnet-test-xunit": "2.2.0-preview2-build1029",
    "xunit": "2.2.0-beta2-build3300",
    "Microsoft.AspNetCore.TestHost": "1.0.0",
    "TestWebAPIApplication":{
        "target":"project"
    }
},
...

Next we'll create our first test class and bootstrap our Web API project. Pay particular attention to our web application's Startup class being passed into the WebHostBuilder's UseStartup<T> method. You'll notice this is exactly the same way we bootstrap our application within Program.cs (the bootstrap file we use when deploying our application).

public class ExampleTestClass
{
    private IWebHostBuilder CreateWebHostBuilder(){
        var config = new ConfigurationBuilder().Build();
        
        var host = new WebHostBuilder()
            .UseConfiguration(config)
            .UseStartup<Startup>();

        return host;
    }

    ...
}

Writing our test

At this point we're ready to write our test, so let's create a new instance of TestServer, which takes an instance of IWebHostBuilder:

public TestServer(IWebHostBuilder builder);

As you can see from the following trivial example, we're simply capturing the response from the controller invoked when calling /api/values, which in our case is the ValuesController.

[Fact]
public async Task PassingTest()
{
    var webHostBuilder = CreateWebHostBuilder();
    var server = new TestServer(webHostBuilder);

    using(var client = server.CreateClient()){
        var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/api/values/");
        var responseMessage = await client.SendAsync(requestMessage);

        var content = await responseMessage.Content.ReadAsStringAsync();

        Assert.Equal("Hello World!", content);
    }
}

Now when we run the test we should see that the Assert.Equal assertion passes:

Running test UnitTest.Class1.PassingTest...
Test passed 

Conclusion

Hopefully this post has given you enough insight into how you can run your application in memory for purposes such as integration or feature testing. Naturally there's a lot more you could do to simplify and speed up the tests, such as limiting the number of times TestServer is created - one approach to which is sketched below.
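One way of doing that (a sketch using xUnit's class fixture support, so the names here are mine rather than anything prescribed by the TestHost package) is to move the TestServer into a fixture that's created once per test class and shared between tests:

public class TestServerFixture : IDisposable
{
    public TestServer Server { get; }
    public HttpClient Client { get; }

    public TestServerFixture()
    {
        var config = new ConfigurationBuilder().Build();

        var webHostBuilder = new WebHostBuilder()
            .UseConfiguration(config)
            .UseStartup<Startup>();

        Server = new TestServer(webHostBuilder);
        Client = Server.CreateClient();
    }

    public void Dispose()
    {
        Client.Dispose();
        Server.Dispose();
    }
}

public class ValuesControllerTests : IClassFixture<TestServerFixture>
{
    private readonly TestServerFixture _fixture;

    public ValuesControllerTests(TestServerFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public async Task ReturnsHelloWorld()
    {
        // The server and client are created once for the class, not once per test
        var response = await _fixture.Client.GetAsync("/api/values/");
        var content = await response.Content.ReadAsStringAsync();

        Assert.Equal("Hello World!", content);
    }
}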

New Blog, now .NET Core, Docker and Linux powered - and soon to be open-sourced

Posted on Saturday, 26 Nov 2016

A little while ago I concluded that my blog was looking a bit long in the tooth and decided that, since I've been looking at .NET Core, there was no better opportunity to put it through its paces than by rewriting my blog using it.

I won't go into too much detail about how the blog is built, as it's nothing anyone won't have seen before, but as someone that likes to read about people's experiences building things, I thought I'd break down how it's put together and what I learned whilst doing it.

Here's an overview of some of the technology I've used:

  • .NET Core / ASP.NET Core MVC
  • MediatR
  • Azure SQL
  • Dapper
  • Redis
  • Google Authentication
  • Docker
  • Nginx
  • Ubuntu

.NET Core / ASP.NET Core MVC

Having been following the development of .NET Core, I thought I'd wait until things stabilised before I started migrating. Unfortunately I didn't wait long enough and had to go through the pain many developers experienced with breaking changes.

I also ran into various issues such as waiting for libraries to migrate to .NET Standard, and other problems such as RSS feed generation, for which no .NET Standard libraries exist, primarily because System.ServiceModel.Syndication is not .NET Core compatible just yet. None of these were deal breakers, with workarounds out there, but they nonetheless tripped me up along the way. That said, whilst running into these issues I did keep reminding myself that this is what happens when you start building with frameworks and libraries still in beta - so no hard feelings.

In fact, I've been extremely impressed with the direction and features in ASP.NET Core and look forward to building more with it moving forward.

MediatR

I've never been a fan of the typical N-tier approach to building an application, primarily because it encourages you to split your application into various horizontal slices (generally UI, Business Logic and Data Access), which often leads to a rigid design filled with lots of very large mixed concerns. Instead I prefer breaking my application up into vertical slices based on features, such as Blog, Pages, Admin etc.

MediatR helps me do this and at the same time allows you to model your application's commands and queries, turning an HTTP request into a pipeline in which you handle the request and return a response. This has the added effect of keeping your controllers nice and skinny, as the controller's only responsibility is to pass the request into MediatR's pipeline.

Below is a simplified example of what a controller looks like, forming the request then delegating it to the appropriate handler:

// Admin Controller
public class BlogAdminController : Controller
{

    private readonly IMediator _mediator;

    public BlogAdminController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [Route("/admin/blog/edit/{id:int}")]
    public IActionResult Edit(BlogPostEdit.Query query)
    {
        BlogPostEdit.Response model = _mediator.Send(query);

        return View(model);
    }
}

public class BlogPostEdit
{
    public class Query : IRequest<Response>
    {
        public int Id { get; set; }
    }

    public class BlogPostEditRequestHandler : IRequestHandler<Query, Response>
    {
        private readonly IBlogAdminStorage _storage;

        public BlogPostEditRequestHandler(IBlogAdminStorage storage)
        {
            _storage = storage;
        }

        public Response Handle(Query request)
        {
            var blogPost = _storage.GetBlogPost(request.Id);
            if (blogPost == null)
                throw new RecordNotFoundException($"Blog post Id {request.Id} not found");

            return new Response
            {
                BlogPostEditModel = blogPost
            };
        }
    }

    public class Response
    {
        public BlogPostEditModel BlogPostEditModel { get; set; }
    }
}

A powerful feature of MediatR's pipelining approach is that you can start to use the decorator pattern to handle cross-cutting concerns like caching, logging and even validation.
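
As a rough sketch of the idea (this isn't MediatR's own API, just a hand-rolled decorator around its synchronous IRequestHandler interface, which your DI container would need to wire up around each concrete handler):

public class LoggingHandlerDecorator<TRequest, TResponse> : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public LoggingHandlerDecorator(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        // The cross-cutting concern (logging here, but caching or validation work the same way)
        // runs before and after the wrapped handler without the handler knowing about it.
        Console.WriteLine($"Handling {typeof(TRequest).Name}");

        var response = _inner.Handle(message);

        Console.WriteLine($"Handled {typeof(TRequest).Name}");

        return response;
    }
}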

If you're interested in reading more about MediatR then I'd highly recommend Jimmy Bogard's video on favouring slices rather than layers, where he covers MediatR and its architectural benefits.

Google Authentication

I wanted to keep login simple, and not have to worry about storing passwords. With this in mind I decided to go with Google Authentication for logging in, which I cover in my Social authentication via Google in ASP.NET Core MVC post.

Docker, Ubuntu and Redis

Having read loads on Docker but never having played with it, migrating my blog to .NET Core seemed like a perfect opportunity to get stuck into Docker to see what all the fuss was about.

Having been using Docker for a couple of months now I'm completely sold on how it changes the deployment and development landscape.

This isn't the right post to go into too much detail about Docker, but no doubt you're aware of roughly what it does by now and if you're considering taking it for a spin to see what it can do for you then I would highly recommend it.

Docker's made configuring my application to run on Ubuntu with Redis and Nginx an absolute breeze. No longer do I have to spin up individual services, package the website up and deploy it. Now I simply publish an image to a repository, pull it down to my host and run docker-compose up.

Don't get me wrong, Docker's certainly not the silver bullet that some say it is, but it's definitely going to make your life easier in most cases.

Open-sourcing the blog

I redeveloped the blog with open-sourcing it in mind, so once I've finished tidying it up I'll put it on my GitHub account so you can download it and give it a try for yourself. It's no Orchard CMS, but it'll do the job for me - and potentially you.

Getting started with Elastic Search in Docker

Posted on Tuesday, 15 Nov 2016

Having recently spent a lot of time experimenting with Docker, I've found that beyond repeatable deployment and runtime environments, one of the great benefits promised by the containerisation movement is how it can supplement your local development environment.

No longer do you need to simultaneously waste time and slow down your local development machine by installing various services like Redis, Postgres and other dependencies. You can simply download a Docker image and boot up your development environment, then tear it down again once you're finished with it.

In fact, a lot of the Docker images for such services are maintained by the development teams and companies themselves.

I'd never fully appreciated this until recently, when I took part in the quarterly three-day hackathon at work, where time was valuable and we didn't have to waste any of it downloading and installing the required JDK just to get Elastic Search running.

In fact, I was so impressed with Docker and Elastic Search that it compelled me to write this post.

So without further ado, let's get started.

What is Elastic Search?

Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents.

Now that that's out of the way, let's get going. First things first: you're going to need Docker.

Installing Docker

If the title didn't give it away, we're going to be setting up Elastic Search up locally using Docker, so if you don't have Docker installed then you're going to need to head over to the Docker download page and download/install it.

Setting up Elastic Search

Next we're going to need to find the Elastic Search Docker image.

To do this we're going to head over to Docker Hub and search for Elastic Search (here's a direct link for the lazy or those pressed for time).

What is Docker Hub? For those new to Docker, Docker Hub is a repository of popular Docker images, many of which are officially owned and supported by the owners of the software.

Pulling and running the Elastic Search Docker image

To run the Elastic Search Docker image we're first going to need to pull it down to our local machine. To do this open your command prompt or terminal (any directory is fine as nothing is downloaded to the current directory) and execute the following Docker command:

docker pull elasticsearch

Running Elastic Search

Next we want to run our Elastic Search image. To do that, we need to type the following command into our terminal:

docker run -d -p 9200:9200 -p 9300:9300 elasticsearch

Let's break down the command:

  • First we're telling Docker we want to run an image in a container via the 'run' command.

  • The -d argument runs the container in detached mode. This means it runs as a separate background process, as opposed to a short-lived process that runs and terminates as soon as it has finished executing.

  • Moving on, the -p arguments bind our local machine's ports 9200 and 9300 to ports 9200 and 9300 inside the Docker container.

  • Then at the end we specify the Docker image we wish to start running - in this case, the Elastic Search image.

Note: At this point, if you're new to Docker then it's worth knowing that the container's storage is deleted when we tear the container down. If you wish to persist the data then you have to use the -v flag to map a directory on your local disk into the container, as opposed to the default location, which is inside the container itself.

If we want to map the volume to our local disk then we'd need to run the following command instead of the one mentioned above:

docker run -d -v "$HOME/Documents/elasticsearchconf/":/usr/share/elasticsearch/data -p 9200:9200 -p 9300:9300 elasticsearch

This will map our $HOME/Documents/elasticsearchconf folder to the container's /usr/share/elasticsearch/data directory.

Checking our Elastic Search container is up and running

If the above command worked successfully then we should see the Elastic Search container up and running. We can check this by executing the following command, which lists all running containers:

docker ps

To verify Elastic Search is running, you should also be able to navigate to http://localhost:9200 and see output similar to this:

{
  "name" : "f0t5zUn",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "3loHHcMnR_ekDxE1Yc1hpQ",
  "version" : {
    "number" : "5.0.0",
    "build_hash" : "253032b",
    "build_date" : "2016-10-26T05:11:34.737Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.0"
  },
  "tagline" : "You Know, for Search"
}

Container isn't running?

If for some reason your container isn't running, you can run the following command to see all containers, whether running or not:

docker ps -a

Once you've identified the container you just tried to run (hint: it should be at the top), run the following command, including the first 3 or 4 characters from the Container Id column:

docker logs 0403

This will print out the container's logs, giving you a bit of information as to what could have gone wrong.

Connecting to Elastic Search

Now that our Docker container is up and running, let's get our hands dirty with Elastic Search via their RESTful API.

Indexing data

Let's begin by indexing some data in Elastic Search. We can do this by posting the following product to our desired index (where product is our index name and television is our type):

// HTTP POST:
http://localhost:9200/product/television

Message Body:
{"Name": "Samsung Flatscreen Television", "Price": "£899"}

If successful, you should get the following response from Elastic Search:

{
    "_index":"product",
    "_type":"television",
    "_id":"AVhIJ4ACuKcehcHggtFP",
    "_version":1,
    "result":"created",
    "_shards":{
        "total":2,
        "successful":1,
        "failed":0
    },
    "created":true
}
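
If you'd rather drive the same indexing call from C#, here's a rough illustrative sketch using HttpClient (the class and method names below are purely for the example):

public static class ElasticSearchIndexingExample
{
    public static async Task IndexTelevisionAsync()
    {
        using (var client = new HttpClient())
        {
            // The same document and endpoint as the raw HTTP example above.
            var json = "{\"Name\": \"Samsung Flatscreen Television\", \"Price\": \"£899\"}";
            var content = new StringContent(json, Encoding.UTF8, "application/json");

            var response = await client.PostAsync("http://localhost:9200/product/television", content);

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}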

Searching for data

Now we've got some data in our index, let's perform a simple search for it.

Performing the following GET request should return our indexed document:

// HTTP GET:
http://localhost:9200/_search?q=samsung

The response should look something like this:

{
    "took":16,
    "timed_out":false,
    "_shards":{
        "total":15,
        "successful":15,
        "failed":0
    },
    "hits":{
        "total":1,
        "max_score":0.2876821,
        "hits":[{
                "_index":"product",
                "_type":"television",
                "_id":"AVhIJ4ACuKcehcHggtFP",
                "_score":0.2876821,
                "_source": {
                    "Name": "Samsung Flatscreen Television","Price": "£899"
                }
            }]
        }
}

One of the powerful features of Elastic Search is its full-text search capabilities, enabling you to perform some truly impressive search queries against your indexed data.

For more on the search options available to you I would recommend you check out this resource.

Deleting Indexed data

To delete indexed data you can perform a delete request, passing the object ID, like so:

// HTTP Delete
http://localhost:9200/product/television/AVhIJ4ACuKcehcHggtFP

Moving Forward

So far we've been using Elastic Search's API to query our index. If you'd prefer something more visual to aid your exploration and discovery of the Elastic Search structured query syntax, then I'd highly recommend you check out ElasticSearch-Head: a web frontend for your Elastic Search cluster.

To get started with ElasticSearch-Head you simply clone the repository to your local drive, open the index.html file and point it at your http://localhost:9200 endpoint.

If you experience issues connecting the web client to your Dockerised Elastic Search cluster then it could be because of CORS permissions. Instead of fiddling around with configuration I simply installed and enabled this Chrome plugin to get around it.

Now you can use the web UI to view the search tab to discover more of Elastic Search's complex structured query syntax.

Going cross-platform with ASP.NET Core talk at Bristech 2016

Posted on Thursday, 03 Nov 2016

Today I had the pleasure of delivering a last minute replacement talk on going cross-platform with ASP.NET Core at BrisTech 2016. The original speaker dropped out just a couple of days before the conference, so I was more than happy to step in. Overall I'm really happy with how it turned out, and considering it was a last minute stand-in I was really pleased with the number of attendees.

During the talk I discussed the future of the .NET platform and Microsoft's cross-platform commitment, the performance benefits, differences between .NET and .NET Core including ASP.NET differences, then proceeded to demonstrate the new .NET command line interface and building an ASP.NET Core application using .NET Core.

To put the proof in the pudding, so to speak, and demonstrate that .NET Core really is cross-platform, I did all of this on a MacBook Pro.

Going cross-platform with ASP.NET Core talk at BrisTech 2016