Latest Blog Posts

Year in review and looking at 2017

Posted on Tuesday, 10 Jan 2017

It's that time of the year again where I take a moment to reflect on the previous year and set some high level goals as to what I want to achieve or aim towards in the next.

I find these types of posts really helpful for tracking progress and ensuring I stay on a positive trajectory as a software developer/engineer - a hobby that I'm lucky enough to have as a career.

Review of 2016

Last year was an amazing year for a couple of reasons, some planned and some not so. So, without further ado let's look at last year's personal targets and goals to see how I've done.

Goal 1: Speak at more conferences and meet-ups

I'm inclined to say I've smashed this. At the time of writing last year's post (February 2016) my speaking activities included relatively small user groups and meet ups. Since that post I've spoken at a few more meet ups but also at some rather large conferences which has been amazing.

The talks include:

  • Why software engineering is a great career choice
    TechSpark Graduates Conference 2016 - 1st December 2016

  • Going cross-platform with ASP.NET Core
    BrisTech 2016 - 3rd November 2016

  • Going cross-platform with ASP.NET Core
    Tech Exeter Conference - 8th October 2016

  • Building rich client-side applications using Angular 2
    DDD 11 (Reading) - 3rd September 2016

  • .NET Rocks Podcast - Angular 2 CLI (Command Line Interface)
    24th August 2016

  • Angular 2
    BristolJS - May 25th, 2016

  • Angular 2
    Taunton Developers' Meetup - June 9th, 2016

It's a really great feeling reflecting on my very first talk (a lightning talk on TypeScript) and how nervous I was in comparison to now. Don't get me wrong, the nerves are still there, but as many will know - you're just able to cope with them better.

All in all I'm extremely satisfied with how far I've progressed in this area and how much my confidence speaking in public has grown. This is certainly something I wish to continue doing in 2017.

Goal 2: Start an Exeter based .NET User Group

At the time of setting this goal I was working in Exeter, where there was no .NET focused user group - something that surprised me given the number of .NET specific jobs in the city.

To cut a long story short, I'd started an Exeter .NET user group and was in the process of organising the first meet up when I got a job opportunity at Just Eat in Bristol. Around the same time, the organiser of the rather large, well-established .NET South West user group in Bristol was stepping down and planning to close the group. Having been to a couple of its meet ups it seemed a shame to see it end, and given that I was now working in Bristol I decided to step forward and take it over along with a couple of the other members.

Since then we (myself and the other organisers) have been really active in keeping .NET South West alive and well, organising a range of speakers on a variety of .NET related topics.

Goal 3: Continue contributing to the .NET OSS ecosystem

This year I've created a number of small open-source libraries and projects (an Angular 2 piano sight-reading game being the one that's received the most attention). However, whilst I've been contributing on and off to other libraries, I don't feel my contributions have been at a level that I'm happy with, so this will be a goal for 2017.

Bonus Goal - F#

Making a start on learning F# was one of the bonus goals I was hoping to achieve during 2016; however, other than making a few changes to a couple of lines of F#, this was a no-go.

In all honesty, I'm still on the fence as to whether I want to learn F# - my only motivation being to learn a functional language (growing my knowledge and thinking in a different paradigm) - whereas there are other languages I'm interested in learning that aren't necessarily tied to the .NET ecosystem.

Other notable events and achievements in 2016

In addition to the aforementioned goals and achievements in 2016, there have also been others.

  • New job as a .NET software engineer at Just Eat in Bristol - a seriously awesome company that has a lot of really talented developers and interesting problems to work on.
  • Co-organiser of DDD South West conference
  • Heavy investment of time in learning .NET Core and Docker
  • Became co-organiser of the .NET South West user group

Goals for 2017

With a review of 2016 out of the way let's take a quick look at plans for 2017.

Goal 1: Continue to grow as a speaker, speaking at larger events

I've really loved speaking at meet ups and conferences; it's a truly rewarding experience on both a personal and professional development level. There's very little more satisfying in life than pushing yourself outside of your comfort zone, so in 2017 I'm really keen to continue to pursue this by talking at larger conferences and events.

Goal 2: More focus on contributing to open-source projects

Whilst I'm satisfied with my contributions to the open-source world, both through personal projects and contributions to other projects, it's definitely an area I would like to continue to pursue. So in 2017 I'm looking for larger projects I can invest in and contribute to on a long-term basis.

Goal 3: Learn another language

Whereas I previously set myself a 2016 goal of learning F#, this time around I'm going to keep my options open. I've recently been learning a little Go, but I'm also really interested in Rust, so this year I'm simply going to set a goal of learning a new language. As it stands, it looks like it's between Go and Rust, with F# still a possibility.

Conclusion

Overall it's been a great year. I'm really keen to keep the pace up on public speaking as it's far too easy to rest on one's laurels, so here's to 2017 and the challenges it brings!

In-memory C# compilation (and .dll generation) using Roslyn

Posted on Wednesday, 28 Dec 2016

Recently I've been hard at work on my first Visual Studio Code extension and one of the requirements is to extract IL from a .dll binary. This introduces a question though: do I build the solution (blocking the user whilst their project is building), read the .dll from disk and then extract the IL, or do I compile the project in memory behind the scenes, then stream the assembly to Roslyn? Ultimately I went with the latter approach and was pleasantly surprised at how easy Roslyn makes this - surprised enough that I thought it deserved its own blog post.

Before we continue, let me take a moment to explain what Roslyn is for those that may not fully understand what it is.

What is Roslyn?

Roslyn is an open-source C# and VB compiler as a service platform.

The key words to take away with you here are "compiler as a service"; let me explain.

Traditionally compilers have been a black box of secrets that are hard to extend or harness, especially for any tooling or code analysis purposes. Take ReSharper for instance; ReSharper has a lot of code analysis running under the bonnet that allows it to offer refactoring advice. In order for the ReSharper team to provide this they had to build their own analysis tools that would manually parse your solution's C# in line with the .NET runtime - the .NET platform provided no assistance with this, essentially meaning they had to duplicate a lot of the work the compiler was doing.

This has since changed with the introduction of Roslyn. For the past couple of years Microsoft have been rewriting the C# compiler in C# (I know, it's like a compiler Inception, right?) and opening it up via a whole host of APIs that are easy to prod, poke and interrogate. This opening up of the C# compiler has resulted in a whole array of code analysis tooling, such as better StyleCop integration and debugging tools like OzCode. What's more, you can also harness Roslyn for other purposes, such as tests that fail as soon as common code smells are introduced into a project.
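
To illustrate that last point, here's a minimal sketch of what such a test could look like (the file path and the "no public fields" rule are assumptions made purely for this example, not something from a real project):

using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Xunit;

public class CodeSmellTests
{
    [Fact]
    public void Domain_classes_should_not_expose_public_fields()
    {
        // Hypothetical file under test - substitute a path from your own project
        var source = File.ReadAllText("../../src/Domain/Order.cs");

        var tree = CSharpSyntaxTree.ParseText(source);

        // Walk the syntax tree looking for public field declarations
        var publicFields = tree.GetRoot()
            .DescendantNodes()
            .OfType<FieldDeclarationSyntax>()
            .Where(f => f.Modifiers.Any(m => m.IsKind(SyntaxKind.PublicKeyword)));

        Assert.Empty(publicFields);
    }
}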

Let's start

So now we all know what Roslyn is, let's take a look at how we can use it to compile a project in memory. In this post we'll take some C# code written in plain text, turn it into a syntax tree that the compiler can understand, then use Roslyn to compile it, resulting in an assembly held in an in-memory stream.

Create our project

In this instance I'm using .NET Core on a Mac but this will also work on Windows, so let's begin by creating a new console application by using the .NET Core CLI.

dotnet new -t console

Now, add the following dependencies to your project.json file:

"dependencies": {
    "Microsoft.CodeAnalysis.CSharp.Workspaces": "1.3.2",
    "Mono.Cecil": "0.10.0-beta1-v2",
    "System.ValueTuple": "4.3.0-preview1-24530-04"
},

For those interested, here is a copy of the project.json file in its entirety:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.CodeAnalysis.CSharp.Workspaces": "1.3.2",
    "Mono.Cecil": "0.10.0-beta1-v2",
    "System.ValueTuple": "4.3.0-preview1-24530-04"
  },
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        }
      },
      "imports": "portable-net45+win8+wp8+wpa81"
    }
  }
}

Once we've restored our project using the dotnet restore command, the next step is to write a simple class to represent our source code. This code could be read from a web form, a database or a file on disk. In this instance I'm hard-coding it into the application itself for simplicity.

public class Program {

    public static void Main(string[] args)
    {

        var code = @"
        using System;
        public class ExampleClass {
            
            private readonly string _message;

            public ExampleClass()
            {
                _message = ""Hello World"";
            }

            public string getMessage()
            {
                return _message;
            }

        }";

        CreateAssemblyDefinition(code);
    }

    public static void CreateAssemblyDefinition(string code)
    {
        var sourceLanguage = new CSharpLanguage();
        SyntaxTree syntaxTree = sourceLanguage.ParseText(code, SourceCodeKind.Regular);

        ...
    }

}

Getting stuck into Roslyn

Now we've got the base of our project sorted, let's dive into some of the Roslyn API.

First we're going to want to create an interface we'll use to define the language we want to use. In this instance it'll be C#, but Roslyn also supports VB.

public interface ILanguageService
{
    SyntaxTree ParseText(string code, SourceCodeKind kind);

    Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations);
}

Next we're going to need to parse our plain text C#, so we'll begin by working on the implementation of the ParseText method.

public class CSharpLanguage : ILanguageService
{
    private static readonly LanguageVersion MaxLanguageVersion = Enum
        .GetValues(typeof(LanguageVersion))
        .Cast<LanguageVersion>()
        .Max();

    public SyntaxTree ParseText(string sourceCode, SourceCodeKind kind) {
        var options = new CSharpParseOptions(kind: kind, languageVersion: MaxLanguageVersion);

        // Return a syntax tree of our source code
        return CSharpSyntaxTree.ParseText(sourceCode, options);
    }

    public Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations) {
        throw new NotImplementedException();
    }
}

As you'll see, the implementation is rather straightforward and simply involves setting a few parse options, such as the language features we expect to see being parsed (specified via the languageVersion parameter), along with the SourceCodeKind enum.

Looking further into Roslyn's SyntaxTree

At this point I feel it's worth mentioning that if you're interested in learning more about Roslyn then I would recommend spending a bit of time looking into Roslyn's Syntax Tree API. Josh Varty's posts on this subject are a great resource I would recommend.

I would also recommend taking a look at LINQPad which, amongst other great features, has the ability to show you the syntax tree Roslyn generates for your code. For instance, here is the generated syntax tree of the ExampleClass code we're using in this post:

http://assets.josephwoodward.co.uk/blog/linqpad_tree2.png

Now our C# has been parsed and turned into a data structure the C# compiler can understand, let's look at using Roslyn to compile it.

Compiling our Syntax Tree

Continuing with the CreateAssemblyDefinition method, let's compile our syntax tree:

public static void CreateAssemblyDefinition(string code)
{
    var sourceLanguage = new CSharpLanguage();
    SyntaxTree syntaxTree = sourceLanguage.ParseText(code, SourceCodeKind.Regular);

    Compilation compilation = sourceLanguage
      .CreateLibraryCompilation(assemblyName: "InMemoryAssembly", enableOptimisations: false)
      .AddReferences(_references)
      .AddSyntaxTrees(syntaxTree);

    ...
}

At this point we're going to want to fill in the implementation of our CreateLibraryCompilation method within our CSharpLanguage class. We'll start by passing the appropriate arguments into an instance of CSharpCompilationOptions. These include:

  • outputKind - We're outputting a Dynamic Link Library (.dll)
  • optimizationLevel - Whether we want our C# output to be optimised
  • allowUnsafe - Whether we want our C# code to allow the use of unsafe code or not

public class CSharpLanguage : ILanguageService
{
    private readonly IReadOnlyCollection<MetadataReference> _references = new[] {
          MetadataReference.CreateFromFile(typeof(Binder).GetTypeInfo().Assembly.Location),
          MetadataReference.CreateFromFile(typeof(ValueTuple<>).GetTypeInfo().Assembly.Location)
      };

    ...

    public Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations) {
      var options = new CSharpCompilationOptions(
          OutputKind.DynamicallyLinkedLibrary,
          optimizationLevel: enableOptimisations ? OptimizationLevel.Release : OptimizationLevel.Debug,
          allowUnsafe: true);

      return CSharpCompilation.Create(assemblyName, options: options, references: _references);
  }
}

Now we've specified our compiler options, we invoke the Create factory method, where we also need to specify the name we want our in-memory assembly to have (InMemoryAssembly in our case, passed in when calling our CreateLibraryCompilation method), along with the additional references required to compile our source code. In this instance, as we're targeting C# 7, we need to supply the compilation with the ValueTuple struct implementation. If we were targeting an older version of C# then this would not be required.

All that's left to do now is to call Roslyn's Emit(Stream stream) method, which takes a Stream to write the assembly to, and we're sorted!

public static void CreateAssemblyDefinition(string code)
{
    ...

    Compilation compilation = sourceLanguage
        .CreateLibraryCompilation(assemblyName: "InMemoryAssembly", enableOptimisations: false)
        .AddReferences(_references)
        .AddSyntaxTrees(syntaxTree);

    var stream = new MemoryStream();
    var emitResult = compilation.Emit(stream);
    
    if (emitResult.Success){
        stream.Seek(0, SeekOrigin.Begin);
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(stream);
    }
}

From here I'm then able to pass my AssemblyDefinition to a method that extracts the IL and I'm good to go!
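
The IL extraction itself is outside the scope of this post, but since Mono.Cecil is already amongst our dependencies, a rough sketch of what such a method could look like (the method and its console output are purely illustrative, not the extension's actual code) is:

using System;
using Mono.Cecil;

public static class IlDumper
{
    // Illustrative only - walks every method in the in-memory assembly and prints its IL
    public static void DumpIl(AssemblyDefinition assembly)
    {
        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            foreach (MethodDefinition method in type.Methods)
            {
                if (!method.HasBody)
                    continue;

                Console.WriteLine(method.FullName);

                foreach (var instruction in method.Body.Instructions)
                {
                    // Each instruction prints in the form "IL_0000: ldarg.0"
                    Console.WriteLine("  " + instruction);
                }
            }
        }
    }
}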

Conclusion

Whilst this post is quite narrow in its focus (I can't imagine everyone is looking to compile C# in memory!), hopefully it's served as a primer, piquing your interest in Roslyn and what it's capable of doing. Roslyn is a truly powerful platform that I wish more languages offered. As mentioned before, there are some great resources available that go into much more depth - I would especially recommend Josh Varty's posts on the subject.

In-memory testing using ASP.NET Core

Posted on Tuesday, 06 Dec 2016

A fundamental problem with integration testing over finer-grained tests such as unit testing is that in order to integration test your component or application you need to spin up a running instance of your application so you can reach it over HTTP, run your tests and then spin it down afterwards.

Spinning up instances of your application can lead to a lot of additional work when it comes to running your tests within any type of continuous deployment or delivery pipeline. This has certainly become easier with the introduction of the cloud, but it still requires a reasonable investment of time and effort to set up, as well as slowing down your deployment/delivery pipeline.

An alternative approach to running your integration or end-to-end tests is to utilise in-memory testing, where your application is spun up in memory via an in-memory server and the tests are run against it. An additional benefit of running your tests this way is that you're no longer testing your host OS's network stack (which in most cases will be configured differently to your production server's stack anyway).

TestServer package

Thankfully in-memory testing can be performed easily in ASP.NET Core thanks to the Microsoft.AspNetCore.TestHost NuGet package.

Let's take a moment to look at the TestServer API exposed by the TestHost library:

public class TestServer : IServer, IDisposable
{
    public TestServer(IWebHostBuilder builder);

    public Uri BaseAddress { get; set; }
    public IWebHost Host { get; }

    public HttpClient CreateClient();
    public HttpMessageHandler CreateHandler();
    
    public RequestBuilder CreateRequest(string path);
    public WebSocketClient CreateWebSocketClient();
    public void Dispose();
}

As you'll see, the API has all the necessary endpoints we'll need to spin our application up in memory.

For those that are regular readers of my blog, you'll remember we used the same TestServer package to run integration tests on middleware back in July. This time we'll be using it to run our Web API application in memory and run our tests against it. We'll then assert that the response received is expected.

Enough talk, let's get started.

Running Web API in memory

Setting up Web API

In this instance I'm going to be using ASP.NET Core Web API. In my case I've created a small Web API project using the ASP.NET Core Yeoman project template. You'll also note that I've stripped a few things out of the application to make the post easier to follow. Here are the few files that really matter:

Startup.cs (nothing out of the ordinary here)

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        app.UseMvc();
    }
}

ValuesController.cs

[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpGet]
    public string Get()
    {
        return "Hello World!";
    }
}

All we've got here is a simple Web API application that returns a single "Hello World!" value from ValuesController when you fire a GET request to /api/values/.

Running Web API in memory

At this point I've created a test project alongside my Web API one and added the Microsoft.AspNetCore.TestHost package to my test project's project.json file.

project.json

...
"dependencies": {
    "dotnet-test-xunit": "2.2.0-preview2-build1029",
    "xunit": "2.2.0-beta2-build3300",
    "Microsoft.AspNetCore.TestHost": "1.0.0",
    "TestWebAPIApplication":{
        "target":"project"
    }
},
...

Next, we'll create our first test class and bootstrap our Web API project. Pay particular attention to our web application's Startup class being passed into the WebHostBuilder's UseStartup<T> method. You'll notice this is exactly the same way we bootstrap our application within Program.cs (the bootstrap file we use when deploying our application).

public class ExampleTestClass
{
    private IWebHostBuilder CreateWebHostBuilder(){
        var config = new ConfigurationBuilder().Build();
        
        var host = new WebHostBuilder()
            .UseConfiguration(config)
            .UseStartup<Startup>();

        return host;
    }

    ...
}

Writing our test

At this point we're ready to write our test, so let's create a new instance of TestServer, which takes an instance of IWebHostBuilder.

public TestServer(IWebHostBuilder builder);

As you can see from the following trivial example, we're simply capturing the response from the controller invoked when calling /api/values, which in our case is the ValuesController.

[Fact]
public async Task PassingTest()
{
    var webHostBuilder = CreateWebHostBuilder();
    var server = new TestServer(webHostBuilder);

    using(var client = server.CreateClient()){
        var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/api/values/");
        var responseMessage = await client.SendAsync(requestMessage);

        var content = await responseMessage.Content.ReadAsStringAsync();

        Assert.Equal("Hello World!", content);
    }
}

Now, when we run the test, the Assert.Equal assertion should pass and we should see the test succeed.

Running test UnitTest.Class1.PassingTest...
Test passed 

Conclusion

Hopefully this post has given you enough insight into how you can run your application in memory for purposes such as integration or feature testing. Naturally there's a lot more you could do to simplify and speed up the tests, such as limiting the number of times the TestServer is created, as sketched below.
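
For example, one way of doing that (a sketch only, using xUnit's class fixture support so the TestServer is created once per test class; the fixture and test names are purely illustrative) looks like this:

public class WebApiFixture : IDisposable
{
    public TestServer Server { get; }
    public HttpClient Client { get; }

    public WebApiFixture()
    {
        var config = new ConfigurationBuilder().Build();

        var builder = new WebHostBuilder()
            .UseConfiguration(config)
            .UseStartup<Startup>();

        Server = new TestServer(builder);
        Client = Server.CreateClient();
    }

    public void Dispose()
    {
        Client.Dispose();
        Server.Dispose();
    }
}

// xUnit creates the fixture once per class and injects it into each test
public class ValuesApiTests : IClassFixture<WebApiFixture>
{
    private readonly HttpClient _client;

    public ValuesApiTests(WebApiFixture fixture)
    {
        _client = fixture.Client;
    }

    [Fact]
    public async Task ReturnsHelloWorld()
    {
        var response = await _client.GetAsync("/api/values/");
        var content = await response.Content.ReadAsStringAsync();

        Assert.Equal("Hello World!", content);
    }
}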

New Blog, now .NET Core, Docker and Linux powered - and soon to be open-sourced

Posted on Saturday, 26 Nov 2016

A little while ago I concluded that my blog was looking a bit long in the tooth and decided that since I'm looking at .NET Core, what better opportunity to put it through its paces than by rewriting my blog using it.

I won't go into too much detail as to how the blog is built as it's nothing you won't have seen before, but as someone that likes to read about people's experiences building things, I thought I'd break down how it's built and what I learned whilst doing it.

Here's an overview of some of the technology I've used:

  • .NET Core / ASP.NET Core MVC
  • MediatR
  • Azure SQL
  • Dapper
  • Redis
  • Google Authentication
  • Docker
  • Nginx
  • Ubuntu

.NET Core / ASP.NET Core MVC

Having been following the development of .NET Core, I thought I'd wait until things stabilised before I started migrating. Unfortunately I didn't wait long enough and had to go through the pain many developers experienced with breaking changes.

I also ran into various issues, such as waiting for libraries to migrate to .NET Standard, and other problems such as RSS feed generation, for which no .NET Standard libraries exist yet, primarily because System.ServiceModel.Syndication is not .NET Core compatible just yet. None of these were deal breakers, as there are workarounds out there, but they nonetheless tripped me up along the way. That said, whilst running into these issues I kept reminding myself that this is what happens when you start building with frameworks and libraries still in beta - so no hard feelings.

In fact, I've been extremely impressed with the direction and features in ASP.NET Core and look forward to building more with it moving forward.

MediatR

I've never been a fan of the typical N-tier approach to building an application, primarily because it encourages you to split your application into various horizontal slices (generally UI, Business Logic and Data Access), which often leads to a rigid design filled with lots of very large mixed concerns. Instead I prefer breaking my application up into vertical slices based on features, such as Blog, Pages, Admin etc.

MediatR helps me do this, and at the same time allows you to model your application's commands and queries, turning an HTTP request into a pipeline through which you handle it and return a response. This has the added effect of keeping your controllers nice and skinny, as the only responsibility of the controller is to pass the request into MediatR's pipeline.

Below is a simplified example of what a controller looks like, forming the request then delegating it to the appropriate handler:

// Admin Controller
public class BlogAdminController : Controller {

    private readonly IMediator _mediator;

    public BlogAdminController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [Route("/admin/blog/edit/{id:int}")]
    public IActionResult Edit(BlogPostEdit.Query query)
    {
        BlogPostEdit.Response model = _mediator.Send(query);

        return View(model);
    }
}
public class BlogPostEdit
{
    public class Query : IRequest<Response>
    {
        public int Id { get; set; }
    }

    public class BlogPostEditRequestHandler : IRequestHandler<Query, Response>
    {
        private readonly IBlogAdminStorage _storage;

        public BlogPostEditRequestHandler(IBlogAdminStorage storage)
        {
            _storage = storage;
        }

        public Response Handle(Query request)
        {
            var blogPost = _storage.GetBlogPost(request.Id);
            if (blogPost == null)
                throw new RecordNotFoundException(string.Format("Blog post Id {0} not found", request.Id.ToString()));

            return new Response
            {
                BlogPostEditModel = blogPost
            };
        }
    }

    public class Response
    {
        public BlogPostEditModel BlogPostEditModel { get; set; }
    }
}

A powerful feature of MediatR's pipelining approach is that you can start to use the decorator pattern to handle cross-cutting concerns like caching, logging and even validation.
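
As a rough illustration (this isn't code from the blog, and registering the decorator in place of the inner handler is left to your IoC container of choice), a logging decorator for the handler above might look something like this:

public class LoggingDecorator<TRequest, TResponse> : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly ILogger<LoggingDecorator<TRequest, TResponse>> _logger;

    public LoggingDecorator(IRequestHandler<TRequest, TResponse> inner,
        ILogger<LoggingDecorator<TRequest, TResponse>> logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public TResponse Handle(TRequest message)
    {
        _logger.LogInformation("Handling {RequestType}", typeof(TRequest).Name);

        // Delegate to the real handler (e.g. BlogPostEditRequestHandler above)
        var response = _inner.Handle(message);

        _logger.LogInformation("Handled {RequestType}", typeof(TRequest).Name);

        return response;
    }
}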

If you're interested in reading more about MediatR then I'd highly recommend Jimmy Bogard's video on favouring slices rather than layers, where he covers MediatR and its architectural benefits.

Google Authentication

I wanted to keep login simple, and not have to worry about storing passwords. With this in mind I decided to go with Google Authentication for logging in, which I cover in my Social authentication via Google in ASP.NET Core MVC post.
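
I won't repeat that post here, but for context the ASP.NET Core 1.x-era wiring looks roughly like the sketch below (assuming the Microsoft.AspNetCore.Authentication.Cookies and Microsoft.AspNetCore.Authentication.Google packages, with the client id and secret read from configuration - the configuration keys shown are just placeholders):

// Startup.cs - Configure method (ASP.NET Core 1.x style authentication middleware)
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    AutomaticAuthenticate = true,
    AutomaticChallenge = true
});

app.UseGoogleAuthentication(new GoogleOptions
{
    SignInScheme = "Cookies",
    ClientId = Configuration["Authentication:Google:ClientId"],        // placeholder keys
    ClientSecret = Configuration["Authentication:Google:ClientSecret"]
});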

Docker, Ubuntu and Redis

Having read loads on Docker but never having played with it, migrating my blog to .NET Core seemed like a perfect opportunity to get stuck into Docker to see what all the fuss was about.

Having been using Docker for a couple of months now I'm completely sold on how it changes the deployment and development landscape.

This isn't the right post to go into too much detail about Docker, but no doubt you're aware of roughly what it does by now and if you're considering taking it for a spin to see what it can do for you then I would highly recommend it.

Docker's made configuring my application to run on Ubuntu with Redis and Nginx an absolute breeze. No longer do I have to spin up individual services, package the website up and deploy it by hand. Now I simply publish an image to a repository, pull it down to my host and run docker-compose up.

Don't get me wrong, Docker's certainly not the silver bullet that some say it is, but it's definitely going to make your life easier in most cases.

Open-sourcing the blog

I redeveloped the blog with open-sourcing it in mind, so once I've finished tidying it up I'll put it up on my GitHub account so you can download it and give it a try for yourself. It's no Orchard CMS, but it'll do the job for me - and potentially you.

Getting started with Elastic Search in Docker

Posted on Tuesday, 15 Nov 2016

Having recently spent a lot of time experimenting with Docker, I've found that, other than repeatable deployment and runtime environments, one of the great benefits promised by the containerisation movement is how it can supplement your local development environment.

No longer do you need to simultaneously waste time and slow down your local development machine by installing various services like Redis, Postgres and other dependencies. You can simply download a Docker image and boot up your development environment, then tear it down again once you're finished with it.

In fact, a lot of the Docker images for such services are maintained by the development teams and companies themselves.

I'd never fully appreciated this dream until recently, when I took part in the quarterly three day hackathon at work, where time was valuable and, thanks to Docker, we didn't have to waste it downloading and installing the required JDK just to get Elastic Search running.

In fact, I was so impressed with Docker and Elastic Search that it compelled me to write this post.

So without further ado, let's get started.

What is Elastic Search?

Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents.

Now that that's out of the way, let's get going. First things first, you're going to need Docker.

Installing Docker

If the title didn't give it away, we're going to be setting Elastic Search up locally using Docker, so if you don't have Docker installed then you're going to need to head over to the Docker download page and download/install it.

Setting up Elastic Search

Next we're going to need to find the Elastic Search Docker image.

To do this we're going to head over to Docker Hub and search for Elastic Search (here's a direct link for the lazy or those pressed for time).

What is Docker Hub? For those new to Docker, Docker Hub is a repository of popular Docker images, many of which are officially owned and supported by the owners of the software.

Pulling and running the Elastic Search Docker image

To run the Elastic Search Docker image we're first going to need to pull it down to our local machine. To do this open your command prompt or terminal (any directory is fine as nothing is downloaded to the current directory) and execute the following Docker command:

docker pull elasticsearch

Running Elastic Search

Next we want to run our Elastic Search image. To do that we need to type the following command into our terminal:

docker run -d -p 9200:9200 -p 9300:9300 elasticsearch

Let's break down the command:

  • First we're telling Docker we want to run an image in a container via the 'run' command.

  • The -d argument will run the container in detached mode. This means it will run as a separate background process, as opposed to a short-lived process that will run and immediately terminate once it has finished executing.

  • Moving on, the -p arguments tell Docker to bind our local machine's ports 9200 and 9300 to ports 9200 and 9300 in the container.

  • Then at the end we specify the Docker image we wish to start running - in this case, the Elastic Search image.

Note: At this point, if you're new to Docker then it's worth knowing that our container's storage will be deleted when we tear the container down. If you wish to persist the data then we have to use the -v flag to map a directory on our local disk to the container's data volume, as opposed to the default location inside the container.

If we want to map the volume to our local disk then we'd need to run the following command instead of the one mentioned above:

docker run -d -v "$HOME/Documents/elasticsearchconf/":/usr/share/elasticsearch/data -p 9200:9200 -p 9300:9300 elasticsearch

This will map our $HOME/Documents/elasticsearchconf folder to the container's /usr/share/elasticsearch/data directory.

Checking our Elastic Search container is up and running

If the above command worked successfully then we should see the Elastic Search container up and running. We can check this by executing the following command, which lists all running containers:

docker ps

To verify Elastic Search is running, you should also be able to navigate to http://localhost:9200 and see output similar to this:

{
  "name" : "f0t5zUn",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "3loHHcMnR_ekDxE1Yc1hpQ",
  "version" : {
    "number" : "5.0.0",
    "build_hash" : "253032b",
    "build_date" : "2016-10-26T05:11:34.737Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.0"
  },
  "tagline" : "You Know, for Search"
}

Container isn't running?

If for some reason your container isn't running, then you can run the following command to see all containers, whether running or not.

docker ps -a

Then, once you've identified the container you just tried to run (hint: it should be at the top), run the following command, including the first 3 or 4 characters from the Container Id column:

docker logs 0403

This will print out the container's logs, giving you a bit of information as to what could have gone wrong.

Connecting to Elastic Search

Now that our Docker container is up and running, let's get our hands dirty with Elastic Search via their RESTful API.

Indexing data

Let's begin by indexing some data in Elastic Search. We can do this by posting the following product to our desired index (where product is our index name and television is our type):

// HTTP POST:
http://localhost:9200/product/television

Message Body:
{"Name": "Samsung Flatscreen Television", "Price": "£899"}

If successful, you should get the following response from Elastic Search:

{
    "_index":"product",
    "_type":"television",
    "_id":"AVhIJ4ACuKcehcHggtFP",
    "_version":1,
    "result":"created",
    "_shards":{
        "total":2,
        "successful":1,
        "failed":0
    },
    "created":true
}
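
If you'd rather index the document from .NET instead of a REST client, a minimal HttpClient sketch (an illustration only, assuming Elastic Search is listening on localhost:9200 as above) would look something like this:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class ProductIndexer
{
    // Posts the same JSON document to the 'product' index with the 'television' type
    public static async Task IndexProductAsync()
    {
        using (var client = new HttpClient())
        {
            var json = "{\"Name\": \"Samsung Flatscreen Television\", \"Price\": \"£899\"}";
            var content = new StringContent(json, Encoding.UTF8, "application/json");

            var response = await client.PostAsync("http://localhost:9200/product/television", content);

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}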

Searching for data

Now we've got some data in our index, let's perform a simple search for it.

Performing the following GET request should return our newly indexed data:

// HTTP GET:
http://localhost:9200/_search?q=samsung

The response should look something like this:

{
    "took":16,
    "timed_out":false,
    "_shards":{
        "total":15,
        "successful":15,
        "failed":0
    },
    "hits":{
        "total":1,
        "max_score":0.2876821,
        "hits":[{
                "_index":"product",
                "_type":"television",
                "_id":"AVhIJ4ACuKcehcHggtFP",
                "_score":0.2876821,
                "_source": {
                    "Name": "Samsung Flatscreen Television","Price": "£899"
                }
            }]
        }
}

One of the powerful features of Elastic Search is its full-text search capabilities, enabling you to perform some truly impressive search queries against your indexed data.

For more on the search options available to you I would recommend you check out this resource.

Deleting Indexed data

To delete indexed data you can perform a delete request, passing the object ID, like so:

// HTTP Delete
http://localhost:9200/product/television/AVhIJ4ACuKcehcHggtFP

Moving Forward

So far we've been using Elastic Search's API to query our Elastic Search index. If you'd prefer something more visual that will aid you in your exploration and discovery of Elastic Search's structured query syntax, then I'd highly recommend you check out ElasticSearch-Head, a web frontend for your Elastic Search cluster.

To get started with ElasticSearch-Head you simply clone the repository to your local drive, open the index.html file and point it at your http://localhost:9200 endpoint.

If you experience issues connecting your web client to your Dockerised Elastic Search cluster then it could be because of CORS permissions. Instead of fiddling around with configuration I simply installed and enabled this Chrome plugin to get around it.

Now you can use the search tab in the web UI to discover more of Elastic Search's complex structured query syntax.

Executing JavaScript inside of .NET Core using JavaScriptServices

Posted on Wednesday, 28 Sep 2016

Recently, we were lucky enough to have Steve Sanderson speak at .NET South West, a Bristol based .NET meet up I help organise. His talk was titled SPAs (Single Page Applications) on ASP.NET Core and featured a whole host of impressive tools and APIs he's been developing at Microsoft, all aimed at aiding developers building single page applications (including Angular, Knockout and React) on the ASP.NET Core platform.

As Steve was demonstrating all of these amazing APIs (including server-side rendering of Angular 2/React applications, and Angular 2 validation integrated with ASP.NET Core MVC's validation), the question on the tip of everyone's tongue was "How's he doing this?!".

When the opportunity finally arose, Steve demonstrated what I think is one of the coolest parts of the talk - the JavaScriptServices middleware - the topic of this blog post.

Before continuing, if you develop single page apps in either Angular, React or Knockout then I'd highly recommend you check out the talk, which can also be found here.

What is JavaScriptServices?

JavaScriptServices is a .NET Core middleware library that plugs into the .NET Core pipeline and uses Node to execute JavaScript (naturally this also includes Node modules) at runtime. This means that in order to use JavaScriptServices you have to have Node installed on the host machine.

How does it work and what application does it have? Let's dive in and take a look!

Setting up JavaScriptServices

Before we continue, it's worth mentioning that it looks like the package is currently going through a rename (from NodeServices to JavaScriptServices) - so you'll notice the API and NuGet package are referenced as NodeServices, yet I refer to JavaScriptServices throughout. Now that that's out of the way, let's continue!

First of all, as mentioned above, JavaScriptServices relies on Node being installed on the host machine, so if you don't have Node installed then head over to NodeJs.org to download and install it. If you've already got Node installed then you're good to continue.

As I alluded to earlier, setting up the JavaScriptServices middleware is as easy as setting up any other piece of middleware in the new .NET Core framework. Simply include the JavaScriptServices NuGet package in your solution:

Install-Package Microsoft.AspNetCore.NodeServices -Pre

Then reference it in your Startup.cs file's ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    services.AddNodeServices();
}

Now we have the following interface at our disposal for calling Node modules:

public interface INodeServices : IDisposable
{
    Task<T> InvokeAsync<T>(string moduleName, params object[] args);
    Task<T> InvokeAsync<T>(CancellationToken cancellationToken, string moduleName, params object[] args);

    Task<T> InvokeExportAsync<T>(string moduleName, string exportedFunctionName, params object[] args);
    Task<T> InvokeExportAsync<T>(CancellationToken cancellationToken, string moduleName, string exportedFunctionName, params object[] args);
}

Basic Usage

Now we've got JavaScriptServices setup, let's look at getting started with a simple use case and run through how we can execute some trivial JavaScript in our application and capture the output.

First we'll begin by creating a simple JavaScript file containing a Node module that returns a greeting message:

// greeter.js
module.exports = function (callback, firstName, surname) {

    var greet = function (firstName, surname) {
        return "Hello " + firstName + " " + surname;
    }

    callback(null, greet(firstName, surname));
}

Next, we inject an instance of INodeServices into our controller and invoke our Node module by calling InvokeAsync<T> where T is our module's return type (a string in this instance).

public class DemoController : Controller {

    private readonly INodeServices _nodeServices;

    public DemoController(INodeServices nodeServices)
    {
        _nodeServices = nodeServices;
    }

    public async Task<IActionResult> Index()
    {
        string greetingMessage = await _nodeServices.InvokeAsync<string>("./scripts/js/greeter", "Joseph", "Woodward");

        ...
    }

}

Whilst this is a simple example, hopefully it's demonstrated how easy it is and given you an idea as to how powerful this can potentially be. Now let's go one step further.

Taking it one step further - transpiling ES6/ES2015 to ES5, including source mapping files

Whilst front end task runners such as Grunt and Gulp have their place, what if we were writing ES6 code and didn't want to have to go through the hassle of setting up a task runner just to transpile our ES2015 JavaScript?

What if we could transpile our Javascript at runtime in our ASP.NET Core application? Wouldn't that be cool? Well, we can do just this with JavaScriptServices!

First we need to include a few Babel packages to transpile our ES6 code down to ES5. So let's go ahead and create a package.json in the root of our solution and install the listed packages by executing npm install at the same level as our newly created package.json file.

{
    "name": "nodeservicesexamples",
    "version": "0.0.0",
    "dependencies": {
        "babel-core": "^6.7.4",
        "babel-preset-es2015": "^6.6.0"
    }
}

Now all we need to do is register the NodeServices service in the ConfigureServices method of our Startup.cs class:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddNodeServices();
    ...
}

After this we want to create our Node module that will invoke the Babel transpiler - this will also include the source mapping files.

// /node/transpilation.js

var fs = require('fs');
var babelCore = require('babel-core');

module.exports = function(cb, physicalPath, requestPath) {
    var originalContents = fs.readFileSync(physicalPath);
    var result = babelCore.transform(originalContents, {
        presets: ['es2015'],
        sourceMaps: 'inline',
        sourceFileName: '/sourcemapped' + requestPath
    });

    cb(null, result.code);
}

Now comes the interesting part. On every request we want to check whether the HTTP request being made is for a .js file. If it is, then we want to pass its contents to our JavaScriptServices instance to transpile it from ES6/ES2015 down to ES5, then finish off by writing the output to the response.

At this point I think it's only fair to say that if you were doing this in production then you'd probably want some form of caching of the output. This would prevent the same files being transpiled on every request - but hopefully the following example is enough to give you an idea as to what it would look like:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, INodeServices nodeServices)
{
    ...

    // Dynamically transpile any .js files under the '/js/' directory
    app.Use(next => async context => {
        var requestPath = context.Request.Path.Value;
        if (requestPath.StartsWith("/js/") && requestPath.EndsWith(".js")) {
            var fileInfo = env.WebRootFileProvider.GetFileInfo(requestPath);
            if (fileInfo.Exists) {
                var transpiled = await nodeServices.InvokeAsync<string>("./node/transpilation.js", fileInfo.PhysicalPath, requestPath);
                await context.Response.WriteAsync(transpiled);
                return;
            }
        }

        await next.Invoke(context);
    });

    ...
}

Here, all we're doing is checking the path of every request to see if it falls under the /js/ folder and ends with .js. Any matches are then checked to see if the file exists on disk, and the file is then passed to the transpilation.js module we created earlier. The transpilation module runs the contents of the file through Babel and returns the output to JavaScriptServices, which writes it to our application's response object and short-circuits the rest of the pipeline; anything that doesn't match is passed on to the next handler in our HTTP pipeline.
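
As mentioned above, in production you'd probably want to cache the transpiled output rather than calling into Node on every request. A rough sketch of that (using IMemoryCache from Microsoft.Extensions.Caching.Memory and keying on the request path - both assumptions of mine, not part of the original sample) might look like this:

// Assumes services.AddMemoryCache() in ConfigureServices and an extra
// 'IMemoryCache cache' parameter injected into the Configure method
app.Use(next => async context => {
    var requestPath = context.Request.Path.Value;
    if (requestPath.StartsWith("/js/") && requestPath.EndsWith(".js")) {
        var fileInfo = env.WebRootFileProvider.GetFileInfo(requestPath);
        if (fileInfo.Exists) {
            string transpiled;
            if (!cache.TryGetValue(requestPath, out transpiled)) {
                // Only pay the Node round-trip the first time a file is requested
                transpiled = await nodeServices.InvokeAsync<string>("./node/transpilation.js", fileInfo.PhysicalPath, requestPath);
                cache.Set(requestPath, transpiled);
            }

            await context.Response.WriteAsync(transpiled);
            return;
        }
    }

    await next.Invoke(context);
});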

Now that's all set up, let's go ahead and give it a whirl. Create a simple ES2015 JavaScript class in the wwwroot/js/ folder and reference it within your view in a script tag.

// wwwroot/js/example.js

class Greeter {
    getMessage(name){
        return "Hello " + name + "!";
    }
}

var greeter = new Greeter();
console.log(greeter.getMessage("World"));

Now, when we load our application and navigate to our example.js file via the browser's devtools, you should see it's been transpiled down to ES5!

Conclusion

Hopefully this post has given you enough of an understanding of the JavaScriptServices package to demonstrate how powerful the library really is. With the abundance of Node modules available there's all sorts of functionality you can build into your application, or your application's build process. Have fun!

Building rich client side apps using Angular 2 talk at DDD 11 in Reading

Posted on Saturday, 03 Sep 2016

This weekend I had the pleasure of speaking in front of this friendly bunch (pictured) at DDD 11, a .NET focused developers' conference - this year hosted out of Microsoft's Reading offices.

My talk topic? Building Rich client-side applications using Angular 2.

As a regular speaker at meet ups and the occasional podcast, this year I've been keen to step it up and move into the conference space. Speaking is something that I love doing, and DDD 11 was a great opportunity that I didn't want to miss.

Having spotted a tweet that DDD were accepting talk proposals a number of weeks ago, I quickly submitted a few talks I had up my sleeve - with Building rich client-side apps using Angular 2 being the talk that received enough votes to get me a speaking slot - yay!

My talk was one of the last talks of the day so I had to contend with a room full of sleepy heads, tired after a long day of networking and programming related sessions (myself included). With this in mind I decided it would be best to step up the energy levels with a hope of keeping people more engaged.

Overall I think the talk went well (though in hindsight I could have slowed down a little bit!) and received what I felt was a good reception, and it was great to have people in the audience (some of whom are speakers themselves) approach me afterwards saying they enjoyed the talk.

Angular 2 CLI interview with .NET Rocks

Posted on Tuesday, 23 Aug 2016

Recently I once again had the pleasure of talking with Carl Franklin and Richard Campbell on the .NET Rocks show (for those that remember, my last talk with Carl and Richard was about shortcuts and productivity games in Visual Studio), this time about the Angular 2 command line interface. With all that's been happening with Angular 2 it was great to have the opportunity to spend a bit of time talking about the tooling around the framework and some of the features the Angular 2 CLI affords us.

So if you've been looking at Angular 2 recently then be sure to check out show 1339 and leave any feedback you may have.

Integration testing your ASP.NET Core middleware using TestServer

Posted on Sunday, 31 Jul 2016

Lately I've been working on a piece of middleware that simplifies temporarily or permanently redirecting a URL from one path to another, whilst easily expressing the permanency of the redirect by sending the appropriate 301 or 302 HTTP Status Code to the browser.

If you've ever re-written a website and had to ensure old, expired URLs didn't result in 404 errors and lost traffic then you'll know what a pain this can be when dealing with a large number of expired URLs. Thankfully using .NET Core's new middleware approach, this task becomes far easier - so much so that I've wrapped it into a library I intend to publish to NuGet:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    app.UseRequestRedirect(r => r.ForPath("/32/old-path.html").RedirectTo("/32/new-path").Permanently());
    app.UseRequestRedirect(r => r.ForPath("/33/old-path.html").RedirectTo("/33/new-path").Temporarily());
    app.UseRequestRedirect(r => r.ForPath("/34/old-path.html").RedirectTo(DeferredQueryDbForPath).Temporarily());
    ...
}

private string DeferredQueryDbForPath(string oldPath){
    /* Query database for new path only if old path is hit */
    return newPath;
}

Whilst working on this middleware I was keen to add some integration tests for a more complete range of tests. After a bit of digging I noticed that doing so was actually really simple, thanks to the TestServer class available as part of the Microsoft.AspNetCore.TestHost package.

What is Test Server?

TestServer is a lightweight and configurable host server, designed solely for testing purposes. Its ability to create and serve test requests without the need for a real web host is its true value, making it perfect for testing middleware libraries (amongst other things!) that take a request, act upon it and eventually return a response. In the case of the aforementioned middleware, the response we'll be testing for is a status code informing the browser that the page has moved, along with the location the page can now be found at.

TestServer Usage

As mentioned above, you'll find the TestServer class within the Microsoft.AspNetCore.TestHost package, so first of all you'll need to add it to your test project either by using the following NuGet command:

Install-Package Microsoft.AspNetCore.TestHost

Or by updating your project.json file directly:

"dependencies": {
    ...
    "Microsoft.AspNetCore.TestHost": "1.0.0",
    ...
},

Once the NuGet package has downloaded we're ready to start creating our tests.

After creating our test class the first thing we need to do is configure an instance of WebHostBuilder, ensuring we add our configured middleware to the pipeline. Once configured we create our instance of TestServer which then bootstraps our test server based on the supplied WebHostBuilder configurations.

[Fact]
public async void Should_Redirect_Permanently()
{
    // Arrange
    var builder = new WebHostBuilder()
        .Configure(app => {
            app.UseRequestRedirect(r => r.ForPath("/old/").RedirectTo("/new/").Permanently());
        }
    );

    var server = new TestServer(builder);

    // Act
    ...
}

Next we need to manually create a new HTTP Request, passing the parameters required to exercise our middleware. In this instance, using the redirect middleware, all I need to do is create a new GET request to the path outlined in my arrange code snippet above. Once created we simply pass our newly created HttpRequestMessage to our configured instance of TestServer.

// Act
var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/old/");
var responseMessage = await server.CreateClient().SendAsync(requestMessage);

// Assert
...

Now all that's left is to assert our test using the response we received from our SendAsync() call. In the example below I'm using the assertion library Shouldly to assert that the correct status code is emitted and the correct path (/new/) is returned in the Location header.

// Assert
responseMessage.StatusCode.ShouldBe(HttpStatusCode.MovedPermanently);
responseMessage.Headers.Location.ToString().ShouldBe("/new/");

The complete test will look like this:

public class IntegrationTests
{
    [Fact]
    public async void Should_Redirect_Permanently()
    {
        // Arrange
        var builder = new WebHostBuilder()
            .Configure(app => {
                app.UseRequestRedirect(r => r.ForPath("/old/").RedirectTo("/new/").Permanently());
            }
        );

        var server = new TestServer(builder);

        // Act
        var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/old/");
        var responseMessage = await server.CreateClient().SendAsync(requestMessage);

        // Assert
        responseMessage.StatusCode.ShouldBe(HttpStatusCode.MovedPermanently);
        responseMessage.Headers.Location.ToString().ShouldBe("/new/");
    }
}

Conclusion

In this post we looked at how simple it is to test our middleware using the TestServer instance. Whilst the above example is quite trivial, hopefully it provides you with enough of an understanding as to how you can start writing integration or functional tests for your middleware.

Proxying HTTP requests in ASP.NET Core using Kestrel

Posted on Saturday, 02 Jul 2016

A bit of a short post this week, but hopefully one that will save someone a bit of googling!

Recently I've been doing a lot of ASP.NET Core MVC and in one project I've been working on I have an admin login area. My first instinct was to create the login section as an Area within the main application project.

Creating a login area in the same application would work, but other than sharing an /admin/ path it really was a completely separate application. It has different concerns and a completely different UI (in this instance it's an Angular 2 application talking to a back-end API). For these reasons, creating the admin section as an MVC Area just felt wrong - so I began to look at what Kestrel could offer in terms of proxying requests to another application. This way I could keep my user-facing website as one project and the administration area as another, allowing them to grow independently of one another.

Whilst proxying requests is possible in IIS, I'd like the option of hosting the application across various platforms, so I was keen to see what Kestrel had to offer.

Enter the ASP.NET Core Proxy Middleware!

After a little digging it came as no surprise that there was some middleware that made proxying requests a breeze. The middleware approach of ASP.NET Core lends itself to such a task, and setup was so simple that I felt it merited a blog post.

After installing the Microsoft.AspNetCore.Proxy NuGet package via the "Install-Package Microsoft.AspNetCore.Proxy" command, all I had to do was hook the proxy middleware up to my pipeline using the MapWhen method within my application's Startup class:

// Startup.cs
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...

    app.MapWhen(IsAdminPath, builder => builder.RunProxy(new ProxyOptions
    {
        Scheme = "http",
        Host = "localhost",
        Port = "8081"
    }));

    ...

}

private static bool IsAdminPath(HttpContext httpContext)
{
    return httpContext.Request.Path.Value.StartsWith(@"/admin/", StringComparison.OrdinalIgnoreCase);
}

As you can see, all I'm doing is passing a method that checks the path begins with /admin/.

Once set up, all you need to do is configure your second application (in this instance it's my admin application) to listen on the configured port. You can do this within the Program class via the UseUrls extension method:

// Program.cs
public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseUrls("http://localhost:8081")
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

Now, if you start up your application and navigate to /admin/ (or whatever path you've specified) the request should be proxied to your secondary application!

Happy coding!