Latest Blog Posts

New Blog, now .NET Core, Docker and Linux powered - and soon to be open-sourced

Posted on Saturday, 26 Nov 2016

A little while ago I concluded that my blog was looking a bit long in the tooth, and decided that since I was already looking at .NET Core, what better opportunity to put it through its paces than rewriting my blog with it.

I won't go into too much detail as to how the blog is built as it's nothing anyone wouldn't have seen before, but as someone who likes to read about people's experiences building things, I thought I'd break down how it's built and what I learned whilst doing it.

Here's an overview of some of the technology I've used:

  • .NET Core / ASP.NET Core MVC
  • MediatR
  • Azure SQL
  • Dapper
  • Redis
  • Google Authentication
  • Docker
  • Nginx
  • Ubuntu

.NET Core / ASP.NET Core MVC

Having been following the development of .NET Core, I thought I'd wait until things stabilised before I started migrating. Unfortunately I didn't wait long enough and had to go through the pain many developers experienced with breaking changes.

I also ran into various issues such as waiting for libraries to migrate to .NET Standard, and other problems such as RSS feed generation, for which no .NET Standard libraries exist yet, primarily because System.ServiceModel.Syndication is not yet .NET Core compatible. None of these are deal breakers - workarounds exist - but they nonetheless tripped me up along the way. That said, whilst running into these issues I kept reminding myself that this is what happens when you start building with frameworks and libraries that are still in beta - so no hard feelings.

In fact, I've been extremely impressed with the direction and features in ASP.NET Core and look forward to building more with it moving forward.

MediatR

I've never been a fan of the typical N-tier approach to building an application, primarily because it encourages you to split your application into horizontal slices (generally UI, Business Logic and Data Access), which often leads to a rigid design filled with lots of very large, mixed concerns. Instead I prefer breaking my application up into vertical slices based on features, such as Blog, Pages, Admin etc.

MediatR helps me do this, and at the same time allows you to model your application's commands and queries, turning an HTTP request into a pipeline in which you handle the request and return a response. This has the added effect of keeping your controllers nice and skinny, as the only responsibility of the controller is to pass the request into MediatR's pipeline.

Below is a simplified example of what a controller looks like, forming the request then delegating it to the appropriate handler:

// Admin Controller
public class BlogAdminController : Controller {

    private readonly IMediator _mediator;

    public BlogAdminController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [Route("/admin/blog/edit/{id:int}")]
    public IActionResult Edit(BlogPostEdit.Query query)
    {
        BlogPostEdit.Response model = _mediator.Send(query);

        return View(model);
    }
}
public class BlogPostEdit
{
    public class Query : IRequest<Response>
    {
        public int Id { get; set; }
    }

    public class BlogPostEditRequestHandler : IRequestHandler<Query, Response>
    {
        private readonly IBlogAdminStorage _storage;

        public BlogPostEditRequestHandler(IBlogAdminStorage storage)
        {
            _storage = storage;
        }

        public Response Handle(Query request)
        {
            var blogPost = _storage.GetBlogPost(request.Id);
            if (blogPost == null)
                throw new RecordNotFoundException(string.Format("Blog post Id {0} not found", request.Id.ToString()));

            return new Response
            {
                BlogPostEditModel = blogPost
            };
        }
    }

    public class Response
    {
        public BlogPostEditModel BlogPostEditModel { get; set; }
    }
}

A powerful feature of MediatR's pipelining approach is that you can use the decorator pattern to handle cross-cutting concerns like caching, logging and even validation.
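
To make that concrete, here's a rough, hypothetical sketch of what a caching decorator might look like. This isn't code from this blog, and how you register it depends on your container (StructureMap's DecorateAllWith, for example, can wrap every handler for you); it also assumes the MediatR 2.x handler signature and Microsoft.Extensions.Caching.Memory:

// Hypothetical caching decorator - assumes MediatR 2.x and an IMemoryCache registered via services.AddMemoryCache()
public class CachingHandlerDecorator<TRequest, TResponse> : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IMemoryCache _cache;

    public CachingHandlerDecorator(IRequestHandler<TRequest, TResponse> inner, IMemoryCache cache)
    {
        _inner = inner;
        _cache = cache;
    }

    public TResponse Handle(TRequest message)
    {
        // Naive cache key - a real implementation would derive it from the request's values
        var cacheKey = typeof(TRequest).FullName;

        return _cache.GetOrCreate(cacheKey, entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(5);
            return _inner.Handle(message); // only hit the real handler on a cache miss
        });
    }
}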

If you're interested in reading more about MediatR then I'd highly recommend Jimmy Bogard's video on favouring slices rather than layers, where he covers MediatR and its architectural benefits.

Google Authentication

I wanted to keep login simple, and not have to worry about storing passwords. With this in mind I decided to go with Google Authentication for logging in, which I cover in my Social authentication via Google in ASP.NET Core MVC post.

Docker, Ubuntu and Redis

Having read loads on Docker but never having played with it, migrating my blog to .NET Core seemed like a perfect opportunity to get stuck into Docker to see what all the fuss was about.

Having been using Docker for a couple of months now I'm completely sold on how it changes the deployment and development landscape.

This isn't the right post to go into too much detail about Docker, but no doubt you're aware of roughly what it does by now and if you're considering taking it for a spin to see what it can do for you then I would highly recommend it.

Docker's made configuring my application to run on Ubuntu with Redis and Nginx an absolute breeze. No longer do I have to spin up individual services, package the website up and deploy it by hand. Now I simply publish an image to a repository, pull it down onto my host and run docker-compose up.
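
For illustration only, a minimal docker-compose.yml for a stack like this might look something like the following - the service names, image names and ports are assumptions rather than my actual configuration:

# Hypothetical docker-compose.yml - values are illustrative
version: '2'
services:
  blog:
    image: myregistry/blog:latest     # the published ASP.NET Core image
    expose:
      - "5000"                        # Kestrel listening inside the container
    depends_on:
      - redis
  redis:
    image: redis
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # reverse proxy config pointing at the blog container
    depends_on:
      - blog

With something like that in place, docker-compose up -d brings the whole stack up in one go.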

Don't get me wrong, Docker's certainly not the silver bullet that some say it is, but it's definitely going to make your life easier in most cases.

Open-sourcing the blog

I redeveloped the blog with open-sourcing it in mind, so once I've finished tidying it up I'll put it up on my GitHub account so you can download it and give it a try for yourself. It's no Orchard CMS, but it'll do the job for me - and potentially you.

Getting started with Elastic Search in Docker

Posted on Tuesday, 15 Nov 2016

Having recently spent a lot of time experimenting with Docker, I've found that - beyond repeatable deployment and runtime environments - one of the great benefits promised by the containerisation movement is how it can supplement your local development environment.

No longer do you need to simultaneously waste time and slow down your local development machine by installing various services like Redis, Postgres and other dependencies. You can simply download a Docker image and boot up your development environment, then tear it down again once you're finished with it.

In fact, a lot of the Docker images for such services are maintained by the development teams and companies themselves.

I'd never fully appreciated this until recently, when I took part in the quarterly three-day hackathon at work. Time was precious, and not having to download and install the required JDK just to get Elastic Search running was a real win.

In fact, I was so impressed with Docker and Elastic Search that it compelled me to write this post.

So without further ado, let's get started.

What is Elastic Search?

Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents.

Now that that's out of the way, let's get going. First things first: you're going to need Docker.

Installing Docker

If the title didn't give it away, we're going to be setting up Elastic Search up locally using Docker, so if you don't have Docker installed then you're going to need to head over to the Docker download page and download/install it.

Setting up Elastic Search

Next we're going to need to find the Elastic Search Docker image.

To do this we're going to head over to Docker Hub and search for Elastic Search (here's a direct link for the lazy or those pressed for time).

What is Docker Hub? For those new to Docker, Docker Hub is a repository of popular Docker images, many of which are officially owned and supported by the owners of the software.

Pulling and running the Elastic Search Docker image

To run the Elastic Search Docker image we're first going to need to pull it down to our local machine. To do this open your command prompt or terminal (any directory is fine as nothing is downloaded to the current directory) and execute the following Docker command:

docker pull elasticsearch

Running Elastic Search

Next we want to run our Elastic Search image; to do that we need to type the following command into our terminal:

docker run -d -p 9200:9200 -p 9300:9300 elasticsearch

Let's break down the command:

  • First we're telling Docker we want to run an image in a container via the 'run' command.

  • The -d argument will run the container in detached mode. This means it will run as a separate background process, as opposed to a short-lived process that runs and immediately terminates once it has finished executing.

  • Moving on, the -p arguments tell Docker to open and bind our local machine's ports 9200 and 9300 to ports 9200 and 9300 in the Docker container.

  • Then at the end we specify the Docker image we wish to start running - in this case, the Elastic Search image.

Note: At this point, if you're new to Docker then it's worth knowing that the container's storage will be deleted when we tear the container down. If you wish to persist the data then you have to use the -v flag to map a directory on your local disk into the container, rather than relying on the container's default internal storage.

If we want to map the volume to our local disk then we'd need to run the following command instead of the one mentioned above:

docker run -d -v "$HOME/Documents/elasticsearchconf/":/usr/share/elasticsearch/data -p 9200:9200 -p 9300:9300 elasticsearch

This will map our $HOME/Documents/elasticsearchconf folder to the container's /usr/share/elasticsearch/data directory.

Checking our Elastic Search container is up and running

If the above command worked successfully then we should see the Elastic Search container up and running. We can check this by executing the following command, which lists all running containers:

docker ps

To verify Elastic Search is running, you should also be able to navigate to http://localhost:9200 and see output similar to this:

{
  "name" : "f0t5zUn",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "3loHHcMnR_ekDxE1Yc1hpQ",
  "version" : {
    "number" : "5.0.0",
    "build_hash" : "253032b",
    "build_date" : "2016-10-26T05:11:34.737Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.0"
  },
  "tagline" : "You Know, for Search"
}

Container isn't running?

If for some reason your container isn't running, then you can run the following command to see all containers, whether running or not:

docker ps -a

Then, once you've identified the container you just tried to run (hint: it should be at the top), run the following command, including the first 3 or 4 characters from the Container Id column:

docker logs 0403

This will print out the container's logs, giving you a bit of information as to what could have gone wrong.

Connecting to Elastic Search

Now that our Docker container is up and running, let's get our hands dirty with Elastic Search via their RESTful API.

Indexing data

Let's begin by indexing some data in Elastic Search. We can do this by posting the following product to our desired index (where product is our index name, and television is our type):

// HTTP POST:
http://localhost:9200/product/television

Message Body:
{"Name": "Samsung Flatscreen Television", "Price": "£899"}

If successful, you should get the following response from Elastic Search:

{
    "_index":"product",
    "_type":"television",
    "_id":"AVhIJ4ACuKcehcHggtFP",
    "_version":1,
    "result":"created",
    "_shards":{
        "total":2,
        "successful":1,
        "failed":0
    },
    "created":true
}

Searching for data

Now we've got some data in our index, let's perform a simple search for it.

Performing the following GET request should return our newly indexed data:

// HTTP GET:
http://localhost:9200/_search?q=samsung

The response should look something like this:

{
    "took":16,
    "timed_out":false,
    "_shards":{
        "total":15,
        "successful":15,
        "failed":0
    },
    "hits":{
        "total":1,
        "max_score":0.2876821,
        "hits":[{
                "_index":"product",
                "_type":"television",
                "_id":"AVhIJ4ACuKcehcHggtFP",
                "_score":0.2876821,
                "_source": {
                    "Name": "Samsung Flatscreen Television","Price": "£899"
                }
            }]
        }
}

One of the powerful features of Elastic Search is its full-text search capabilities, enabling you to perform some truly impressive search queries against your indexed data.

For more on the search options available to you I would recommend you check out this resource.

Deleting Indexed data

To delete indexed data you can perform a delete request, passing the object ID, like so:

// HTTP Delete
http://localhost:9200/product/television/AVhIJ4ACuKcehcHggtFP

Moving Forward

So far we've been using Elastic Search's API to query our Elastic Search index. If you'd prefer something more visual to aid your exploration and discovery of Elastic Search's structured query syntax then I'd highly recommend you check out ElasticSearch-Head, a web front end for your Elastic Search cluster.

To get started with ElasticSearch-Head you simply clone the repository to your local drive, open the index.html file and point it at your http://localhost:9200 endpoint.
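
Assuming the project still lives at its usual GitHub home, cloning it is as simple as:

git clone https://github.com/mobz/elasticsearch-head.git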

If you experience issues connecting the web client to your Dockerised Elastic Search cluster then it could be down to CORS permissions. Instead of fiddling around with configuration I simply installed and enabled this Chrome plugin to get around it.

Now you can use the web UI's search tab to discover more of Elastic Search's complex structured query syntax.

Executing JavaScript inside of .NET Core using JavaScriptServices

Posted on Wednesday, 28 Sep 2016

Recently, we were lucky enough to have Steve Sanderson speak at .NET South West, a Bristol based .NET meet up I help organise. His talk was titled SPAs (Single Page Applications) on ASP.NET Core and featured a whole host of impressive tools and APIs he's been developing at Microsoft, all aimed at aiding developers building single page applications (including Angular, Knockout and React) on the ASP.NET Core platform.

As Steve demonstrated all of these amazing APIs (including server-side rendering of Angular 2/React applications, and Angular 2 validation integrated with .NET Core MVC's validation), the question on the tip of everyone's tongue was "How's he doing this?!".

When the opportunity finally arose, Steve demonstrated what I think is one of the coolest parts of the talk - the JavaScriptServices middleware - the topic of this blog post.

Before continuing, if you develop single page apps in either Angular, React or Knockout then I'd highly recommend you check out the talk, which can also be found here.

What is JavaScriptServices?

JavaScriptServices is a .NET Core middleware library that plugs into the .NET Core pipeline and uses Node to execute JavaScript (naturally this also includes Node modules) at runtime. This means that in order to use JavaScriptServices you have to have Node installed on the host machine.

How does it work and what application does it have? Let's dive in and take a look!

Setting up JavaScriptServices

Before we continue, it's worth mentioning that the package currently appears to be going through a rename (from NodeServices to JavaScriptServices) - so you'll notice the API and NuGet package are referenced as NodeServices, yet I refer to JavaScriptServices throughout. Now that that's out of the way, let's continue!

First of all, as mentioned above, JavaScriptServices relies on Node being installed on the host machine, so if you don't have Node installed then head over to NodeJs.org to download and install it. If you've already got Node installed then you're good to continue.

As I alluded to earlier, setting up the JavaScriptServices middleware is as easy as setting up any other piece of middleware in the new .NET Core framework. Simply include the JavaScriptServices NuGet package in your solution:

Install-Package Microsoft.AspNetCore.NodeServices -Pre

Then reference it in your Startup.cs file's ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    services.AddNodeServices();
}

Now we have the following interface at our disposal for calling Node modules:

public interface INodeServices : IDisposable
{
    Task<T> InvokeAsync<T>(string moduleName, params object[] args);
    Task<T> InvokeAsync<T>(CancellationToken cancellationToken, string moduleName, params object[] args);

    Task<T> InvokeExportAsync<T>(string moduleName, string exportedFunctionName, params object[] args);
    Task<T> InvokeExportAsync<T>(CancellationToken cancellationToken, string moduleName, string exportedFunctionName, params object[] args);
}

Basic Usage

Now we've got JavaScriptServices setup, let's look at getting started with a simple use case and run through how we can execute some trivial JavaScript in our application and capture the output.

First we'll begin by creating a simple JavaScript file containing a Node module that returns a greeting message:

// greeter.js
module.exports = function (callback, firstName, surname) {

    var greet = function (firstName, surname) {
        return "Hello " + firstName + " " + surname;
    }

    callback(null, greet(firstName, surname));
}

Next, we inject an instance of INodeServices into our controller and invoke our Node module by calling InvokeAsync<T> where T is our module's return type (a string in this instance).

public class DemoController : Controller {

    private readonly INodeServices _nodeServices;

    public DemoController(INodeServices nodeServices)
    {
        _nodeServices = nodeServices;
    }

    public async Task<IActionResult> Index()
    {
        string greetingMessage = await _nodeServices.InvokeAsync<string>("./scripts/js/greeter", "Joseph", "Woodward");

        ...
    }

}

Whilst this is a simple example, hopefully it's demonstrated how easy it is and given you an idea of how powerful this can potentially be. Now let's go one further.

Taking it one step further - transpiling ES6/ES2015 to ES5, including source mapping files

Whilst front end task runners such as Grunt and Gulp have their place, what if we were writing ES6 code and didn't want to have to go through the hassle of setting up a task runner just to transpile our ES2015 JavaScript?

What if we could transpile our Javascript at runtime in our ASP.NET Core application? Wouldn't that be cool? Well, we can do just this with JavaScriptServices!

First we need to include a few Babel packages to transpile our ES6 code down to ES5. So let's go ahead and create a package.json in the root of our solution and install the listed packages by executing npm install at the same level as our newly created package.json file.

{
    "name": "nodeservicesexamples",
    "version": "0.0.0",
    "dependencies": {
        "babel-core": "^6.7.4",
        "babel-preset-es2015": "^6.6.0"
    }
}

Now all we need to do is register the NodeServices service in the ConfigureServices method of our Startup.cs class:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddNodeServices();
    ...
}

After this we want to create our Node module that will invoke the Babel transpiler - this will also include the source mapping files.

// /node/transpilation.js

var fs = require('fs');
var babelCore = require('babel-core');

module.exports = function(cb, physicalPath, requestPath) {
    var originalContents = fs.readFileSync(physicalPath);
    var result = babelCore.transform(originalContents, {
        presets: ['es2015'],
        sourceMaps: 'inline',
        sourceFileName: '/sourcemapped' + requestPath
    });

    cb(null, result.code);
}

Now comes the interesting part. On every request we want to check whether the HTTP request is for a file with a .js extension. If it is, then we want to pass its contents to our JavaScriptServices instance to transpile it from ES6/ES2015 down to ES5, then finish off by writing the transpiled output to the response.

At this point I think it's only fair to say that if you were doing this in production then you'd probably want some form of caching of the output to prevent the same files being transpiled on every request (I sketch one possible approach a little further down) - but hopefully the following example is enough to give you an idea as to what it would look like:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, INodeServices nodeServices)
{
    ...

    // Dynamically transpile any .js files under the '/js/' directory
    app.Use(next => async context => {
        var requestPath = context.Request.Path.Value;
        if (requestPath.StartsWith("/js/") && requestPath.EndsWith(".js")) {
            var fileInfo = env.WebRootFileProvider.GetFileInfo(requestPath);
            if (fileInfo.Exists) {
                var transpiled = await nodeServices.InvokeAsync<string>("./node/transpilation.js", fileInfo.PhysicalPath, requestPath);
                await context.Response.WriteAsync(transpiled);
                return;
            }
        }

        await next.Invoke(context);
    });

    ...
}

Here, all we're doing is checking every request path to see if it falls under the /js/ folder and ends with .js. Any matches are then checked to see if the file exists on disk, and the file is then passed to the transpilation.js module we created earlier. The transpilation module runs the contents of the file through Babel and returns the output to JavaScriptServices, which then writes it to our application's response before invoking the next handler in our HTTP pipeline.
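
As for the caching caveat mentioned earlier, one possible (purely illustrative) approach is to keep the transpiled output in an IMemoryCache keyed by the request path, so each file only goes through Babel once. This assumes services.AddMemoryCache() has been called and an IMemoryCache instance (cache below) has been injected into Configure alongside nodeServices:

// Illustrative sketch only - 'cache' is an assumed IMemoryCache parameter on Configure
app.Use(next => async context => {
    var requestPath = context.Request.Path.Value;
    if (requestPath.StartsWith("/js/") && requestPath.EndsWith(".js")) {
        var fileInfo = env.WebRootFileProvider.GetFileInfo(requestPath);
        if (fileInfo.Exists) {
            // Transpile on the first request only; later requests are served straight from the cache
            var transpiled = await cache.GetOrCreateAsync(requestPath, _ =>
                nodeServices.InvokeAsync<string>("./node/transpilation.js", fileInfo.PhysicalPath, requestPath));

            await context.Response.WriteAsync(transpiled);
            return;
        }
    }

    await next.Invoke(context);
});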

Now that's all set up, let's go ahead and give it a whirl. Create a simple ES2015 JavaScript class in the wwwroot/js/ folder and reference it within your view in a script tag:

// wwwroot/js/example.js

class Greeter {
    getMessage(name){
        return "Hello " + name + "!";
    }
}

var greeter = new Greeter();
console.log(greeter.getMessage("World"));

Now, when we load our application and view our example.js file via the browser's devtools, we should see that it's been transpiled to ES5!
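
Conceptually, what Babel produces is the ES5 equivalent of the class above - a constructor function with the method hung off its prototype. The real output contains a couple of extra helper functions, but it boils down to something like this:

// Simplified illustration of the transpiled output - actual Babel output differs slightly
"use strict";

var Greeter = function () {
    function Greeter() {}

    Greeter.prototype.getMessage = function (name) {
        return "Hello " + name + "!";
    };

    return Greeter;
}();

var greeter = new Greeter();
console.log(greeter.getMessage("World"));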

Conclusion

Hopefully this post has given you enough of an understanding of the JavaScriptServices package to demonstrate how powerful the library really is. With the abundance of Node modules available there's all sorts of functionality you can build into your application, or your application's build process. Have fun!

Building rich client side apps using Angular 2 talk at DDD 11 in Reading

Posted on Saturday, 03 Sep 2016

This weekend I had the pleasure of speaking in front of this friendly bunch (pictured) at DDD 11, a .NET focused developers' conference - this year hosted out of Microsoft's Reading offices.

My talk topic? Building Rich client-side applications using Angular 2.

As a regular speaker at meet ups and the occasional podcast, this year I've been keen to step it up and move into the conference space. Speaking is something that I love doing, and DDD 11 was a great opportunity that I didn't want to miss.

Having spotted a tweet that DDD were accepting talk proposals a number of weeks ago, I quickly submitted a few talks I had up my sleeve - with Building rich client-side apps using Angular 2 being the talk that received enough votes to get me a speaking slot - yay!

My talk was one of the last talks of the day so I had to contend with a room full of sleepy heads, tired after a long day of networking and programming related sessions (myself included). With this in mind I decided it would be best to step up the energy levels with a hope of keeping people more engaged.

Overall I think the talk went well (though in hindsight I could have slowed down a little bit!) and received what I felt was a good reception, and it was great to have people in the audience (some of whom are speakers themselves) approach me afterwards saying they enjoyed the talk.

Angular 2 CLI interview with .NET Rocks

Posted on Tuesday, 23 Aug 2016

Recently I once again (for those that remember my last talk with Carl and Richard was about shortcuts and productivity games in Visual Studio) had the pleasure of talking with Carl Franklin and Richard Campbell on the .NET Rocks Show about the Angular 2 command line interface. With all that's been happening with Angular 2 it was great to have the opportunity to spend a bit of time talking about the tooling around the framework and some of the features the Angular 2 CLI affords us.

So if you've been looking at Angular 2 recently then be sure to check out show 1339 and leave any feedback you may have.

Integration testing your ASP.NET Core middleware using TestServer

Posted on Sunday, 31 Jul 2016

Lately I've been working on a piece of middleware that simplifies temporarily or permanently redirecting a URL from one path to another, whilst easily expressing the permanency of the redirect by sending the appropriate 301 or 302 HTTP Status Code to the browser.

If you've ever re-written a website and had to ensure old, expired URLs didn't result in 404 errors and lost traffic then you'll know what a pain this can be when dealing with a large number of expired URLs. Thankfully using .NET Core's new middleware approach, this task becomes far easier - so much so that I've wrapped it into a library I intend to publish to NuGet:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    app.UseRequestRedirect(r => r.ForPath("/32/old-path.html").RedirectTo("/32/new-path").Permanently());
    app.UseRequestRedirect(r => r.ForPath("/33/old-path.html").RedirectTo("/33/new-path").Temporarily());
    app.UseRequestRedirect(r => r.ForPath("/34/old-path.html").RedirectTo(DeferredQueryDbForPath).Temporarily());
    ...
}

private string DeferredQueryDbForPath(string oldPath){
    /* Query database for new path only if old path is hit */
    return newPath;
}

Whilst working on this middleware I was keen to add some integration tests for a more complete range of coverage. After a bit of digging I noticed that doing so is actually really simple, thanks to the TestServer class available as part of the Microsoft.AspNetCore.TestHost package.

What is Test Server?

TestServer is a lightweight and configurable host server, designed solely for testing purposes. Its real value is its ability to create and serve test requests without the need for a real web host, making it perfect for testing middleware libraries (amongst other things!) that take a request, act upon it and eventually return a response. In the case of the aforementioned middleware, the response we'll be testing is a status code informing the browser that the page has moved, along with the destination it has moved to.

TestServer Usage

As mentioned above, you'll find the TestServer class within the Microsoft.AspNetCore.TestHost package, so first of all you'll need to add it to your test project either by using the following NuGet command:

Install-Package Microsoft.AspNetCore.TestHost

Or by updating your project.json file directly:

"dependencies": {
    ...
    "Microsoft.AspNetCore.TestHost": "1.0.0",
    ...
},

Once the NuGet package has downloaded we're ready to start creating our tests.

After creating our test class the first thing we need to do is configure an instance of WebHostBuilder, ensuring we add our configured middleware to the pipeline. Once configured we create our instance of TestServer which then bootstraps our test server based on the supplied WebHostBuilder configurations.

[Fact]
public async Task Should_Redirect_Permanently()
{
    // Arrange
    var builder = new WebHostBuilder()
        .Configure(app => {
            app.UseRequestRedirect(r => r.ForPath("/old/").RedirectTo("/new/").Permanently());
        }
    );

    var server = new TestServer(builder);

    // Act
    ...
}

Next we need to manually create a new HTTP Request, passing the parameters required to exercise our middleware. In this instance, using the redirect middleware, all I need to do is create a new GET request to the path outlined in my arrange code snippet above. Once created we simply pass our newly created HttpRequestMessage to our configured instance of TestServer.

// Act
var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/old/");
var responseMessage = await server.CreateClient().SendAsync(requestMessage);

// Assert
...

Now all that's left is to assert our test using the response we received from our TestServer.SendAsync() method call. In the example below I'm using assertion library Shouldly to assert that the correct status code is emitted and the correct path (/new/) is returned in the header.

// Assert
responseMessage.StatusCode.ShouldBe(HttpStatusCode.MovedPermanently);
responseMessage.Headers.Location.ToString().ShouldBe("/new/");

The complete test will look like this:

public class IntegrationTests
{
    [Fact]
    public async Task Should_Redirect_Permanently()
    {
        // Arrange
        var builder = new WebHostBuilder()
            .Configure(app => {
                app.UseRequestRedirect(r => r.ForPath("/old/").RedirectTo("/new/").Permanently());
            }
        );

        var server = new TestServer(builder);

        // Act
        var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/old/");
        var responseMessage = await server.CreateClient().SendAsync(requestMessage);

        // Assert
        responseMessage.StatusCode.ShouldBe(HttpStatusCode.MovedPermanently);
        responseMessage.Headers.Location.ToString().ShouldBe("/new/");
    }
}

Conclusion

In this post we looked at how simple it is to test our middleware using the TestServer instance. Whilst the above example is quite trivial, hopefully it provides you with enough of an understanding as to how you can start writing integration or functional tests for your middleware.

Proxying HTTP requests in ASP.NET Core using Kestrel

Posted on Saturday, 02 Jul 2016

A bit of a short post this week, but hopefully one that will save some of you a bit of googling!

Recently I've been doing a lot of ASP.NET Core MVC and in one project I've been working on I have an admin login area. My first instinct was to create the login section as an Area within the main application project.

Creating a login area in the same application would work, but other than sharing an /admin/ path it really is a completely separate application. It has different concerns and a completely different UI (in this instance an Angular 2 application talking to a back-end API). For these reasons, creating the admin section as an MVC Area just felt wrong - so I began to look at what Kestrel could offer in terms of proxying requests to another application. This way I could keep my user-facing website as one project and the administration area as another, allowing them to grow independently of one another.

Whilst proxying requests is possible in IIS, I wanted the option of hosting the application across various platforms, so I was keen to see what Kestrel had to offer.

Enter the ASP.NET Core Proxy Middleware!

After a little digging it came as no surprise that there was some middleware that made proxying requests a breeze. The middleware approach of ASP.NET Core MVC lends itself to such a task, and the setup was so simple that I felt it merited a blog post.

After installing the Microsoft.AspNetCore.Proxy NuGet package via the "Install-Package Microsoft.AspNetCore.Proxy" command, all I had to do was hook the proxy middleware up to my pipeline using the MapWhen method within my application's Startup class:

// Startup.cs
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...

    app.MapWhen(IsAdminPath, builder => builder.RunProxy(new ProxyOptions
    {
        Scheme = "http",
        Host = "localhost",
        Port = "8081"
    }));

    ...

}

private static bool IsAdminPath(HttpContext httpContext)
{
    return httpContext.Request.Path.Value.StartsWith(@"/admin/", StringComparison.OrdinalIgnoreCase);
}

As you can see, all I'm doing is passing a method that checks the path begins with /admin/.

Once setup, all you need to do is set your second application (in this instance it's my admin application) to the configured port. You can do this within the Program class via the UseUrls extension method:

// Program.cs
public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseUrls("http://localhost:8081")
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

Now, if you start up your application and navigate to /admin/ (or whatever path you've specified) the request should be proxied to your secondary application!

Happy coding!

Fira Mono - An exceptional programming font, and now with (optional) ligatures

Posted on Friday, 17 Jun 2016

I've always been a fan of customising my IDE or text editor of choice, and one such customisation (and often the first thing I do after installing an editor) is to set up my font of choice, which has long been Google's Droid Sans font.

Recently however, I was introduced to a rather interesting but delightful looking typeface called Fira Mono that's been designed by Mozilla specifically for their Firefox OS.

At first I didn't think much of the font, but the more I saw it in use the more it grew on me. Eventually I decided to download it and give it a try, and having been using it now for a number of days I've no intention of switching back to Droid Sans.

Fira Mono in Visual Studio Code:

Fira Mono in Visual Studio:

Would you like ligatures with that?

If you're a developer that likes your programming fonts with ligatures then there is a version available that includes ligatures called Fira Code.

Downloading Fira Mono

The font itself is open-source, so if you're interested in giving it a try then download it via Font Squirrel here. Once it's extracted to your fonts directory, load up Visual Studio (or restart it so Visual Studio can load the font), go to Tools > Options > Environment > Fonts and Colors and select it from the Font dropdown.

Having experimented with various font sizes (In Visual Studio only), font size 9 appears to work really well with Fira Mono.

As mentioned above, if you'd like the ligatures then head over here to download them.

Select by camel case - the greatest ReSharper setting you never knew

Posted on Monday, 06 Jun 2016

One of ReSharper's most powerful features is the sheer number of additional shortcuts it adds to Visual Studio, and out of the arsenal of shortcuts available my most used have to be the ones that enable me to modify and manipulate code with as few wasted keystrokes as possible. Ultimately, this boils down to the following two shortcuts:

1: Expand/Shrink Selection (CTRL + ALT + Right to expand, CTRL + ALT + Left to shrink)
This shortcut enables you to expand a selection by scope, meaning pressing CTRL + ALT + Right to expand your selection will start by highlighting the code your cursor is over, then the line, then the function scope, then the class and namespace. Check out the following gif to see an example - be sure to watch the selected area!

Expand/Shrink Selection:

2: Increase selection by word (CTRL + Shift + Right to expand, CTRL + Shift + Left to shrink)

A few of you will probably notice that this shortcut isn't really a ReSharper shortcut - and you'd be right. But nonetheless, once harnessed, increase/decrease selection by word is extremely powerful when it comes to renaming variables, methods, classes and so on, and will serve you well if mastered.

Where am I going with this?

Whilst the aforementioned shortcuts are excellent tools to add to your shortcut toolbox, one thing I always wished they would do is expand the selection by camel case, allowing me to highlight words with more precision and save the additional key presses when it comes to highlighting the latter part of a variable in order to rename it. For instance, instead of highlighting the whole word in one go (say, the word ProductService), it would first highlight the word Product, followed by Service after the second key press.

Having wanted this for some time now, I was pleasantly surprised when I stumbled across a ReSharper setting that enables just this. The setting can be enabled by going to ReSharper > Options > Environment > Editor > Editor Behaviour and selecting the Use CamelHumps checkbox.

The problem I've found when enabling this is that the setting overwrites the default behaviour of CTRL + ALT + Left / Right. Whilst this may be fine for some, I would rather have the ability to choose when to highlight by word and when to highlight by camel case. Luckily you can do just that via the ReSharper_HumpPrev and ReSharper_HumpNext commands that are available for binding in Visual Studio.

To do this do the following:

  1. Open the Visual Studio Options window from Tools > Options
  2. Expand Environment and scroll down to Keyboard
  3. Map the two commands ReSharper_HumpNext and ReSharper_HumpPrev to the key mappings you wish (E.g. ALT+Right Arrow and ALT+Left Arrow) by selecting the command from the list and entering the key mapping in the Press shortcut keys text box, then click Assign.

Now, with UseCamelHumps enabled and my shortcut keys customised, I can choose between the default selection by string, or extend selection by camel case - giving me even more code-editing precision!

Social authentication via Google in ASP.NET Core MVC

Posted on Sunday, 29 May 2016

Lately I've been re-developing my blog and moving it to .NET Core MVC. As I've been doing this I decided to change authentication methods to take advantage of Google's OAuth API, as I didn't want the hassle of managing usernames and passwords.

Initially, I started looking at the SimpleAuthentication library - but quickly realised ASP.NET Core already provided support for third party authentication providers via the Microsoft.AspNet.Authentication library.

Having implemented cookie based authentication I thought I'd take a moment to demonstrate how easy it is with ASP.NET's new middleware functionality.

Let's get started.

Sign up for Google Auth Service

Before we start, we're going to need to register our application with the Google Developer's Console and create a Client Id and Client Secret (which we'll use later on in this demonstration).

  1. To do this go to Google's developer console and click "Create Project". Enter your project name (in this instance it's called BlogAuth) then click Create.
  2. Next we need to enable authentication with Google's social API (Google+). Within the Overview page click the Google+ API link located under Social API and click Enable once the Google+ page has loaded.
  3. At this point you should see a message informing you that we need to create our credentials. To do this click the Credentials link on the left hand side, or the Go to Credentials button.
  4. Go across to the OAuth Consent Screen and enter a name of the application you're setting up. This name is visible to the user when authenticating. Once done, click Save.
  5. At this point we need to create our ClientId and ClientSecret, so go across to the Credentials tab and click Create Credentials and select OAuth client ID from the dropdown then select Web Application.
  6. Now we need to enter our app details. Enter an app name (used for recognising the app within Google Developer Console) and enter your domain into the Authorized JavaScript origins. If you're developing locally then enter your localhost address into this field including port number.
  7. Next enter the return path into the Authorized redirect URIs field. This is a callback path that Google will use to set the authorisation cookie. In this instance we'll want to enter http://<domain>:<port>/signin-google (where domain and port are the values you entered in step 6).
  8. Once done click Create.
  9. You should now be greeted with a screen displaying your Client ID and Client Secret. Take a note of these as we'll need them shortly.

Once you've stored your Client ID and Secret somewhere you're safe to close the Google Developer Console window.

Authentication middleware setup

With our Client ID and Client Secret in hand, our next step is to set up authentication within our application. Before we start, we first need to import the Google authentication middleware package (Microsoft.AspNet.Authentication.Google for RC1, or Microsoft.AspNetCore.Authentication.Google for RC2 and later) into our solution via NuGet using the following command:

// RC1
Install-Package Microsoft.AspNet.Authentication.Google

// RC2 and later
Install-Package Microsoft.AspNetCore.Authentication.Google

Once installed it's time to hook it up to ASP.NET Core's pipeline within your solution's Startup.cs file.

First we need to register our authentication scheme with ASP.NET within the ConfigureServices method:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    ...

    // Add authentication middleware and inform .NET Core MVC what scheme we'll be using
    services.AddAuthentication(options => options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme);

    ...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{

    ...
    // Adds a cookie-based authentication middleware to application
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        LoginPath = "/account/login",
        AuthenticationScheme = "Cookies",
        AutomaticAuthenticate = true,
        AutomaticChallenge = true
    });

    // Plugin Google Authentication configuration options
    app.UseGoogleAuthentication(new GoogleOptions
    {
        ClientId = "your_client_id",
        ClientSecret = "your_client_secret",
        Scope = { "email", "openid" }
    });

    ...

}

In terms of configuring our ASP.NET Core MVC application to use Google for authentication - we're done! (yes, it's that easy, thanks to .NET Core MVC's middleware approach). 

All that's left to do now is to plumb in our UI and controllers.

Setting up our controller

First, let's go ahead and create the controller that we'll use to authenticate our users. We'll call this controller AccountController:

public class AccountController : Controller
{
    [HttpGet]
    public IActionResult Login()
    {
        return View();
    }

    public IActionResult External(string provider)
    {
        var authProperties = new AuthenticationProperties
        {
            // Specify where to return the user after successful authentication with Google
            RedirectUri = "/account/secure"
        };

        return new ChallengeResult(provider, authProperties);
    }

    [Authorize]
    public IActionResult Secure()
    {
        // Yay, we're secured! Any unauthenticated access to this action will be redirected to the login screen
        return View();
    }

    public async Task<IActionResult> LogOut()
    {
        await HttpContext.Authentication.SignOutAsync("Cookies");

        return RedirectToAction("Index", "Homepage");
    }
}

Now that we've created the AccountController we'll use to authenticate users, we also need to create the views for the Login and Secure controller actions. Please be aware that these are rather basic and simply serve to demonstrate the process of logging in via Google authentication.

// Login.cshtml

<h2>Login</h2>
<a href="/account/external?provider=Google">Sign in with Google</a>

// Secure.cshtml

<h2>Secured!</h2>
This page can only be accessed by authenticated users.

Now, if we fire up our application, head to the /login/ page and click Sign in with Google, we should be taken to the Google account authentication screen. Once we click Continue we should be automatically redirected back to our /secure/ page as expected!