Latest Blog Posts

Fira Mono - An exceptional programming font, and now with (optional) ligatures

Posted on Friday, 17 Jun 2016

I've always been a fan of customising my IDE or text editor of choice, and one such customisation (and often the first thing I do after installing an editor) is setting up my font of choice, which has long been Google's Droid Sans.

Recently, however, I was introduced to a rather interesting and delightful-looking typeface called Fira Mono, designed by Mozilla specifically for their Firefox OS.

At first I didn't think much of the font, but the more I saw it being used the more it grew on me. Eventually I decided to download it and give it a try, and having used it for a number of days now, I've no intention of switching back to Droid Sans.

Fira Mono in Visual Studio Code:

Fira Mono in Visual Studio:

Would you like ligatures with that?

If you're a developer who likes their programming fonts with ligatures, then there's a version of the typeface that includes them, called Fira Code.

Downloading Fira Mono

The font itself is open source, so if you're interested in giving it a try then download it via Font Squirrel here. Once you've extracted it to your fonts directory, load up Visual Studio (or restart it so Visual Studio can load the font), go to Tools > Options > Environment > Fonts and Colors and select it from the Font dropdown.

Having experimented with various font sizes (in Visual Studio only), font size 9 appears to work really well with Fira Mono.

As mentioned above, if you'd like the ligatures then head over here to download them.

Select by camel case - the greatest ReSharper setting you never knew

Posted on Monday, 06 Jun 2016

One of ReSharper's most powerful features is the sheer number of additional shortcuts it adds to Visual Studio, and out of the arsenal of shortcuts available, my most used have to be the ones that enable me to modify and manipulate code with as few wasted keystrokes as possible. Ultimately, this boils down to the following two shortcuts:

1: Expand/Shrink Selection (CTRL + ALT + Right to expand, CTRL + ALT + Left to shrink)
This shortcut enables you to expand a selection by scope, meaning pressing CTRL + ALT + Right will start by highlighting the code your cursor is over, then the line, then the function scope, followed by the class and namespace. Check out the following gif to see an example. Be sure to watch the selected area!

Expand/Shrink Selection:

2: Increase selection by word (CTRL + Shift + Right to expand, CTRL + Shift + Left to shrink)

A few of you will probably notice that this shortcut isn't really a ReSharper shortcut - and you'd be right. But nonetheless, once harnessed, increase/decrease selection by word is extremely powerful when it comes to renaming variables, methods, classes etc. and will serve you well if mastered.

Where am I going with this?

Whilst the aforementioned shortcuts are excellent tools to add to your shortcut toolbox, one thing I always wished they would do is expand the selection by camel case, allowing me to highlight words with more precision and saving the additional key presses when highlighting the latter part of a variable in order to rename it. For instance, instead of highlighting the whole word in one go (say, the word ProductService), it would first highlight the word Product, followed by Service after the second key press.

Having wanted this for some time, I was pleasantly surprised when I stumbled across a ReSharper setting that enables just this. It can be enabled by going to ReSharper > Options > Environment > Editor > Editor Behaviour and selecting the Use CamelHumps checkbox.

The problem I've found with enabling this is that the setting overwrites the default behaviour of CTRL + ALT + Left / Right. Whilst this may be fine for some, I would rather have the ability to choose when to highlight by word and when to highlight by camel case. Luckily, you can do just that via the ReSharper_HumpPrev and ReSharper_HumpNext commands that are available for binding in Visual Studio.

To do this do the following:

  1. Open the Visual Studio Options window from Tools > Options
  2. Expand Environment and scroll down to Keyboard
  3. Map the two commands ReSharper_HumpNext and ReSharper_HumpPrev to the key mappings you wish (E.g. ALT+Right Arrow and ALT+Left Arrow) by selecting the command from the list and entering the key mapping in the Press shortcut keys text box, then click Assign.

Now, with UseCamelHumps enabled and my shortcut keys customised, I can choose between the default selection by string, or extend selection by camel case - giving me even more code-editing precision!

Social authentication via Google in ASP.NET Core MVC

Posted on Sunday, 29 May 2016

Lately I've been redeveloping my blog and moving it to ASP.NET Core MVC. As I've been doing this I decided to change authentication methods to take advantage of Google's OAuth API, as I didn't want the hassle of managing usernames and passwords.

Initially, I started looking at the SimpleAuthentication library - but quickly realised ASP.NET Core already provided support for third party authentication providers via the `Microsoft.AspNet.Authentication` library.

Having implemented cookie based authentication I thought I'd take a moment to demonstrate how easy it is with ASP.NET's new middleware functionality.

Let's get started.

Sign up for Google Auth Service

Before we start, we're going to need to register our application with the Google Developer's Console and create a Client Id and Client Secret (which we'll use later on in this demonstration).

  1. To do this go to Google's developer console and click "Create Project". Enter your project name (in this instance it's called BlogAuth) then click Create.
  2. Next we need to enable authentication with Google's social API (Google+). Within the Overview page click the Google+ API link located under Social API and click Enable once the Google+ page has loaded.
  3. At this point you should see a message informing you that we need to create our credentials. To do this click the Credentials link on the left hand side, or the Go to Credentials button.
  4. Go across to the OAuth Consent Screen and enter the name of the application you're setting up. This name is visible to the user when authenticating. Once done, click Save.
  5. At this point we need to create our ClientId and ClientSecret, so go across to the Credentials tab and click Create Credentials and select OAuth client ID from the dropdown then select Web Application.
  6. Now we need to enter our app details. Enter an app name (used for recognising the app within Google Developer Console) and enter your domain into the Authorized JavaScript origins. If you're developing locally then enter your localhost address into this field including port number.
  7. Next enter the return path into the Authorized redirect URIs field. This is a callback path that Google will use to set the authorisation cookie. In this instance we'll want to enter http://<domain>:<port>/signin-google (where domain and port are the values you entered in step 6).
  8. Once done click Create.
  9. You should now be greeted with a screen displaying your Client ID and Client Secret. Take a note of these as we'll need them shortly.

Once you've stored your Client ID and Secret somewhere you're safe to close the Google Developer Console window.

Authentication middleware setup

With our Client ID and Client Secret in hand, our next step is to set up authentication within our application. Before we start, we first need to import the Microsoft.AspNet.Authentication package (or the Microsoft.AspNetCore.Authentication.Cookies and Microsoft.AspNetCore.Authentication.Google packages if you're using RC2 or later) into our solution via NuGet using the following commands:

// RC1
Install-Package Microsoft.AspNet.Authentication

// RC2 or later
Install-Package Microsoft.AspNetCore.Authentication.Cookies
Install-Package Microsoft.AspNetCore.Authentication.Google

Once installed it's time to hook it up to ASP.NET Core's pipeline within your solution's `Startup.cs` file.

First we need to register our authentication scheme with ASP.NET within the `ConfigureServices` method:

public void ConfigureServices(IServiceCollection services)
{
    // Add authentication services and inform ASP.NET Core MVC what sign-in scheme we'll be using
    services.AddAuthentication(options => options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme);
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Adds the cookie-based authentication middleware to the application
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        LoginPath = "/account/login",
        AuthenticationScheme = "Cookies",
        AutomaticAuthenticate = true,
        AutomaticChallenge = true
    });

    // Plug in the Google authentication configuration options
    app.UseGoogleAuthentication(new GoogleOptions
    {
        ClientId = "your_client_id",
        ClientSecret = "your_client_secret",
        Scope = { "email", "openid" }
    });
}

In terms of configuring our ASP.NET Core MVC application to use Google for authentication - we're done! (yes, it's that easy, thanks to .NET Core MVC's middleware approach). 

All that's left to do now is to plumb in our UI and controllers.

Setting up our controller

First, let's go ahead and create the controller that we'll use to authenticate our users. We'll call this controller AccountController:

public class AccountController : Controller
{
    public IActionResult Login()
    {
        return View();
    }

    public IActionResult External(string provider)
    {
        var authProperties = new AuthenticationProperties
        {
            // Specify where to return the user after successful authentication with Google
            RedirectUri = "/account/secure"
        };

        return new ChallengeResult(provider, authProperties);
    }

    [Authorize]
    public IActionResult Secure()
    {
        // Yay, we're secured! Any unauthenticated access to this action will be redirected to the login screen
        return View();
    }

    public async Task<IActionResult> LogOut()
    {
        await HttpContext.Authentication.SignOutAsync("Cookies");

        return RedirectToAction("Index", "Homepage");
    }
}

Now that we've created the AccountController that we'll use to authenticate users, we also need to create views for the Login and Secure controller actions. Please be aware that these are rather basic and simply serve to demonstrate the process of logging in via Google authentication.

// Login.cshtml

<a href="/account/external?provider=Google">Sign in with Google</a>

// Secure.cshtml

This page can only be accessed by authenticated users.

Now, if we fire up our application, head to the /account/login page and click Sign in with Google, we should be taken to the Google account authentication screen. Once we click Continue, we should be automatically redirected back to our /account/secure page as expected!

ASP.NET Core tag helpers - with great power comes great responsibility

Posted on Monday, 09 May 2016

I recently watched a Build 2016 talk by N. Taylor Mullen where Taylor demonstrated the power of ASP.NET Core MVC's new tag helpers. Whilst I've been keeping up to date with the changes and improvements being made to Razor, there were a couple of times my jaw dropped as Taylor talked about points that were completely new to me. These points really highlighted how powerful the Razor engine is becoming - but as Ben Parker said in Spider-Man, "With great power comes great responsibility".

This post serves as a review of how powerful the Razor tag engine is, but also a warning of potential pitfalls you may encounter as your codebase grows.

The power of ASP.NET Core MVC's tag engine

For those of you that haven't been keeping up to date with the changes in ASP.NET Core MVC, one of the new features included within Razor is Tag Helpers. At their essence, tag helpers allow you to replace Razor's jarring syntax with a more natural HTML-like syntax. If we take a moment to compare a new tag helper to the equivalent HTML helper you'll see the difference (remember, you can still use the HTML helpers; tag helpers do not replace them and will happily work side by side in the same view).

// Before - HTML Helpers
@Html.ActionLink("Click me", "MyAction", "MyController", null, new { @class = "my-css-classname", data_my_attr = "my-attribute" })

// After - Tag Helpers
<a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</a>

Whilst both of these will output the same HTML, it's clear to see how much more natural the tag helper syntax looks and feels. In fact, with the data- prefix being optional when using data attributes in HTML, you could mistake the tag helper for plain HTML (more on this later).

Building your own tag helpers

It goes without saying that we're able to create our own tag helpers, and this is where they get extremely powerful. Let's create one from start to finish. The following example is trivial, but stick with me and hopefully you'll see why I chose it as we near the end. So let's begin by creating a tag helper that automatically adds a link-juice preserving rel="nofollow" attribute to outbound links:

public class NoFollowTagHelper : TagHelper
{
    // Public properties become available on our custom tag as attributes
    public string Href { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "a"; // Specify our output tag name
        output.TagMode = TagMode.StartTagAndEndTag; // The type of tag we wish to create

        output.Attributes["href"] = Href;

        // Add rel="nofollow" to any link pointing away from our own domain
        // (note: "your-domain.com" is a placeholder - swap in your own domain)
        if (!output.Attributes["href"].Value.ToString().Contains("your-domain.com"))
        {
            output.Attributes["rel"] = "nofollow";
        }

        base.Process(context, output);
    }
}

Before continuing, it's worth noting that the name of our derived class (NoFollowTagHelper) is what determines our custom tag helper's name; Razor will add hyphens between the uppercase characters and then lowercase the string, so NoFollowTagHelper becomes no-follow. It will also remove the word TagHelper from the class name if present.

Before we can use our tag helper we need to tell Razor where to find it.

Loading our tag helper

To load our tag helper we need to add it to our `_ViewImports.cshtml` file. The sole purpose of _ViewImports is to reference the assemblies relating to our views, saving us from littering the views themselves with assembly references. Using _ViewImports we can do this in one place, much like we used to register custom HTML helpers in the Web.config in previous versions of ASP.NET MVC.

// _ViewImports.cshtml
@using TagHelperDemo
@using TagHelperDemo.Models
@using TagHelperDemo.ViewModels.Account
@using TagHelperDemo.ViewModels.Manage
@using Microsoft.AspNet.Identity
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, TagHelperDemo" // Reference our assembly containing our tag helper here.

The asterisk will load all tag helpers within the TagHelperDemo assembly. If you wish to only load a single tag helper you can specify it like so:

// _ViewImports.cshtml
@addTagHelper "ImageLoaderTagHelper, TagHelperDemo"

Using our tag helper

Now that we've created our tag helper and referenced it, any <no-follow> elements will be transformed into nofollow anchor links if the href points to an external domain:

// Our custom tag helper
<no-follow href="">Thanks for visiting</no-follow>
<no-follow href="/about">About</no-follow>

<!-- The transformed output -->
<a href="" rel="nofollow">Thanks for visiting</a>
<a href="/about">About</a>

But wait! There's more!

Ok, so creating custom no-follow tags isn't ideal and is quite silly when we can just type normal HTML, so let's go one step further. With the new tag helper syntax you can actually transform normal HTML tags too! Let's demonstrate this awesomeness by modifying our nofollow tag helper:

[HtmlTargetElement("a", Attributes = "href")]
public class NoFollowTagHelper : TagHelper
{
    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        var href = output.Attributes["href"];

        // Again, "your-domain.com" is a placeholder for your own domain
        if (!href.Value.ToString().Contains("your-domain.com"))
        {
            output.Attributes["rel"] = "nofollow";
        }

        base.Process(context, output);
    }
}

As you'll see, we've removed some redundant code and added the HtmlTargetElement attribute. This attribute is what allows us to target existing HTML elements and add additional functionality to them. Now, if we look at our rendered output, ALL of our anchors have been processed by our NoFollowTagHelper class and only those with outbound links have been transformed:

<!-- Before -->
<a href="">Thanks for visiting</a>
<a href="/about">About</a>

<!-- After -->
<a href="" rel="nofollow">Thanks for visiting</a>
<a href="/about">About</a>

We've retrospectively changed the output of our HTML without needing to go through our codebase! For those that have worked on large applications and needed to create some kind of consistency between views, you'll hopefully understand how powerful this can be and the potential use cases for it. In fact, this is exactly how ASP.NET Core uses the tilde (~/) to resolve an img src path - see for yourself here.
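To illustrate that tilde behaviour, a tilde-prefixed src in a view is rewritten relative to wherever the application happens to be hosted (the /myapp base path below is hypothetical):

```
<!-- In the view -->
<img src="~/images/logo.png" alt="Logo" />

<!-- Rendered output when the app is hosted under /myapp -->
<img src="/myapp/images/logo.png" alt="Logo" />
```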

Moving on

So far we've spent the duration of this blog post talking about how powerful ASP.NET Core MVC's Tag Helpers are, but with great power comes great responsibility - so let's take a moment to look at the downsides of tag helpers and the ways we can prevent potential pitfalls as we use them.

The Responsibility

They look just like HTML
When the ASP.NET team first revealed tag helpers to the world there were mixed reactions over the syntax. The power of Tag Helpers was clear, but some people felt the blurring of lines between HTML and Razor breaks the separation of concerns between the two. The following comments, taken from Scott Hanselman's ASP.NET 5 (vNext) Work in Progress - Exploring TagHelpers post, demonstrate the feelings of some:

What are the design goals for this feature? I see less razor syntax, but just exchanged for more non-standard html-like markup. I'm still fond of the T4MVC style of referencing controller actions.

This seems very tough to get on-board with. My biggest concern is how do we easily discern between which more obscure attributes are "TagHelper" related vs which ones are part of the HTML spec? When my company hires new devs, I can't rely on the fact that they would realize that "action" is a "server-side" attribute, but then things like "media" and "type" are HTML ... not to mention how hard it would be to tell the difference if I'm trying to maintain code where folks have mixed server-side/html attributes.

This lack of distinction between HTML and Razor quickly becomes apparent when you open a view file in a text editor that doesn't support the Tag Helper syntax highlighting that Visual Studio provides. Can you spot what's HTML and what's a tag helper in the following screenshot?

The solution

Luckily there is a solution to help people discern between HTML and Razor, and that's to force prefixes using the @tagHelperPrefix declaration.

By adding the @tagHelperPrefix declaration to the top of your view file you're able to force prefixes on all of the tags within that current view:

// Index.cshtml
@tagHelperPrefix "helper:"


    <helper:a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</helper:a>

With a tagHelperPrefix declaration specified in a page, any tag helper that isn't prefixed with the specified prefix will be completely ignored by the Razor engine (note: you also have to prefix the closing tag). If the tag is an enclosing tag that wraps some content, then only the body of the tag helper will be emitted:

// Index.cshtml - specify helper prefix
@tagHelperPrefix "helper:"


    // Haven't specified the helper: prefix
    <a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</a>
<!-- Output without prefix: -->
    Click me

One problem that may arise from this solution is that you may forget to add the prefix declaration to your view file. To combat this you can add the prefix declaration to the _ViewImports.cshtml file (which we talked about earlier). As all views automatically inherit from _ViewImports, our prefix rule will cascade down through the rest of our views. As you'd expect, this change will force all tag helpers to require your prefix - even the native ASP.NET MVC tag helpers, including any anchor or image HTML tags that feature a tilde.
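As a sketch (reusing the TagHelperDemo assembly names from earlier), the _ViewImports.cshtml might then look like this:

```
// _ViewImports.cshtml
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, TagHelperDemo"
@tagHelperPrefix "helper:"
```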

Unexpected HTML behaviour

Earlier in this article, the second version of NoFollowTagHelper demonstrated how we can harness the power of Tag Helpers to transform any HTML element. Whilst the ability to perform such transformations is extremely powerful, we're effectively taking HTML, a simple markup language with very little native functionality, and giving it this new power.

Let me try and explain.

If you were to copy and paste a page of HTML into a .html file and look at it, you'd be confident that there's no magic going on with the markup - effectively, what you see is what you get. Now rename that HTML page to .cshtml and put it in an ASP.NET Core MVC application with a number of tag helpers that don't require a prefix, and you'll no longer have that same confidence. This lack of confidence can create uncertainty in what was once static HTML. It's the same problem you have when performing DOM manipulation using JavaScript, which is why I prefer to prefix selectors targeted by JavaScript with 'js-', making it clear to the reader that the element is being touched by JavaScript (as opposed to selecting DOM elements by the classes used for styling purposes).
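For example, a hypothetical save button might carry both a styling class and a separate js- hook, so a reader can see at a glance which class scripts depend on:

```
<!-- btn-primary is purely for styling; js-save exists only as a JavaScript hook -->
<button class="btn-primary js-save" type="button">Save</button>
```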

To demonstrate this with an example, what does the following HTML element do?

<img src="/profile.jpg" alt="Profile Picture" />

In any other context it would be a simple profile picture from the root of your application. Considering the lack of class or id attributes, you'd be fairly confident there's no JavaScript targeting it either.

With Tag Helpers added to the equation this HTML isn't what you expect. When rendered it actually becomes the following:

<img src="" alt="Profile Picture" />

Ultimately, the best way to avoid this unpredictable behaviour is to ensure you use prefixes, or at least ensure your team is clear about which tag helpers are and are not active.

On closing

Hopefully this post has been valuable in highlighting the good, the bad and the potentially ugly consequences of Tag Helpers in your codebase. They're an extremely powerful addition to the .NET framework, but as with all things - there is potential to shoot yourself in the foot.

Capturing IIS / ASP.NET traffic in Fiddler

Posted on Monday, 25 Apr 2016

Recently, whilst debugging an issue I needed to capture the traffic being sent from my local application to an external RESTful web service. In this instance, I needed to see the contents of a JWT token being passed to the service to verify some of the data. Thankfully, Fiddler by Telerik is just the tool for the job.

What is fiddler?

Fiddler is a super powerful, free web debugging proxy tool created by the guys and girls at Telerik. Once launched, Fiddler will capture all incoming and outgoing traffic from your machine, allowing you to analyse traffic, manipulate HTTP (and HTTPS!) requests and perform a whole host of traffic based operations. It's a fantastic tool for debugging and if you don't have it I'd highly recommend you take a look at it. Did I say it's 100% free too?

Capturing ASP.NET / IIS Traffic

By default, Fiddler is configured to register itself as the system proxy for Microsoft Windows Internet Services (WinInet) - the HTTP layer used by IE (and other browsers), Microsoft Office, and many other products. Whilst this default configuration is suitable for the majority of your debugging, if you wish to capture traffic from IIS (which bypasses WinInet), we'll need to re-route our IIS traffic through Fiddler by modifying our application's Web.config.

Step 1: Update your Web.config

To do this, simply open your Web.config and add the following snippet after the <configSections> element:

<system.net>
    <defaultProxy enabled="true">
        <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="False" />
    </defaultProxy>
</system.net>

Step 2: Configure Fiddler to use the same port

Now that we've routed our IIS traffic through port 8888, we have to configure Fiddler to listen on the same port. To do this, simply open Fiddler, go to Tools > Fiddler Options > Connections, then change the port listed within the "Fiddler listens on port" setting to 8888.

Now if you fire up your application you'll start to see your requests stacking up in Fiddler ready for your inspection.

Happy debugging!

Publishing your first NuGet package in 5 easy steps

Posted on Friday, 15 Apr 2016

So you've just finished writing a small .NET library for a one-off project and you pause for a moment and think "I should stick this on NuGet - others may find this useful!". You know what NuGet is and how it works, but having never published a package before you're unsure what to do and are short on time. If this is the case then hopefully this post will help you out and show you just how painless creating your first NuGet package is.

Let's begin!

Step 1: Download the NuGet command line tool

First, you'll need to download the NuGet command line tool. You can do this by going here and downloading the latest version (beneath the Windows x86 Commandline heading).

Step 2: Add the NuGet executable to your PATH system variables

Now we've downloaded the NuGet executable we want to add it to our PATH system variables. At this point, you could simply reference the executable directly - but before long you'll be wanting to contribute more of your libraries and adding it to your PATH system variables will save you more work in the long run.

If you already know how to add PATH variables then jump to Step 3, if not then read on.

Adding the nuget command to your PATH system variables

First, move the NuGet.exe you downloaded to a suitable location (I store mine in C:/NuGet/). Now, right-click My Computer (This PC if you're on Windows 10). Click "Advanced System Settings" then click the "Environment Variables" button located within the Advanced tab. From here double-click the PATH variable in the top panel and create a new entry by adding the path to the directory that contains your NuGet.exe file (in this instance it's C:/NuGet/).

Now, if all's done right you should be able to open a Command Prompt window, type "nuget" and you'll be greeted with a list of NuGet commands.

Step 3: Create a Package.nuspec configuration file

In short, a .nuspec file is an XML-based configuration file that describes our NuGet package and its contents. For further reading on the role of the .nuspec file, see the nuspec reference in the NuGet documentation.

To create a .nuspec package manifest file, let's go to the root of our project we wish to publish (that's where I prefer to keep my .nuspec file as it can be added to source control) and open a Command Prompt window (Tip: Typing "cmd" in the folder path of your explorer window will automatically open a Command Prompt window pointing to the directory).

Now type "nuget spec" into your Command Prompt window. If all goes well you should be greeted with a message saying "Created 'Package.nuspec' successfully", and you should now see a Package.nuspec file in your project folder.

Let's take a moment to look inside of our newly created Package.nuspec file. It should look a little like below:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Package</id>
    <version>1.0.0</version>
    <authors>AuthorName</authors>
    <owners>AuthorName</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package description</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2016</copyright>
    <tags>Tag1 Tag2</tags>
    <dependencies>
      <dependency id="SampleDependency" version="1.0" />
    </dependencies>
  </metadata>
</package>

As you can see, all of the parameters are pretty self-explanatory (see the docs here if you have any uncertainties about what each one does). The only one you may have a question about is the dependencies node - this is simply a list of the dependencies, and their versions, that your NuGet package relies on (see here for more info).
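For instance, if your package depended on Newtonsoft.Json, the node might look like this (the version number here is purely illustrative):

```
<dependencies>
    <dependency id="Newtonsoft.Json" version="8.0.3" />
</dependencies>
```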

Now we've generated our NuGet configuration file, let's take a moment to fill it in.

Once you're done, your manifest file should look a little like the below. The next step is to reference the files to be packaged; in the following example, you'll see that I've referenced the Release .dll file (take a look at the documentation here for more file options). If you haven't noticed, I've also removed the <dependencies> node, as my package doesn't have any additional dependencies.

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Slugity</id>
    <version>1.0.0</version>
    <authors>Joseph Woodward</authors>
    <owners>Joseph Woodward</owners>
    <description>Slugity is a simple, configuration based class library that's designed to create search engine friendly URL slugs</description>
    <releaseNotes>Initial release of Slugity.</releaseNotes>
    <copyright>Copyright 2016</copyright>
    <tags>Slug, slug creator, slug generator, url, url creator</tags>
  </metadata>
  <files>
    <file src="bin\Release\Slugity.dll" target="lib\NET40" />
  </files>
</package>

Step 4: Creating your NuGet package

Once your Package.nuspec file has been filled in, let's create our NuGet package! To do this simply run the following command, replacing the path to the .csproj file with your own.

nuget pack Slugity/Slugity.csproj -Prop Configuration=Release -Verbosity detailed

If you have the time, take a moment to look over the various pack command options in the NuGet documentation.

Once the above command has run, if all has gone well you should see a YourPackageName.nupkg file appear in your project directory. This is the NuGet package that we can now submit to NuGet.org!

Step 5: Submitting your NuGet package to NuGet.org

We're almost there! Now all we need to do is head over to NuGet.org and submit our package. If you're not already registered then you'll need to do so (see the "Register / Sign in" link at the top right of the homepage). Next, go to your Account page and click "Upload a package". Now all you need to do is upload your .nupkg package and verify the package details that NuGet.org will use for your package listing.

Congratulations on submitting your first NuGet package!

I hope this guide has been helpful to you, and as always, if you have any questions then leave them in the comments!

Tips for learning on the job

Posted on Tuesday, 29 Mar 2016

Today, whilst out walking the dog I was listening to an episode of Developer Tea titled "Learning on the Job". As someone that's always looking for smarter ways of learning, this episode was of particular interest to me. In fact, it got me thinking - what tips would I recommend for learning on the job?

I find learning on the job interesting for a few reasons. First of all, you spend a good portion of time at your computer programming. This very fact means you have plenty of opportunities to improve your knowledge and understanding of the language or framework you're using. However, you also have to balance this opportunity with the fact that you're working and have things to do and deadlines to meet. These external pressures mean you have to be smart about how you learn.

One of the strengths of a good developer is their ability to learn on the fly, and we should always be looking for opportunities to learn.

With this in mind, this post is a collection of what I've found to be effective ways of learning on the job whilst not letting it get in the way of your work.

REPLs, Fiddlers and online editors are your friends

REPLs are a great tool for learning. Within my browser's bookmark toolbar I have a whole host of online REPLs and Fiddlers for different languages, such as C#, TypeScript and JavaScript.

If I'm researching a problem or happen to come across some code that I'm struggling to understand, I'll often take a moment to launch the appropriate REPL or Fiddler and experiment with the output to make sure I fully understand what the code is doing. These can be an invaluable tool for testing that base library function you've never used before, checking the output of an algorithm you've found, or learning what your transpiled TypeScript code will look like. In fact, some online tools such as .NET Fiddle also have the ability to decompile code all the way down to the IL/MSIL level - enabling you to gain an even greater understanding of what your code is doing.

Here are a few of the online tools I would recommend:

This aptly brings me onto my next tip.

Don't just copy and paste that code

Above I talked about REPLs and online Fiddlers - use them! We've all copied and pasted code from Stack Overflow to get something working. Take a moment to figure out exactly WHY that code works and what it's doing. Copying and pasting code that you don't understand is both dangerous and a wasted opportunity to learn something new. If you're too busy then bookmark it, or better yet - some of the editors listed above, such as .NET Fiddle, allow you to save snippets of code for later.

Write it down in a notepad

This tip I've found particularly effective and goes beyond learning whilst on the job, but I'll expound on this in a moment.

If you stumble across one of those bits of information that makes you think "Wow, I didn't know that. I should remember that!", take a moment and write it down in a notepad. The process of writing it down helps commit your new-found knowledge to long-term memory. Another tip for further improving retention is to revisit your notes a couple of hours, or even days, later. Repetition is a proven way to improve the retention of such information.

Having a notebook on you is also useful when reading books. If you happen upon a sentence or paragraph that strikes you whilst reading, take a moment to write it down in your own words in your notebook. I guarantee you'll be able to recall the information later that day far more clearly than you would have had you continued reading.

Share it with a co-worker or summarise it in a quick tweet

If you've just solved a problem or stumbled across a valuable piece of information then share it with your nearest co-worker. The very fact that you're talking about it or explaining the concept will help engrain it into your long-term memory. You also have the added benefit of sharing knowledge with others.

If no-one's around then you could always talk it through with your rubber duck, or summarise your new knowledge in a quick tweet.

Stack Overflow

Stack Overflow is another great tool for learning on the job. If you're a registered user then you should take full advantage of the ability to favourite answers or questions. Doing so enables you to review them in more detail at a more suitable time.

Pocket and other alternatives

Services like Pocket are another powerful tool when it comes to learning on the job, and go a step beyond simple bookmarking.

If, whilst struggling with a problem, you happen to come across an interesting article but don't have the time to read it in its entirety, why not add it to your personal Pocket list and fill in the knowledge gap at a later date whilst you're waiting for the bus? Many apps like Pocket can automatically sync bookmarks with your smartphone for offline viewing, making them perfect for commuting.

I hope people find these tips valuable. I'm always interested in hearing other recommendations for learning on the job so if there's anything I've not mentioned then please feel free to add it to the comments!

Easy slug generation with Slugity

Posted on Friday, 18 Mar 2016

This year, as outlined in my goals for 2016 post, I've been keen to turn random bits of code I've written across a variety of projects into fully-fledged open-source libraries that can be downloaded via NuGet. My first library aims to tackle a minor problem faced by any developer who works on public-facing web applications - generating search engine friendly URL slugs.

Introducing Slugity - The highly configurable C# slug generator

Whilst creating Slugity, my main focus was configurability. People, myself included, have opinions on how a URL should be formatted. Some like the camel-case URLs that frameworks such as ASP.NET MVC encourage; others dogmatically favour lowercase slugs. Some still like to strip stop words from their slugs in order to shorten the overall URL length whilst retaining keyword density. Having jumped from pattern to pattern across a variety of projects myself, I was keen to develop Slugity to cater to all of these needs.

Having been working on Slugity for a number of days now, below are the configuration options included in its first release:

  • Customisable slug text case (CamelCase, LowerCase, Ignore)
  • String Separator
  • Slug's maximum length
  • Optional replacement characters
  • Stop Words
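
To make the options above concrete, here's a minimal sketch of the general technique in TypeScript - an illustration of how slug generation typically works, not Slugity's actual implementation (the option names here are hypothetical):

```typescript
// Minimal slug generator sketch - illustrative only, not Slugity's implementation.
interface SlugOptions {
    separator?: string;
    stopWords?: string[];
    maxLength?: number;
}

function generateSlug(text: string, options: SlugOptions = {}): string {
    const separator = options.separator || '-';
    const stopWords = options.stopWords || [];
    const maxLength = options.maxLength || 100;

    const words = text
        .toLowerCase()
        .replace(/<[^>]+>/g, '')        // strip any HTML tags
        .replace(/[^a-z0-9\s]/g, '')    // drop punctuation and symbols
        .split(/\s+/)
        .filter(word => word.length > 0 && stopWords.indexOf(word) === -1);

    return words.join(separator).substring(0, maxLength);
}

console.log(generateSlug('A customisable slug generation library'));
// "a-customisable-slug-generation-library"
```

The real library layers replacement characters and casing rules on top of this basic pipeline, but the shape of the transformation is the same: normalise, strip, filter, join, truncate.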

Setting up and configuring Slugity

Given that slugs aren't the most complicated strings to get right, setting up Slugity is a breeze. It comes with a default configuration that should suit the majority of users, myself included - but it's simple to configure if the default settings don't meet your requirements.

Using default configuration options:

var slugity = new Slugity();
string slug = slugity.GenerateSlug("A <span style=\"font-weight: bold\">customisable</span> slug generation library");

//Output: a-customisable-slug-generation-library

Configuring Slugity:

If Slugity's default configuration doesn't meet your requirements then configuration is easy:

var configuration = new SlugityConfig
{
    TextCase = TextCase.LowerCase,
    StringSeparator = '_',
    MaxLength = 60
};

configuration.ReplacementCharacters.Add("eat", "munch on");

var slugity = new Slugity(configuration);
string slug = slugity.GenerateSlug("I like to eat lettuce");

//Output: i_like_to_munch_on_lettuce

Moving forward, I'd like to add the ability to swap out the classes responsible for formatting and cleaning the slug.

The code is currently up on GitHub and can be viewed here. I'm in the process of finishing off the last bits and pieces and then getting it added to NuGet, so stay tuned.


Update: The library has since been added to NuGet and can be downloaded here, or via the following NuGet command:

Install-Package Slugity

OzCode Review - The magical extension that takes the pain out of debugging

Posted on Thursday, 10 Mar 2016

TL;DR: Yes, OzCode is easily worth the money. It doesn't take long to realise that the time it saves you debugging quickly pays for the licence.

Debugging code can be both hard and time-consuming. Thankfully, the guys over at OzCode have been working hard to battle the debugging time sink-hole by developing OzCode, a magically intelligent debugging extension for Visual Studio.

Remembering back to when I first installed OzCode (when it was in its beta stage!), it took all of a few minutes to see that the OzCode team were onto a winner. I clearly remember entering into debug mode by pressing F5 and watching my Visual Studio code window light up with a host of new debugging tools. Tools that I've been crying out for in Visual Studio (or other editors for that matter!) for a long time. In fact, OzCode has changed the way I debug altogether.

What does OzCode cost, and is it worth it?

Before continuing, let's go over the price. OzCode is a paid for Visual Studio extension, coming in at an extremely reasonable $79 (£55) for a personal licence. Is it worth the money? Definitely.

I'm always happy to pay for good software and OzCode is no exception. Suffice it to say, it doesn't take long to realise the value it provides and how quickly it pays for itself in the time it saves you. Don't believe me? Go try the free 30-day trial and see for yourself. I purchased it as soon as it was out of public beta as it was clear it was worth the money.

It's also worth mentioning that OzCode provide licences for open-source projects, as well as for Microsoft MVPs.

So without further ado, let's take a deeper look at OzCode and some of the features that make it one of the first extensions I install when setting up a new development machine, and why I think you'll love it.

Magic Glance

OzCode's Magic Glance feature is by far the biggest change to how you'll debug your code, and probably the first thing you'll notice when entering debug mode. Normally, when you want to know the value of a variable or property within Visual Studio, you need to hover over each variable you'd like to inspect. This is less than ideal: the more variables you have to inspect, the more time you spend hovering over them to keep track of what's changing. This is where OzCode's Magic Glance feature steps in.

With OzCode installed and Magic Glance at your disposal, you're treated to a helpful display of variable/parameter values without the requirement to hover over them. This also helps you build a better mental model and holistic view of the data you're debugging.


Reveal

Collections (or any derivative of an array) receive an even larger dose of OzCode love with the Reveal feature. If you've ever had to view specific properties within a collection then you'll know it's not the most pleasant experience.

OzCode simplifies the process of reviewing data in a collection by providing you with the ability to mark properties within a collection as favourites (via a toggleable star icon) which make them instantly visible above the variable.


Search

Linked closely to the aforementioned Reveal feature, Search adds the ability to search a collection for a particular value. To do this, simply inspect the collection and enter the value you're searching for; OzCode will then filter the results that match your input. By default OzCode performs a shallow search on your collection (I'd imagine for performance reasons) - however, if your object graph is deeper then you can easily drill down further.


Investigate

If you've ever had to split a complex expression up in order to see its values whilst debugging, then OzCode's Investigate feature will be music to your ears. When you reach a complex (or simple) if statement, OzCode adds visual aids to the expressions to indicate whether they're true or false.

Quick Attach to a process

OzCode's Quick Attach to a process feature is easily one of my most used features - and I'll explain why in a second. Depending on your project type, you either have to enter debugging mode by pressing F5, or by manually attaching your build to a process. With OzCode, they've greatly simplified this process, and as a keyboard shortcut junkie - they've also provided shortcuts to make it even faster.

Using shortcuts to attach to a process quickly became my de facto way of entering debugging mode, and it saves me a tonne of time. Friends and colleagues are often impressed at how quickly I'm able to enter debugging mode, to which I explain that it's all thanks to OzCode. Debugging is now a single keypress away (as opposed to multiple clicks, or waiting for Visual Studio to launch my application in a new browser window) and I love it!

Loads more features

These are just a few of my favourite features that easily make OzCode worth the money.

If you want to see a full list of features then I'd recommend taking a look over OzCode's features page.

Features include:

  • Predict the Future
  • Live Coding
  • Conditional Breakpoints
  • When Set... Break
  • Exceptions Trails
  • Predicting Exceptions
  • Trace
  • Foresee
  • Compare
  • Custom Expressions
  • Quick Actions
  • Show All Instances

Future of OzCode

The future of OzCode is looking bright. With code analysis tools now able to leverage the Roslyn platform, I can't wait to see what wizardry the OzCode team come up with next. For a sneak peek at some of the great features on the horizon I would definitely recommend this recent OzCode webinar. If you've only got a few minutes, then check out the LINQ visualisation at the 28-minute mark.

I am in no way associated with OzCode. I'm just an extremely happy user of a great Visual Studio extension that I purchased when it was first released and still use to this day.

Any questions about this review or OzCode itself then feel free to ask!

The Ajax response object pattern

Posted on Sunday, 21 Feb 2016

For some time now I've been using what I like to call the Ajax response object pattern to great success across a variety of projects, enough so that I thought it merited its own blog post.

The Ajax response object pattern is an incredibly simple pattern to implement, but goes a long way towards promoting a consistent API for handling most Ajax responses. Hopefully, by the end of this post you'll be able to see the value in implementing such an approach.

Before we dig into what the Ajax response object pattern is and how to implement it, let's take a moment to look at the problem it aims to solve.

The common way to handle Ajax responses

Typically you'll see a variation of the following approach to handling an asynchronous JavaScript response (the important part here is the handling of the response rather than how we create it). Whilst the code may vary slightly (the example used is a trivial implementation), hopefully it gives you an idea of what we're trying to do.

// The back end
public class ProfileController : Controller
{
    public ActionResult GetProfile(int profileId)
    {
        var userProfile = this.profileService.GetUserProfile(profileId);

        return View(new ProfileViewModel(userProfile));
    }
}

// The Javascript / JQuery
$.ajax({
    type: "GET",
    url: "/Profile/LoadProfile/" + $('#userProfileId').val(),
    dataType: "html",
    success: function (data, textStatus, jqXHR) {
        $('#profileContainer').html(data); // container id illustrative
    },
    error: function (jqXHR, textStatus, errorThrown) {
        // Correctly handle error
    }
});

As you can see from the above code, all we're doing is making an asynchronous request to our back end to retrieve a profile based on the profile id provided. The DOM is then updated with the response via a call to jQuery's html() method.

What's wrong with this approach?

Nothing. There's nothing wrong with this approach; it is, in fact, a perfectly acceptable way to perform and handle Ajax requests and responses - however, there is room for improvement. To see what can be improved, ask yourself the following questions:

  1. What does our code tell us about the state of the ajax response?
    Judging by our code, we can tell that our HTTP request to our LoadProfile controller action was successful, as our success method is being invoked - but what does this REALLY tell us about the state of our response? How do we know that our payload is in fact a user profile? After all, our success method (or our 200 OK status code) simply tells us that the server successfully responded to our HTTP request - it doesn't tell us that the domain validation conditions within our service were met.
  2. Is it easy to reason with?
    When programming we should always aim to write code that is succinct and clear - code that leaves no room for ambiguity and removes any guesswork. Does the above solution do this?
  3. How could we pass additional context to our Javascript?
    As we're returning an HTML response to be rendered to the user, what happens if we wanted to pass additional context to the Javascript such as a notification of an error, or some data we wish to display to the user via a modal dialog?
  4. Are we promoting any kind of consistency?
    What happens if our next Ajax response returns a JSON object? That will need to be handled in a completely different way. And if multiple developers are working on a project, each will probably implement Ajax responses in their own way.

A better approach - let's model our Ajax response

Object-orientated programming is all about modelling. Wikipedia says it best: 

Object-oriented programming (OOP) is a programming language model organized around objects rather than "actions" and data rather than logic.

To us mere mortals (as opposed to computers), modelled behaviour and data is easier to grasp and reason with - this premise is what helped make the object-orientated programming paradigm popular in the early to mid-1990s. Looking back at our previous example, are we really modelling our response?

When we start passing values around (such as HTML in the previous example), we're missing out on the benefits gained from creating a model around our expected behaviour. So, instead of simply returning a plain HTML response or a JSON object containing just our data, let's try and model our HTTP response and see what benefits it will bring us. This is where the Ajax response object pattern can help.

The Ajax response object pattern

Implementing the Ajax response object pattern is simple. To resolve the concerns and questions raised above, we simply model our Ajax response, allowing us to attach additional context to our asynchronous HTTP response which we can then reason with in our JavaScript.

The following example is the object I tend to favour when applying the Ajax response object pattern - but your implementation may vary depending on what you're doing.

public class AjaxResponse
{
    public bool Success { get; set; }

    public string ErrorMessage { get; set; }

    public string RedirectUrl { get; set; }

    public object ResponseData { get; set; }

    public static AjaxResponse CreateSuccessfulResult(object responseData)
    {
        return new AjaxResponse
        {
            Success = true,
            ResponseData = responseData
        };
    }
}

We can then use the AjaxResponse object to model our HTTP response to something like the following:

public JsonResult GetProfile(int profileId)
{
    var response = new AjaxResponse();

    try
    {
        var userProfile = this.profileService.GetUserProfile(profileId);

        response.Success = true;
        response.ResponseData = RenderPartialViewToString("Profile", new ProfileViewModel(userProfile));
    }
    catch (RecordNotFoundException exception)
    {
        response.Success = false;
        response.ErrorMessage = exception.Message;
    }

    return Json(response, JsonRequestBehavior.AllowGet);
}

Earlier we rendered the view and returned just the HTML payload in the response; now we render the view to a string and assign it to the ResponseData property. This way we can make use of the additional properties, such as whether the operation the user expected was successful - and if not, we can supply an error message. Because our ResponseData property is of the base object type, we can use it to store any type, including JSON.

Below is an implementation of the RenderPartialViewToString method I often create in a base controller when writing ASP.NET MVC applications.

public class BaseController : Controller
{
    protected string RenderPartialViewToString(string viewName, object model)
    {
        if (string.IsNullOrEmpty(viewName))
        {
            viewName = this.ControllerContext.RouteData.GetRequiredString("action");
        }

        this.ViewData.Model = model;

        using (var stringWriter = new StringWriter())
        {
            ViewEngineResult partialView = ViewEngines.Engines.FindPartialView(this.ControllerContext, viewName);
            ViewContext viewContext = new ViewContext(this.ControllerContext, partialView.View, this.ViewData, this.TempData, stringWriter);
            partialView.View.Render(viewContext, stringWriter);
            return stringWriter.GetStringBuilder().ToString();
        }
    }
}

Now that we've modelled our response, we can provide the consumer with far more context, and this context enables us to better reason with our server response. In cases where we need to perform some kind of front-end action based on the outcome of the response, we can easily do so.

// The Javascript / JQuery
$.ajax({
    type: "GET",
    url: "/Profile/LoadProfile/" + $('#userProfileId').val(),
    dataType: "json",
    success: function (data, textStatus, jqXHR) {
        var ajaxResponse = data;
        if (ajaxResponse.Success === true) {
            $('#profileContainer').html(ajaxResponse.ResponseData); // container id illustrative
        } else {
            alert(ajaxResponse.ErrorMessage); // surface the error however you prefer
        }
    },
    error: function (jqXHR, textStatus, errorThrown) {
        // Correctly handle error
    }
});

Additionally, if the approach is used throughout a team then you're promoting a consistent API that you can build around. This makes the development of general error handlers far easier. What's more, if you're using TypeScript in your codebase then you can continue to leverage these benefits by casting your response to a TypeScript implementation of the AjaxResponse class and gain all the IntelliSense and tooling support goodness that comes with TypeScript.

// TypeScript implementation
export interface IAjaxResponse {
    Success: boolean;
    ErrorMessage: string;
    RedirectUrl: string;
    ResponseData: any;
}
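
One way to cash in on that consistency is a single generic handler that every success callback funnels through. The sketch below is illustrative only - the helper name is mine, not part of the pattern's original code - and repeats the interface so the snippet stands alone:

```typescript
// A generic handler sketch built around the Ajax response object shape.
// Repeated from above so the snippet is self-contained.
interface IAjaxResponse {
    Success: boolean;
    ErrorMessage: string;
    RedirectUrl: string;
    ResponseData: any;
}

// Hypothetical helper: routes every response through one Success/ErrorMessage branch.
function handleAjaxResponse<T>(
    response: IAjaxResponse,
    onSuccess: (data: T) => void,
    onError: (message: string) => void
): void {
    if (response.Success) {
        // The domain-level operation succeeded - hand the payload on
        onSuccess(response.ResponseData as T);
    } else {
        // A 200 OK arrived, but the domain operation failed - surface the message
        onError(response.ErrorMessage);
    }
}
```

Each jQuery success callback can then simply call handleAjaxResponse(data, renderProfile, showError), keeping the branching logic in one place rather than scattered across every request.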

That's all for now. Thoughts and comments welcome!

About Me

I'm a software and web application developer living in Somerset in the UK, and I eat/sleep software and web development. Programming has been a passion of mine from a young age and I consider myself extremely lucky that I'm able to do what I love as a profession. If I'm not with my family then you'll find me at my keyboard writing some form of code.

Read More

Feel free to follow me on any of the channels below.

