Latest Blog Posts

Using the Surface Pro 4 Type Cover on the Surface Pro 3

Posted on Wednesday, 11 Nov 2015

Last September I made the decision to trade in my MacBook Pro and upgrade to a Surface Pro 3. This decision wasn't an easy one to make, but I'm a sucker for portability and loved the hybrid approach of the Surface Pro 3 and its ability to act as both a laptop and a tablet.

After a week of programming on my new Surface Pro 3 I put together a rather lengthy post detailing my thoughts and experiences of using the device as a development machine. Whilst I was, and still am, extremely happy with the Surface Pro 3, one aspect that I felt was really lagging was the keyboard - my main criticisms being that the keys were too close together and the trackpad wasn't accurate enough, and slightly on the small side for my liking. It seems these criticisms weren't just mine but were shared by many others - and with the release of the Surface Pro 4 and the backwards-compatible Surface Pro 4 Type Cover, it's clear that Microsoft have been listening to the feedback, as the 4's Type Cover goes a long way towards correcting these niggles.

In addition to the improvements over the existing Type Cover, Microsoft also released a version that includes a fingerprint scanner - however, it doesn't look like that version will be released in the UK any time soon, so being the impatient person I am, I went ahead and ordered myself a standard Type Cover.

The keyboard

The biggest improvement between the Surface Pro 3 keyboard and the 4's is the keyboard size. The keyboard now extends further towards the edges of the Type Cover, with the additional room allowing for much-needed gaps between the keys. As someone with rather fat fingers, this gap between the keys has gone a long way towards increasing my typing accuracy, and thus speed - something extremely important for someone who does a lot of programming on the Surface Pro 3.

In addition to the aforementioned key spacing, the keys feel a lot more solid than on the previous keyboard - it's a minor change, but it definitely adds to the typing experience on the new keyboard.

Other improvements that have been made to the keyboard are:

  • The FN (function) key now features a small light so you know whether your function keys are active or not.
  • The keyboard now features a dedicated print screen button and insert key.

The Trackpad

The next major improvement the Surface Pro 4 Type Cover has undergone in comparison to its predecessor is the trackpad. Not only is the trackpad larger (Microsoft say 40% larger) but it's also made of glass, resulting in it being far smoother and far more responsive. Whilst the Surface Pro 3's trackpad was enough to get the job done, I would never have said it was pleasurable to use - it always felt like hard work. These recent changes have resulted in a much better experience.

Overall, if you're an owner of a Surface Pro 3 and you want to give your device a new lease of life, then I'd highly recommend upgrading to the Surface Pro 4 Type Cover.

Podcasts for .NET Developers

Posted on Tuesday, 03 Nov 2015

With an hour-long commute to and from work each day, I find myself listening to a lot of podcasts. Not only do they make my car journey go much faster, but listening to them also helps me keep up to date with all of the latest goings-on in the software industry. With this in mind, I thought it would be beneficial to collate the .NET (or web development related) podcasts that I would recommend.

Before I continue: if there are any podcasts that readers of my blog would recommend, I'm always keen to be introduced to new ones, so please feel free to comment!

.NET Rocks!

Website URL:
Release Frequency: 3 times a week (Tuesday, Wednesday and Thursday)
Average Episode Duration: 60 minutes

.NET Rocks! is easily the staple of my podcasts. With over 1159 episodes in their show archive, hosts Carl Franklin and Richard Campbell do a fantastic job of delivering a great, high-production-value podcast featuring a broad range of .NET (and, on occasion, not so .NET) focused topics. Carl and Richard travel around creating their podcasts, presenting opportunities to interview some really high-profile members of the .NET community. .NET Rocks! especially deserves my number 1 spot for the sheer number of episodes they've released over the years and the regularity of their releases.

MS Dev Show

Website URL:
Release Frequency: Once a week
Average Episode Duration: 60 minutes

MS Dev Show is a recent addition to my long list of favourite development related podcasts but has certainly left a good impression. Since first learning about MS Dev Show just a few weeks ago I've already made a good start on burning my way through their archive.

Hanselminutes
Website URL:
Release Frequency: Once a week
Average Episode Duration: 30 - 40 minutes

Along with the aforementioned podcasts, I would also highly recommend Scott Hanselman's Hanselminutes podcast. Having been listening to Hanselminutes since long before I got into the .NET world, I've always been a great admirer of Scott Hanselman's ability to effortlessly communicate concepts, ideas and thoughts in a clear manner - as if he has the ability to slow down time to find the perfect word or sentence that expresses what he's trying to communicate. Whilst not all episodes are development related, they're always insightful and interesting.

Eat, Sleep, Code

Website URL:
Release Frequency: Twice a month
Average Episode Duration: 30 - 40 minutes

Eat Sleep Code is a podcast created by the guys and girls over at Telerik and is certainly worth adding to your podcast aggregator of choice. Each episode spans between 30 and 40 minutes and is a healthy mix of programming-related subjects, ranging from .NET to JavaScript and mobile development.

Herding Code

Website URL:
Release Frequency: Once a month, sometimes longer
Average Episode Duration: 20 - 30 minutes

Herding Code is another great .NET-focused podcast that regularly features a variety of subjects, ranging from software architecture to mobile phones and JavaScript libraries. Whilst I would definitely recommend adding it to your podcast aggregator of choice, be aware that the episode release cycle is rather inconsistent.

Software Engineering Radio

Website URL:
Release Frequency: 2 or 3 times a month
Average Episode Duration: 60 minutes

Whilst not specifically .NET focused (it leans more towards Java), Software Engineering Radio often features advanced subjects that apply to general application development, and deserves a mention for the sheer technical depth of its discussions.

Javascript Jabber

Website URL:
Release Frequency: Once a week
Average Episode Duration: 1 hour

Whilst JavaScript Jabber is not a .NET-focused podcast, with the way the .NET and application development landscape is changing, most of us .NET (specifically ASP.NET) developers are no doubt familiar with JavaScript, or will become increasingly familiar with it over the next few years - especially with the tooling changes we're seeing in ASP.NET MVC 6. For this reason I feel JavaScript Jabber deserves a place on the list. The show regularly features a variety of guests, most often from the Ruby world, however the hosts do a great job of keeping it JavaScript focused. Personally, I've found JavaScript Jabber a great way to keep up to date with the never-ending emergence of JavaScript libraries.


That's it for now. I'm always on the lookout for new podcasts to ease my commute so feel free to mention any I've missed in the comments below and I'll be sure to add them to the list!

Using Roslyn to look for code smells

Posted on Friday, 09 Oct 2015

Since first hearing about Roslyn I was instantly excited about its ability to easily parse C#, and could clearly see the benefits it could bring to anyone wishing to perform analysis on an existing codebase - not to mention the great things it can do for Visual Studio plugins and extensions.

Having been playing around with Roslyn for a couple of days now I thought I'd share a cool snippet of code I put together to demonstrate how easy it is to parse C# code using the power of just a few Roslyn NuGet packages.

In this post I'll be using Roslyn to analyse an existing codebase and flag any classes that are guilty of the 'too many parameters' code smell.

First, let's load the solution:

public static class WorkspaceSolution
{
    public static IEnumerable<Document> LoadSolution(string solutionDir)
    {
        var solutionFilePath = Path.GetFullPath(solutionDir);

        MSBuildWorkspace workspace = MSBuildWorkspace.Create();
        Solution solution = workspace.OpenSolutionAsync(solutionFilePath).Result;

        var documents = new List<Document>();
        foreach (var projectId in solution.ProjectIds)
        {
            var project = solution.GetProject(projectId);
            foreach (var documentId in project.DocumentIds)
            {
                Document document = solution.GetDocument(documentId);
                if (document.SupportsSyntaxTree)
                    documents.Add(document);
            }
        }

        return documents;
    }
}
As you can see from the above snippet, this is actually quite simple thanks to the Microsoft.CodeAnalysis.MSBuild package - more specifically, the MSBuildWorkspace class, which allows us to scan our solution and build a collection of the documents within it.

Analysing the documents (aka. class files)

Now we've got a list of all of the documents within our solution, we can begin to iterate through them, parsing each syntax tree until we come to the method parameters (performing the same analysis on constructors is equally easy, but for the sake of this demonstration we'll look at methods). At this point we can check whether the number of parameters exceeds our threshold.

List<MethodDeclarationSyntax> methods = documents
    .SelectMany(x => x.GetSyntaxRootAsync().Result.DescendantNodes().OfType<MethodDeclarationSyntax>())
    .ToList();

var smellyClasses = new Dictionary<string, int>();
int paramThreshold = 5;

foreach (MethodDeclarationSyntax methodDeclarationSyntax in methods)
{
    ParameterListSyntax parameterList = methodDeclarationSyntax.ParameterList;
    if (parameterList.Parameters.Count >= paramThreshold)
        smellyClasses.Add(methodDeclarationSyntax.Identifier.SyntaxTree.FilePath, parameterList.Parameters.Count);
}

Let's put it to use

If you're still not sold on how powerful this can be, let's move this into a unit test that can easily be included as a base suite of unit tests - allowing us to easily enforce rules on our development teams.

public void Methods_ShouldNotHaveTooManyParams(int totalParams)
{
    List<MethodDeclarationSyntax> methods = this.documents
        .SelectMany(x => x.GetSyntaxRootAsync().Result.DescendantNodes().OfType<MethodDeclarationSyntax>())
        .ToList();

    foreach (MethodDeclarationSyntax methodDeclarationSyntax in methods)
    {
        ParameterListSyntax parameterList = methodDeclarationSyntax.ParameterList;

        // Assert
        parameterList.Parameters.Count.ShouldBeLessThanOrEqualTo(totalParams,
            "File Location: " + methodDeclarationSyntax.Identifier.SyntaxTree.FilePath);
    }
}

As you'll see in the code snippet above, we then use the Shouldly unit testing framework and its awesome error messages to inform us of the class that's currently violating our code smell rule!

If you're interested in having a play with this then you can find the code up on GitHub here. I'd also recommend checking out this post if you're interested in learning more about Roslyn and how it works. Another great resource that gave me the idea for using Roslyn to create tests is this blog post by @filip_woj.

Detecting CSS breakpoints in Javascript

Posted on Monday, 21 Sep 2015

A short blog post today but one I still feel will be helpful to a few of you.

Recently I was writing some JavaScript for a responsive HTML5 front end I was working on and needed to find out which CSS media breakpoint the browser was currently in. Naturally, my initial reaction was to use JavaScript to detect the browser width and go from there; however, doing it this way immediately felt dirty, as it would result in duplication of breakpoint values throughout the application - something I was keen to avoid.

After a bit of head-scratching, the solution I ended up with was to target the device-specific show/hide classes that most CSS frameworks provide.

In the following primitive example I'm using the Unsemantic CSS grid framework, but almost all CSS frameworks provide similar classes - though even if you're not using a framework there's nothing to stop you from creating your own.


<a href="#" id="clickMeButton">Click Me</a>

<!-- These can go just before the </body> tag -->
<div class="show-on-desktop"></div>
<div class="show-on-mobile"></div>


.show-on-desktop { display: none; }
.show-on-mobile { display: none; }

/* Mobile */
@media (max-width: 300px) {
    .show-on-mobile { display: block; }
}

/* Desktop */
@media (min-width: 301px) {
    .show-on-desktop { display: block; }
}

$('#clickMeButton').on('click', function(){
    if ($('.show-on-desktop').is(':visible')) {
        alert('Desktop');
    } else if ($('.show-on-mobile').is(':visible')){
        alert('Mobile');
    }
});

You can check out this JSFiddle to see a working example of the above code - if you resize the browser and click the button you'll notice the alert message will change from "Desktop" to "Mobile".

Whilst looking at similar solutions I found this StackOverflow answer which goes one further and turns the same approach into a Bootstrap plugin for easier use - awesome!

Setting up origin pull on Azure CDN

Posted on Thursday, 20 Aug 2015

By now we're all aware of the benefits that using a content delivery network brings to any web application (fewer HTTP requests to your server, reduced latency when downloading assets, reduced server load, etc.). However, one thing that's always been difficult with content delivery networks that lack origin pull is keeping your website or web application's assets (by assets I mean CSS, media such as images, JavaScript and so on) up to date on the content delivery network.

Without a content delivery network, all requests for your application's assets resolve to the same server, making managing your assets easy; however, as soon as you start offloading your assets onto a content delivery network you have to start thinking about asset management - and how you do so will depend on the type of CDN you're using and whether it supports Origin Push, Origin Pull, or both.

If you're reading this post then I assume you're familiar with Origin Pull and how it differs from Push - but if not, I'll briefly explain the differences and some of the pros and cons of each.

Origin Push

When using Origin Push, you are responsible for uploading assets. This can be done by your application, by your build/deployment process, or manually via protocols such as FTP. This approach comes with its own set of benefits and downsides:

Pros:

  • Far greater flexibility (you can decide exactly what content is uploaded, when it expires and when it's updated).
  • More efficient use of traffic - content is only uploaded when it's new or changed.

Cons:

  • Additional steps required to manage content (manually uploading within the deployment phase, or asset uploading from within your application).

Origin Pull

With Origin Pull, the responsibility of manually uploading your web application's assets is alleviated. Instead of building asset uploading into your deployment chain or into your application, you replace the hostname of your assets with the CDN's hostname (you can also configure a custom CNAME DNS record) and the CDN will:

  1. Serve the asset if it already has it, or...
  2. If the CDN has never seen the asset before (or it has expired), pull it from your application's server and cache it on the CDN servers for future hits. Once the CDN has pulled and cached the asset, it serves it to the initial request - all of this happens extremely quickly, without the user realising what's happened.
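The two steps above amount to a pull-through cache. As a rough illustration (this is my own sketch, not Azure's implementation - `fetchFromOrigin` stands in for the CDN's request back to your server):

```javascript
// Minimal sketch of a CDN edge node's pull-through cache.
function createEdgeCache(fetchFromOrigin) {
    const cache = new Map(); // assetPath -> { body, expires }

    return function serve(assetPath, now) {
        const entry = cache.get(assetPath);
        if (entry && entry.expires > now) {
            // Step 1: the CDN already has the asset - serve it from cache.
            return { body: entry.body, source: 'cache' };
        }
        // Step 2: never seen (or expired) - pull from the origin server,
        // cache it for future hits, then serve it to the initial request.
        const body = fetchFromOrigin(assetPath);
        cache.set(assetPath, { body: body, expires: now + 3600 });
        return { body: body, source: 'origin' };
    };
}
```

Only the very first request for each asset pays the cost of the round trip to the origin; every subsequent request is served from the edge.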

As with everything in the technology world, Origin Pull has its own pros and cons, these being:

Pros:

  • Easy to set up (doesn't require any API to upload the assets)
  • Easier to maintain (new assets are automatically pulled from your server, freeing you of any requirement to manually upload them)
  • Minimises storage space (only requested assets are cached)

Cons:

  • A visitor's first request for an asset is ever so slightly slower, as the CDN runs through the Origin Pull steps described above (setting expiration headers can help reduce this).

Now we're aware of the differences between Origin Push and Origin Pull, let's go ahead and set it up in Microsoft Azure. As with most content delivery networks, the cost of hosting content is a pittance for the majority of us. At the time of writing this blog post, Azure charge £0.0532 per GB for the first 10TB and £0.0489 per GB thereafter (depending on your zone). For further breakdowns I'd highly recommend checking out Azure's CDN pricing for yourself using Azure's handy CDN calculator, but as you can see, the cost is nothing to worry about unless you're an extremely large organisation dealing with hundreds or thousands of requests a second, or delivering large files.

Setting up Origin Pull in Microsoft Azure

1: Setting up your CDN

First of all we need to log into our Azure Portal.

Note: At the time of writing this blog post the Azure team are in the process of moving the CDN service from the older Azure portal to the new one - so at this point we have to jump across to the old version of the portal.

Once we've clicked "Go" and been taken across to the older Azure Portal we need to click New > App Services > CDN > Quick Create and begin setting up our CDN endpoint.

When we're talking within the context of a content delivery network, the origin is the location the CDN fetches content from in order to cache it.

In this instance I want to set up Origin Pull for this website, which happens to be hosted on Azure - so I would select Web Apps and choose my Azure website domain within the Origin URL field. Which Origin Type you select will depend on your use case, but there are enough types to cover most scenarios, these being:

  • Storage Accounts (blob storage on Azure)
  • Web Apps (this is what we'll use)
  • Cloud Services
  • Custom Origin (allowing you to specify a custom origin URL - see this article for more details)

Now our CDN has been set up to pull from our origin (in my case a Web App within Azure), we can either begin to use the CDN using the address provided in the URL column of the CDN list, or we can go one step further and set up a custom subdomain, turning our CDN URL from "" to "". The added benefit of doing this is that if you ever switch to a different CDN provider, you only need to reconfigure your CDN subdomain, as opposed to changing every instance of the Azure-specific domain.

Setting up a custom CDN subdomain (optional)

If you wish to start using your CDN without configuring a custom subdomain then skip to the next heading, if not then continue reading.

To set up a custom domain we need to configure our newly created CDN. To do this, click your CDN's name (as opposed to the Manage CDN button you might have spotted - the difference being that we want to configure our CDN rather than manage our uploaded assets).

Then, at the bottom of the page, click the "Manage Domains" button. Whilst you're at it, I would also recommend turning on the "Enable Query Strings" setting, as this will allow you to reference assets that have query strings in their URLs.

At this point we need to jump across to our domain's DNS provider and set up a custom CNAME record pointing to one of the Azure-specific URLs listed in the popup dialog you'll see after clicking "Manage Domains".
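For reference, the record you end up creating looks something like the following zone-file entry (both hostnames here are purely illustrative - use your own subdomain and the exact Azure endpoint shown in the dialog):

```
; Illustrative CNAME: point cdn.yourdomain.com at your Azure CDN endpoint
cdn.yourdomain.com.    IN    CNAME    az123456.vo.msecnd.net.
```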

At this point we're almost done. All we've got to do is wait for our DNS changes to take effect - this can take anywhere up to 24 hours, however from experience it's generally much faster than that.

Using our newly created CDN using Origin Pull

Now we've set up our CDN to use Origin Pull and configured our custom CDN subdomain (or not, depending on whether you chose to skip that step), we're ready to start using it.

Note: Before going all out I would recommend just changing one or two assets on your web site or web application just to ensure it's working as expected.

To do this, open up your application in your IDE and add your CDN URL (whether it's the Azure-specific address or the custom subdomain) to the beginning of your asset paths.
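If you'd rather not hand-edit every asset reference, a small helper can do the prefixing for you. A minimal sketch (the function name and `cdn.example.com` host are my own, hypothetical - substitute your real endpoint):

```javascript
// Hypothetical helper: rewrite a relative asset path to its CDN equivalent.
// 'cdn.example.com' stands in for your Azure endpoint or custom subdomain.
function toCdnUrl(assetPath, cdnHost) {
    cdnHost = cdnHost || 'cdn.example.com';
    // Leave absolute URLs untouched - only rewrite relative paths.
    if (/^https?:\/\//.test(assetPath)) {
        return assetPath;
    }
    return 'https://' + cdnHost + (assetPath.charAt(0) === '/' ? '' : '/') + assetPath;
}
```

In an ASP.NET application the same idea would typically live in a view helper so every asset reference goes through one place.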

If all has gone well, once you upload the changes to your hosting provider your assets should start being delivered from Azure's CDN. What's more, if you use a tool such as Pingdom's Page Speed tool you should notice a dramatic difference in loading speed, as assets are:

A: Being loaded from the nearest geographical CDN node, reducing the latency of pulling the assets directly from your web server, wherever that may be.

B: Being requested and pulled down from a separate domain, thus bypassing the browser-imposed limit on simultaneous requests to a single host (or domain) whilst loading the page.


So there we have it. If you've followed along closely and everything's gone according to plan, you should have your CDN set up and your website or web application's assets being cached by the CDN.

Now that our assets are being cached, if we wish to update them we must either set expiry headers and wait for the content to expire (not the best approach), or alternatively implement a means of cache busting. I would recommend this blog post by Mads Kristensen for a simple means of creating a unique string for assets, forcing the CDN to pull a fresh version of the file whenever it changes.
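The core of any cache-busting scheme is tiny: append a version token to the asset URL so a changed file produces a new URL, which the CDN has never seen and must therefore pull fresh. A minimal sketch (the function name is my own; in practice you'd derive the token from a file hash or build number rather than pass it in by hand):

```javascript
// Append a version token to an asset URL, handling existing query strings.
// A new token means a new URL, forcing the CDN to re-pull from the origin.
function bustCache(assetUrl, version) {
    var separator = assetUrl.indexOf('?') === -1 ? '?' : '&';
    return assetUrl + separator + 'v=' + encodeURIComponent(version);
}
```

Note that this only works on Azure's CDN if you've enabled the "Enable Query Strings" setting mentioned earlier - otherwise the query string is ignored when caching.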

Setting up StructureMap 3 in ASP.NET MVC 6 (vNext)

Posted on Thursday, 30 Jul 2015

With the recent release of Visual Studio 2015 (and the bundled early version of MVC 6) I was excited to start giving MVC 6 (aka vNext) a whirl. I'd spent time playing with earlier versions of vNext (I refer to vNext as the early, pre-beta version of MVC 6) when it was originally published to GitHub. However, now it's included within Visual Studio 2015 and is somewhat stable (as opposed to the constant changes to the framework in earlier releases), I'm really keen to get stuck in - so much so that I've decided to port this blog to MVC 6.

Whilst MVC 6 comes with a Dependency Injection / Inversion of Control framework baked in, I was keen to swap it out for StructureMap, my favourite IoC framework. However, because MVC 6 is still in development and not yet production-ready, there's currently no official adapter for StructureMap. After spending a bit of time digging I found a solution that got me up and running - so hopefully, if you're reading this and looking to implement StructureMap in an early MVC 6 project, I'll be able to save you a bit of time.

IoC container integration differences between MVC 5 and MVC 6

The first thing you'll notice when integrating an IoC container into MVC 6 is that there have been changes to the DI API. In earlier versions of MVC you could either resolve your dependencies within the ControllerFactory, or alternatively use a Common Service Locator and pass it to DependencyResolver.SetResolver(object commonServiceLocator), like so:

The old MVC 3, 4 and 5 way:

public class ExampleResolver : ServiceLocatorImplBase
{
    protected override object DoGetInstance(Type serviceType, string key)
    {
        // Resolve and return an instance of the type from your container.
    }

    protected override IEnumerable<object> DoGetAllInstances(Type serviceType)
    {
        // Resolve and return instances of the type from your container.
    }
}

Then registering it with ASP.NET MVC within Global.asax.cs like:

var structureMapResolver = new ExampleResolver(structureMapContainer);
DependencyResolver.SetResolver(structureMapResolver);

The new MVC 6 way:

With MVC 6 things are a little different.

With Dependency Injection now a first-class citizen in ASP.NET MVC 6 and available throughout the framework (instead of being restricted to controllers, or scattering your application with Service Locators - an anti-pattern!), we're now required to return an instance of IServiceProvider (see below), which is itself a service locator - the difference being that it's constrained to one single location, though whether this is really the best approach has faced criticism.

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    // Existing framework services

    var container = new Container();
    container.Configure(x =>
    {
        x.Scan(scanning =>
        {
            // Register your application's types, e.g. by convention:
            scanning.TheCallingAssembly();
            scanning.WithDefaultConventions();
        });
    });

    // Our framework extension point
    container.Populate(services);

    return container.GetInstance<IServiceProvider>();
}

You'll see from the code above that the custom IContainer.Populate() extension method is our framework extension point, requiring the use of IoC-container-specific adapters. As ASP.NET MVC 6 (vNext) is still in development there are only a few adapters available (with the StructureMap adapter pull request still pending), but after a little googling and tinkering, and thanks to this post on the StructureMap Google group, I was able to use the following solution to get it working within my ASP.NET MVC 6 application.

It's worth mentioning that at the time of writing this post the code sample below works; however, MVC 6 is still in a state of change, so you may find bits have changed by the time you come to use it. I'd certainly recommend keeping an eye on the StructureMap adapter pull request to see if the adapter has been finalised so you can get it from the official source, but in the meantime, if you're keen to start playing with ASP.NET MVC 6, the following adapter should let you do just that.

The final solution

public static class StructureMapRegistration
{
    public static void Populate(this IContainer container, IEnumerable<ServiceDescriptor> descriptors)
    {
        var populator = new StructureMapPopulator(container);
        populator.Populate(descriptors);
    }
}

internal class StructureMapPopulator
{
    private IContainer _container;

    public StructureMapPopulator(IContainer container)
    {
        _container = container;
    }

    public void Populate(IEnumerable<ServiceDescriptor> descriptors)
    {
        _container.Configure(c =>
        {
            c.For<IServiceProvider>().Use(new StructureMapServiceProvider(_container));
            c.For<IServiceScopeFactory>().Use<StructureMapServiceScopeFactory>();

            foreach (var descriptor in descriptors)
            {
                switch (descriptor.Lifetime)
                {
                    case ServiceLifetime.Singleton:
                        Use(c.For(descriptor.ServiceType).Singleton(), descriptor);
                        break;
                    case ServiceLifetime.Transient:
                        Use(c.For(descriptor.ServiceType), descriptor);
                        break;
                    case ServiceLifetime.Scoped:
                        Use(c.For(descriptor.ServiceType), descriptor);
                        break;
                }
            }
        });
    }

    private static void Use(GenericFamilyExpression expression, ServiceDescriptor descriptor)
    {
        if (descriptor.ImplementationFactory != null)
        {
            expression.Use(Guid.NewGuid().ToString(), context => descriptor.ImplementationFactory(context.GetInstance<IServiceProvider>()));
        }
        else if (descriptor.ImplementationInstance != null)
        {
            expression.Use(descriptor.ImplementationInstance);
        }
        else if (descriptor.ImplementationType != null)
        {
            expression.Use(descriptor.ImplementationType);
        }
        else
        {
            throw new InvalidOperationException("IServiceDescriptor is invalid");
        }
    }
}

internal class StructureMapServiceProvider : IServiceProvider
{
    private readonly IContainer _container;

    public StructureMapServiceProvider(IContainer container, bool scoped = false)
    {
        _container = container;
    }

    public object GetService(Type type)
    {
        try
        {
            return _container.GetInstance(type);
        }
        catch
        {
            return null;
        }
    }
}

internal class StructureMapServiceScope : IServiceScope
{
    private IContainer _container;
    private IContainer _childContainer;
    private IServiceProvider _provider;

    public StructureMapServiceScope(IContainer container)
    {
        _container = container;
        _childContainer = _container.GetNestedContainer();
        _provider = new StructureMapServiceProvider(_childContainer, true);
    }

    public IServiceProvider ServiceProvider
    {
        get { return _provider; }
    }

    public void Dispose()
    {
        _provider = null;
        if (_childContainer != null)
        {
            _childContainer.Dispose();
            _childContainer = null;
        }
    }
}

internal class StructureMapServiceScopeFactory : IServiceScopeFactory
{
    private IContainer _container;

    public StructureMapServiceScopeFactory(IContainer container)
    {
        _container = container;
    }

    public IServiceScope CreateScope()
    {
        return new StructureMapServiceScope(_container);
    }
}

Reflections on recent TypeScript hands-on session I presented

Posted on Tuesday, 14 Jul 2015

I recently blogged about a TypeScript presentation I gave at a local meetup group I frequently attend (as an aside, I also ended up standing in to give the same presentation during a recent Developers Developers Developers South West conference lunch break - something I wasn't expecting to do as I sat down to eat my lunch, but, never liking to say no, I was compelled to do).

This time around I decided to spread the TypeScript love by running an hour-and-a-half hands-on TypeScript workshop, where I took the group through the fundamentals of TypeScript and the benefits it can bring to a JavaScript codebase, large or small.

This post is a quick reflection on what materials I discussed and what I could have done better.

Naturally the session opened with the mandatory introduction of who I was, what TypeScript was and wasn't and what problems it was designed to solve - during this period I gave a quick demonstration of some of its basic features (transpilation, type hinting etc). Once people had a better understanding of what TypeScript aimed to do, we all loaded up the TypeScript Playground (REALLY useful as it means everyone can code along without needing to worry about downloading an IDE that supports TypeScript) and got our hands dirty by writing some code.

As someone who loves TypeScript I was really keen to demonstrate the following points; points that really sold me on TypeScript when I first started experimenting with it:

  1. How painless converting an existing JavaScript codebase to TypeScript can be, by gradually introducing it over time.
  2. How TypeScript does a great job of not getting in your way. For example, your code still transpiles to JavaScript even when the TypeScript compiler reports an error.
  3. And finally, how TypeScript is not a complex new language to learn; it doesn't introduce a new DSL and it doesn't stop you from writing your JavaScript or jQuery - something I feel is a very strong selling point of TypeScript, especially to those who have grown fond of JavaScript and understand the value in learning it and adding it to their knowledge stack.

An example used to demonstrate how TypeScript can be used with existing JavaScript code via TypeScript's declaration syntax:

declare class CalculatorDefinition {
	add(val1: number, val2: number): number;
	minus(val1: number, val2: number): number;
}

var OurCalculator = (function (){

	function Calculator(){
	}

	Calculator.prototype.add = function(val1, val2){
		return val1 + val2;
	};

	Calculator.prototype.minus = function(val1, val2){
		return val1 - val2;
	};

	return Calculator;

})();

var calculator = <CalculatorDefinition> new OurCalculator();
alert(calculator.add(1, 9));

After this demonstration we started to go through some of the other areas of TypeScript, such as its support for ES6 features and its ability to transpile ES6 features such as classes down into ES5-compatible JavaScript. Whilst I had briefly demonstrated TypeScript's transpilation benefits early on in the workshop, we were at a great point in our code to switch our calculator from an Immediately Invoked Function Expression returning a Calculator function into an ES6 class, resulting in the code being transpiled into exactly the same output.

Fleshing out our JavaScript code with an ES6 class:

class Calculator {
	add(val1: number, val2: number) {
		return val1 + val2;
	}

	minus(val1: number, val2: number) {
		return val1 - val2;
	}
}

var calculator = new Calculator();
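For reference, the ES5 JavaScript the compiler emits for the class above looks very close to the hand-written IIFE it replaced (a sketch of typical tsc output; exact formatting varies between compiler versions):

```typescript
// Approximate ES5 output tsc emits for the Calculator class above
var Calculator = (function () {
    function Calculator() {
    }
    Calculator.prototype.add = function (val1, val2) {
        return val1 + val2;
    };
    Calculator.prototype.minus = function (val1, val2) {
        return val1 - val2;
    };
    return Calculator;
}());

var calculator = new Calculator();
console.log(calculator.add(1, 9));   // 10
console.log(calculator.minus(9, 1)); // 8
```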

TypeScript, TypeScript Definition files and 3rd party plugins

We continued to flesh out the calculator class even further, first by creating a constructor that took an object literal (giving me a good opportunity to demonstrate creating object literal types in TypeScript), then switching it out for an interface (program to an interface, not an implementation!). With time running out and knowing we had some jQuery users in the group, I was keen to demonstrate TypeScript Definitions and how they can aid development with 3rd party libraries such as jQuery.
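That progression might look something like the following sketch (the option names are my own invention, not the ones used in the session): the constructor argument starts as an inline object literal type and is then extracted into an interface:

```typescript
// The constructor argument, first as an inline object literal type...
class CalculatorV1 {
    constructor(private options: { precision: number }) {}

    round(value: number): number {
        const factor = Math.pow(10, this.options.precision);
        return Math.round(value * factor) / factor;
    }
}

// ...then the same shape extracted into an interface that callers
// can program against independently of any one implementation
interface CalculatorOptions {
    precision: number;
}

class CalculatorV2 {
    constructor(private options: CalculatorOptions) {}

    round(value: number): number {
        const factor = Math.pow(10, this.options.precision);
        return Math.round(value * factor) / factor;
    }
}

console.log(new CalculatorV2({ precision: 2 }).round(3.14159)); // 3.14
```

The interface version is the "program to an interface" step: callers only depend on the shape, not on any one class.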

Due to the TypeScript Playground's inability to import TypeScript Definition files I had to switch over to WebStorm at this point (which was a great opportunity to talk about TypeScript's IDE support). From here I was able to demonstrate the benefits of using TypeScript with jQuery and how the typing resulted in far better tooling and refactoring support in editors such as WebStorm. At this point I was over the set time by a few minutes so it seemed a good point to start winding down by opening up for any final questions. As it was a small group of people I had been encouraging questions from the get-go, but it's always good to give people one last opportunity to have any burning questions answered before the end of the session.

As I was closing down I quickly explained something I've always felt is very important in understanding new technologies - regardless of whether everyone goes out and uses TypeScript tomorrow, at least they have enough knowledge on the subject to know when it's the right tool for the job. So if one day their JavaScript starts to become unwieldy - they'll be well armed knowing there's a TypeScript hammer out there waiting to solve that maintenance nightmare nail.

Final reflections and what I would have done differently in hindsight

The feedback from the attendees was good. One of my concerns when presenting the hands-on session was that the varying skill levels of the attendees could result in me either waiting too long for people to finish programming or not giving people enough time. I tried my best to mitigate this by giving plenty of time to catch up, sprinkling the code-writing parts with brief discussions on what we were going to do next - something the meetup organisers said they felt worked well.

I was particularly happy that one of the attendees approached me afterwards and said he was relatively new to web development (but not programming) and felt the session helped give him a better understanding of JavaScript and how he could get to grips with it using a paradigm he was already familiar with.

Moving forward

What this workshop and the brief talk I gave during the lunchtime sessions at DDD South West have highlighted is that I really enjoy teaching and talking about programming subjects. I love talking about programming as it's always been a passion of mine - and I'm especially keen to continue presenting more talks and sessions moving forward.

SOLID Architecture in Slices not Layers by Jimmy Bogard

Posted on Sunday, 05 Jul 2015

A bit of a short post today (I've been hard at work practising for an upcoming piano grading exam) but something I certainly plan on discussing further is alternative approaches to the typical n-tier application structure we're all so used to.

If you've got a spare hour, or even if you don't - I'd highly recommend you watch this recent talk by Jimmy Bogard on why the n-tier application pattern isn't always the right approach and what alternatives exist. Having used the approach he talks about (modelling the request and command/query segregation using MediatR) in a few sites (this blog included) I'm beginning to see the real value in it.

Whilst there are sections of the talk I have mixed feelings about and would have loved to see covered in more depth, the parts relating to CQS (command/query segregation) are very insightful.

Note: Watch out for 25:20 when you can see the logo I created for MediatR! :D

Setting up TypeScript in Visual Studio Code

Posted on Friday, 05 Jun 2015

When I first heard that Visual Studio Code comes with TypeScript support out of the box I had to give it a try. Being a huge fan of TypeScript and the benefits it brings to developers and large code bases I was glad to see how easy it was to set up. Below I'll take you through the steps that you need to take in order to set up a TypeScript project in Visual Studio Code.

Setting up TypeScript in Visual Studio Code

Step 1: First of all you're going to want to go ahead and open Visual Studio Code. Once open you'll need to go to File > Open Folder and select a folder you'd like to save your TypeScript Project into.

Step 2: Now we need to create our project config file. We can do this by clicking New File within the Project row located within the Explorer side panel. Let's call the config file "tsconfig.json".

Step 3: Next we're going to need to configure our project's config file by specifying what module loader to use and what ECMAScript version we want to target. Do this by copying and pasting the following configuration:

{
    "compilerOptions": {
        "module": "amd",
        "target": "ES5",
        "sourceMap": true
    }
}

Note: At this point our project is now set up to allow us to create TypeScript files with the .ts extension and benefit from the TypeScript goodness we all love; however, as we've not yet created a build task for our project, no JavaScript files will be transpiled.
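To sanity-check the setup once the build task below is configured, any small .ts file will do - for example (the file and class names here are just an illustration, not part of the official steps):

```typescript
// greeter.ts - example file; an ES6-style class the compiler will
// bring down to ES5 per the "target" setting in tsconfig.json
class Greeter {
    constructor(private name: string) {}

    greet(): string {
        return "Hello, " + this.name + "!";
    }
}

console.log(new Greeter("TypeScript").greet()); // Hello, TypeScript!
```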

Setting up our Visual Studio Code task runner

If we try to build our TypeScript project (CTRL + B on Windows) from within Visual Studio Code we'll receive an error informing us that no task runners are configured. From here we can click "Configure Task Runner" within the Visual Studio Code notification to set up a TypeScript build task.

To add a task to build our TypeScript project all we need to do is add a property "isShellCommand": true to our tasks.json file and replace the following property:

"args": ["YourDummyTsFile.ts"]

with this one:

"args": ["${file}"]
"isShellCommand" : true // add this property

These commands execute the TypeScript compiler as a shell command, passing the file that you're building to the compiler ready for transpilation, converting your TypeScript files into .js files. Now once you press the build keys (CTRL + B) you'll see a .js file appear beneath the TypeScript file that you've created.

Happy coding!

Block or blacklist IP addresses/referral URLs in ASP.NET MVC with TrafficCop

Posted on Friday, 22 May 2015

Since starting to blog almost a year and a half ago my traffic has slowly but steadily grown month on month. In addition to this growth in blog traffic I've also noticed a sharp rise in the amount of referral spam I'm getting. At first it started off slowly but having looked back over a recent month's worth of traffic it's getting out of control - so much so that it's really starting to mess with my website's analytics.

What is referral spam?

Wikipedia sums up referral spam quite nicely:

Referrer spam (also known as log spam or referrer bombing) is a kind of spamdexing (spamming aimed at search engines). The technique involves making repeated web site requests using a fake referer URL to the site the spammer wishes to advertise.

As you can see from the screenshot taken directly from my Google Analytics account over the month, a lot of hits are coming directly from referral spammers - specifically spammers trying to advertise their social media share buttons.

Having looked at solutions to prevent referral spamming, or at least block it from Google Analytics, I came up with nothing. Google Analytics offers a means of filtering out traffic from selected referrers, but that involves setting up different filter profiles, and given the amount of referral spam I'm getting I feel it would be a constant battle. So instead, over the course of a few evenings and work lunch breaks, I created a simple solution to manage the referral spam more effectively and with finer control.

TrafficCop to the rescue!

Over the past week during the evenings and work lunchbreaks I've been working on TrafficCop; a simple package that once included in your project allows you to create rules/policies that each HTTP request is checked against. Requests that match the policy can then have a specific action performed against them.

Full instructions can be found on its GitHub page here but I'll give you a basic run through as to how it can be set up below. Before we go into detail though, I'd like to highlight that this is an early version so changes may occur.

Block bad IP addresses in ASP.NET MVC

In this example imagine we wanted to block or blacklist certain IP addresses, preventing them from accessing our website.

First of all we would need to create a TrafficCop registry that we register with TrafficCop. A registry contains our various profiles (policies) that we want to screen our traffic against. You can register as many policies as you'd like within your Traffic Cop registry.

Step 1: Creating your TrafficCop Registry

You can see below that I'm registering two policies, BlockBadWebsitePolicy and RedirectReallyBadWebsitePolicy:

public class ExampleRegistry : TrafficCopRegistration
{
    public ExampleRegistry()
    {
        this.RegisterRoutePolicy(new BlockBadWebsitePolicy());
        this.RegisterRoutePolicy(new RedirectReallyBadWebsitePolicy());
    }
}

Step 2: Creating a policy to register with TrafficCop

Before we can register our policies with TrafficCop we first need to create them. Creating policies is easy; simply create a policy class that derives from TrafficCopRoutePolicy. Doing this forces your class to implement two methods: one for screening each HTTP request and the other for performing an action on requests that matched the screening.

In this example I'm screening traffic for incoming requests from a particular IP address and, if matched, displaying a 404 Page Not Found error. If you wanted to block a range of IP addresses then you could add further logic to the Match method, such as reading IP addresses from a database or text file and then caching the results in HttpRuntime.Cache to speed up further policy matches.

public class BlockBadWebsitePolicy : TrafficCopRoutePolicy
{
    public override bool Match(IRequestContext requestContext)
    {
        bool isBadTraffic = (requestContext.IpAddress == "");
        return isBadTraffic;
    }

    public override void MatchAction(object actions)
    {
        var act = actions as TrafficCopActions;
        if (act != null) act.PageNotFound404();
    }
}

Final Step: Register our configuration with our application

Now that we've successfully configured our TrafficCop policies and registration we have to set TrafficCop up in our ASP.NET MVC application. To do this we need to register TrafficCop within our application's Global.asax.cs file. First within the Application_Start method, passing our custom TrafficCop registry that we created earlier in Step 1, like so:

protected void Application_Start()
{
    TrafficCop.Register(new ExampleRegistry());
}

Then we need to set TrafficCop to watch every HTTP request by calling its Watch method within the Application_BeginRequest method of our Global.asax.cs file.

protected void Application_BeginRequest()
{
    TrafficCop.Watch(); // see TrafficCop's GitHub page for the exact signature
}

Now, any request that comes in that matches the policy we've set will automatically be handled by TrafficCop, allowing you to shape traffic as you see fit.

As mentioned above, this is the first early release so be prepared for possible changes; however if you have any improvements or changes you'd like to see made then please feel free to leave them in the comments or submit a pull request via TrafficCop's GitHub page.

That's all for now!

About Me

I'm a software and web application developer living in Somerset in the UK and I eat/sleep software and web development. Programming has been a passion of mine from a young age and I consider myself extremely lucky that I am able to do what I love doing as a profession. If I'm not with my family then you'll find me at my keyboard writing some form of code.

Read More

Feel free to follow me on any of the channels below.

JoeMighty on Stack Overflow
