Latest Blog Posts

Setting up origin pull on Azure CDN

Posted on Thursday, 20 Aug 2015

By now we're all aware of the benefits that using a content delivery network brings to any web application (fewer HTTP requests to your server, reduced latency when downloading assets, reduced server load, etc.), but one thing that's always been difficult with content delivery networks that lack origin pull is keeping your website or web application's assets (by assets I mean CSS, JavaScript, media such as images, and so on) up to date on the content delivery network.

Without a content delivery network, all requests for your application's assets resolve to the same server, making asset management easy; however, as soon as you start offloading your assets onto a content delivery network you have to start thinking about asset management, and how you approach it will depend on the type of CDN you're using and whether it supports Origin Push, Origin Pull or both.

If you're reading this post then I assume you're familiar with Origin Pull and how it differs from Push, but if not then I'll briefly explain the differences and some of the pros and cons of each.

Origin Push

When using Origin Push the user is responsible for uploading assets. This can be handled by your application, by your build/deployment process, or manually via protocols such as FTP. This approach comes with its own set of benefits and downsides:


Pros:

  • Far greater flexibility (you can decide exactly what content is uploaded, when it expires and when it's updated).
  • More efficient use of traffic - content is only uploaded when it's new or changed.

Cons:

  • Additional steps are required to manage content (manually uploading within the deployment phase, or asset uploading from within your application).

Origin Pull

With Origin Pull the responsibility of manually uploading your web application's assets is removed. Instead of building asset uploading into your deployment chain or your application, you replace the hostname in your asset URLs with the CDN's hostname (you can also configure a custom CNAME DNS record) and the CDN will:

  1. Serve the asset if it already has it, or...
  2. If the CDN has never seen the asset before (or its cached copy has expired), pull it from your application's server and cache it on the CDN servers for future hits. Once the CDN has pulled and cached the asset it serves it to the initial request - all of this happens extremely quickly, without the user realising what's happened.

As with everything in the technology world, it too has its pros and cons, these being:


Pros:

  • Easy to set up (doesn't require any API to upload the assets)
  • Easier to maintain (new assets are automatically pulled from your server, freeing you from having to upload them manually)
  • Minimises storage space (only requested assets are stored)

Cons:

  • A visitor's first request for an asset is ever so slightly slower, as the CDN runs through the Origin Pull steps described above (setting expiration headers can help reduce this).

Now we're aware of the differences between Origin Push and Origin Pull, let's go ahead and set it up in Microsoft Azure. As with most content delivery networks, the cost of hosting content is a pittance for the majority of us. At the time of writing, Azure charges £0.0532 per GB for the first 10TB and £0.0489 per GB thereafter (depending on your zone). For further breakdowns I'd highly recommend checking out Azure's CDN pricing for yourself using Azure's handy CDN calculator, but as you can see the cost is nothing to worry about unless you're an extremely large organisation dealing with hundreds or thousands of requests a second, or delivering large files.

Setting up Origin Pull in Microsoft Azure

1: Setting up your CDN

First of all we need to log into our Azure Portal.

Note: At the time of writing this blog post the Azure team are in the process of moving the CDN service from the older Azure portal to the new one - so at this point we have to jump across to the old version of the portal.

Once we've clicked "Go" and been taken across to the older Azure Portal we need to click New > App Services > CDN > Quick Create and begin setting up our CDN endpoint.

When we're talking in the context of a content delivery network, the origin is the location the CDN fetches content from in order to cache it.

In this instance I want to set up Origin Pull for this website, which happens to be hosted on Azure - so I would select Web Apps and choose my Azure website domain in the Origin URL field. The Origin Type you select depends on your use case, but there are enough types to cover most scenarios, these being:

  • Storage Accounts (blob storage on Azure)
  • Web Apps (this is what we'll use)
  • Cloud Services
  • Custom Origin (allowing you to specify a custom origin URL - see this article for more details)

Now our CDN has been set up to pull from our custom URL (in my case a Web App within Azure), we can either begin using the CDN via the address provided in the URL column of the CDN list, or we can go one step further and set up a custom subdomain, turning our CDN URL from "" to "". The added benefit of doing this is that if you ever switch to a different CDN provider you need only reconfigure your CDN subdomain, as opposed to changing every instance of the Azure-specific domain.

Setting up a custom CDN subdomain (optional)

If you wish to start using your CDN without configuring a custom subdomain then skip to the next heading; if not, continue reading.

To set up a custom domain we need to configure our newly created CDN. To do this, click your CDN's name (as opposed to the Manage CDN button you might have spotted - the difference being that we want to configure our CDN rather than manage our uploaded assets).

Then, at the bottom of the page, click the "Manage Domains" button. Whilst you're at it I'd also recommend turning on the "Enable Query Strings" setting, as this will allow you to reference assets that have query strings in the URL.

At this point we need to jump across to our domain's DNS provider and set up a custom CNAME record pointing to one of the Azure-specific URLs listed in the popup dialog that appears after clicking "Manage Domains".

At this point we're almost done. All we've got to do is wait for our DNS changes to take effect; this can take anywhere up to 24 hours, though from experience it's generally much faster than that.

Using our newly created CDN using Origin Pull

Now we've set up our CDN to use Origin Pull and configured our custom CDN subdomain (or not, depending on whether you chose to skip that step), we're ready to start using it.

Note: Before going all out I would recommend changing just one or two assets on your website or web application first, to ensure it's working as expected.

To do this, open up your application in your IDE and add your CDN URL (whether it's the Azure-specific address or the custom subdomain) to the beginning of your asset paths.
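For example (the hostname below is illustrative, not a real CDN endpoint), a stylesheet reference changes from a relative path to the CDN address:

```html
<!-- Before: asset served from your own server -->
<link rel="stylesheet" href="/css/site.css" />

<!-- After: same asset served via the CDN (illustrative hostname) -->
<link rel="stylesheet" href="https://cdn.example.com/css/site.css" />
```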

If all has gone well, once you upload the changes to your hosting provider your assets should start being delivered from Azure's CDN. What's more, if you use a tool such as Pingdom's page speed tool you should notice a dramatic difference in loading speed, as assets are:

A: Being loaded from the nearest geographical CDN node, reducing the latency of pulling the assets directly from your webserver wherever that may be.

B: Being requested and pulled down over a separate domain, thus bypassing the browser-imposed limit on simultaneous requests to a single host (or domain) whilst loading the page.


So there we have it. If you've followed closely and everything's gone according to plan, you should have your CDN set up and your website or web application's assets being cached by the CDN.

Now that our assets are being cached, if we wish to update them within Azure we should either set expiry headers and wait for the content to expire (not the best approach), or implement a means of cache busting. I'd recommend this blog post by Mads Kristensen for a simple means of creating a unique string for assets, forcing the CDN to pull a fresh version of a file when it changes.
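To illustrate the cache-busting idea (this is my own sketch, not the exact technique from the post linked above - the helper name is hypothetical), you can append a fingerprint derived from the file's last-write time to each asset URL, so the URL changes whenever the file does and the CDN pulls a fresh copy:

```csharp
using System.IO;
using System.Web;

public static class AssetHelper
{
    // Hypothetical helper: appends a version token based on the file's
    // last-write time, so the asset URL changes whenever the file changes.
    public static string Fingerprint(string relativePath)
    {
        var physicalPath = HttpContext.Current.Server.MapPath(relativePath);
        var lastWrite = File.GetLastWriteTimeUtc(physicalPath);
        return relativePath + "?v=" + lastWrite.Ticks;
    }
}

// Usage in a view (the CDN hostname is illustrative):
// <link rel="stylesheet" href="https://cdn.example.com@AssetHelper.Fingerprint("/css/site.css")" />
```

Because the query string is part of the cached URL (remember the "Enable Query Strings" setting earlier), a changed token forces the CDN to pull the asset again.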

Setting up StructureMap 3 in ASP.NET MVC 6 (vNext)

Posted on Thursday, 30 Jul 2015

With the recent release of Visual Studio 2015 (and the bundled early version of MVC 6) I was excited to give MVC 6 (aka vNext) a whirl. I'd spent time playing with the earlier versions of vNext (I refer to vNext as the early, pre-beta version of MVC 6) when it was originally published to GitHub, but now it's included within Visual Studio 2015 and somewhat stable (as opposed to the constant changes to the framework in earlier releases) I'm really keen to get stuck in. So much so that I've decided to port this blog to MVC 6.

Whilst MVC 6 comes with a Dependency Injection / Inversion of Control framework baked in, I was keen to switch it out for StructureMap, my favourite IoC framework. However, because MVC 6 is still in development and not yet production ready, there's currently no official adapter for StructureMap. After spending a bit of time digging I found a solution that got me up and running - so hopefully, if you're reading this and looking to implement StructureMap in an early MVC 6 project, I'll be able to save you a bit of time.

IoC container integration differences between MVC 5 vs MVC 6

The first thing you'll notice when integrating an IoC container into MVC 6 is that there have been changes to the DI API. In earlier versions of MVC you could either resolve your dependencies within the ControllerFactory, or use a Common Service Locator and pass it to DependencyResolver.SetResolver(object commonServiceLocator), like so:

The old MVC 3,4 and 5 way:

public class ExampleResolver : ServiceLocatorImplBase
{
    private readonly IContainer _container; // your StructureMap container

    public ExampleResolver(IContainer container) { _container = container; }

    protected override object DoGetInstance(Type serviceType, string key)
    {
        // Resolve and return an instance of the type from your container.
        return _container.GetInstance(serviceType);
    }

    protected override IEnumerable<object> DoGetAllInstances(Type serviceType)
    {
        // Resolve and return all instances of the type from your container.
        return _container.GetAllInstances(serviceType).Cast<object>();
    }
}

Then registering it with ASP.NET MVC within Global.asax.cs like:

var structureMapResolver = new ExampleResolver(structureMapContainer);
DependencyResolver.SetResolver(structureMapResolver);

The new MVC 6 way:

With MVC 6 things are a little different.

With Dependency Injection being a first-class citizen in ASP.NET MVC 6 and available throughout the framework (instead of being restricted to controllers, or scattering your application with Service Locators - an anti-pattern!), we're now required to return an instance of IServiceProvider (see below). This is itself a service locator, but the difference is that it's constrained to one single location - though whether this is really the best approach has faced criticism.

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    // Existing framework services
    services.AddMvc();

    var container = new Container();
    container.Configure(x =>
    {
        x.Scan(scanning =>
        {
            scanning.TheCallingAssembly();
            scanning.WithDefaultConventions();
        });
    });

    // Our framework extension point
    container.Populate(services);

    return container.GetInstance<IServiceProvider>();
}

You'll see from the code above that the custom IContainer.Populate() extension method is our framework extension point, requiring the use of IoC-container-specific adapters. As ASP.NET MVC 6 (vNext) is still in development there are only a few adapters available (with the StructureMap adapter pull request still pending), but after a little googling and tinkering, and thanks to this post on the StructureMap Google group, I was able to use the following solution to get it working within my ASP.NET MVC 6 application:

It's worth mentioning that at the time of writing this post the below code sample works; however MVC 6 is still in a state of change, so you may find bits have changed by the time you come to use it. I'd certainly recommend keeping an eye on the StructureMap adapter pull request to see if the adapter has been finalised so you can get it from the official source, but in the mean time, if you're keen to start playing with ASP.NET MVC 6, the following adapter should let you do just that.

The final solution

public static class StructureMapRegistration
{
    public static void Populate(this IContainer container, IEnumerable<ServiceDescriptor> descriptors)
    {
        var populator = new StructureMapPopulator(container);
        populator.Populate(descriptors);
    }
}

internal class StructureMapPopulator
{
    private IContainer _container;

    public StructureMapPopulator(IContainer container)
    {
        _container = container;
    }

    public void Populate(IEnumerable<ServiceDescriptor> descriptors)
    {
        _container.Configure(c =>
        {
            c.For<IServiceProvider>().Use(new StructureMapServiceProvider(_container));
            c.For<IServiceScopeFactory>().Use<StructureMapServiceScopeFactory>();

            foreach (var descriptor in descriptors)
            {
                switch (descriptor.Lifetime)
                {
                    case ServiceLifetime.Singleton:
                        Use(c.For(descriptor.ServiceType).Singleton(), descriptor);
                        break;
                    case ServiceLifetime.Transient:
                        Use(c.For(descriptor.ServiceType), descriptor);
                        break;
                    case ServiceLifetime.Scoped:
                        Use(c.For(descriptor.ServiceType), descriptor);
                        break;
                }
            }
        });
    }

    private static void Use(GenericFamilyExpression expression, ServiceDescriptor descriptor)
    {
        if (descriptor.ImplementationFactory != null)
        {
            expression.Use(Guid.NewGuid().ToString(), context => descriptor.ImplementationFactory(context.GetInstance<IServiceProvider>()));
        }
        else if (descriptor.ImplementationInstance != null)
        {
            expression.Use(descriptor.ImplementationInstance);
        }
        else if (descriptor.ImplementationType != null)
        {
            expression.Use(descriptor.ImplementationType);
        }
        else
        {
            throw new InvalidOperationException("IServiceDescriptor is invalid");
        }
    }
}

internal class StructureMapServiceProvider : IServiceProvider
{
    private readonly IContainer _container;

    public StructureMapServiceProvider(IContainer container, bool scoped = false)
    {
        _container = container;
    }

    public object GetService(Type type)
    {
        try
        {
            return _container.GetInstance(type);
        }
        catch
        {
            return null;
        }
    }
}

internal class StructureMapServiceScope : IServiceScope
{
    private IContainer _container;
    private IContainer _childContainer;
    private IServiceProvider _provider;

    public StructureMapServiceScope(IContainer container)
    {
        _container = container;
        _childContainer = _container.GetNestedContainer();
        _provider = new StructureMapServiceProvider(_childContainer, true);
    }

    public IServiceProvider ServiceProvider
    {
        get { return _provider; }
    }

    public void Dispose()
    {
        _provider = null;
        if (_childContainer != null)
        {
            _childContainer.Dispose();
            _childContainer = null;
        }
    }
}

internal class StructureMapServiceScopeFactory : IServiceScopeFactory
{
    private IContainer _container;

    public StructureMapServiceScopeFactory(IContainer container)
    {
        _container = container;
    }

    public IServiceScope CreateScope()
    {
        return new StructureMapServiceScope(_container);
    }
}

Reflections on recent TypeScript hands-on session I presented

Posted on Tuesday, 14 Jul 2015

I recently blogged about a TypeScript presentation I gave at a local meetup group I frequently attend (as an aside, I also ended up standing in to give the same presentation during a recent Developers Developers Developers South West conference lunch break; something I wasn't expecting to do as I sat down to eat my lunch, but never liking to say no, I felt compelled to do).

This time around I decided to spread the TypeScript love by running an hour-and-a-half hands-on TypeScript workshop, where I took the group through the fundamentals of TypeScript and the benefits it can bring to a JavaScript codebase, large or small.

This post is a quick reflection on what materials I discussed and what I could have done better.

Naturally the session opened with the mandatory introduction of who I was, what TypeScript was and wasn't and what problems it was designed to solve - during this period I gave a quick demonstration of some of its basic features (transpilation, type hinting etc). Once people had a better understanding of what TypeScript aimed to do, we all loaded up the TypeScript Playground (REALLY useful as it means everyone can code along without needing to worry about downloading an IDE that supports TypeScript) and got our hands dirty by writing some code.

As someone who loves TypeScript I was really keen to demonstrate the following points; points that really sold me on TypeScript when I first started experimenting with it:

  1. How painless it is to convert an existing JavaScript codebase to TypeScript by gradually introducing it over time.
  2. How TypeScript does a great job of not getting in your way. For example, your code still compiles to JavaScript even if the TypeScript compiler is reporting a type error.
  3. And finally, how TypeScript is not a complex new language to learn; it doesn't introduce any new DSL and it doesn't stop you from writing your JavaScript or jQuery - something I feel is a very strong selling point of TypeScript, especially to those who have grown fond of JavaScript and understand the value in learning it and adding it to your knowledge stack.
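Point 1 can be illustrated with almost no code at all; the snippet below is my own sketch rather than one from the session. Any existing JavaScript file is already valid TypeScript, so migration can start by simply renaming .js files to .ts and adding annotations only where they pay off:

```typescript
// Plain JavaScript is already valid TypeScript: rename script.js to script.ts
// and it compiles unchanged, with no annotations required.
function add(val1, val2) {
	return val1 + val2;
}

// Type annotations can then be introduced gradually, one function at a time,
// giving the compiler more to check without rewriting anything.
function minus(val1: number, val2: number): number {
	return val1 - val2;
}
```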

An example used to demonstrate how TypeScript can be used with existing JavaScript code via TypeScript's declaration syntax:

declare class CalculatorDefinition {
	add(val1: number, val2: number): number;
	minus(val1: number, val2: number): number;
}

var OurCalculator = (function () {

	function Calculator() {
	}

	Calculator.prototype.add = function (val1, val2) {
		return val1 + val2;
	};

	Calculator.prototype.minus = function (val1, val2) {
		return val1 - val2;
	};

	return Calculator;

})();

var calculator = <CalculatorDefinition> new OurCalculator();
alert(calculator.add(1, 9));

After this demonstration we went through some of the other areas of TypeScript, such as its support for ES6 features and its ability to transpile them - classes, for example - down into ES5-compatible JavaScript. Whilst I briefly demonstrated TypeScript's transpilation benefits early on in the workshop, we were now at a great point in our code to switch our calculator from an Immediately Invoked Function Expression returning a Calculator function into an ES6 class, resulting in the code being transpiled into exactly the same output.

Fleshing out our JavaScript code with an ES6 class:

class Calculator {
	add(val1: number, val2: number) {
		return val1 + val2;
	}

	minus(val1: number, val2: number) {
		return val1 - val2;
	}
}

var calculator = new Calculator();

TypeScript, TypeScript Definition files and 3rd party plugins

We continued to flesh out the calculator class even further, first by creating a constructor that took an object literal (giving me a good opportunity to demonstrate creating object literal types in TypeScript), then switching it out for an interface (program to an interface, not an implementation!). With time running out, and knowing we had some jQuery users in the group, I was keen to demonstrate TypeScript definition files and how they can aid development with 3rd party libraries such as jQuery.
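As a rough sketch of that step (my own reconstruction - the option names are hypothetical, not from the session), the constructor's object literal argument can be described with an interface:

```typescript
// Hypothetical options interface: callers program against the shape of the
// object, not a concrete implementation.
interface CalculatorOptions {
	precision: number;
}

class Calculator {
	private precision: number;

	constructor(options: CalculatorOptions) {
		this.precision = options.precision;
	}

	add(val1: number, val2: number): number {
		// Round the result to the configured number of decimal places.
		return parseFloat((val1 + val2).toFixed(this.precision));
	}
}

var roundingCalculator = new Calculator({ precision: 2 });
```

Passing an object missing `precision` (or with it typed as a string) now becomes a compile-time error rather than a runtime surprise.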

Due to the TypeScript Playground's inability to import TypeScript definition files I had to switch over to WebStorm at this point (which was a great opportunity to talk about TypeScript's IDE support). From here I was able to demonstrate the benefits of using TypeScript with jQuery, and how the typing resulted in far better tooling and refactoring support in editors such as WebStorm. At this point I was over the set time by a few minutes, so it seemed a good point to start winding down by opening up for any final questions. As it was a small group I had been encouraging questions from the get-go, but it's always good to give people one last opportunity to have any burning questions answered before the end of the session.

As I was closing down I quickly explained something I've always felt is very important when learning new technologies - regardless of whether everyone goes out and uses TypeScript tomorrow, at least they now have enough knowledge of the subject to know when it's the right tool for the job. So if one day their JavaScript starts to become unwieldy, they'll be well armed, knowing there's a TypeScript hammer out there waiting to solve that maintenance-nightmare nail.

Final reflections and what I would have done differently in hindsight

The feedback from the attendees was good. One of my concerns going in was that the varying skill levels of the attendees could result in me either waiting too long for people to finish programming or not giving people enough time. I tried my best to mitigate this by giving plenty of time to catch up, sprinkling the code-writing parts with brief discussions of what we were going to do next - something the meetup organisers said they felt worked well.

I was particularly happy that one of the attendees approached me afterwards and said that he was relatively new to web development (but not programming) and felt the session gave him a better understanding of JavaScript, and of how he could get to grips with it using a paradigm he's already familiar with.

Moving forward

What this workshop and the brief talk I gave during the lunchtime sessions at DDD South West have highlighted is that I really enjoy teaching and talking about programming subjects. I love talking about programming as it's always been a passion of mine - and I'm especially keen to continue presenting more talks and sessions moving forward.

SOLID Architecture in Slices not Layers by Jimmy Bogard

Posted on Sunday, 05 Jul 2015

A bit of a short post today (I've been hard at work practising for an upcoming piano grading exam), but something I certainly plan on discussing further is alternative approaches to the typical n-tier application structure we're all so used to.

If you've got a spare hour - or even if you don't - I'd highly recommend you watch this recent talk by Jimmy Bogard on why the n-tier application pattern isn't always the right approach, and what alternatives exist. Having used the approach he talks about (modelling the request, and command/query segregation, using MediatR) in a few sites (this blog included), I'm beginning to see the real value in it.

Whilst I have mixed feelings about some sections of the talk and would have loved to see them covered in more depth, the parts relating to CQS (command/query separation) are very insightful.

Note: Watch out for 25:20, when you can see the logo I created for MediatR! :D

Setting up TypeScript in Visual Studio Code

Posted on Friday, 05 Jun 2015

When I first heard that Visual Studio Code comes with TypeScript support out of the box I had to give it a try. Being a huge fan of TypeScript and the benefits it brings to developers and large code bases I was glad to see how easy it was to set up. Below I'll take you through the steps that you need to take in order to set up a TypeScript project in Visual Studio Code.

Setting up TypeScript in Visual Studio Code

Step 1: First of all you're going to want to go ahead and open Visual Studio Code. Once open you'll need to go to File > Open Folder and select a folder you'd like to save your TypeScript Project into.

Step 2: Now we need to create our project config file. We can do this by clicking New File within the Project row located within the Explorer side panel. Let's call the config file "tsconfig.json".

Step 3: Next we need to configure our project's config file by specifying which module loader to use and which ECMAScript version to target. Do this by copying and pasting the following configuration:

    "compilerOptions": {
        "module": "amd",
        "target": "ES5",
        "sourceMap": true

Note: At this point our project is set up to let us create TypeScript files with the .ts extension and benefit from the TypeScript goodness we all love; however, as we've not yet created a build task for our project, no JavaScript output will be generated.

Setting up our Visual Studio Code task runner

If we try to build our TypeScript project (Ctrl + B on Windows) from within Visual Studio Code, we'll see an error informing us that no task runners are configured. From here we can click "Configure Task Runner" within the Visual Studio Code notification to set up a TypeScript build task.

To add a task to build our TypeScript project, all we need to do is add an "isShellCommand": true property to our tasks.json file and replace the following property:

"args": ["YourDummyTsFile.ts"]

with this one:

"args": ["${file}"]
"isShellCommand" : true // add this property

These changes execute the TypeScript compiler as a shell command and pass the file you're building to the compiler for transpilation, converting your TypeScript file into a .js file. Now, once you press the build keys (Ctrl + B), you'll see a .js file appear beneath the TypeScript file you created.
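Putting the pieces together, a complete tasks.json might look like the following (this uses the task format from the Visual Studio Code version around the time of writing; field names may have changed since, and the "showOutput" and "problemMatcher" values are just sensible defaults):

```json
{
    "version": "0.1.0",
    "command": "tsc",
    "isShellCommand": true,
    "showOutput": "silent",
    "args": ["${file}"],
    "problemMatcher": "$tsc"
}
```

With this in place, Ctrl + B compiles whichever .ts file is currently open, and any compiler errors surface in the editor's problems list.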

Happy coding!

Block or blacklist IP addresses/referral URLs in ASP.NET MVC with TrafficCop

Posted on Friday, 22 May 2015

Since starting to blog almost a year and a half ago my traffic has slowly but steadily grown month on month. In addition to this growth in blog traffic I've also noticed a sharp rise in the amount of referral spam I'm getting. At first it started off slowly but having looked back over a recent month's worth of traffic it's getting out of control - so much so that it's really starting to mess with my website's analytics.

What is referral spam?

Wikipedia sums up referral spam quite nicely:

Referrer spam (also known as log spam or referrer bombing) is a kind of spamdexing (spamming aimed at search engines). The technique involves making repeated web site requests using a fake referer URL to the site the spammer wishes to advertise.

As you can see from the screenshot taken directly from my Google Analytics account over the month, a lot of hits are coming directly from referral spammers - specifically spammers trying to advertise their social media share buttons.

Having looked at solutions to prevent referral spamming, or at least block it from Google Analytics, I came up with nothing. Google Analytics offers a means of filtering out traffic from selected referrers, but that involves setting up different filter profiles, and given the amount of referral spam I'm getting I feel it would be a constant battle. So instead, over the course of a few evenings and work lunch breaks, I created a simple solution to manage the referral spam more effectively and with finer control.

TrafficCop to the rescue!

Over the past week, during the evenings and work lunch breaks, I've been working on TrafficCop: a simple package that, once included in your project, allows you to create rules/policies that each HTTP request is checked against. Requests that match a policy can then have a specific action performed against them.

Full instructions can be found on its GitHub page, but I'll give you a basic run-through of how it can be set up below. Before we go into detail, though, I'd like to highlight that this is an early version, so changes may occur.

Block bad IP addresses in ASP.NET MVC

In this example imagine we wanted to block or blacklist certain IP addresses, preventing them from accessing our website.

First of all we need to create a registry that we register with TrafficCop. A registry contains the various profiles (policies) that we want to screen our traffic against; you can register as many policies as you'd like within your TrafficCop registry.

Step 1: Creating your TrafficCop Registry

You can see below that I'm registering two policies: BlockBadWebsitePolicy and RedirectReallyBadWebsitePolicy.

public class ExampleRegistry : TrafficCopRegistration
{
    public ExampleRegistry()
    {
        this.RegisterRoutePolicy(new BlockBadWebsitePolicy());
        this.RegisterRoutePolicy(new RedirectReallyBadWebsitePolicy());
    }
}

Step 2: Creating a policy to register with TrafficCop

Before we can register our policies with TrafficCop we first need to create them. Creating policies is easy: simply create a policy class that derives from TrafficCopRoutePolicy.cs. Doing this forces your class to implement two methods - one for screening each HTTP request, and the other for performing an action on requests that matched the screening.

In this example I'm screening traffic for incoming requests from a given IP address and, if matched, displaying a 404 Page Not Found error. If you wanted to block a range of IP addresses, you could add further logic to the Match method - such as reading IP addresses from a database or text file and then caching the results in HttpRuntime.Cache to speed up further policy matches.

public class BlockBadWebsitePolicy : TrafficCopRoutePolicy
{
    public override bool Match(IRequestContext requestContext)
    {
        bool isBadTraffic = (requestContext.IpAddress == ""); // the IP address to block
        return isBadTraffic;
    }

    public override void MatchAction(object actions)
    {
        var act = actions as TrafficCopActions;
        if (act != null) act.PageNotFound404();
    }
}
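As a sketch of the caching suggestion above (the file path and cache key are hypothetical, and the TrafficCop types are used as in the policy just shown), a policy could load a blacklist from a text file once and keep it in HttpRuntime.Cache:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Web;
using System.Web.Caching;

public class BlockBlacklistedIpsPolicy : TrafficCopRoutePolicy
{
    private const string CacheKey = "BlockedIpAddresses"; // hypothetical cache key

    public override bool Match(IRequestContext requestContext)
    {
        var blockedIps = HttpRuntime.Cache[CacheKey] as HashSet<string>;
        if (blockedIps == null)
        {
            // Hypothetical file of one IP address per line; a database query
            // would work equally well here.
            var path = HttpContext.Current.Server.MapPath("~/App_Data/blocked-ips.txt");
            blockedIps = new HashSet<string>(File.ReadAllLines(path));

            // Cache for an hour so we don't hit the disk on every request.
            HttpRuntime.Cache.Insert(CacheKey, blockedIps, null,
                DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);
        }

        return blockedIps.Contains(requestContext.IpAddress);
    }

    public override void MatchAction(object actions)
    {
        var act = actions as TrafficCopActions;
        if (act != null) act.PageNotFound404();
    }
}
```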

Final Step: Register our configuration with our application

Now that we've configured our TrafficCop policies and registration, we have to set TrafficCop up in our ASP.NET MVC application. To do this we need to register TrafficCop within our application's Global.asax.cs file - first within the Application_Start method, passing the custom TrafficCop registry we created in Step 1, like so:

protected void Application_Start()
{
    TrafficCop.Register(new ExampleRegistry());
}

Then we need to set TrafficCop to watch every HTTP request by calling its Watch method within the Application_BeginRequest method of our Global.asax.cs file.

protected void Application_BeginRequest()
{
    TrafficCop.Watch();
}

Now, any request that comes in that matches the policy we've set will automatically be handled by TrafficCop, allowing you to shape traffic as you see fit.

As mentioned above, this is the first early release so be prepared for possible changes; however if you have any improvements or changes you'd like to see made then please feel free to leave them in the comments or submit a pull request via TrafficCop's GitHub page.

That's all for now!

ObjectFactory replacement in StructureMap 3 - What you need to know

Posted on Wednesday, 29 Apr 2015

When StructureMap 3 was released early last year, the ObjectFactory.cs class included an Obsolete message that read "ObjectFactory will be removed in a future 4.0 release of StructureMap. Favor the usage of the Container class for future work". This harmless message caused a lot of confusion among some of StructureMap's users, followed by countless posts on StackOverflow looking for replacements and for ways to update configurations to remove any dependencies on and references to StructureMap's ObjectFactory.

Having had this question myself, and having spent a reasonable amount of time reading suggestions and solutions online along with doing what I can to help others with the same question on StackOverflow, I felt it would be a good idea to collate this information into one helpful post for anyone else with the same ObjectFactory replacement questions.

Why remove ObjectFactory in the first place?

One question I've been seeing a lot is: why remove the ObjectFactory in the first place?

To cut a long story short, as StructureMap's author Jeremy Miller clearly points out in his Google Groups post, referencing the ObjectFactory anywhere in your code tightly couples that code to the ObjectFactory and thus to StructureMap. Doing this completely defeats the purpose of dependency injection - the very problem StructureMap set out to solve in the first place - so the move makes total sense.

For those unsure of what I'm referring to then I'll go into a little more detail.

The problem with ObjectFactory

StructureMap aims to solve the problem of managing your application's dependencies. There are three main types of dependency injection, all supported by StructureMap and other Inversion of Control containers: Constructor Injection, Setter Injection and Interface Injection. I won't go into detail about these DI methods, as I assume if you're reading this article you've set up StructureMap with some knowledge of dependency management; if you are unsure, this Wikipedia page sums up the various types of dependency injection quite nicely.

All of these methods of dependency injection are perfectly valid, some more preferable than others (I say more preferable because they all have their place given the context they're used in). Ultimately these three methods of dependency injection all achieve the task of reducing the number of dependencies within a class.

In order to take advantage of the above methods of dependency injection you have to have some form of parent object that's responsible for creating an instance of the class and resolving any dependencies it may have. ASP.NET MVC comes with extension points that you can tie your IoC container of choice into, resulting in dependencies such as those injected into a controller being resolved via your IoC container.
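To make that concrete, here's a minimal sketch of constructor injection in an MVC controller - the IMessageService interface and WelcomeController names are made up for the example:

```csharp
// IMessageService and WelcomeController are hypothetical, purely to illustrate
// constructor injection: the dependency is declared in the constructor and the
// IoC container (StructureMap in this case) supplies the implementation when
// MVC's dependency resolver creates the controller.
public interface IMessageService
{
    string GetWelcomeMessage();
}

public class WelcomeController : Controller
{
    private readonly IMessageService _messageService;

    public WelcomeController(IMessageService messageService)
    {
        _messageService = messageService;
    }

    public ActionResult Index()
    {
        return Content(_messageService.GetWelcomeMessage());
    }
}
```

The controller never mentions StructureMap; it simply states what it needs and lets the container worry about the rest.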

This is all well and good, but what happens when you're working on a class or module that doesn't allow you to resolve dependencies using your configured IoC container? (ASMX Web Services are especially guilty of this, as they were written at a time when IoC containers were a new concept and as a result the architecture doesn't provide you with any means of integrating an IoC container and using it to its full potential.)

When this happens you're left to wrestle with the application to try and manage dependencies the best you can. Previously this is where the ObjectFactory would come in: it's a static instance that you could call from anywhere in your codebase and use to resolve the calling class's dependencies. This pattern is commonly known as the Service Locator pattern, and whilst it might appear harmless at first, using it can quickly become problematic - so much so that the Service Locator pattern is widely considered an anti-pattern, primarily because of the tight coupling that is created between your class and the ObjectFactory (among other reasons).
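As a quick illustration of the coupling problem, here's roughly what that service-locator style usage looks like (LegacyNotifier and IEmailSender are hypothetical names for the example):

```csharp
// The class reaches out to the static ObjectFactory itself; nothing in its
// public signature tells callers that IEmailSender (a hypothetical service)
// is required, and the class is now welded directly to StructureMap.
public class LegacyNotifier
{
    public void SendWelcomeEmail(string address)
    {
        // Hidden dependency resolved via the service locator
        var sender = ObjectFactory.GetInstance<IEmailSender>();
        sender.Send(address, "Welcome!");
    }
}
```

Swapping IoC containers, or unit testing this class, now means dealing with the static ObjectFactory rather than simply passing in a fake IEmailSender.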

Bye-bye ObjectFactory. Now what alternatives do I have?

Generally speaking you should always favour constructor injection for dependencies; Martin Fowler sums this up quite nicely in this post where he says:

Constructors with parameters give you a clear statement of what it means to create a valid object in an obvious place. If there's more than one way to do it, create multiple constructors that show the different combinations.

ASP.NET MVC and WebAPI provide you with the necessary endpoints to achieve this in the majority of cases, such as controllers and any dependencies within them. However there are certain areas of ASP.NET MVC and WebAPI where constructor injection isn't quite so simple, and in some circumstances impossible. So at this point we need to re-evaluate where we're using the ObjectFactory, because in some instances we can remove the reliance on it from our code. A prime example of this is attributes:

Dependency Injection Friendly Attributes

One area you frequently see the use of StructureMap's ObjectFactory is within attributes, and whilst it's encouraged that your attributes do not contain behaviour (something injecting dependencies into an attribute often hints at), there are means of harnessing constructor injection within attributes, as described in Mark Seemann's Passive Attributes blog post, without introducing behaviour into your attributes.
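To give a flavour of the approach, here's a minimal sketch of a passive attribute in ASP.NET MVC. The AuditedAttribute and IAuditLogger names are made up for the example; the key point is that the attribute itself is just a marker, while the behaviour lives in a filter that can use plain constructor injection:

```csharp
// A "passive" attribute: it carries no behaviour and no dependencies,
// it simply marks actions that should be audited.
[AttributeUsage(AttributeTargets.Method)]
public class AuditedAttribute : Attribute { }

// The behaviour lives in a globally-registered filter, which takes its
// dependencies (IAuditLogger here is hypothetical) via the constructor and
// checks for the marker attribute at runtime.
public class AuditFilter : IActionFilter
{
    private readonly IAuditLogger _logger;

    public AuditFilter(IAuditLogger logger)
    {
        _logger = logger;
    }

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var audited = filterContext.ActionDescriptor
            .GetCustomAttributes(typeof(AuditedAttribute), inherit: true)
            .Length > 0;

        if (audited)
            _logger.Log(filterContext.ActionDescriptor.ActionName);
    }

    public void OnActionExecuted(ActionExecutedContext filterContext) { }
}
```

Because the filter is created by the container, its dependencies flow in through the constructor as normal and the attribute stays completely inert.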

If no other form of dependency injection is possible

If you've exhausted all possible avenues for managing your class's dependencies without the need for a service locator, then unfortunately the only option you're left with is to create your own instance of the ObjectFactory, as StructureMap's creator Jeremy Miller stated in the following response to this question on Twitter.

Assembly scanning isn’t cheap and you (almost?) always wanna cache the results. So, yeah, in that case you’re going to have to write your own ObjectFactory. Someday all that bad old MS tech will go away.

With this advice coming from the horse's mouth so to speak, the following is a simple implementation (taken from this StackOverflow answer, following a discussion with the answer's author on the creation of a container) of what the ObjectFactory is essentially doing that you can use in your code.

public static class ObjectFactory
{
    private static readonly Lazy<Container> _containerBuilder =
        new Lazy<Container>(DefaultContainer, LazyThreadSafetyMode.ExecutionAndPublication);

    public static IContainer Container
    {
        get { return _containerBuilder.Value; }
    }

    private static Container DefaultContainer()
    {
        return new Container(x =>
        {
            // Load your configuration here
        });
    }
}

I would suggest you adapt the code above to suit your needs to prevent yourself duplicating your StructureMap configuration.
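Usage then looks much the same as the old static API. For example, resolving a hypothetical IEmailSender service that has been registered in the container's configuration:

```csharp
// Resolve a service through our hand-rolled ObjectFactory's container;
// IEmailSender is a hypothetical service registered in the configuration.
var emailSender = ObjectFactory.Container.GetInstance<IEmailSender>();
emailSender.Send("user@example.com", "Hello!");
```

The Lazy<Container> wrapper means the container (and its relatively expensive assembly scanning) is built once, on first access, in a thread-safe manner.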


People have questioned why the ObjectFactory must be removed from StructureMap, suggesting that it should be left in and used at the user's own peril; however I'm in favour of removing it as it makes you stop and think about your application's architecture. By removing the ObjectFactory we're also forced to further decouple our application from StructureMap by creating our own service locator, meaning if you ever decide to switch to another IoC container then you won't have to go through your application picking out the ObjectFactory.

My first code contributions to an open source project

Posted on Monday, 30 Mar 2015

Release 2.5.0 of the Shouldly assertion framework is a bit of a significant release for me. Not because it introduces some great enhancements to what's already a fantastic library, but because it marks what I feel is my first real contribution to an open source project.

I recently blogged about my targets for 2015, one of which was to make a concerted effort to get more involved in the open source community; more specifically - to start contributing and give back to projects that I've been using and taken so much from. Having been programming for a number of years now, why has it taken so long to get around to giving back?

What's taken so long?

I've been programming in one form or another for about a decade now, and professionally for 8 of the 10 years. I first started with PHP, then eventually moved onto writing Java for the purposes of Android development and have recently fallen in love with .NET and C#. During my time programming I'm ashamed to say that I've not contributed to any projects in what I would consider a meaningful manner. When I look at what's held me back it generally boils down to the following two reasons:

Focusing too much on personal side projects
Side projects are a great way to learn new technologies; however they come with their own set of downsides when it comes to your progression as a developer, these being:

  • You don't get the exposure to other people's code (and thus new techniques and alternative or better ways of doing things) that you do when collaborating on a project.
  • Unless you're working on a project you plan on open sourcing yourself AND it becomes hugely popular, you don't get feedback and critique on your code.

    Now don't get me wrong, I'm not saying working on side projects is a bad idea - side projects are a fantastic way to improve on your existing skillset. I'm merely saying that a variety of project types can lead to a more varied array of learning avenues.

Just not good enough (aka imposter syndrome):
Like many software developers I suffer from imposter syndrome:

Impostor syndrome is a psychological phenomenon in which people are unable to internalize their accomplishments. Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be.

When I look at some of the great open source projects I use, it's really difficult not to compare your ability as a software developer with those who can create and craft such an awesome piece of software. Ultimately this comparison can cause you to feel like you're not good enough to contribute to a project, in which case my thought process was often: "I'm not a good enough developer; maybe once I've improved I'll give contributing back a try".

My first pull request

In order to start contributing to a project you have to find a suitable open source project, and naturally it always helps if it's one you've already got experience with and use on a day-to-day basis. In this instance, whilst developing this blog, I stumbled upon a unit testing assertion library called Shouldly.

What is Shouldly?

Shouldly's focus is to provide a more natural assertion framework to developers, turning a normal assertion into a far more natural and readable alternative. Take the following example for instance:

// Your typical assertion
Assert.That(contestant.Points, Is.EqualTo(1337)); // Produces a rather unhelpful error message "Expected 1337 but was 0"

Compare the above example to Shouldly:

// The Shouldly way!
contestant.Points.ShouldBe(1337); // Produces "contestant.Points should be 1337 but was 0"

I don't know about you, but I know which test arrangement and test failure message I'd rather work with - and having been using Shouldly extensively for writing unit tests for my blog, it was the natural fit. What's more, there were plenty of "Up for Grabs" issues available on Shouldly's GitHub page that were suitable for someone new to the codebase, so I took my pick and got stuck in.

A few hours later, once I'd finished working on the chosen issue and re-run the unit tests, I proceeded to send my first pull request.

At this point I feel it's worth mentioning that Shouldly's author Jake Ginnivan has been a massive help in getting me to understand some of the more challenging aspects of Git (such as rebasing commits in order to keep a project's Git history in good shape). Jake was more than happy to help me with any questions or problems I had and offered advice without making me feel like I was a nuisance as I fumbled my way around Git.

Moving forward

Overall I found contributing to Shouldly an extremely rewarding experience that I plan on continuing to pursue. Hopefully once I've worked my way through the set of issues I'm currently tackling there will be more that I'll be able to pick up.

If anyone reading this is considering taking their first steps in contributing to open source projects then I would most definitely recommend it. Start small like I did by picking relatively easy tasks and then start to work your way up.

Presentation from a recent Introduction to Typescript talk I gave

Posted on Thursday, 26 Mar 2015

Recently I stumbled upon a local web development meetup that I've started to attend. It's a great group of people from various backgrounds such as Java, PHP and .NET. The meetup is kindly hosted by a local performance and governance management software vendor who not only hosts the event but also supplies beer and pizza to the attendees.

At a recent meetup there was a call for volunteers to give a small 15 minute grok talk on any software development related subject that would suit the range of developers attending, so I decided to put myself forward and do a talk on TypeScript. Given the time constraints I decided not to go too deep into the internals of the language, instead opting to provide the attendees with a general overview of what TypeScript is, what problems it solves and how it can increase the maintainability of large JavaScript-heavy applications.

Overall I felt the talk went well, with a healthy number of questions asked at the end that I tried my best to answer. There was, however, one instance where I slipped up as a result of misunderstanding what was a very straightforward question; however this was the first talk I've given so I'm not going to be too harsh on myself. Giving the talk made me realise how much I enjoy talking about software development and has certainly made me keen to give more talks in the future.

The presentation can be found on my GitHub account or you can view the presentation slides on Google Drive here.

Changing StructureMap configuration at runtime

Posted on Tuesday, 24 Feb 2015

Lately I've been (re-)reading Mark Seemann's fantastic Dependency Injection in .NET book. I originally bought the book when I first started learning ASP.NET MVC and IoC container usage, and whilst I learned a lot from it at the time, the fact that I hadn't been exposed to IoC containers for long enough meant that some content didn't sink in. Ultimately this comes down to the fact that when you can relate things to real life experiences your ability to remember the solution is far better than if you've never run into the issue before.

Having been using ASP.NET MVC and taking full advantage of IoC containers for some time now, I've found going back through the book far more rewarding and beneficial as I now have those dependency management scars that I can relate to.

Fast forward a few days and as I'm performing my regular sweep of the StructureMap related questions on StackOverflow (I find spending time participating in the SO community by answering questions a great way to learn) I came across a question titled How to inject different instance(s) for different context in ASP.NET MVC using StructureMap?. Having read the question and the problem the user was tackling, it quickly became apparent that I'd just finished reading a chapter that demonstrated a viable solution to such a problem, and that I'd run into this very same problem in a different form before.

Before diving into the solution, let's look at the problem in a bit more depth.

A bit more about the problem

The problem the author of the post in question was experiencing was that they needed to inject a different type of data storage mechanism depending on configuration data. In this particular case the author wanted to switch a storage mechanism from a database to file storage at the flick of a switch. Whilst this is quite a specific problem, it's a common hurdle that anyone writing an application of any size will encounter eventually, and it ultimately boils down to: how do I configure StructureMap at runtime?

I've run into this exact same problem during previous projects in the following instances:

  • Switching an image store from database or file storage to CDN storage based on configuration.
  • Reading application settings from a database or your application's web.config.

The Solution

In order to switch data stores at runtime we need to be able to switch an interface's implementation on the fly, and a design pattern that can help us achieve this is the Abstract Factory pattern.

What is an abstract factory?
The abstract factory pattern provides a way to encapsulate a group of individual factories that have a common theme without specifying their concrete classes. In normal usage, the client software creates a concrete implementation of the abstract factory and then uses the generic interface of the factory to create the concrete objects that are part of the theme. The client doesn't know (or care) which concrete objects it gets from each of these internal factories, since it uses only the generic interfaces of their products. This pattern separates the details of implementation of a set of objects from their general usage and relies on object composition, as object creation is implemented in methods exposed in the factory interface.

Our abstract factory will be responsible for performing a runtime check to see whether the data store should be a database or the file system - or any other store for that matter. Once we know what type of store is to be used, our factory can create an instance of our storage implementation, which will implement an interface that all of our persistence options have to implement.

We will then hook our abstract factory up to StructureMap to ensure that our factory's dependencies can be injected and that our factory can return the correct storage implementation based on the runtime configuration settings. Let's begin!

First of all we need to create our Abstract factory:

public class DataStoreAbstractFactory
{
    private readonly IRepository _repository;

    // Dependencies required to determine the persistence implementation can be
    // injected here; they will be managed by our IoC container (StructureMap in this case)
    public DataStoreAbstractFactory(IRepository repository)
    {
        _repository = repository;
    }

    // This is the method that does most of the work: it determines our settings at
    // runtime and creates an instance of the appropriate storage implementation.
    // The configuration check below is illustrative.
    public IDataStoreInstance ConfigureStorage()
    {
        return ConfigurationManager.AppSettings["StorageType"] == "File"
            ? (IDataStoreInstance)new FileStorage(_repository)
            : new DatabaseStorage(_repository);
    }
}

You'll notice that our abstract factory returns an implementation of IDataStoreInstance; this interface is the contract that defines our storage implementation.

Below is a rather simplified example of our IDataStoreInstance interface and the data storage implementations, but it's enough to give you an idea of how it would work.

public interface IDataStoreInstance
{
    void Save();
}

public class DatabaseStorage : IDataStoreInstance
{
    public DatabaseStorage(IRepository dbStoreRepository) { }

    public void Save()
    {
        // Implementation details of persisting data in a database
    }
}

public class FileStorage : IDataStoreInstance
{
    public FileStorage(IRepository fileStoreRepository) { }

    public void Save()
    {
        // Implementation details of persisting data in a file system
    }
}

Registering our Abstract factory with our IoC Container

Now that we've set up our abstract factory and added all of the implementation details for creating the appropriate instance of our IDataStoreInstance interface depending on the configuration, all that's left to do is hook it up to our IoC container of choice. In this instance I'm using StructureMap as it's the one I'm most familiar with, but any IoC container will do the trick.

When we configure our IoC container we want to ensure that any instances of IDataStoreInstance are created by the ConfigureStorage method within our DataStoreAbstractFactory class; the code sample below demonstrates how we can do this in StructureMap:

public class StorageRegistry : Registry
{
    public StorageRegistry()
    {
        this.For<IDataStoreInstance>().Use(ctx => ctx.GetInstance<DataStoreAbstractFactory>().ConfigureStorage());
    }
}

By using StructureMap's GetInstance<> method we're able to let StructureMap manage the creation of our data store factory and the dependencies it relies on, allowing us to inject the configuration implementation that we'll use to check the storage type at runtime. These dependencies can then be passed into our IDataStoreInstance implementations.


Now, when we ask for an implementation of our IDataStoreInstance interface, our abstract factory is called into action and returns the appropriate data store implementation based on our configuration, and we can continue to program against our IDataStoreInstance interface without having to worry about the underlying implementation.

public class UpdateController : Controller
{
    public IDataStoreInstance StorageInstance { get; set; }

    public UpdateController(IDataStoreInstance storageInstance)
    {
        StorageInstance = storageInstance;
    }

    public ActionResult Index()
    {
        // Save via whichever storage implementation was configured at runtime
        StorageInstance.Save();
        return View();
    }
}






About Me

I'm a software and web application developer living in Somerset within the UK and I love eating and sleeping software/web development. I stay up stupidly late most nights programming and consider myself lucky that I get to do what I love doing for a living. If I’m not found with my family then you’ll see me at my keyboard writing some form of code.

Read More

Feel free to follow me on any of the channels below.


Recent Blog Posts