Latest Blog Posts

Tips on starting and running a .NET user group

Posted on Thursday, 22 Jun 2017

As someone who organises and runs the .NET South West user group in Bristol, I've had a number of people approach me, both online and in person, expressing an interest in starting a .NET focused meet up but unsure where to start; so much so that I thought it would be good to summarise the challenges and hurdles of running a meet up in a succinct blog post.

.NET South West meet up

Running a user group isn't a walk in the park, but it's not hard or particularly time consuming either. Hopefully this post will provide some valuable information and reassurance to those looking to create and foster a local .NET focused community of people wishing to learn, share and expand their knowledge, and meet local developers with similar passions and interests.

1. You can do it

The very first question that can play through your mind is whether you're capable of starting and running such a meet up. If you're having any self-doubts about whether you're knowledgeable enough to run a user group, or whether you have the confidence to organise it - don't.

Running a user group is an incredibly rewarding experience that starts off small and grows, and as it grows you grow with it. Everyone who attends a user group is there to learn, and that applies to the organiser(s) too. So don't let any hesitations or self-doubts get in your way.

2. Gauging the interest in a .NET user group

One of the first hurdles you face when starting a user group is trying to gauge the level of interest that exists in your local area.

I've found a great way to gauge interest is to simply create a group on the popular user group organising site meetup.com, stating that you're looking to see what the level of interest is like. You can create an event with no date, set the title to "To be announced" and leave it active for a few months. Meetup.com notifies people with similar interests of the new user group, and over time you'll start to see people joining the group waiting for the first meet to be announced. In the meantime your meet up page has a forum where you can start conversations with some of the new members, look for assistance or ask if anyone knows of a suitable venue.

This time is a great opportunity to get to know local developers before the meet up.

3. Having an online presence

When it comes to organising a meet up, websites like meetup.com make the whole process a breeze. The service isn't free (starting at $9.99 - see more here), but personally I would say it's worth it in terms of how much effort it saves you. Meetup provides services such as:

  • Posting and announcing meet ups to members
  • Sending out regular emails to your user group
  • Increasing your meet up's visibility to local developers on the platform
  • Semi-brandable pages (though this could be better)
  • The ability to add and link to sponsors within your meet up page

If you are on a budget then there are free alternatives, such as the free tier of Eventbrite, which you can link to from a simple website of your own.

4. Many hands make light work

Starting a meet up requires a lot of work, but once it's running the number of hours required to keep it ticking along reduces dramatically. That said, there are times when personal commitments make it difficult to focus on the meet up - so why not see if anyone else is interested in helping?

If you don't have any close friends or work colleagues who are interested in helping, you can mention that you're looking for help on the meet up page discussed previously. There's also nothing to stop you from talking to members once the meet up is under way to see if anyone is interested in helping out. If you do find people willing to help then why not create a Slack channel for your group so you can stay organised?

5. The Venue

Next up is the venue. This is often the most challenging part, as most venues will require payment for a room. All is not lost though, as there are a few options available to you:

  • Look for sponsorship to cover the cost of the venue - some companies (such as recruitment companies) are often open to the idea of sponsoring meet ups in one way or another for publicity. Naturally this all depends on your stance on having recruitment companies at your meet up.

  • Approach software companies to see if they are interested in hosting the event. Often you'll find software companies are geared up for hosting meet ups and happy to do so in exchange for interacting with the community (and potentially saving them recruitment costs).

  • Small pubs - I know of a few meet up organisers who host in a pub's back room; the venue is usually aware that a few people will stay behind for a drink afterwards, so it works in their favour too.

Ultimately you want to ensure you have consistency, so talk to the venue and make it clear that you're looking for a long-term arrangement.

6. Speakers

Once you've got your venue sorted, the next task you face (and this will be a regular one) is sourcing speakers. Luckily, finding speakers is often reasonably simple, and once your meet up becomes established you'll start to find speakers approaching you with an interest in giving a talk. I would also recommend looking at nearby meet ups for past speakers and making contact with them via Twitter. Networking at conferences is also a great way of finding potential speakers.

In addition to the aforementioned suggestions, Microsoft also have a handy Microsoft Evangelists page (UK only) for finding evangelists nearby who are often more than happy to travel to your user group to give a talk.

Finally, encourage attendees of your meet up to give talks. You're trying to foster a community, so try to drive engagement and ownership by opening up space for short 15 minute lightning talks.

7. Sponsorship / Competitions and Prizes

Once your user group is off the ground I would recommend reaching out to software companies to see if they provide sponsorship for meet ups in the shape of prize licences or extended trials - for instance, JetBrains are well known for their awesome community support programme which I'd highly recommend taking a look at.

Some companies require your meet up to be a certain size; others are more flexible on what they can provide, often being happy to ship swag such as stickers and t-shirts instead, which can be given away as prizes during your meet up (though if you're accepting swag from abroad then do be sure to clarify who pays any import tax so you don't get stung).

Swag and prizes aren't essential for a meet up, but it's something worth considering to spice things up a bit.

Go for it

Hopefully this post has given you some ideas if you are considering setting up a meet up. Organising and running a meet up is an extremely satisfying responsibility, and it's great seeing a community of developers coming together to share knowledge and learn from one another. So what are you waiting for? Go for it!

Retrospective of DDD 12 in Reading

Posted on Monday, 12 Jun 2017

Yesterday was my second time both attending and speaking at the DDD conference run out of the Microsoft offices in Reading, and as is tradition I like to finish the occasion by writing a short retrospective post highlighting the day.

Arrival

Living just two hours' drive from Reading I decided to drive up early in the morning with fellow Just Eater Stuart Lang who was also giving a talk. After arriving and grabbing our speaker polo shirts we headed to the speaker room to say hello to the other speakers - some of whom I know from previous conferences and through organising DDD South West.

Speakers' Room

I always enjoy spending time in the speakers' room. As a relative newcomer to speaking I find it's a great opportunity to get some tips from more experienced speakers, as well as geek out about programming. In addition, I still had some preparation to do, so it was a quiet place to refine and tweak my talk slides.

Talk 1 - Goodbye REST; Hello GraphQL by Sandeep Singh

Even though I had preparation to take care of for my post-lunch talk, I was determined to attend Sandeep Singh's talk on GraphQL, as it's a technology I've heard lots about via podcasts and have been keen to learn more about. In addition, working at Just Eat, where we have a lot of distributed services that are extremely chatty over HTTP, I was interested to see if and where GraphQL could help.

Having met Sandeep for the first time at DDD South West, he's clearly a knowledgeable guy, so I was expecting great things - and he delivered. The talk was very informative, and by the end Sandeep had demonstrated the power of GraphQL (along with the well-balanced considerations that need to be made) and answered the majority of questions I had forming in my notes as the talk progressed. It's definitely sparked my interest in GraphQL and I'm keen to start playing with it.

My talk - Building a better Web API architecture using CQRS

Having submitted two talks to DDD Reading (this and my Razor Deep Dive I delivered at DDD South West), the one that received the most votes was this talk, a topic and architectural style I've been extremely interested in for a number of years now (long before MediatR was a thing!).

Having spoken at DDD Reading before, this year my talk was held in a rather intimidating room called Chicago, which seats up to 90 and has a stage overlooking the attendees. All in all I was happy with the way the talk went, however I did burn through my slides far faster than I did during practice; luckily the attendees had plenty of questions, so I had ample opportunity to answer and expand on them with the remaining time. I'll chalk this down to experience and learn from it.

I must say that one of my concerns whilst preparing the talk was the split opinion around what CQRS really is, and how it differs from Bertrand Meyer's formulation of CQS (coincidentally, there was a healthy debate around the definition moments before my talk in the speakers' room between two speakers well-versed in the area!).

Talk 3 - Async in C# - The Good, the Bad and the Ugly by Stuart Lang

Having worked with Stuart for a year now and known him for slightly longer, it's clear that his knowledge of async/await is spot on and certainly deeper than that of any other developer I've met. Having seen a slightly different version of Stuart's talk delivered internally at Just Eat, I was familiar with the narrative; however, I was keen to attend this talk because C#'s async language construct is a deep area and one I'm interested in.

Overall the talk went really well, with a nice break in the middle allowing time for questions before moving on (something I may have to try in my own talks going forward).

Conclusion

Overall DDD 12 was an awesome day. I'd love to have attended more talks and spent more time speaking with people, but being keen to deliver my talk to the best of my ability I had work to do. Nonetheless, after my talk was over it was great catching up with familiar faces and meeting new people (easily one of my favourite parts of a conference, as I'm a bit of a chatterbox!).

I'll close this post with a massive thanks to the event organisers (having helped organise DDD South West this year I have full appreciation for the hard work and time that goes into organising such an event), and also the sponsors - without them the conference would not have been possible.

Until next year!

Another day, another fantastic DDD (DDD South West 7)

Posted on Friday, 12 May 2017

Saturday 6th of May marked the day of another great DDD South West event. Having attended other DDD events around the UK I've always felt they had a special community feel to them, a feeling I've not felt at other conferences. This year's DDD South West event was particularly special to me not only because I was selected to speak at the conference, but because this year I was part of the organisation team.

This post is just a short summary of the highs and lows of organising the conference and the day itself.

Organising the conference

This year I was honoured to be part of the organisation team for DDD South West, and I loved every minute of it. The other organisers and I (there were 5 of us in total, some of whom have been organising the conference for well over 5 or 6 years!) would hold regular meetings via Skype, breaking down responsibilities such as sponsorship, catering, speaker-related tasks and finance. Initially these meetings were about a month apart, but as the conference drew closer and the pressure started to set in, we would meet weekly.

During the organising of DDD South West I gained a true appreciation for the amount of effort conference organisers (especially those who run non-profit events in their spare time, such as the people I've had the pleasure of working with) put into organising an event for the community.

On the day itself everything went as planned with no hiccups, though as I was also speaking I was definitely a lot more stressed than I would have been otherwise. After the event we all headed over to the Just Eat offices for an after party, which I'll cover shortly.

For more information on the day, there are two fantastic write ups by Craig Phillips and Dan Clarke that I'd highly recommend reading.

ASP.NET Core Razor Deep Dive talk

Whilst organising DDD South West 7, I figured why not pile the stress on and submit a few sessions? Last year I caught the speaking bug and this year I was keen to continue to pursue it, so I submitted 3 sessions on very different subjects and was quite surprised to see the ASP.NET Core Razor Deep Dive selected. It's not the sexiest topic to talk about, but nonetheless it was a great opportunity to share some experience and give people information they could take away and directly apply in real life (something I always try to do when putting together a talk).

The talk itself was focused on the new Razor changes and features introduced in ASP.NET Core, why and where you'd use them. The topics included:

  • Razor as a view engine and its syntax
  • Tag helpers and how powerful they are
  • Creating your own tag helpers for some really interesting use cases - this part was especially great as I could see a lot of people in the audience having the "light bulb" moment. This brought me great joy, as I know there's something they were able to take away from the talk.
  • Why Partial Views and Child Actions are limiting
  • View Components and how you can use them to create more modular, reusable views
  • And finished off with the changes people can expect to see in Razor when ASP.NET Core 2.0 is released (ITagHelperComponents and Razor Pages)

Overall I was really happy with the talk and the turnout. The feedback I received was great, with lots of mentions of the word "engaging" (which as a relatively new speaker still trying to find his own style, is always positive to hear).

DDD South West 7 after party

Once all was done and dusted and the conference drew to a close, a large number of us took a five-minute stroll over to the Just Eat offices for an after party where free pizza and beer were on offer for the attendees (thanks, Just Eat!).

After a long day of ensuring the event was staying on track coupled with the stresses of talking, it was great to be able to unwind and enjoy spending time mingling with the various attendees and sponsors, all of whom made the event possible.

Bring on DDD South West 8!

C# 7 ValueTuple types and their limitations

Posted on Thursday, 20 Apr 2017

Having been experimenting and reading lots about C# 7's tuples recently I thought I'd summarise some of the more interesting aspects about this new C# language feature whilst highlighting some areas and limitations that may not be apparent when first using them.

Hopefully by the end of this post you'll have a firm understanding of the new feature and how it differs in comparison to the Tuple type introduced in .NET 4.

A quick look at tuple types as a language feature

Prior to C# 7, .NET's tuples were an awkward, somewhat retrofitted approach to what's a powerful language feature. As a result you don't see them used as much as they are in other languages like Python or, to some extent, Go (which doesn't support tuples as such, but has many of the features they provide, such as multiple return values) - with this in mind it behoves me to briefly explain what tuples are and why you'd use them, for those that may not have touched them before.

So what are tuples and where would you use them?

The tuple type's main strength lies in allowing you to group values into a closely related data structure (much like creating a class to represent more than one value). This means they're particularly useful in cases such as returning more than one value from a method, for instance:

public class ValidationResult {
    public string Message { get; set; }
    public bool IsValid { get; set; }
}

var result = ValidateX(x);
if (!result.IsValid)
{
    Logger.Log(Error, result.Message);
}

Whilst there's nothing wrong with this example, sometimes we don't want to have to create a type to represent a set of data - we want types to work for us, not against us; this is where the tuple type's utility lies.

In fact, in a language such as Go (which allows multiple return values from a function) we see this pattern used extensively throughout the standard library.

bytes, err := ioutil.ReadFile("file.json")
if err != nil {
    log.Fatal(err)
}

Multiple return types can also stop you from needing to use the convoluted TryParse method pattern with out parameters.
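
To make that concrete, here's a minimal sketch of the idea (ParseNumber is a hypothetical helper, not part of the framework):

// The familiar TryParse pattern with an out parameter:
if (int.TryParse(input, out int value))
{
    Console.WriteLine(value);
}

// A tuple-returning alternative (ParseNumber is a hypothetical helper):
(bool Success, int Value) ParseNumber(string text)
{
    return int.TryParse(text, out var number) ? (true, number) : (false, 0);
}

var result = ParseNumber(input);
if (result.Success)
{
    Console.WriteLine(result.Value);
}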

Now we've got that covered and we're all on the same page, let's continue.

In the beginning there was System.Tuple<T1, T2, .., T7>

Verbosity
Back in .NET 4 we saw the appearance of the System.Tuple<T> class which introduced a verbose and somewhat awkward API:

var person = new Tuple<string, int>("John Smith", 43);
Console.WriteLine($"Name: {person.Item1}, Age: {person.Item2}");
// Output: Name: John Smith, Age: 43

Alternatively there's a static factory method that cleared things up a bit:

var person = Tuple.Create("John Smith", 43);

But there was still room for improvement such as:

No named elements
One of the weaknesses of the System.Tuple type is that you have to refer to your elements as Item1, Item2 etc instead of by their 'named' version (allowing you to unpack a tuple and reference the properties directly) like you can in Python:

name, age = person_tuple
print name

Garbage collection pressure
In addition the System.Tuple type is a reference type, meaning you pay the penalty of a heap allocation, thus increasing pressure on the garbage collector.

public class Tuple<T1> : IStructuralEquatable, IStructuralComparable, IComparable

Nonetheless the System.Tuple type scratched an itch and solved a problem, especially if you owned the API.

C# 7 Tuples to the rescue

With the introduction of the System.ValueTuple type in C# 7, a lot of these problems have been solved (it's worth mentioning that if you want to use or play with the new tuple type today you're going to need to download the System.ValueTuple NuGet package).

Now in C# 7 you can do such things as:

Tuple literals

// This is awesome and really clears things up; we can even directly reference the named value!

var person = (Name: "John Smith", Age: 43);
Console.WriteLine(person.Name); // John Smith

Tuple (multiple) return types

(string, int) GetPerson()
{
    var name = "John Smith";
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Item1); // John Smith

Even named Tuple return types!

(string Name, int Age) GetPerson()
{
    var name = "John Smith";
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Name); // John Smith

If that wasn't enough you can also deconstruct types:

public class Person
{
    public string Name => "John Smith";
    public int Age => 43;

    public void Deconstruct(out string name, out int age)
    {
        name = Name;
        age = Age;
    }
}

...

var person = new Person();
var (name, age) = person;

Console.WriteLine(name); // John Smith

As you can see the System.ValueTuple greatly improves on the older version, allowing you to write far more declarative and succinct code.

It's a value type, baby!
In addition (if the name hadn't given it away!) C# 7's tuple type is now a value type, meaning there's no heap allocation and one less de-allocation to worry about when compacting the GC heap. This means the ValueTuple can be used in more performance-critical code.
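
As a quick, trivial sketch of what those value semantics mean in practice - assigning a tuple copies it rather than sharing a reference:

var a = (Name: "John Smith", Age: 43);
var b = a;   // copies the entire struct
b.Age = 50;

Console.WriteLine(a.Age); // 43 - the original is unchanged
Console.WriteLine(b.Age); // 50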

Now going back to our original example where we created a type to represent the return value of our validation method, we can delete that type (because deleting code is always a great feeling) and clean things up a bit:

var (message, isValid) = ValidateX(x);
if (!isValid)
{
    Logger.Log(Log.Error, message);
}

Much better! We've now got the same code without the need to create a separate type just to represent our return value.

C# 7 Tuple's limitations

So far we've looked at what makes the ValueTuple special, but in order to know the full story we should look at what limitations exist so we can make an educated decision on when and where to use them.

Let's take the same person tuple and serialise it to a JSON object. With our named elements we should expect to see an object that resembles our tuple.

var person = (Name: "John", Last: "Smith");
var result = JsonConvert.SerializeObject(person);

Console.WriteLine(result);
// {"Item1":"John","Item2":"Smith"}

Wait, what? Where have our keys gone?

To understand what's going on here we need to take a look at how ValueTuples work.

How the C# 7 ValueTuple type works

Let's take our GetPerson method example that returns a named tuple and check out the decompiled source. No need to install a decompiler for this; a really handy website called tryroslyn.azurewebsites.net will do everything we need.

// Our code
using System;
public class C {
    public void M() {
        var person = GetPerson();
        Console.WriteLine(person.Name + " is " + person.Age);
    }
    
    (string Name, int Age) GetPerson()
    {
        var name = "John Smith";
        var age = 32;
        return (name, age);
    }
}

You'll see that when de-compiled, the GetPerson method is simply syntactic sugar for the following:

// Our code de-compiled
public class C
{
    public void M()
    {
        ValueTuple<string, int> person = this.GetPerson();
        Console.WriteLine(person.Item1 + " is " + person.Item2);
    }
    [return: TupleElementNames(new string[] {
        "Name",
        "Age"
    })]
    private ValueTuple<string, int> GetPerson()
    {
        string item = "John Smith";
        int item2 = 32;
        return new ValueTuple<string, int>(item, item2);
    }
}

If you take a moment to look over the de-compiled source you'll see two areas of particular interest to us:

First of all, the named elements in our Console.WriteLine() call have gone, replaced with Item1 and Item2. What's happened to them? Looking further down the code you'll see they've actually been pulled out and added via the TupleElementNames attribute.

...
[return: TupleElementNames(new string[] {
    "Name",
    "Age"
})]
...

This is because the ValueTuple type's named elements are erased at compile time, meaning there's no runtime representation of them. In fact, if we were to view the IL (within the TryRoslyn website, switch the Decompiled dropdown to IL), you'll see any mention of our named elements has completely vanished!

IL_0000: nop // Do nothing (No operation)
IL_0001: ldarg.0 // Load argument 0 onto the stack
IL_0002: call instance valuetype [System.ValueTuple]System.ValueTuple`2<string, int32> C::GetPerson() // Call method indicated on the stack with arguments
IL_0007: stloc.0 // Pop a value from stack into local variable 0
IL_0008: ldloc.0 // Load local variable 0 onto stack
IL_0009: ldfld !0 valuetype [System.ValueTuple]System.ValueTuple`2<string, int32>::Item1 // Push the value of field of object (or value type) obj, onto the stack
IL_000e: ldstr " is " // Push a string object for the literal string
IL_0013: ldloc.0 // Load local variable 0 onto stack
IL_0014: ldfld !1 valuetype [System.ValueTuple]System.ValueTuple`2<string, int32>::Item2 // Push the value of field of object (or value type) obj, onto the stack
IL_0019: box [mscorlib]System.Int32 // Convert a boxable value to its boxed form
IL_001e: call string [mscorlib]System.String::Concat(object, object, object) // Call method indicated on the stack with arguments
IL_0023: call void [mscorlib]System.Console::WriteLine(string) // Call method indicated on the stack with arguments
IL_0028: nop  // Do nothing (No operation)
IL_0029: ret  // Return from method, possibly with a value

So what does that mean to us as developers?

No reflection on named elements

The absence of named elements in the compiled output means that it's not possible to retrieve those named elements via reflection, which limits the ValueTuple's utility.

This is because under the bonnet the compiler is erasing the named elements and reverting to the Item1 and Item2 properties, meaning our serialiser doesn't have access to the properties.
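
If you do need the names to survive serialisation, one simple workaround (just a sketch, and only sensible for small payloads) is to project the tuple onto an anonymous type first, since the element names are still available at compile time:

var person = (Name: "John", Last: "Smith");
var result = JsonConvert.SerializeObject(new { person.Name, person.Last });

Console.WriteLine(result);
// {"Name":"John","Last":"Smith"}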

I would highly recommend reading Marc Gravell's Exploring tuples as a library author post where he discusses a similar hurdle when trying to use tuples within Dapper.

No dynamic access to named elements

This also means that casting your tuple to a dynamic object results in the loss of the named elements. This can be witnessed by running the following example:

var person = (Name: "John", Last: "Smith");
var dynamicPerson = (dynamic)person;
Console.WriteLine(dynamicPerson.Name);

Running this results in the following RuntimeBinderException:

Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException: 'System.ValueTuple<string,string>' does not contain a definition for 'Name'
   at CallSite.Target(Closure , CallSite , Object )
   at CallSite.Target(Closure , CallSite , Object )
   at TupleDemo.Program.Main(String[] args) in /Users/josephwoodward/Dev/TupleDemo/Program.cs:line 16

Thanks to Daniel Crabtree's post for highlighting this!

No using named Tuples in Razor views either (unless they're declared in your view)

Naturally the name erasure in C# 7 tuples also means that you cannot use the names in your view from your view models. For instance:

public class ExampleViewModel {

    public (string Name, int Age) Person => ("John Smith", 30);

}
public class HomeController : Controller
{
    ...
    public IActionResult About()
    {
        var model = new ExampleViewModel();

        return View(model);
    }
}
// About.cshtml
@model TupleDemo3.Models.ExampleViewModel

<h1>Hello @Model.Person.Name</h1>

Results in the following error:

'ValueTuple<string, int>' does not contain a definition for 'Name' and no extension method 'Name' accepting a first argument of type 'ValueTuple<string, int>' could be found (are you missing a using directive or an assembly reference?)

Though switching the print statement to @Model.Person.Item1 outputs the result you'd expect.
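
Alternatively, a simple workaround is to keep the tuple out of the view altogether and surface plain properties on the view model instead (a sketch - the property names here are purely illustrative):

public class ExampleViewModel
{
    private (string Name, int Age) Person => ("John Smith", 30);

    // Plain properties survive into the view, unlike the tuple's element names
    public string PersonName => Person.Name;
    public int PersonAge => Person.Age;
}

The view can then print @Model.PersonName as normal.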

Conclusion

That's enough about tuples for now. Some of the examples used in this post aren't approaches you'd use in real life, but hopefully they demonstrate some of the limitations of the new type and where you can and can't use C# 7's new ValueTuple.

Setting up a local Selenium Grid using Docker and .NET Core

Posted on Monday, 20 Mar 2017

Since jumping on the Docker bandwagon I've found its utility spans beyond the repeatable deployments and consistent runtime environment benefits that come with the use of containers. There's a whole host of tooling and use cases emerging which take advantage of containerisation technology; one use case that I recently discovered, after a conversation with Phil Jones on Twitter, is the ability to quickly set up a Selenium Grid.

Setting up and configuring a Selenium Grid has never been a simple process, but thanks to Docker it's suddenly got a whole lot easier. In addition, you're now able to run your own Selenium Grid locally and greatly speed up your tests' execution. If that isn't enough, another benefit is that because the tests execute inside a Docker container, you'll no longer be blocked by the browser navigating the website you're testing!

Let's take a look at how this can be done.

Note: For the impatient, I've put together a working example of the following post in a GitHub repository you can clone and run.

Selenium Grid Docker Compose file

For those that haven't touched Docker Compose (or Docker for that matter), a Docker Compose file is a YAML-based configuration document (often named docker-compose.yml) that allows you to configure your application's Docker environment.

Without Docker Compose you'd need to build and run each container individually, specifying its network connections and configuration parameters along the way. With Docker Compose you can configure everything in a single file and start your environment with a single docker-compose up command.

Below is the Selenium Grid Docker Compose configuration you can copy and paste:

# docker-compose.yml

version: '2'
services:
    selenium_hub:
        image: selenium/hub:3.0.1-aluminum
        container_name: selenium_hub
        privileged: true
        ports:
            - 4444:4444
        environment:
            - GRID_TIMEOUT=120000
            - GRID_BROWSER_TIMEOUT=120000
        networks:
            - selenium_grid_internal

    nodechrome1:
        image: selenium/node-chrome-debug:3.0.1-aluminum
        privileged: true
        depends_on:
            - selenium_hub
        ports:
            - 5900
        environment:
            - no_proxy=localhost
            - TZ=Europe/London
            - HUB_PORT_4444_TCP_ADDR=selenium_hub
            - HUB_PORT_4444_TCP_PORT=4444
        networks:
            - selenium_grid_internal

    nodechrome2:
        image: selenium/node-chrome-debug:3.0.1-aluminum
        privileged: true
        depends_on:
            - selenium_hub
        ports:
            - 5900
        environment:
            - no_proxy=localhost
            - TZ=Europe/London
            - HUB_PORT_4444_TCP_ADDR=selenium_hub
            - HUB_PORT_4444_TCP_PORT=4444
        networks:
            - selenium_grid_internal

networks:
    selenium_grid_internal:

In the above Docker Compose file we've defined our Selenium Hub (selenium_hub) service, exposing it on port 4444 and attaching it to a custom network named selenium_grid_internal (which you'll see all of our nodes are on).

selenium_hub:
    image: selenium/hub:3.0.1-aluminum
    container_name: selenium_hub
    privileged: true
    ports:
        - 4444:4444
    environment:
        - GRID_TIMEOUT=120000
        - GRID_BROWSER_TIMEOUT=120000
    networks:
        - selenium_grid_internal

All that's remaining at this point is to add our individual nodes. In this instance I've added two Chrome based nodes, named nodechrome1 and nodechrome2:

nodechrome1:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    privileged: true
    depends_on:
        - selenium_hub
    ports:
        - 5900
    environment:
        - no_proxy=localhost
        - TZ=Europe/London
        - HUB_PORT_4444_TCP_ADDR=selenium_hub
        - HUB_PORT_4444_TCP_PORT=4444
    networks:
        - selenium_grid_internal

nodechrome2:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    ...

Note: If you wanted to add Firefox to the mix then you can replace the image: value with the following Docker image:

nodefirefox1:
    image: selenium/node-firefox-debug:3.0.1-aluminum
    ...

Now if we run docker-compose up you'll see our Selenium Grid environment will spring into action.

To verify everything is working correctly we can navigate to http://0.0.0.0:4444 in our browser where we should be greeted with the following page:

Connecting to Selenium Grid from .NET Core

At the time of writing this post the official Selenium NuGet package does not support .NET Standard, however there's a pending pull request which adds support (the pull request has been on hold for a while as the Selenium team wanted to wait for the tooling to stabilise). In the meantime, the developer who added support has released it as a separate NuGet package, which can be downloaded here.

Alternatively just create the following .csproj file and run the dotnet restore CLI command.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0-preview-20170123-02" />
    <PackageReference Include="xunit" Version="2.2.0-beta5-build3474" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0-beta5-build1225" />
    <PackageReference Include="CoreCompat.Selenium.WebDriver" Version="3.2.0-beta003" />
  </ItemGroup>

</Project>

Next we'll create the following base class that will create a remote connection to our Selenium Grid:

public abstract class BaseTest
{
    private IWebDriver _driver;

    public IWebDriver GetDriver()
    {
        var capability = DesiredCapabilities.Chrome();
        if (_driver == null){
            _driver = new RemoteWebDriver(new Uri("http://0.0.0.0:4444/wd/hub/"), capability, TimeSpan.FromSeconds(600));
        }

        return _driver;
    }
}

After that we'll create a very simple (and trivial) test that checks for the existence of an ID on google.co.uk.

public class UnitTest1 : BaseTest 
{

    [Fact]
    public void TestForId()
    {
        using (var driver = GetDriver())
        {
            driver.Navigate().GoToUrl("http://www.google.co.uk");
            var element = driver.FindElement(By.Id("lst-ib"));
            Assert.True(element != null);
        }
    }

    ...

}

Now if we run our test (either via the dotnet test CLI command or from your editor of choice) we should see the Docker terminal console showing our Selenium Grid container jump into action as it starts executing the test on one of the registered Selenium Grid nodes.

At the moment we're only executing the one test so you'll only see one node running the test, but as you start to add more tests across multiple classes the Selenium Grid hub will start to distribute those tests across its cluster of nodes, dramatically reducing your test execution time.

If you'd like to give this a try then I've added all of the source code and Docker Compose file in a GitHub repository that you can clone and run.

The drawbacks

Before closing there are a few drawbacks to this method of running tests, especially if you're planning on doing it locally (instead of setting a grid up on a Virtual Machine via Docker).

Debugging is made harder
If you're planning on using Selenium Grid locally then you'll lose visibility of what's happening in the browser, as the tests are running within a Docker container. This means that in order to see the state of the web page on a failing test you'll need to switch to local execution using the Chrome, Firefox or Internet Explorer driver.
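
As a rough sketch, the earlier BaseTest.GetDriver method could be tweaked to fall back to a local driver via a flag of your own (this assumes the Selenium Chrome driver package and the chromedriver binary are available on your machine):

public IWebDriver GetDriver(bool runLocally = false)
{
    if (runLocally)
    {
        // Runs Chrome on your own machine so you can watch the test and inspect failures
        return new ChromeDriver();
    }

    var capability = DesiredCapabilities.Chrome();
    return new RemoteWebDriver(new Uri("http://0.0.0.0:4444/wd/hub/"), capability, TimeSpan.FromSeconds(600));
}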

Reaching localhost from within a container
In this example we're executing the tests against an external domain (google.co.uk) that our container can resolve. However if you're planning on running tests against a local development environment then there will be some additional Docker configuration required to allow the container to access the Docker host's IP address.

Conclusion

Hopefully this post has broadened your options around Selenium based testing and demonstrated how pervasive Docker is becoming. I'm confident that the more Docker (and other container technology for that matter) matures, the more we'll see said technology being used for such use cases as we've witnessed in this post.

An in-depth look at the various ways of specifying the IP or host ASP.NET Core listens on

Posted on Friday, 10 Feb 2017

Recently I've been working on an ASP.NET Core application where I've needed to configure, at runtime, the host address and port the web host will listen on. For those that have built an ASP.NET Core app, you'll know that the default approach generated by the .NET CLI is less than ideal in this case, as it's hard-coded.

After a bit of digging I quickly realised there wasn't anywhere that summarised all of the options available, so I thought I'd bring them together in a post.

Enough talk, let's begin.

Don't set an IP

The first approach is to not specify any IP address at all (this means removing the .NET Core CLI template convention of using the .UseUrls() method). Without it the web host will listen on localhost:5000 by default.

Whilst this approach is far from ideal, it is an option so deserves a place here.
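
For reference, a minimal sketch of the bootstrap code with the .UseUrls() call simply removed - the web host then falls back to listening on localhost:5000:

var host = new WebHostBuilder()
    .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

host.Run();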

Hard-coded approach via .UseUrls()

As I alluded to earlier, the default approach that .NET Core's CLI uses is to hard-code the IP address in your application's program.cs file via the UseUrls(...) extension method that's available on the IWebHostBuilder interface.

If you take a look at the UseUrls extension method's signature you'll see the argument is a params string array, allowing you to specify more than one IP address for the web host to listen on (which may be preferable depending on your development machine or network configuration, as specifying more than one host address can save people running into issues between localhost vs 0.0.0.0 vs 127.0.0.1).

public static IWebHostBuilder UseUrls(this IWebHostBuilder hostBuilder, params string[] urls);

Adding multiple IP addresses can either be done by comma-separated strings, or a single string separated by a semi-colon; both result in the same configuration.

var host = new WebHostBuilder()
    .UseConfiguration(config)
    .UseKestrel()
    .UseUrls("http://0.0.0.0:5000", "http://localhost:5000")
    // .UseUrls("http://0.0.0.0:5000;http://localhost:5000") also works
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

If you'd rather not explicitly list every IP address to listen on you can use a wildcard instead resulting in the web host binding to all IPv4 and IPv6 IPs addresses on port 5000:

.UseUrls("http://*:5000")

A bit about wildcards: The wildcard is not special in any way, in fact anything not recognised as an IP will be bound to all IPv4 or IPv6 addresses, so http://@£$%^&*:5000 is considered the same as "http://*:5000" and vice versa.

Whilst this hard-coded approach makes it easy to get your application up and running, the very fact that it's hard-coded does make it difficult to configure externally via automation such as a continuous integration/deployment pipeline.

Note: It's worth mentioning that setting a binding IP address directly on the WebHost as we are in this approach always takes preference over any of the other approaches listed in this post, but we'll go into this later.

Environment variables

You can also specify the IP your application listens on via an environment variable. To do this you'll first need to add the Microsoft.Extensions.Configuration.EnvironmentVariables package from NuGet, then call the AddEnvironmentVariables() extension method on your ConfigurationBuilder object like so:

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .AddEnvironmentVariables()
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Now if you were to set the following environment variable and run your application, it will listen on the address specified:

ASPNETCORE_URLS=https://*:5123
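
On macOS or Linux you can also set the variable just for a single run from the shell (the URL here is purely illustrative):

ASPNETCORE_URLS="http://*:5123" dotnet run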

Command line argument

Another option available is to supply the host name and port via a command line argument when your application is initially executed (notice how you can use one or more addresses, as we did above).

dotnet run --urls "http://*:5000;http://*:6000"

or

dotnet YourApp.dll --urls "http://*:5000;http://*:6000"

Before you can use command line arguments, you're going to need the Microsoft.Extensions.Configuration.CommandLine package and update your Program.cs bootstrap configuration accordingly:

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Notice how I've removed the .UseUrls() method; this prevents .UseUrls() overwriting the IP address provided via the command line.

hosting.json approach

Another popular approach to specifying your host and port address is to read the IP address from a .json file during application boot up. Whilst you can name your configuration file anything, the common approach appears to be hosting.json, with the contents of the file containing the IP address you want your application to listen on:

{
  "urls": "http://*:5000"
}

In order to use this approach you're first going to need to include the Microsoft.Extensions.Configuration.Json package, allowing you to load configurations via .json documents.

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("hosting.json", optional: true)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Now when you run the dotnet run or dotnet YourApp.dll command you'll notice the output reflects the address specified within your hosting.json document.

Just a reminder that before publishing your application, be sure to include your hosting file in your publishing options (in either your project.json or your .csproj file):

// project.json

"publishOptions": {
    "include": [
      "wwwroot",
      "Views",
      "appsettings.json",
      "web.config",
      "hosting.json"
    ]
}
// YourApp.csproj

<ItemGroup>
  <Content Update="wwwroot;Views;appsettings.json;web.config;hosting.json">
    <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
  </Content>
</ItemGroup>

Out of all of the approaches available this has to be my most preferred option. It's simple enough to overwrite or modify within your test/release pipeline whilst removing hurdles co-workers may need to jump through when wanting to download your source code and run the application (as opposed to the command line approach).

Order of preference

When it comes down to the order the IP addresses are loaded, I would recommend you check out the documentation here, especially this snippet:

You can override any of these environment variable values by specifying configuration (using UseConfiguration) or by setting the value explicitly (using UseUrls for instance). The host will use whichever option sets the value last. For this reason, UseIISIntegration must appear after UseUrls, because it replaces the URL with one dynamically provided by IIS. If you want to programmatically set the default URL to one value, but allow it to be overridden with configuration, you could configure the host as follows:

var config = new ConfigurationBuilder()
   .AddCommandLine(args)
   .Build();

var host = new WebHostBuilder()
   .UseUrls("http://*:1000") // default URL
   .UseConfiguration(config) // override from command line
   .UseKestrel()
   .Build();

Conclusion

Hopefully this post helped you gain a better understanding of the many options available to you when configuring what IP address you want your application to listen on; writing it has certainly helped me cement them in my mind!

C# IL Viewer for Visual Studio Code using Roslyn side project

Posted on Monday, 30 Jan 2017

For the past couple of weeks I've been working on an IL (Intermediate Language) Viewer for Visual Studio Code. As someone that develops on a Mac, I spend a lot of time doing C# in VS Code or JetBrains' Rider editor - however neither of them have the ability to view the IL generated (I know JetBrains are working on this for Rider) so I set out to fix this problem as a side project.

As someone who had never written a Visual Studio Code extension before, it was a bit of an ambitious first extension, but enjoyable nonetheless.

Today I released the first version of the IL Viewer (0.0.1) to the Visual Studio Code Marketplace so it's available to download and try via the link below:

Download IL Viewer for Visual Studio Code

Download C# IL Viewer for Visual Studio Code or install it directly within Visual Studio Code by launching Quick Open (CMD+P on Mac or CTRL+P on Windows), pasting in the following command and pressing enter.

ext install vscodeilviewer

The source code is all up on GitHub so feel free to take a look, but be warned - it's a little messy right now as it was hacked together to get it working.

VS Code IL Viewer

For those interested in how it works, continue reading.

How does it work?

Visual Studio Code and HTTP Service

At its heart, Visual Studio Code is a glorified text editor. C# support is added via the hard work of the OmniSharp developers, which itself is backed by Roslyn. This means that in order to add any IL inspection capabilities I needed to either hook into OmniSharp or build my own external service that gets bundled within the extension. In the end I decided to go with the latter.

When Visual Studio Code loads, detects that the language is C# and finds a project.json file, it starts an external HTTP service (using Web API) which is bundled within the extension.

Moving forward I intend to switch this out for a console application communicating over stdin and stdout. This should speed up the overall responsiveness of the extension whilst reducing the resources required, but more importantly reduce the start up time of the IL viewer.

Inspecting the Intermediate Language

Initially I planned on making the Visual Studio Code IL Viewer extract the desired file's IL directly from its project's built DLL, however after a little experimentation this proved not to be ideal, as it required the solution to be built in order to view the IL, and built again for any inspection after changes, no matter how minor. It would also block the user from doing any work whilst the project was building.

In the end I settled on an approach that builds just the .cs file you wish to inspect into an in memory assembly then extracts the IL and displays it to the user.

Including external dependencies

One problem with compiling just the source code in memory is that it doesn't include any external dependencies. As you'd expect, as soon as Roslyn encounters a reference to an external binary you get a compilation error. Luckily Roslyn has the ability to automatically include external dependencies via its workspace API.

public static Compilation LoadWorkspace(string filePath)
{
    var projectWorkspace = new ProjectJsonWorkspace(filePath);

    var project = projectWorkspace.CurrentSolution.Projects.FirstOrDefault();
    var compilation = project.GetCompilationAsync().Result;

    return compilation;
}

After that the rest was relatively straightforward: I grab the syntax tree from the compilation unit of the project, then load in any additional dependencies before using Mono's Cecil library to extract the IL (as Cecil supports .NET Standard it does not require the Mono runtime).
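
As a rough sketch of that last step (not the extension's exact code), the compilation can be emitted to an in-memory stream and handed straight to Cecil:

using (var stream = new MemoryStream())
{
    var emitResult = compilation.Emit(stream);
    if (!emitResult.Success)
    {
        throw new InvalidOperationException("Compilation failed");
    }

    stream.Seek(0, SeekOrigin.Begin);

    // Read the in-memory assembly with Cecil and dump each method's IL
    var assembly = AssemblyDefinition.ReadAssembly(stream);
    foreach (var type in assembly.MainModule.Types)
    {
        foreach (var method in type.Methods)
        {
            if (!method.HasBody) continue;

            foreach (var instruction in method.Body.Instructions)
            {
                Console.WriteLine(instruction);
            }
        }
    }
}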

Once I have the IL I return the contents as an HTTP response, then display it to the user in Visual Studio Code's split pane window.

Below is a simplified diagram of how it's all tied together:

IL Viewer

As I mentioned above, the source code is all available on GitHub so feel free to take a look. Contributions are also very welcome!

Year in review and looking at 2017

Posted on Tuesday, 10 Jan 2017

It's that time of the year again where I take a moment to reflect on the previous year and set some high level goals as to what I want to achieve or aim towards in the next.

I find these types of post really helpful for tracking progress and ensuring I stay on a positive trajectory as a software developer/engineer - a hobby that I'm lucky enough to have as a career.

Review of 2016

Last year was an amazing year for a couple of reasons, some planned and some not so. So, without further ado let's look at last year's personal targets and goals to see how I've done.

Goal 1: Speak at more conferences and meet-ups

I'm inclined to say I've smashed this. At the time of writing last year's post (February 2016) my speaking activities included relatively small user groups and meet ups. Since that post I've spoken at a few more meet ups but also at some rather large conferences which has been amazing.

The talks include:

  • Why software engineering is a great career choice
    TechSpark Graduates Conference 2016 - 1st December 2016

  • Going cross-platform with ASP.NET Core
    BrisTech 2016 - 3rd November 2016

  • Going cross-platform with ASP.NET Core
    Tech Exeter Conference - 8th October 2016

  • Building rich client-side applications using Angular 2
    DDD 11 (Reading) - 3rd September 2016

  • .NET Rocks Podcast - Angular 2 CLI (Command Line Interface)
    24th August 2016

  • Angular 2
    BristolJS - May 25th, 2016

  • Angular 2
    Taunton Developers' Meetup - June 9th, 2016

It's a really great feeling reflecting on my very first talk (a lightning talk on TypeScript) and how nervous I was in comparison to now. Don't get me wrong, the nerves are still there, but as many will know - you're just able to cope with them better.

All in all, I'm extremely satisfied with how far I've progressed in this area and how much my confidence in public speaking has grown. This is certainly something I wish to continue in 2017.

Goal 2: Start an Exeter based .NET User Group

At the time of setting this goal I was working in Exeter, where there was no .NET focused user group, something that surprised me given the number of .NET specific jobs in the city.

To cut a long story short, I'd started an Exeter .NET user group and was in the process of organising the first meet up when I got a job opportunity at Just Eat in Bristol, and ended up taking over the Bristol based .NET South West user group as the organiser was stepping down and closing the rather large, well-established group. Having been to a couple of its meet ups it would have been a shame to see it end, and given that I was now working in Bristol I decided to step forward and take over, along with a couple of the other members.

Since then we (the other organisers and I) have been really active in keeping .NET South West alive and well, organising a range of speakers on a variety of .NET related topics.

Goal 3: Continue contributing to the .NET OSS ecosystem

This year I've created a number of small open-source libraries and projects (an Angular 2 piano sight reading game being the one that's received the most attention). However, whilst I've been contributing on and off to other libraries, I don't feel my contributions have been at a level that I'm happy with, so this will be a goal for 2017.

Bonus Goal - F#

Making a start on learning F# was one of the bonus goals I was hoping to achieve during 2016; however, other than making a few changes to a couple of lines of F#, this was a no-go.

In all honesty, I'm still on the fence as to whether I want to learn F# - my only motivation being to learn a functional language (growing my knowledge and thinking in a different paradigm), whereas there are other languages I'm interested in learning that aren't necessarily tied to the .NET ecosystem.

Other notable events and achievements in 2016

In addition to the aforementioned goals and achievements in 2016, there have also been others.

  • New job as a .NET software engineer at Just Eat in Bristol - a seriously awesome company that has a lot of really talented developers and interesting problems to work on.
  • Co-organiser of DDD South West conference
  • Heavy investment of time in learning .NET Core and Docker
  • Became co-organiser of the .NET South West user group

Goals for 2017

With a review of 2016 out of the way let's take a quick look at plans for 2017.

Goal 1: Continue to grow as a speaker, speaking at larger events

I've really loved speaking at meet ups and conferences; it's a truly rewarding experience on both a personal and professional development level. There's little more satisfying in life than pushing yourself outside of your comfort zone, so in 2017 I'm really keen to continue to pursue this by speaking at larger conferences and events.

Goal 2: More focus on contributing to open-source projects

Whilst I'm satisfied with my contributions to the open-source world, both with personal projects and contributions to other projects, it's definitely an area I would like to continue to pursue. So in 2017 I'm looking for larger projects I can invest in and contribute to on a long-term basis.

Goal 3: Learn another language

Whereas I previously set myself a 2016 goal of learning F#, this time around I'm going to keep my options open. I've recently been learning a little Go, but I'm also really interested in Rust, so this year I'm simply going to set a goal to learn a new language. As it stands, it looks like it's between Go and Rust, with F# still a possibility.

Conclusion

Overall it's been a great year. I'm really keen to keep the pace up on public speaking, as it's far too easy to rest on one's laurels - so here's to 2017 and the challenges it brings!

In-memory C# compilation (and .dll generation) using Roslyn

Posted on Wednesday, 28 Dec 2016

Recently I've been hard at work on my first Visual Studio Code extension, and one of the requirements is to extract IL from a .dll binary. This raises a question though: do I build the solution (blocking the user whilst their project is building), read the .dll from disk and then extract the IL, or do I compile the project in memory behind the scenes and then stream the resulting assembly? Ultimately I went with the latter approach and was pleasantly surprised at how easy Roslyn makes this - surprised enough that I thought it deserved its own blog post.

Before we continue, let me take a moment to explain what Roslyn is for those that may not fully understand what it is.

What is Roslyn?

Roslyn is an open-source C# and VB compiler as a service platform.

The key words to take away with you here are "compiler as a service"; let me explain.

Traditionally compilers have been a black box of secrets that are hard to extend or harness, especially for tooling or code analysis purposes. Take ReSharper for instance; ReSharper has a lot of code analysis running under the bonnet that allows it to offer refactoring advice. In order for the ReSharper team to provide this they had to build their own analysis tools that would manually parse your solution's C# in line with the .NET runtime - the .NET platform provided no assistance with this, essentially meaning they had to duplicate a lot of the work the compiler was doing.

This has since changed with the introduction of Roslyn. For the past couple of years Microsoft have been rewriting the C# compiler in C# (I know, it's like a compiler Inception right?) and opening it up via a whole host of APIs that are easy to prod, poke and interrogate. This opening up of the C# compiler has resulted in a whole array of code analysis tooling such as better StyleCop integration and debugging tools like OzCode and the like. What's more, you can also harness Roslyn for other purposes such as tests that fail as soon as common code smells are introduced into a project.

Let's start

So now we all know what Roslyn is, let's take a look at how we can use it to compile a project in memory. In this post we will be taking some C# code written in plain text, turning it into a syntax tree that the compiler can understand, then using Roslyn to compile it, resulting in an in-memory assembly we can stream.

Create our project

In this instance I'm using .NET Core on a Mac, but this will also work on Windows, so let's begin by creating a new console application using the .NET Core CLI.

dotnet new -t console

Now, add the following dependencies to your project.json file:

"dependencies": {
    "Microsoft.CodeAnalysis.CSharp.Workspaces": "1.3.2",
    "Mono.Cecil": "0.10.0-beta1-v2",
    "System.ValueTuple": "4.3.0-preview1-24530-04"
},

For those interested, here is a copy of the project.json file in its entirety:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.CodeAnalysis.CSharp.Workspaces": "1.3.2",
    "Mono.Cecil": "0.10.0-beta1-v2",
    "System.ValueTuple": "4.3.0-preview1-24530-04"
  },
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        }
      },
      "imports": "portable-net45+win8+wp8+wpa81"
    }
  }
}

Once we've restored our project using the dotnet restore command, the next step is to write a simple program containing the source code we want to compile. This code could just as easily be read from a web form, a database or a file on disk; in this instance I'm hard-coding it into the application itself for simplicity.

// SyntaxTree and SourceCodeKind live in the Microsoft.CodeAnalysis namespace
using Microsoft.CodeAnalysis;

public class Program {

    public static void Main(string[] args)
    {

        var code = @"
        using System;
        public class ExampleClass {
            
            private readonly string _message;

            public ExampleClass()
            {
                _message = ""Hello World"";
            }

            public string getMessage()
            {
                return _message;
            }

        }";

        CreateAssemblyDefinition(code);
    }

    public static void CreateAssemblyDefinition(string code)
    {
        var sourceLanguage = new CSharpLanguage();
        SyntaxTree syntaxTree = sourceLanguage.ParseText(code, SourceCodeKind.Regular);

        ...
    }

}

Getting stuck into Roslyn

Now we've got the base of our project sorted, let's dive into some of the Roslyn API.

First we're going to want to create an interface we'll use to define the language we want to use. In this instance it'll be C#, but Roslyn also supports VB.

public interface ILanguageService
{
    SyntaxTree ParseText(string code, SourceCodeKind kind);

    Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations);
}

Next we're going to need to parse our plain text C#, so we'll begin by working on the implementation of the ParseText method.

public class CSharpLanguage : ILanguageService
{
    private static readonly LanguageVersion MaxLanguageVersion = Enum
        .GetValues(typeof(LanguageVersion))
        .Cast<LanguageVersion>()
        .Max();

    public SyntaxTree ParseText(string sourceCode, SourceCodeKind kind) {
        var options = new CSharpParseOptions(kind: kind, languageVersion: MaxLanguageVersion);

        // Return a syntax tree of our source code
        return CSharpSyntaxTree.ParseText(sourceCode, options);
    }

    public Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations) {
        throw new NotImplementedException();
    }
}

As you'll see, the implementation is rather straightforward and simply involves setting a few parse options, such as the language features we expect to be parsing (set via the languageVersion parameter) along with the SourceCodeKind enum.

Looking further into Roslyn's SyntaxTree

At this point I feel it's worth mentioning that if you're interested in learning more about Roslyn then I would recommend spending a bit of time looking into Roslyn's Syntax Tree API; Josh Varty's posts on the subject are a great resource.

I would also recommend taking a look at LINQPad which, amongst other great features, has the ability to show you the syntax tree Roslyn generates for your code. For instance, here is the generated syntax tree for the ExampleClass code we're using in this post:

http://assets.josephwoodward.co.uk/blog/linqpad_tree2.png
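
If you'd rather explore the tree programmatically instead of via LINQPad, here's a minimal sketch that walks a parsed tree and prints each node's kind alongside its source text. It only assumes the Microsoft.CodeAnalysis.CSharp.Workspaces package we've already referenced; the SyntaxTreeExplorer name is just for illustration.

using System;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class SyntaxTreeExplorer
{
    public static void Print(string sourceCode)
    {
        SyntaxTree tree = CSharpSyntaxTree.ParseText(sourceCode);

        // Walk every node in the tree, printing its kind alongside its source text
        foreach (SyntaxNode node in tree.GetRoot().DescendantNodes())
        {
            Console.WriteLine($"{node.Kind(),-35} {node.ToString()}");
        }
    }
}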

Now our C# has been parsed and turned into a data structure the C# compiler can understand, let's look at using Roslyn to compile it.

Compiling our Syntax Tree

Continuing with the CreateAssemblyDefinition method, let's compile our syntax tree:

public static void CreateAssemblyDefinition(string code)
{
    var sourceLanguage = new CSharpLanguage();
    SyntaxTree syntaxTree = sourceLanguage.ParseText(code, SourceCodeKind.Regular);

    // References are supplied inside CreateLibraryCompilation (shown below),
    // so all we need to attach here is our syntax tree.
    Compilation compilation = sourceLanguage
      .CreateLibraryCompilation(assemblyName: "InMemoryAssembly", enableOptimisations: false)
      .AddSyntaxTrees(syntaxTree);

    ...
}

At this point we're going to want to fill in the implementation of our CreateLibraryCompilation method within our CSharpLanguage class. We'll start this by passing the appropriate arguments into an instance of CSharpCompilationOptions. This includes:

  • outputKind - We're outputting a Dynamically Linked Library (.dll)
  • optimizationLevel - Whether we want our C# output to be optimised
  • allowUnsafe - Whether we want our C# code to allow the use of unsafe code or not

public class CSharpLanguage : ILanguageService
{
    private readonly IReadOnlyCollection<MetadataReference> _references = new[] {
          MetadataReference.CreateFromFile(typeof(Binder).GetTypeInfo().Assembly.Location),
          MetadataReference.CreateFromFile(typeof(ValueTuple<>).GetTypeInfo().Assembly.Location)
      };

    ...

    public Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations) {
      var options = new CSharpCompilationOptions(
          OutputKind.DynamicallyLinkedLibrary,
          optimizationLevel: enableOptimisations ? OptimizationLevel.Release : OptimizationLevel.Debug,
          allowUnsafe: true);

      return CSharpCompilation.Create(assemblyName, options: options, references: _references);
  }
}

Now we've specified our compiler options, we invoke the Create factory method, passing in the name we want our in-memory assembly to have (InMemoryAssembly in our case, supplied when calling our CreateLibraryCompilation method), along with any additional references required to compile our source code. In this instance, as we're targeting C# 7, we need to supply the compilation with the ValueTuple struct implementation; if we were targeting an older version of C# this would not be required.

All that's left to do now is to call Roslyn's Emit(Stream stream) method, passing in a stream for the compiled assembly to be written to, and we're sorted!

public static void CreateAssemblyDefinition(string code)
{
    ...

    Compilation compilation = sourceLanguage
        .CreateLibraryCompilation(assemblyName: "InMemoryAssembly", enableOptimisations: false)
        .AddSyntaxTrees(syntaxTree);

    var stream = new MemoryStream();
    var emitResult = compilation.Emit(stream);
    
    if (emitResult.Success){
        stream.Seek(0, SeekOrigin.Begin);
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(stream);
    }
}

From here I'm then able to pass my AssemblyDefinition to a method that extracts the IL and I'm good to go!
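
If the compilation fails, the EmitResult also carries the compiler diagnostics, which are well worth surfacing rather than silently ignoring. Here's a minimal sketch of how that handling might look (the console output format is just an illustrative choice):

if (!emitResult.Success)
{
    // Write out each error or warning the compiler produced during emit
    foreach (Diagnostic diagnostic in emitResult.Diagnostics)
    {
        Console.WriteLine($"{diagnostic.Id}: {diagnostic.GetMessage()} ({diagnostic.Location})");
    }
}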

Conclusion

Whilst this post is quite narrow in its focus (I can't imagine everyone is looking to compile C# in memory!), hopefully it's served as a primer and piqued your interest in Roslyn and what it's capable of. Roslyn is a truly powerful platform that I wish more languages offered. As mentioned before, there are some great resources available that go into much more depth - I would especially recommend Josh Varty's posts on the subject.

In-memory testing using ASP.NET Core

Posted on Tuesday, 06 Dec 2016

A fundamental problem with integration testing, compared to finer-grained tests such as unit tests, is that in order to test your component or application you need to spin up a running instance of it so you can reach it over HTTP, run your tests against it, and then spin it down afterwards.

Spinning up instances of your application can lead to a lot of additional work when it comes to running your tests within any type of continuous deployment or delivery pipeline. This has certainly become easier with the introduction of the cloud, but it still requires a reasonable investment of time and effort to set up, as well as slowing down your deployment/delivery pipeline.

An alternative approach to running your integration or end-to-end tests is to utilise in-memory testing. This is where your application is spun up in memory via an in-memory server and has the tests run against it. An additional benefit to running your tests this way is you're no longer testing any of your host OS's network stack either (which in most cases will be configured differently to your production server's stack anyway).

TestServer package

Thankfully in-memory testing can be performed easily in ASP.NET Core thanks to the Microsoft.AspNetCore.TestHost NuGet package.

Let's take a moment to look at the TestServer API exposed by the TestHost library:

public class TestServer : IServer, IDisposable
{
    public TestServer(IWebHostBuilder builder);

    public Uri BaseAddress { get; set; }
    public IWebHost Host { get; }

    public HttpClient CreateClient();
    public HttpMessageHandler CreateHandler();
    
    public RequestBuilder CreateRequest(string path);
    public WebSocketClient CreateWebSocketClient();
    public void Dispose();
}

As you'll see, the API exposes everything we'll need to spin our application up in memory.

For those that are regular readers of my blog, you'll remember we used the same TestServer package to run integration tests on middleware back in July. This time we'll be using it to run our Web API application in memory and run our tests against it. We'll then assert that the response received is expected.

Enough talk, let's get started.

Running Web API in memory

Setting up Web API

In this instance I'm going to be using ASP.NET Core Web API. In my case I've created a small Web API project using the ASP.NET Core Yeoman project template. You'll also note that I've stripped a few things out of the application to make the post easier to follow. Here are the few files that really matter:

Startup.cs (nothing out of the ordinary here)

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        app.UseMvc();
    }
}

ValuesController.cs

[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpGet]
    public string Get()
    {
        return "Hello World!";
    }
}

All we've got here is a simple Web API application that returns a single "Hello World!" value from ValuesController when you fire a GET request at /api/values/.

Running Web API in memory

At this point I've created a test project alongside my Web API one and added the Microsoft.AspNetCore.TestHost package to my test project's project.json file.

project.json

...
"dependencies": {
    "dotnet-test-xunit": "2.2.0-preview2-build1029",
    "xunit": "2.2.0-beta2-build3300",
    "Microsoft.AspNetCore.TestHost": "1.0.0",
    "TestWebAPIApplication":{
        "target":"project"
    }
},
...

Next, we'll create our first test class and bootstrap our Web API project. Pay particular attention to our web application's Startup class being passed into the WebHostBuilder's UseStartup<T> method. You'll notice this is exactly the same way we bootstrap our application within Program.cs (the bootstrap file we use when running our application normally).

public class ExampleTestClass
{
    private IWebHostBuilder CreateWebHostBuilder(){
        var config = new ConfigurationBuilder().Build();
        
        var host = new WebHostBuilder()
            .UseConfiguration(config)
            .UseStartup<Startup>();

        return host;
    }

    ...
}

Writing our test

At this point we're ready to write our test, so let's create a new instance of TestServer, which takes an instance of IWebHostBuilder.

public TestServer(IWebHostBuilder builder);

As you can see from the following trivial example, we're simply capturing the response from the controller invoked when calling /api/values, which in our case is the ValuesController.

[Fact]
public async Task PassingTest()
{
    var webHostBuilder = CreateWebHostBuilder();
    var server = new TestServer(webHostBuilder);

    using(var client = server.CreateClient()){
        var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/api/values/");
        var responseMessage = await client.SendAsync(requestMessage);

        var content = await responseMessage.Content.ReadAsStringAsync();

        Assert.Equal("Hello World!", content);
    }
}

Now, when we run our test, the Assert.Equal assertion should pass and we should see the test succeed.

Running test UnitTest.Class1.PassingTest...
Test passed 

Conclusion

Hopefully this post has given you enough insight into how you can run your application in memory for purposes such as integration or feature testing. Naturally there's a lot more you could do to simplify and speed up the tests by limiting the number of times TestServer is created.
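
As a closing example, here's a minimal sketch of one way to do that using an xUnit class fixture, so a single TestServer instance is created once and shared by every test in the class. The TestServerFixture and ValuesControllerTests names are just for illustration, and the usings assume the same packages as the test project above.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Xunit;

public class TestServerFixture : IDisposable
{
    public TestServer Server { get; }
    public HttpClient Client { get; }

    public TestServerFixture()
    {
        // Build and start the application in memory once for the whole test class
        var webHostBuilder = new WebHostBuilder()
            .UseStartup<Startup>();

        Server = new TestServer(webHostBuilder);
        Client = Server.CreateClient();
    }

    public void Dispose()
    {
        Client.Dispose();
        Server.Dispose();
    }
}

public class ValuesControllerTests : IClassFixture<TestServerFixture>
{
    private readonly TestServerFixture _fixture;

    public ValuesControllerTests(TestServerFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public async Task Get_ReturnsHelloWorld()
    {
        // Reuse the shared in-memory client rather than spinning up a new TestServer per test
        var response = await _fixture.Client.GetAsync("/api/values/");
        var content = await response.Content.ReadAsStringAsync();

        Assert.Equal("Hello World!", content);
    }
}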