Latest Blog Posts

GraphiQL in ASP.NET Core

Posted on Thursday, 10 Aug 2017

Having recently attended a talk on GraphQL and read GitHub's glowing post about their choice of GraphQL over REST for their API, I was interested in having a play to see what all of the fuss was about. For those that aren't sure what GraphQL is or where it fits in the stack, let me give a brief overview.

What is GraphQL?

GraphQL is a query language (hence the QL in the name) created and open-sourced by Facebook that allows you to query an API over HTTP (or other protocols, for that matter). Let me demonstrate with a simple example:

Say you're consuming a REST API on the following resource identifier:

GET service.com/user/1

The response returned from this URI is the following JSON object:

{
    "Id": 1,
    "FirstName": "Richard",
    "Surname": "Hendricks",
    "Gender": "Male",
    "Age": 31,
    "Occupation": "Pied Piper CEO",
    "RoleId": 5,
    ...
}

Now, as a mobile app developer calling this service, you're aware of the bandwidth constraints users face when connected to a mobile network, so returning the whole JSON blob when you're only interested in the FirstName and Surname properties is wasteful. This is called over-fetching data. GraphQL solves this by letting you, as the consumer, dictate your data needs, as opposed to having them forced upon you by the service.

This is a fundamental requirement that REST doesn't address (in fairness to REST, it never set out to solve this problem; however, as the internet has changed it's a problem that does exist).

This is where GraphQL comes in.

Using GraphQL we're given control as consumers to dictate what our data requirements are, so instead of calling the aforementioned URI, we POST a query to a GraphQL endpoint (often /graphql) in the following shape:

{
    Id,
    FirstName,
    Surname
}

Our Web API powered GraphQL server fulfills the request, returning the following response:

{
    "Id": 1,
    "FirstName": "Richard",
    "Surname": "Hendrix"
}

This also applies to under-fetching, which can best be described as having to make multiple calls to join data (following the above example, retrieving the RoleId only to then call another endpoint to get the Role information). In GraphQL's case, we could represent that with the following query, which would save us an additional HTTP request:

{
    Id,
    FirstName,
    Surname,
    Role {
        RoleName
    }
}

The GraphQL query language includes a whole host of other functionality, including static type checking, query functions and the like, so I would recommend checking it out when you can (or stay tuned for a later post I'm in the process of writing where I demonstrate how to set it up in .NET Core).
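To give a flavour of what the server side can look like in .NET, here's a minimal sketch assuming the graphql-dotnet library and a User class shaped like the JSON above (both are assumptions here; the later post will cover the full setup):

using GraphQL.Types;

// A hypothetical graph type describing which fields of our User class are queryable
public class UserType : ObjectGraphType<User>
{
    public UserType()
    {
        Field(x => x.Id);
        Field(x => x.FirstName);
        Field(x => x.Surname);
    }
}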

So what is GraphiQL?

Now you know what GraphQL is, GraphiQL (pronounced 'graphical') is a web-based JavaScript powered editor that allows you to query a GraphQL endpoint, taking full advantage of the static type checking and intellisense promised by GraphQL. You can consider it the Swagger of the GraphQL world.

In fact, I'd suggest taking a moment to go try a live example of GraphiQL here and see how GraphQL's static type system can help you discover the data that's available to you via the documentation and intellisense. GitHub also allows you to query your GitHub activity via their example GraphiQL endpoint too.

Introducing GraphiQL.NET

Traditionally, if you wanted to set this up you'd need to configure a whole host of Node modules and JavaScript files; however, given .NET Core's powerful middleware/pipeline approach, creating a GraphiQL middleware seemed like the obvious way to enable a GraphiQL endpoint.

Now you no longer need to worry about taking a dependency on Node or NPM in your ASP.NET Core solution and can instead add GraphiQL support via a simple middleware call using GraphiQL.NET (before continuing, I feel it's worth mentioning all of the code is up on GitHub).

Setting up GraphiQL in ASP.NET Core

You can install GraphiQL.NET by copying and pasting the following command into your Package Manager Console within Visual Studio (Tools > NuGet Package Manager > Package Manager Console).

Install-Package graphiql

Alternatively you can install it using the .NET Core CLI using the following command:

dotnet add package graphiql

From there, all you need to do is call the UseGraphiQl() extension method within the Configure method in Startup.cs, ensuring you do so before your UseMvc() registration.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseGraphiQl();

    app.UseMvc();
}

Now when you navigate to /graphql you should be greeted with the same familiar GraphiQL screen but without the hassle of having to add node or any NPM packages to your project as a dependency - nice!

The library is still version 1 so if you run into any issues then please do feel free to report them!

Injecting content into your head or body tags via dependency injection using ITagHelperComponent

Posted on Monday, 17 Jul 2017

Having been playing around with the ASP.NET Core 2.0 preview for a little while now, one cool feature I stumbled upon was the addition of the new ITagHelperComponent interface and its use.

What problem does the ITagHelperComponent solve?

Pre-.NET Core 2.0, if you're using a library that comes bundled with static assets such as JavaScript or CSS, you'll know that in order to use the library you have to manually add script and/or link tags (referencing the files in your wwwroot folder) to your views. This is far from ideal, as not only does it force users to jump through additional hoops, but it also runs the risk of introducing breaking changes when a user decides to remove the library and forgets to remove the JavaScript references, or when you update the library version but forget to change the appropriate JavaScript reference.

This is where the ITagHelperComponent comes in; it allows you to inject content into the head or body of your application's web page. Essentially, it's dependency injection for your JavaScript or CSS assets! All that's required of the user is that they register the dependency with their IoC container of choice within their Startup.cs file.

Enough talk, let's take a look at how it works. Hopefully a demonstration will clear things up.

Injecting JavaScript or CSS assets into the head or body tags

Imagine we have some JavaScript we'd like to include on each page; this could be, for example:

  • A JavaScript and/or CSS library we'd like to use (Bootstrap, Pure etc)
  • Some database driven JavaScript code or value that needs to be included in the head of your page
  • A JavaScript file that's bundled with a library that our users need to include before the closing </body> tag.

In our case, we'll keep it simple - we need to include some database-driven JavaScript in our page in the form of a Google Analytics snippet.

Creating our JavaScript tag helper component

Looking at the contract of the ITagHelperComponent interface you'll see it's a simple one:

public interface ITagHelperComponent
{
    int Order { get; }
    void Init(TagHelperContext context);
    Task ProcessAsync(TagHelperContext context, TagHelperOutput output);
}

We could implement the interface ourselves, or we could lean on the existing TagHelperComponent base class and override only the properties and methods we require. We'll do the latter.

Let's start by creating our implementation which we'll call CustomerAnalyticsTagHelper:

// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent {}

For this example the only method we're concerned about is the ProcessAsync one, though we will touch on the Order property later.

Let's go ahead and implement it:

// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent
{
    private readonly ICustomerAnalytics _analytics;

    public CustomerAnalyticsTagHelper(ICustomerAnalytics analytics)
    {
        _analytics = analytics;
    }

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "body", StringComparison.Ordinal))
        {
            string analyticsSnippet = @"
            <script>
                (function (i, s, o, g, r, a, m) {
                    i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
                        (i[r].q = i[r].q || []).push(arguments)
                    }, i[r].l = 1 * new Date(); a = s.createElement(o),
                        m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
                })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
                ga('create', '" + _analytics.CustomerUaCode + @"', 'auto')
                ga('send', 'pageview');
            </script>";
            
            output.PostContent.AppendHtmlLine(analyticsSnippet);
        }

        return Task.CompletedTask;
    }
}

As you can see, the TagHelperContext argument gives us context around the tag we're inspecting; in this case we want to look for the body HTML element. If we wanted to drop JavaScript or CSS into the <head></head> tags then we'd inspect the tag name "head" instead.

The TagHelperOutput argument gives us access to a host of properties controlling where we can place content, including:

  • PreElement
  • PreContent
  • Content
  • PostContent
  • PostElement
  • IsContentModified
  • Attributes

In this instance we're going to append our JavaScript after the content located within the <body> tag, placing it just before the closing </body> tag.
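As an aside, the head-based variant mentioned above might look something like this - a hypothetical tag helper (not part of the original example) that injects a stylesheet link into the <head> tag:

// Hypothetical example - injecting a stylesheet reference into the <head> tag
public class StylesheetTagHelper : TagHelperComponent
{
    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "head", StringComparison.Ordinal))
        {
            output.PostContent.AppendHtmlLine(@"<link rel=""stylesheet"" href=""/css/library.css"" />");
        }

        return Task.CompletedTask;
    }
}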

Dependency Injection in our tag helper

With dependency injection being baked into the ASP.NET Core framework, we're able to inject dependencies into our tag helper - in this case I'm injecting our database-driven customer UA (Universal Analytics) code.
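The ICustomerAnalytics dependency isn't shown above, but you can imagine it as something along these lines (a hypothetical sketch - in a real application the UA code would come from a data store):

// Hypothetical sketch of the analytics dependency
public interface ICustomerAnalytics
{
    string CustomerUaCode { get; }
}

public class CustomerAnalytics : ICustomerAnalytics
{
    // Hard-coded for brevity; in reality this would be loaded from a database
    public string CustomerUaCode => "UA-123456789";
}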

Registering our tag helper with our IoC container

Now all that's left to do is register our tag helper with our IoC container of choice. In this instance I'm using the built-in ASP.NET Core one from the Microsoft.Extensions.DependencyInjection package.

// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<ICustomerAnalytics, CustomerAnalytics>(); // Data source containing UA code
    services.AddSingleton<ITagHelperComponent, CustomerAnalyticsTagHelper>(); // Our tag helper
    ...
}

Now, firing up our application, we can see our JavaScript has been injected into our HTML page without us needing to touch any of our .cshtml Razor files!

...
<body>
    ...
    <script>
        (function (i, s, o, g, r, a, m) {
            i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
                (i[r].q = i[r].q || []).push(arguments)
            }, i[r].l = 1 * new Date(); a = s.createElement(o),
                m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
        })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
        ga('create', 'UA-123456789', 'auto')
        ga('send', 'pageview');
    </script>
</body>
</html>

Ordering our output

If we needed to include more than one script or script file in our output, we can lean on the Order property we saw earlier; overriding it allows us to specify the order of our output. Let's see how we can do this:

// JsLoggingTagHelper.cs

public class JsLoggingTagHelper : TagHelperComponent
{
    public override int Order => 1;

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "body", StringComparison.Ordinal))
        {
            const string script = @"<script src=""/jslogger.js""></script>";
            output.PostContent.AppendHtmlLine(script);
        }

        return Task.CompletedTask;
    }
}
// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent
{
    ...
    public override int Order => 2; // Set our AnalyticsTagHelper to appear after our logger
    ...
}
// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<ICustomerAnalytics, CustomerAnalytics>();
    services.AddSingleton<ITagHelperComponent, CustomerAnalyticsTagHelper>();
    services.AddSingleton<ITagHelperComponent, JsLoggingTagHelper>();
    ...   
}

When we launch our application we should see the following HTML output:

<script src="/jslogger.js"></script>
<script>
    (function (i, s, o, g, r, a, m) {
        i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
            (i[r].q = i[r].q || []).push(arguments)
        }, i[r].l = 1 * new Date(); a = s.createElement(o),
            m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
    })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
    ga('create', 'UA-123456789', 'auto')
    ga('send', 'pageview');
</script>
</body>
</html>

Conclusion

Hopefully this post has highlighted how powerful the recent changes to tag helpers are and how using the ITagHelperComponent interface allows us to inject content into our HTML without having to touch any files. This means as a library author we can ease integration for our users by simply asking them to register a type with their IoC container and we can take care of the rest!

.NET Core solution management via the command line interface

Posted on Monday, 03 Jul 2017

One of the strengths boasted by .NET Core is its new command line interface (CLI for short). By now you're probably aware that Visual Studio, Rider, Visual Studio Code etc shell out to the .NET Core CLI under the bonnet for most .NET Core related operations, so it makes sense that what you're able to do in your favourite IDE you're also able to do via the CLI.

With this in mind, only recently did I spend the time and effort to investigate how easy it was to create and manage a project solution via the CLI, including creating the solution structure, referencing projects along the way and adding them to .NET's .sln file.

It turns out it's incredibly easy and has instantly become my preferred way of managing solutions. Hopefully by the end of this post you'll arrive at the same conclusion too.

Benefits of using the CLI for solution management

So what are the benefits of using the CLI for solution management? Let's have a look:

  • Something that has always been a fiddly endeavour of UI interactions is now so much simpler via the CLI - what's more, you don't need to open your editor of choice if you want to create references or update a NuGet package.

  • Using the CLI for creating projects and solutions is particularly helpful if (like me) you work across multiple operating systems and want to normalise your tool chain.

  • Loading an IDE just to update a NuGet package seems unnecessary.

Let's begin!

Creating our solution

So let's take a look at how we can create the following project structure using the .NET Core CLI.

piedpiper
└── src
    ├── piedpiper.domain
    ├── piedpiper.sln
    ├── piedpiper.tests
    └── piedpiper.website

First we'll create our solution (.sln) file. I've always preferred to create the solution file in the top-level source folder, but the choice is yours (just bear in mind you'll need to adjust the paths in the commands used throughout the rest of this post).

# /src/

$ dotnet new sln -n piedpiper

This will create a new sln file called piedpiper.sln.

Next we use the output parameter on the dotnet new <projecttype> command to create a project in a particular folder:

# /src/

$ dotnet new mvc -o piedpiper.website

This will create an ASP.NET Core MVC application in the piedpiper.website folder in the same directory. If we were to look at our folder structure thus far it looks like this:

# /src/

$ ls -la

piedpiper.sln
piedpiper.website

Next we can do the same for our domain and test projects:

# /src/

$ dotnet new classlib -o piedpiper.domain
$ dotnet new xunit -o piedpiper.tests

Adding our projects to our solution

At this point we've got a solution file that has no projects referenced, we can verify this by calling the list command like so:

# /src/

$ dotnet sln list

No projects found in the solution.

Next we'll add our projects to our solution file. Once upon a time doing this involved opening Visual Studio then adding a reference to each project manually. Thankfully this can also be done via the .NET Core CLI.

Now we add each project with the following commands, referencing each project's .csproj file:

# /src/

$ dotnet sln add piedpiper.website/piedpiper.website.csproj
$ dotnet sln add piedpiper.domain/piedpiper.domain.csproj
$ dotnet sln add piedpiper.tests/piedpiper.tests.csproj

Note: If you're using a Linux/Unix based shell you can do this in a single command using a globbing pattern!

# /src/

$ dotnet sln add **/*.csproj

Project `piedpiper.domain/piedpiper.domain.csproj` added to the solution.
Project `piedpiper.tests/piedpiper.tests.csproj` added to the solution.
Project `piedpiper.website/piedpiper.website.csproj` added to the solution.

Now when we call list on our solution file we should get the following output:

# /src/

$ dotnet sln list

Project reference(s)
--------------------
piedpiper.domain/piedpiper.domain.csproj
piedpiper.tests/piedpiper.tests.csproj
piedpiper.website/piedpiper.website.csproj

So far so good!

Adding a project reference to a project

Next up we want to start adding project references to our project, linking our domain library to our website and test library via the dotnet add reference command:

# /src/

$ dotnet add piedpiper.tests reference piedpiper.domain/piedpiper.domain.csproj

Reference `..\piedpiper.domain\piedpiper.domain.csproj` added to the project.

Now if we view the contents of our test project we'll see our domain library has been referenced:

# /src/

$ cat piedpiper.tests/piedpiper.tests.csproj 

<Project Sdk="Microsoft.NET.Sdk">

  ...

  <ItemGroup>
    <ProjectReference Include="..\piedpiper.domain\piedpiper.domain.csproj" />
  </ItemGroup>

</Project>

Next we'll do the same for our website project, running the same command against piedpiper.website:

# /src/

$ dotnet add piedpiper.website reference piedpiper.domain/piedpiper.domain.csproj

# /src/

$ cat piedpiper.website/piedpiper.website.csproj 

<Project Sdk="Microsoft.NET.Sdk">

  ...

  <ItemGroup>
    <ProjectReference Include="..\piedpiper.domain\piedpiper.domain.csproj" />
  </ItemGroup>

</Project>

At this point we're done!

If we navigate back to our root source folder and run the build command we should see everything build successfully:

$ cd ../

# /src/

$ dotnet build

Microsoft (R) Build Engine version 15.3.388.41745 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

    piedpiper.domain -> /Users/josephwoodward/Desktop/demo/src/piedpiper.domain/bin/Debug/netstandard2.0/piedpiper.domain.dll
    piedpiper.tests -> /Users/josephwoodward/Desktop/demo/src/piedpiper.tests/bin/Debug/netcoreapp2.0/piedpiper.tests.dll
    piedpiper.website -> /Users/josephwoodward/Desktop/demo/src/piedpiper.website/bin/Debug/netcoreapp2.0/piedpiper.website.dll

Build succeeded.

    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:08.08

Adding a NuGet package to a project or updating it

Before wrapping up, let's say we wanted to add a NuGet package to one of our projects; we can do this using the add package command.

First navigate to the project you want to add a NuGet package to:

# /src/

$ cd piedpiper.tests/

$ dotnet add package shouldly

info : Adding PackageReference for package 'shouldly'
...
log  : Installing Shouldly 2.8.3.

Optionally we could specify a version we'd like to install using the version argument:

$ dotnet add package shouldly -v 2.8.2

Updating a NuGet package

Updating a NuGet package to the latest version is just as easy; simply use the same command without the version argument:

$ dotnet add package shouldly

Conclusion

If you've managed to get this far then well done, hopefully by now you've realised how easy creating and managing a solution is using the new .NET Core command line interface.

One of the great powers of using the CLI is you can now turn creating the same project structure into a handy bash script which you could alias and reuse!

#!/bin/bash

echo "Enter project name, followed by [ENTER]:"
read projname

echo "Creating solution for $projname"

dotnet new sln -n $projname

dotnet new mvc -o $projname.website
dotnet new classlib -o $projname.domain
dotnet new xunit -o $projname.tests

echo "Adding projects to solution"
dotnet sln add **/*.csproj

echo "Referencing projects"
dotnet add $projname.website reference $projname.domain/$projname.domain.csproj
dotnet add $projname.tests reference $projname.domain/$projname.domain.csproj

Happy coding!

Tips on starting and running a .NET user group

Posted on Thursday, 22 Jun 2017

As someone that organises and runs the .NET South West user group in Bristol, I've had a number of people online and in person approach me expressing an interest in starting a .NET focused meet up but unsure where to start; so much so that I thought it would be good to summarise the challenges and hurdles of running a meet up in a succinct blog post.

.NET South West meet up

Running a user group isn't a walk in the park, but it's not hard or particularly time consuming either. Hopefully this post will provide some valuable information and reassurance to those looking to create and foster a local .NET focused community of people wishing to learn, share, expand their knowledge and meet local developers with similar passions and interests.

1. You can do it

The very first question that can play through your mind is whether you're capable of starting and running such a meet up. If you're having any self-doubts about whether you're knowledgeable enough to run a user group, or whether you have the confidence to organise it - don't.

Running a user group is an incredibly rewarding experience that starts off small and grows, and as it grows you grow with it. Everyone that attends a user group is there to learn, and that applies to the organiser(s) too. So don't let any hesitations or self-doubts get in your way.

2. Gauging the interest in a .NET user group

One of the first hurdles you face when starting a user group is trying to gauge the level of interest that exists in your local area.

I've found a great way to gauge interest is to simply create a group on the popular user group organising site meetup.com, informing people that you're gauging the level of local interest. You can create an event with no date, set the title to "To be announced" and leave it active for a few months. Meetup.com notifies people with similar interests of the new user group, and over time you start to get people joining the group waiting for the first meet to be announced. In the meantime your meet up page has a forum where you can start conversations with some of the new members, looking for assistance or asking if anyone knows of a suitable venue.

This time is a great opportunity to get to know local developers before the meet up.

3. Having an online presence

When it comes to organising a meet up, websites like meetup.com make the whole process a breeze. The service isn't free (starting at $9.99 - see more here) but personally I would say it's worth it in terms of how much effort it saves you. MeetUp provides services such as:

  • Posting and announcing meet ups to members
  • Sending out regular emails to your user group
  • Increased meet up visibility to local developers on the platform
  • Semi-brandable pages (though this could be better)
  • The ability to add and link to sponsors within your meet up page

If you are on a budget then there are free alternatives such as the free tier of EventBrite which you can link to from a website you could set up.

4. Many hands make light work

Starting a meet up requires a lot of work, but once the meet up is running the number of hours required to keep it ticking along dramatically reduces. That said, there are times when you may have personal commitments that make it difficult to focus on the meet up - so why not see if anyone else is interested in helping?

If you don't have any close friends or work colleagues that are interested in helping, you can mention that you're looking for help on the meet up page discussed previously. There's also nothing to stop you from talking to members once the meet up is under way to see if anyone is interested in helping out. If you do have people interested in helping, then why not create a Slack channel for your group where you can stay organised?

5. The Venue

Next up is the venue; this is often the most challenging part, as most venues will require payment for a room. All is not lost though, as there are a few options available to you:

  • Look for sponsorship to cover the cost of the venue - some companies (such as recruitment companies) are often open to ideas of sponsoring meet ups in one way or another for publicity. Naturally this all depends on your stance around having recruitment companies at your meet up.

  • Approach software companies to see if they are interested in hosting the event. Often you'll find software companies are geared up for hosting meet ups and happy to do so in exchange for interacting with the community (and potentially saving them recruitment costs).

  • Small pubs - I know of a few meet up organisers who host in a pub's back room; the venue is usually aware that a few people will stay behind for a drink afterwards, so it works in their favour too.

Ultimately you want to ensure you have consistency, so talk to the venue and make it clear that you're looking for a long-term arrangement.

6. Speakers

Once you've got your venue sorted, the next task you face (and this will be a regular one) is sourcing speakers. Luckily, finding speakers is often reasonably simple, and once your meet up becomes established you'll start to find speakers approaching you with an interest in giving a talk. I would also recommend looking at other nearby meet ups for past speakers and making contact with them via Twitter. Networking at conferences is also a great way of finding potential speakers.

In addition to the aforementioned suggestions, Microsoft also have a handy Microsoft Evangelists page (UK only) for finding evangelists nearby who are often more than happy to travel to your user group to give a talk.

Finally, encourage attendees of your meet up to give talks. You're trying to foster a community, so try to drive engagement and ownership by opening up space for short 15 minute lightning talks.

7. Sponsorship / Competitions and Prizes

Once your user group is off the ground I would recommend reaching out to software companies to see if they provide sponsorship for meet ups in the shape of prize licences or extended trials - for instance, JetBrains are well known for their awesome community support programme which I'd highly recommend taking a look at.

Some companies require your meet up to be a certain size; others are more flexible on what they can provide, often being happy to ship swag such as stickers and t-shirts instead, which can be given away as prizes during your meet up (though if you're accepting swag from abroad then do be sure to clarify who pays any import tax so you don't get stung).

Swag and prizes aren't essential for a meet up, but it's something worth considering to spice things up a bit.

Go for it

Hopefully this post has given you some ideas if you are considering setting up a meet up. Organising a meet up and running it is an extremely satisfying responsibility and it's great seeing a community of developers coming together to share knowledge and learn from one another. So what are you waiting for, go for it!

Retrospective of DDD 12 in Reading

Posted on Monday, 12 Jun 2017

Yesterday was my second time both attending and speaking at the DDD conference run out of the Microsoft offices in Reading, and as a tradition I like to finish the occasion by writing a short retrospective post highlighting the day.

Arrival

Living just two hours' drive from Reading I decided to drive up early in the morning with fellow Just Eater Stuart Lang who was also giving a talk. After arriving and grabbing our speaker polo shirts we headed to the speaker room to say hello to the other speakers - some of whom I know from previous conferences and through organising DDD South West.

Speakers' Room

I always enjoy spending time in the speakers' room. As a relative newcomer to speaking I find it's a great opportunity to get some tips from more experienced speakers as well as geek out about programming. In addition, I still had some preparation to do, so it was a quiet place where I could refine and tweak my talk slides.

Talk 1 - Goodbye REST; Hello GraphQL by Sandeep Singh

Even though I had preparation to take care of for my post-lunch talk, I was determined to attend Sandeep Singh's talk on GraphQL, as it's a technology I've heard lots about via podcasts and have been interested to learn more about. In addition, working at Just Eat, where we have a lot of distributed services that are extremely chatty over HTTP, I was interested to see if and where GraphQL could help.

Having met Sandeep for the first time at DDD South West, it was clear he's a knowledgeable guy, so I was expecting great things, and he delivered. The talk was very informative, and by the end of it Sandeep had demonstrated the power of GraphQL (along with the well-balanced considerations that need to be made) and answered the majority of questions that had been forming in my notes as the talk progressed. It's definitely sparked my interest in GraphQL and I'm keen to start playing with it.

My talk - Building a better Web API architecture using CQRS

Having submitted two talks to DDD Reading (this and my Razor Deep Dive I delivered at DDD South West), the one that received the most votes was this talk, a topic and architectural style I've been extremely interested in for a number of years now (long before MediatR was a thing!).

Having spoken at DDD Reading before, this year my talk was held in a rather intimidating room called Chicago that seats up to 90, with a stage overlooking the attendees. All in all I was happy with the way the talk went; however, I did burn through my slides far faster than I did during practice. Luckily the attendees had plenty of questions, so I had ample opportunity to answer and expand on them with the remaining time. I'll chalk this down to experience and learn from it.

I must say that one of my concerns whilst preparing the talk was the split opinions around what CQRS really is, and how it differs from Bertrand Meyer's formulation of CQS (coincidentally, there was a healthy debate around the definition moments before my talk in the speakers' room between two speakers well-versed in the area!).

Talk 3 - Async in C# - The Good, the Bad and the Ugly by Stuart Lang

Having worked with Stuart for a year now and known him for slightly longer, it's clear that his knowledge of async/await is spot on and certainly deeper than that of any other developer I've met. Having seen a slightly different version of Stuart's talk delivered internally at Just Eat I was familiar with the narrative; however, I was keen to attend this talk because C#'s async language construct is a deep area and one I'm interested in.

Overall the talk went really well, with a nice break in the middle allowing time for questions before moving on (something I may have to try in my own talks going forward).

Conclusion

Overall DDD 12 was an awesome day. I'd love to have attended more talks and spent more time speaking with people, but being keen to deliver my talk to the best of my ability I had work to do. Nonetheless, after my talk was over it was great catching up with familiar faces and meeting new people (easily one of my favourite parts of a conference, as I'm a bit of a chatterbox!).

I'll close this post with a massive thanks to the event organisers (having helped organise DDD South West this year I have full appreciation for the hard work and time that goes into organising such an event), and also the sponsors - without them the conference would not have been possible.

Until next year!

Another day, another fantastic DDD (DDD South West 7)

Posted on Friday, 12 May 2017

Saturday 6th of May marked the day of another great DDD South West event. Having attended other DDD events around the UK I've always felt they had a special community feel to them, one I've not experienced at other conferences. This year's DDD South West event was particularly special to me not only because I was selected to speak at the conference, but because this year I was part of the organisation team.

This post is just a short summary of the highs and lows of organising the conference and the day itself.

Organising the conference

This year I was honoured to be part of the organisation team for DDD South West, and I loved every minute of it. Myself and the other organisers (there were 5 of us in total, some of whom have been organising the conference for well over 5 or 6 years!) would hold regular meetings via Skype, breaking down responsibilities such as sponsorship, catering, speaker related tasks and finance. Initially these meetings were about a month apart, but as the conference drew closer and the pressure started to set in, we would meet weekly.

During the organising of DDD South West I've gained a true appreciation for the amount of effort conference organisers (especially those that run non-profit events in their spare time, such as those I've had the pleasure of working with) put into organising an event for the community.

On the day everything went as expected with no hiccups, though as I was speaking on the day I was definitely a lot more stressed than I would have been otherwise. After the event we all headed over to the Just Eat offices for an after party, which I'll cover shortly.

For more information on the day, there are two fantastic write ups by Craig Phillips and Dan Clarke that I'd highly recommend reading.

ASP.NET Core Razor Deep Dive talk

Whilst organising DDD South West 7, I figured why not pile the stress on and submit a few sessions? Last year I caught the speaking bug and this year I was keen to continue to pursue it, so I submitted three sessions, each on very different subjects, and was quite surprised to see the ASP.NET Core Razor Deep Dive was selected. It's not the sexiest topic to talk about, but nonetheless it was a great opportunity to share some experience and give people information they could take away and directly apply in real life (something I always try to do when putting together a talk).

The talk itself was focused on the new Razor changes and features introduced in ASP.NET Core, why and where you'd use them. The topics included:

  • Razor as a view engine and its syntax
  • Tag helpers and how powerful they are
  • Creating your own tag helpers for some really interesting use cases - this part was especially great as I could see a lot of people in the audience have the "light bulb" moment. This brought me great joy as I knew there was something they'd be able to take away from the talk.
  • Why Partial Views and Child Actions are limiting
  • View Components and how you can use them to create more modular, reusable views
  • And finished off with the changes people can expect to see in Razor when ASP.NET Core 2.0 is released (ITagHelperComponents and Razor Pages)

Overall I was really happy with the talk and the turnout. The feedback I received was great, with lots of mentions of the word "engaging" (which as a relatively new speaker still trying to find his own style, is always positive to hear).

DDD South West 7 after party

Once all was done and dusted and the conference drew to a close, a large majority of us took a 5 minute stroll over to the Just Eat offices for an after party where free pizza and beer was on offer for the attendees (Thanks Just Eat!).

After a long day of ensuring the event was staying on track coupled with the stresses of talking, it was great to be able to unwind and enjoy spending time mingling with the various attendees and sponsors, all of whom made the event possible.

Bring on DDD South West 8!

C# 7 ValueTuple types and their limitations

Posted on Thursday, 20 Apr 2017

Having been experimenting with and reading lots about C# 7's tuples recently, I thought I'd summarise some of the more interesting aspects of this new C# language feature whilst highlighting some areas and limitations that may not be apparent when first using them.

Hopefully by the end of this post you'll have a firm understanding of the new feature and how it differs in comparison to the Tuple type introduced in .NET 4.

A quick look at tuple types as a language feature

Prior to C# 7, .NET's tuples were an awkward, somewhat retrofitted approach to what is a powerful language feature. As a result you don't see them used as much as they are in other languages like Python or, to some extent, Go (which doesn't support tuples, but has many of the features they provide, such as multiple return values) - with this in mind it behoves me to briefly explain what tuples are and why you'd use them, for those that may not have touched them before.

So what are tuples and where would you use them?

The tuple type's main strength lies in allowing you to group values into a closely related data structure (much like creating a class to represent more than one value), which means they're particularly useful in cases such as returning more than one value from a method. For instance:

public class ValidationResult {
    public string Message { get; set; }
    public bool IsValid { get; set; }
}

var result = ValidateX(x);
if (!result.IsValid)
{
    Logger.Log(Error, result.Message);
}

Whilst there's nothing wrong with this example, sometimes we don't want to have to create a type just to represent a set of data - we want types to work for us, not against us; this is where the tuple type's utility lies.

In fact, in languages such as Go (which allows multiple return values from a function) we see this pattern used extensively throughout the standard library.

bytes, err := ioutil.ReadFile("file.json")
if err != nil {
    log.Fatal(err)
}

Multiple return values can also stop you from needing to use the convoluted TryParse method pattern with out parameters.
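For example, here's a rough sketch of the same parsing operation with and without out parameters (TryParseAge is a hypothetical helper, and it uses the C# 7 tuple syntax covered later in this post):

var input = "43";

// The familiar out-parameter pattern
if (int.TryParse(input, out var age))
{
    Console.WriteLine(age);
}

// A hypothetical tuple-returning alternative
(bool Success, int Value) TryParseAge(string text)
{
    var success = int.TryParse(text, out var value);
    return (success, value);
}

var result = TryParseAge(input);
if (result.Success)
{
    Console.WriteLine(result.Value);
}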

Now we've got that covered and we're all on the same page, let's continue.

In the beginning there was System.Tuple<T1, T2, .., T7>

Verbosity
Back in .NET 4 we saw the appearance of the System.Tuple<T> class which introduced a verbose and somewhat awkward API:

var person = new Tuple<string, int>("John Smith", 43);
Console.WriteLine($"Name: {person.Item1}, Age {person.Item2}");
// Output: Name: John Smith, Age: 43

Alternatively there's a static factory method that cleared things up a bit:

var person = Tuple.Create("John Smith", 43);

But there was still room for improvement such as:

No named elements
One of the weaknesses of the System.Tuple type is that you have to refer to your elements as Item1, Item2 etc, instead of by a 'named' version, like you can in Python (where you can unpack a tuple and reference the values directly):

name, age = person_tuple
print name

Garbage collection pressure
In addition the System.Tuple type is a reference type, meaning you pay the penalty of a heap allocation, thus increasing pressure on the garbage collector.

public class Tuple<T1> : IStructuralEquatable, IStructuralComparable, IComparable

Nonetheless the System.Tuple type scratched an itch and solved a problem, especially if you owned the API.

C# 7 Tuples to the rescue

With the introduction of the System.ValueTuple type in C# 7, a lot of these problems have been solved (it's worth mentioning that if you want to use or play with the new tuple type you're going to need to download the System.ValueTuple NuGet package).

Now in C# 7 you can do such things as:

Tuple literals

// This is awesome and really clears things up; we can even directly reference the named value!

var person = (Name: "John Smith", Age: 43);
Console.WriteLine(person.Name); // John Smith

Tuple (multiple) return types

(string, int) GetPerson()
{
    var name = "John Smith";
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Item1); // John Smith

Even named Tuple return types!

(string Name, int Age) GetPerson()
{
    var name = "John Smith";
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Name); // John Smith

If that wasn't enough you can also deconstruct types:

public class Person
{
    public string Name => "John Smith";
    public int Age => 43;

    public void Deconstruct(out string name, out int age)
    {
        name = Name;
        age = Age;
    }
}

...

var person = new Person();
var (name, age) = person;

Console.WriteLine(name); // John Smith

As you can see the System.ValueTuple greatly improves on the older version, allowing you to write far more declarative and succinct code.

It's a value type, baby!
In addition (if the name hadn't given it away!) C# 7's tuple type is now a value type, meaning there's no heap allocation and one less object for the garbage collector to clean up when compacting the GC heap. This means the ValueTuple can be used in more performance-critical code.
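For comparison with the System.Tuple class declaration shown earlier, ValueTuple is declared as a struct (the signature below is simplified; the real type implements a few more interfaces):

public struct ValueTuple<T1, T2> : IEquatable<ValueTuple<T1, T2>>, IStructuralEquatable, IStructuralComparable, IComparable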

Now going back to our original example where we created a type to represent the return value of our validation method, we can delete that type (because deleting code is always a great feeling) and clean things up a bit:

var (message, isValid) = ValidateX(x);
if (!isValid)
{
    Logger.Log(Log.Error, message);
}

Much better! We've now got the same code without the need to create a separate type just to represent our return value.

C# 7 Tuple's limitations

So far we've looked at what makes the ValueTuple special, but in order to know the full story we should look at what limitations exist so we can make an educated decision on when and where to use them.

Let's take the same person tuple and serialise it to a JSON object. With our named elements we should expect to see an object that resembles our tuple.

var person = (Name: "John", Last: "Smith");
var result = JsonConvert.SerializeObject(person);

Console.WriteLine(result);
// {"Item1":"John","Item2":"Smith"}

Wait, what? Where have our keys gone?

To understand what's going on here we need to take a look at how ValueTuples work.

How the C# 7 ValueTuple type works

Let's take our GetPerson method example that returns a named tuple and check out the de-compiled source. No need to install a de-compiler for this; a really handy website called tryroslyn.azurewebsites.net will do everything we need.

// Our code
using System;
public class C {
    public void M() {
        var person = GetPerson();
        Console.WriteLine(person.Name + " is " + person.Age);
    }
    
    (string Name, int Age) GetPerson()
    {
        var name = "John Smith";
        var age = 32;
        return (name, age);
    }
}

You'll see that when de-compiled, the GetPerson method is simply syntactic sugar for the following:

// Our code de-compiled
public class C
{
    public void M()
    {
        ValueTuple<string, int> person = this.GetPerson();
        Console.WriteLine(person.Item1 + " is " + person.Item2);
    }
    [return: TupleElementNames(new string[] {
        "Name",
        "Age"
    })]
    private ValueTuple<string, int> GetPerson()
    {
        string item = "John Smith";
        int item2 = 32;
        return new ValueTuple<string, int>(item, item2);
    }
}

If you take a moment to look over the de-compiled source you'll see two areas of particular interest to us:

First of all, the references to our named elements in the Console.WriteLine() call have gone, replaced with Item1 and Item2. What's happened to our named elements? Looking further down the code you'll see they've actually been pulled out and added via the TupleElementNames attribute.

...
[return: TupleElementNames(new string[] {
    "Name",
    "Age"
})]
...

This is because the ValueTuple type's named elements are erased at compile time, meaning there's no runtime representation of them. In fact, if we were to view the IL (within the TryRoslyn website, switch the Decompiled dropdown to IL), you'll see any mention of our named elements has completely vanished!

IL_0000: nop // Do nothing (No operation)
IL_0001: ldarg.0 // Load argument 0 onto the stack
IL_0002: call instance valuetype [System.ValueTuple]System.ValueTuple`2<string, int32> C::GetPerson() // Call method indicated on the stack with arguments
IL_0007: stloc.0 // Pop a value from stack into local variable 0
IL_0008: ldloc.0 // Load local variable 0 onto stack
IL_0009: ldfld !0 valuetype [System.ValueTuple]System.ValueTuple`2<string, int32>::Item1 // Push the value of field of object (or value type) obj, onto the stack
IL_000e: ldstr " is " // Push a string object for the literal string
IL_0013: ldloc.0 // Load local variable 0 onto stack
IL_0014: ldfld !1 valuetype [System.ValueTuple]System.ValueTuple`2<string, int32>::Item2 // Push the value of field of object (or value type) obj, onto the stack
IL_0019: box [mscorlib]System.Int32 // Convert a boxable value to its boxed form
IL_001e: call string [mscorlib]System.String::Concat(object, object, object) // Call method indicated on the stack with arguments
IL_0023: call void [mscorlib]System.Console::WriteLine(string) // Call method indicated on the stack with arguments
IL_0028: nop  // Do nothing (No operation)
IL_0029: ret  // Return from method, possibly with a value

So what does that mean to us as developers?

No reflection on named elements

The absence of named elements in the compiled source means that it's not possible to retrieve those named elements via reflection, which limits the ValueTuple's utility.

This is because under the bonnet the compiler is erasing the named elements and reverting to the Item1 and Item2 properties, meaning our serialiser doesn't have access to the properties.
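If you do need the names to survive serialisation, one workaround (a sketch, not the only option) is to project the tuple onto an anonymous type first, since anonymous type property names do exist at runtime:

var person = (Name: "John", Last: "Smith");

// Project onto an anonymous type so the names become real runtime properties
var result = JsonConvert.SerializeObject(new { person.Name, person.Last });

Console.WriteLine(result);
// {"Name":"John","Last":"Smith"}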

I would highly recommend reading Marc Gravell's Exploring tuples as a library author post where he discusses a similar hurdle when trying to use tuples within Dapper.

No dynamic access to named elements

This also means that casting your tuple to a dynamic object results in the loss of the named elements. This can be witnessed by running the following example:

var person = (Name: "John", Last: "Smith");
var dynamicPerson = (dynamic)person;
Console.WriteLine(dynamicPerson.Name);

Running this results in the following RuntimeBinderException:

Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException: 'System.ValueTuple<string,string>' does not contain a definition for 'Name'
   at CallSite.Target(Closure , CallSite , Object )
   at CallSite.Target(Closure , CallSite , Object )
   at TupleDemo.Program.Main(String[] args) in /Users/josephwoodward/Dev/TupleDemo/Program.cs:line 16

Thanks to Daniel Crabtree's post for highlighting this!

No using named Tuples in Razor views either (unless they're declared in your view)

Naturally the name erasure in C# 7 tuples also means that you cannot use the names in your view from your view models. For instance:

public class ExampleViewModel {

    public (string Name, int Age) Person => ("John Smith", 30);

}
public class HomeController : Controller
{
    ...
    public IActionResult About()
    {
        var model = new ExampleViewModel();

        return View(model);
    }
}
// About.cshtml
@model TupleDemo3.Models.ExampleViewModel

<h1>Hello @Model.Person.Name</h1>

Results in the following error:

'ValueTuple<string, int>' does not contain a definition for 'Name' and no extension method 'Name' accepting a first argument of type 'ValueTuple<string, int>' could be found (are you missing a using directive or an assembly reference?)

Though switching the print statement to @Model.Person.Item1 outputs the result you'd expect.
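For completeness, this is the working (if less readable) version of the view:

// About.cshtml
@model TupleDemo3.Models.ExampleViewModel

<h1>Hello @Model.Person.Item1</h1>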

Conclusion

That's enough about tuples for now. Some of the examples used in this post aren't approaches you'd use in real life, but hopefully they demonstrate some of the limitations of the new type and where you can and can't use C# 7's new ValueTuple type.

Setting up a local Selenium Grid using Docker and .NET Core

Posted on Monday, 20 Mar 2017

Since jumping on the Docker bandwagon I've found its utility extends beyond the repeatable deployments and consistent runtime environment benefits that come with the use of containers. There's a whole host of tooling and use cases emerging which take advantage of containerisation technology; one use case that I recently discovered, after a conversation with Phil Jones via Twitter, is the ability to quickly set up a Selenium Grid.

Setting up and configuring a Selenium Grid has never been a simple process, but thanks to Docker it's suddenly got a whole lot easier. In addition, you're now able to run your own Selenium Grid locally and greatly speed up your tests' execution. If that isn't enough, another benefit is that because the tests execute inside a Docker container, you'll no longer be blocked by your browser navigating the website you're testing!

Let's take a look at how this can be done.

Note: For the impatient, I've put together a working example of the following post in a GitHub repository you can clone and run.

Selenium Grid Docker Compose file

For those that haven't touched Docker Compose (or Docker for that matter), a Docker Compose file is a YAML-based configuration document (often named docker-compose.yml) that allows you to configure your application's Docker environment.

Without Docker Compose you'd need to manually run each of your individual containers, specifying their network connections and configuration parameters along the way. With Docker Compose you can configure everything in a single file and start your environment with a simple docker-compose up command.

Below is the Selenium Grid Docker Compose configuration you can copy and paste:

# docker-compose.yml

version: '2'
services:
    selenium_hub:
        image: selenium/hub:3.0.1-aluminum
        container_name: selenium_hub
        privileged: true
        ports:
            - 4444:4444
        environment:
            - GRID_TIMEOUT=120000
            - GRID_BROWSER_TIMEOUT=120000
        networks:
            - selenium_grid_internal

    nodechrome1:
        image: selenium/node-chrome-debug:3.0.1-aluminum
        privileged: true
        depends_on:
            - selenium_hub
        ports:
            - 5900
        environment:
            - no_proxy=localhost
            - TZ=Europe/London
            - HUB_PORT_4444_TCP_ADDR=selenium_hub
            - HUB_PORT_4444_TCP_PORT=4444
        networks:
            - selenium_grid_internal

    nodechrome2:
        image: selenium/node-chrome-debug:3.0.1-aluminum
        privileged: true
        depends_on:
            - selenium_hub
        ports:
            - 5900
        environment:
            - no_proxy=localhost
            - TZ=Europe/London
            - HUB_PORT_4444_TCP_ADDR=selenium_hub
            - HUB_PORT_4444_TCP_PORT=4444
        networks:
            - selenium_grid_internal

networks:
    selenium_grid_internal:

In the above Docker Compose file we've defined our Selenium Hub (selenium_hub) service, exposing it on port 4444 and attaching it to a custom network named selenium_grid_internal (which you'll see all of our nodes are on).

selenium_hub:
    image: selenium/hub:3.0.1-aluminum
    container_name: selenium_hub
    privileged: true
    ports:
        - 4444:4444
    environment:
        - GRID_TIMEOUT=120000
        - GRID_BROWSER_TIMEOUT=120000
    networks:
        - selenium_grid_internal

All that's remaining at this point is to add our individual nodes. In this instance I've added two Chrome based nodes, named nodechrome1 and nodechrome2:

nodechrome1:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    privileged: true
    depends_on:
        - selenium_hub
    ports:
        - 5900
    environment:
        - no_proxy=localhost
        - TZ=Europe/London
        - HUB_PORT_4444_TCP_ADDR=selenium_hub
        - HUB_PORT_4444_TCP_PORT=4444
    networks:
        - selenium_grid_internal

nodechrome2:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    ...

Note: If you wanted to add Firefox to the mix then you can replace the image: value with the following Docker image:

nodefirefox1:
    image: selenium/node-firefox-debug:3.0.1-aluminum
    ...

Now if we run docker-compose up we'll see our Selenium Grid environment spring into action.

To verify everything is working correctly we can navigate to http://0.0.0.0:4444 in our browser where we should be greeted with the following page:

Connecting Selenium Grid from .NET Core

At the time of writing, the official Selenium NuGet package does not support .NET Standard; however, there's a pending pull request which adds support (the pull request has been on hold for a while as the Selenium team wanted to wait for the tooling to stabilise). In the meantime, the developer that added support has released it as a separate NuGet package which can be downloaded here.

Alternatively just create the following .csproj file and run the dotnet restore CLI command.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0-preview-20170123-02" />
    <PackageReference Include="xunit" Version="2.2.0-beta5-build3474" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0-beta5-build1225" />
    <PackageReference Include="CoreCompat.Selenium.WebDriver" Version="3.2.0-beta003" />
  </ItemGroup>

</Project>

Next we'll create the following base class that will create a remote connection to our Selenium Grid:

public abstract class BaseTest
{
    private IWebDriver _driver;

    public IWebDriver GetDriver()
    {
        var capability = DesiredCapabilities.Chrome();
        if (_driver == null){
            _driver = new RemoteWebDriver(new Uri("http://0.0.0.0:4444/wd/hub/"), capability, TimeSpan.FromSeconds(600));
        }

        return _driver;
    }
}

After that we'll create a very simple (and trivial) test that checks for the existence of an ID on google.co.uk.

public class UnitTest1 : BaseTest 
{

    [Fact]
    public void TestForId()
    {
        using (var driver = GetDriver())
        {
            driver.Navigate().GoToUrl("http://www.google.co.uk");
            var element = driver.FindElement(By.Id("lst-ib"));
            Assert.True(element != null);
        }
    }

    ...

}

Now if we run our test (either via the dotnet test CLI command or from your editor of choice) we should see our Docker terminal console showing our Selenium Grid containers jump into action as the hub starts executing the test on one of the registered Selenium Grid nodes.

At the moment we're only executing the one test so you'll only see one node running the test, but as you start to add more tests across multiple classes the Selenium Grid hub will start to distribute those tests across its cluster of nodes, dramatically reducing your test execution time.

If you'd like to give this a try then I've added all of the source code and Docker Compose file in a GitHub repository that you can clone and run.

The drawbacks

Before closing there are a few drawbacks to this method of running tests, especially if you're planning on doing it locally (instead of setting a grid up on a Virtual Machine via Docker).

Debugging is made harder
If you're planning on using Selenium Grid locally then you'll lose visibility of what's happening in the browser, as the tests are running within a Docker container. This means that in order to see the state of the web page on a failing test you'll need to switch to local execution using the Chrome, Firefox or Internet Explorer driver.
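One way to keep that option open is to make the driver selection configurable. Below is a rough sketch assuming a hypothetical RUN_LOCAL environment variable and the ChromeDriver (OpenQA.Selenium.Chrome) package installed locally:

public IWebDriver GetDriver()
{
    // Hypothetical toggle: run against a local browser when debugging a failing test
    if (Environment.GetEnvironmentVariable("RUN_LOCAL") == "true")
    {
        return new ChromeDriver();
    }

    var capability = DesiredCapabilities.Chrome();
    return new RemoteWebDriver(new Uri("http://0.0.0.0:4444/wd/hub/"), capability, TimeSpan.FromSeconds(600));
}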

Reaching localhost from within a container
In this example we're executing the tests against an external domain (google.co.uk) that our container can resolve. However if you're planning on running tests against a local development environment then there will be some additional Docker configuration required to allow the container to access the Docker host's IP address.

Conclusion

Hopefully this post has broadened your options around Selenium based testing and demonstrated how pervasive Docker is becoming. I'm confident that the more Docker (and other container technology for that matter) matures, the more we'll see said technology being used for such use cases as we've witnessed in this post.

An in-depth look at the various ways of specifying the IP or host ASP.NET Core listens on

Posted on Friday, 10 Feb 2017

Recently I've been working on an ASP.NET Core application where I've needed to configure, at runtime, the host address and port the web host will listen on. For those that have built an ASP.NET Core app you'll know that the default approach generated by the .NET CLI is less than ideal in this case as it's hard-coded.

After a bit of digging I quickly realised there weren't any places that summarised all of the options available, so I thought I'd summarise it all in a post.

Enough talk, let's begin.

Don't set an IP

The first approach is to not specify any IP address at all (this means removing the .NET Core CLI template convention of using the .UseUrls() method). Without it the web host will listen on localhost:5000 by default.

Whilst this approach is far from ideal, it is an option so deserves a place here.

Hard-coded approach via .UseUrls()

As I alluded to earlier, the default approach that .NET Core's CLI uses is to hard-code the IP address in your application's Program.cs file via the UseUrls(...) extension method that's available on the IWebHostBuilder interface.

If you take a look at the UseUrls extension method's signature you'll see the argument is a params string array, allowing you to specify more than one address for the web host to listen on (which may be preferable depending on your development machine, network configuration or personal preference, as specifying more than one host address can save people running into issues between localhost, 0.0.0.0 and 127.0.0.1).

public static IWebHostBuilder UseUrls(this IWebHostBuilder hostBuilder, params string[] urls);

You can add multiple addresses either by passing them as separate strings or by passing a single string with the addresses separated by a semi-colon; both result in the same configuration.

var host = new WebHostBuilder()
    .UseConfiguration(config)
    .UseKestrel()
    .UseUrls("http://0.0.0.0:5000", "http://localhost:5000")
    //.UseUrls("http://0.0.0.0:5000;http://localhost:5000") also works
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

If you'd rather not explicitly list every address to listen on you can use a wildcard instead, resulting in the web host binding to all IPv4 and IPv6 addresses on port 5000:

.UseUrls("http://*:5000")

A bit about wildcards: The wildcard is not special in any way, in fact anything not recognised as an IP will be bound to all IPv4 or IPv6 addresses, so http://@£$%^&*:5000 is considered the same as "http://*:5000" and vice versa.

Whilst this hard-coded approach makes it easy to get your application up and running, the very fact that it's hard-coded does make it difficult to configure externally via automation such as a continuous integration/deployment pipeline.

Note: It's worth mentioning that the order in which the address is set matters - the web host uses whichever source sets the value last - so where UseUrls sits relative to the other approaches listed in this post determines which one wins, but we'll go into this later.

Environment variables

You can also specify the IP address your application listens on via an environment variable. To do this first you'll need to download the Microsoft.Extensions.Configuration.EnvironmentVariables package from NuGet, then call the AddEnvironmentVariables() extension method on your ConfigurationBuilder object like so:

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .AddEnvironmentVariables()
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Now if you were to set the following environment variable and run your application it will listen on the address specified:

ASPNETCORE_URLS=https://*:5123
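For example, on macOS or Linux you could run export ASPNETCORE_URLS="https://*:5123" in your shell before starting the application with dotnet run, while in PowerShell on Windows the equivalent is $env:ASPNETCORE_URLS = "https://*:5123".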

Command line argument

Another option available is to supply the host name and port via a command line argument when your application is executed (notice how you can specify one or more addresses just as we did above).

dotnet run --urls "http://*:5000;http://*:6000"

or

dotnet YourApp.dll --urls "http://*:5000;http://*:6000"

Before you can use command line arguments you're going to need to add the Microsoft.Extensions.Configuration.CommandLine package and update your Program.cs bootstrap configuration accordingly:

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Notice how I've removed the .UseUrls() method; this prevents .UseUrls() from overwriting the address provided via the command line.

hosting.json approach

Another popular approach to specifying your host and port is to read them from a .json file during application boot up. Whilst you can name your configuration file anything, the common approach appears to be hosting.json, with the contents of the file containing the address you want your application to listen on:

{
  "urls": "http://*:5000"
}

In order to use this approach you're first going to need to include the Microsoft.Extensions.Configuration.Json package, allowing you to load configurations via .json documents.

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("hosting.json", optional: true)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Now when you run the dotnet run or dotnet YourApp.dll command you'll notice the output reflects the address specified within your hosting.json document.

Just a reminder that before publishing your application, be sure to include your hosting file in your publishing options (in either your project.json or your .csproj file):

// project.json

"publishOptions": {
    "include": [
      "wwwroot",
      "Views",
      "appsettings.json",
      "web.config",
      "hosting.json"
    ]
}
// YourApp.csproj

<ItemGroup>
  <Content Update="wwwroot;Views;appsettings.json;web.config;hosting.json">
    <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
  </Content>
</ItemGroup>

Out of all of the approaches available this has to be my preferred option. It's simple enough to overwrite or modify within your test/release pipeline, whilst removing the hurdles co-workers would otherwise have to jump through just to clone your source code and run the application (as opposed to the command line approach).

Order of preference

When it comes down to the order the IP addresses are loaded, I would recommend you check out the documentation here, especially this snippet:

You can override any of these environment variable values by specifying configuration (using UseConfiguration) or by setting the value explicitly (using UseUrls for instance). The host will use whichever option sets the value last. For this reason, UseIISIntegration must appear after UseUrls, because it replaces the URL with one dynamically provided by IIS. If you want to programmatically set the default URL to one value, but allow it to be overridden with configuration, you could configure the host as follows:

var config = new ConfigurationBuilder()
   .AddCommandLine(args)
   .Build();

var host = new WebHostBuilder()
   .UseUrls("http://*:1000") // default URL
   .UseConfiguration(config) // override from command line
   .UseKestrel()
   .Build();

Conclusion

Hopefully this post helped you gain a better understanding of the many options available to you when configuring what address you want your application to listen on; writing it has certainly helped me cement them in my mind!

C# IL Viewer for Visual Studio Code using Roslyn side project

Posted on Monday, 30 Jan 2017

For the past couple of weeks I've been working on an IL (Intermediate Language) Viewer for Visual Studio Code. As someone that develops on a Mac, I spend a lot of time writing C# in VS Code or JetBrains' Rider editor - however neither of them has the ability to view the generated IL (I know JetBrains are working on this for Rider), so I set out to fix this problem as a side project.

As someone that's never written a Visual Studio Code extension before it was a bit of an ambitious first extension, but enjoyable nonetheless.

Today I released the first version of the IL Viewer (0.0.1) to the Visual Studio Code Marketplace so it's available to download and try via the link below:

Download IL Viewer for Visual Studio Code

Download C# IL Viewer for Visual Studio Code or install it directly within Visual Studio Code by launching Quick Open (CMD+P for Mac or CTRL+P for Windows), pasting in the following command and pressing enter.

ext install vscodeilviewer

The source code is all up on GitHub so feel free to take a look, but be warned - it's a little messy right now as it was hacked together to get it working.

VS Code IL Viewer

For those interested in how it works, continue reading.

How does it work?

Visual Studio Code and HTTP Service

At its heart, Visual Studio Code is a glorified text editor. C# support is added via the hard work of the OmniSharp developers, which itself is backed by Roslyn. This means that in order to add any IL inspection capabilities I needed to either hook into OmniSharp or build my own external service that gets bundled within the extension. In the end I decided to go with the latter.

When Visual Studio Code loads, detects that the language is C# and finds a project.json file, it starts an external HTTP service (built with Web API) which is bundled within the extension.
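As a rough illustration of the shape of that service (the names below are hypothetical; the real implementation is in the GitHub repository), it boils down to a small Web API controller that accepts the path of the file to inspect and returns the generated IL as text:

[Route("il")]
public class IlController : Controller
{
    [HttpPost]
    public IActionResult Inspect([FromBody] InspectionRequest request)
    {
        // IlExtractor is a hypothetical helper wrapping the Roslyn/Cecil work described below.
        var il = IlExtractor.Extract(request.FilePath);
        return Ok(il);
    }
}

public class InspectionRequest
{
    public string FilePath { get; set; }
}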

Moving forward I intend to switch this out for a console application communicating over stdin and stdout. This should speed up the overall responsiveness of the extension whilst reducing the resources required and, more importantly, the start-up time of the IL viewer.

Inspecting the Intermediate Language

Initially I planned on making the Visual Studio Code IL Viewer extract the desired file's IL directly from its project's built DLL, however after a little experimentation this proved not to be ideal as it required the solution to be built in order to view the IL, and built again for any inspection after a change, no matter how minor. It would also block the user from doing any work whilst the project was building.

In the end I settled on an approach that compiles just the .cs file you wish to inspect into an in-memory assembly, then extracts the IL and displays it to the user.
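A minimal sketch of that idea (simplified, and not the exact code used by the extension) looks something like this - parse the single file, compile it into an in-memory assembly and keep hold of the resulting stream:

using System;
using System.IO;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class InMemoryCompiler
{
    public static MemoryStream Compile(string sourcePath)
    {
        var syntaxTree = CSharpSyntaxTree.ParseText(File.ReadAllText(sourcePath));

        var compilation = CSharpCompilation.Create(
            "InspectionAssembly",
            new[] { syntaxTree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        // Emit straight to a stream - nothing is written to disk and no full build is required.
        var stream = new MemoryStream();
        var result = compilation.Emit(stream);
        if (!result.Success)
            throw new InvalidOperationException("Compilation failed.");

        stream.Position = 0;
        return stream;
    }
}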

Including external dependencies

One problem with compiling just the source code in memory is that it doesn't include any external dependencies. As you'd expect, as soon as Roslyn encounters a reference to an external binary you get a compilation error. Luckily Roslyn has the ability to automatically include external dependencies via its workspace API.

public static Compilation LoadWorkspace(string filePath)
{
    // Load the project.json based project so Roslyn resolves its external references for us.
    var projectWorkspace = new ProjectJsonWorkspace(filePath);

    var project = projectWorkspace.CurrentSolution.Projects.FirstOrDefault();
    var compilation = project.GetCompilationAsync().Result;

    return compilation;
}

After that the rest was relatively straightforward: I grab the syntax tree from the compilation unit of the project, load in any additional dependencies, then use Mono's Cecil library to extract the IL (as Cecil supports .NET Standard it does not require the Mono runtime).
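To give an idea of the Cecil side of that (again a simplified sketch rather than the extension's exact code), reading the IL instructions back out of the emitted assembly looks roughly like this:

using System.IO;
using System.Text;
using Mono.Cecil;

public static class IlDumper
{
    public static string Dump(Stream assemblyStream)
    {
        var builder = new StringBuilder();
        var assembly = AssemblyDefinition.ReadAssembly(assemblyStream);

        foreach (var type in assembly.MainModule.Types)
        {
            foreach (var method in type.Methods)
            {
                if (!method.HasBody)
                    continue;

                builder.AppendLine($"{type.FullName}::{method.Name}");

                // Each instruction includes its offset, opcode and operand, e.g. "IL_0000: nop".
                foreach (var instruction in method.Body.Instructions)
                {
                    builder.AppendLine($"  {instruction}");
                }
            }
        }

        return builder.ToString();
    }
}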

Once I have the IL I return the contents as an HTTP response and display it to the user in Visual Studio Code's split pane window.

Below is a simplified diagram of how it's all tied together:

IL Viewer

As I mentioned above, the source code is all available on GitHub so feel free to take a look. Contributions are also very welcome!