Latest Blog Posts

REST Client for VS Code, an elegant alternative to Postman

Posted on Wednesday, 18 Oct 2017

For some time now I've been a huge proponent of Postman. Working in an environment with a large number of remote services, Postman's ease of generating requests, its ability to manage collections, view historic requests and so forth made it my go-to tool for hand-crafted HTTP requests. However, there have always been features I felt were missing. One such feature was the ability to copy and paste a raw RFC 2616 compliant HTTP request, including request method, headers and body, directly into Postman and fire it off without having to manually tweak the request. This led me to a discussion on Twitter where Darrel Miller recommended I check out the REST Client extension for Visual Studio Code.

REST Client for Visual Studio Code

After installing REST Client the first thing I noticed was how elegant it is. Simply create a new tab, paste in your raw HTTP request (ensuring the tab's Language Mode is set to either HTTP or Plaintext, more on this later) and in no time at all you'll see a "Send Request" button appear above your HTTP request, allowing you to execute the request as is - no further modifications are required to tell REST Client how to parse or format it.

Features

To give you a firm grasp of why you should consider adding REST Client to your tool chain, here are a few of the features that particularly stood out to me, organised in an easily consumable list format, because we all like lists:

No BS request building

The simplest HTTP request you can send is a plain GET URL pasted in like so:

https://example.com/comments/1

Note: You can either paste your requests into a Plaintext window, where you'll need to highlight the request and press the Send Request keyboard shortcut (Ctrl+Alt+R on Windows, or Cmd+Alt+R on macOS), or set the tab's Language Mode to HTTP, where a "Send Request" button will appear above the HTTP request.

If you want more control over your request then a raw HTTP request will do:

POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    "name": "sample",
    "time": "Wed, 21 Oct 2015 18:27:50 GMT"
}

Once the request has been sent you'll see the response appear in a separate pane. A nice detail I really liked is the ability to hover over the request timer and get a breakdown of the duration, including the time spent on Socket, DNS, TCP, First Byte and Download.

Requests are saved as collections in simple plain text .http files

Following the theme of low-friction elegance, it's nice that requests saved for later use (or checked into your component's source control) are stored as simple plain text documents with an .http file extension.

Breaking down requests

One of the gripes I had with Postman was that requests are separated by tabs. If you're working with a number of requests they quickly get lost amongst the number of tabs I tend to have open.

REST Client doesn't suffer the same fate, as requests can be grouped in your documents and separated by comments: lines of three or more hash characters (#) are treated as delimiters between requests.
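
For example, a single .http file could group several requests like this, each separated by a line of hashes (the URLs below are placeholders borrowed from the earlier examples):

GET https://example.com/comments/1 HTTP/1.1

###

POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    "name": "another sample"
}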

Environments and Variables

REST Client has a concept of Environments and Variables, meaning that if you work with different environments (i.e. QA, Staging and Production) you can easily switch between those configured in the REST Client settings (see below), changing the set of variables used without having to modify your requests.

Environments

"rest-client.environmentVariables": {
    "local": {
        "host": "localhost",
        "token": "test token"
    },
    "production": {
        "host": "example.com",
        "token": "product token"
    }
}

Variables

Variables, on the other hand, are defined directly in your document and can be referenced throughout it.

@host = localhost:5000
@token = Bearer e975b15aa477ee440417ea069e8ef728a22933f0

GET https://{{host}}/api/comments/1 HTTP/1.1
Authorization: {{token}}

It's not Electron

I have nothing against Electron, but it's known to be a bit of a resource hog, so much so that I rarely leave Postman open between sessions, whereas I've always got VS Code open (one Electron process is enough), making it far easier to dip into to test a few requests.

Conclusion

This post is just a brief overview of some of the features in REST Client. If you're open to trying an alternative to your current HTTP request generation tool then I'd highly recommend checking it out; you can read more about it on the REST Client GitHub page.

Global Exception Handling in ASP.NET Core Web API

Posted on Wednesday, 20 Sep 2017

Note: Whilst this post is targeted towards Web API, it's not unique to Web API and can be applied to any framework running on the ASP.NET Core pipeline.

If you use a command-dispatcher library such as MediatR, Magneto or Brighter (to name a few), you'll know that the pattern encourages you to push your domain logic down into a domain library via handlers, encapsulating your app or API's behaviours such as retrieving an event, like so:

public async Task<IActionResult> Get(int id)
{
    var result = await _mediator.Send(new EventDetail.Query(id));
    return Ok(result);
}

Continuing with the event theme above, within your handler (or in the pipeline before it) you'll take care of all of your validation, throwing an exception if an argument is invalid (in this case, an ArgumentException).
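
To make that concrete, here's a rough sketch of such a handler. The EventViewModel and IEventRepository types are hypothetical, and the exact handler interface varies slightly between MediatR versions:

public class EventDetail
{
    public class Query : IRequest<EventViewModel>
    {
        public Query(int id) => Id = id;
        public int Id { get; }
    }

    public class Handler : IRequestHandler<Query, EventViewModel>
    {
        private readonly IEventRepository _repository;

        public Handler(IEventRepository repository) => _repository = repository;

        public async Task<EventViewModel> Handle(Query query, CancellationToken cancellationToken)
        {
            // Validate the incoming argument before touching the domain
            if (query.Id <= 0)
                throw new ArgumentException("Event id must be a positive integer");

            // Hypothetical repository call returning the event's view model
            return await _repository.GetEventAsync(query.Id);
        }
    }
}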

Now when it comes to handling that exception you're left having to explicitly catch it from each action method:

public async Task<IActionResult> Get(int id)
{
    try {
        var result = await _mediator.Send(new EventDetail.Query(id));
        return Ok(result);
    } catch (ArgumentException){
        return BadRequest();
    }
}

Whilst this is perfectly acceptable, I'm always looking at ways to reduce boilerplate code - and what happens if an exception is thrown somewhere else in the HTTP pipeline? This is why I created Global Exception Handler for ASP.NET Core.

What is Global Exception Handler?

Available via NuGet or GitHub, Global Exception Handler lets you configure an exception handling convention within your Startup.cs file, which will catch any of the exceptions specified, outputting the appropriate error response and status code.

Not just for Web API or MVC

Whilst it's possible to use Global Exception Handler with Web API or MVC, it's actually framework agnostic, meaning that as long as the framework runs (or can run) on the ASP.NET Core pipeline (such as BotWin or Nancy) it should work.

Let's take a look at how we can use it alongside Web API (though the configuration will be the same regardless of framework).

How do I use Global Exception Handler for an ASP.NET Core Web API project?

To configure Global Exception Handler, you call it via the UseWebApiGlobalExceptionHandler extension method in your Configure method, specifying the exception(s) you wish to handle and the resulting status code. In this instance an ArgumentException should translate to a 400 (Bad Request) status code:

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseWebApiGlobalExceptionHandler(x =>
        {
            x.ForException<ArgumentException>().ReturnStatusCode(HttpStatusCode.BadRequest);
        });

        app.UseMvc();
    }
}

Now when our MediatR pipeline throws an ArgumentException we no longer need to explicitly catch and handle it in every controller action:

public async Task<IActionResult> Get(int id)
{
    // This throws an ArgumentException
    var result = await _mediator.Send(new EventDetail.Query(id));
    ...
}

Instead our global exception handler will catch the exception and handle it according to our convention, resulting in the following JSON output:

{
    "error": {
        "status": 400,
        "message": "Invalid arguments supplied"
    }
}

This helps us in the following three scenarios:

  • You no longer have to explicitly catch exceptions per method
  • Those times you forgot to add exception handling will be caught
  • You can catch exceptions thrown further up the HTTP pipeline and map them to a configured result

Not happy with the error format?

If you're not happy with the default error format then it can be changed in one of two places.

First you can set a global error format via the MessageFormatter method:

Global formatter

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseWebApiGlobalExceptionHandler(x =>
        {
            x.ForException<ArgumentException>().ReturnStatusCode(HttpStatusCode.BadRequest);
            x.MessageFormatter(exception => JsonConvert.SerializeObject(new {
                error = new {
                    message = "Something went wrong",
                    statusCode = exception.HttpStatusCode
                }
            }));
        });

        app.UseMvc();
    }
}

Exception specific formatter

Alternatively you can specify a custom message per exception caught, which will override the global one demoed above:

app.UseWebApiGlobalExceptionHandler(x =>
{
    x.ForException<ArgumentException>().ReturnStatusCode(HttpStatusCode.BadRequest).UsingMessageFormatter(
        exception => JsonConvert.SerializeObject(new
        {
            error = new
            {
                message = "Oops, something went wrong"
            }
        }));
    x.MessageFormatter(exception => "This formatter will be overridden when an ArgumentException is thrown");
});

Resulting in the following 400 response:

{
    "error": {
        "message": "Oops, something went wrong"
    }
}

Content type

By default Global Exception Handler outputs the application/json content type, however this can be overridden for those who prefer XML or an alternative format. This is done via the ContentType property:

app.UseWebApiGlobalExceptionHandler(x =>
{
    x.ContentType = "text/xml";
    x.ForException<ArgumentException>().ReturnStatusCode(HttpStatusCode.BadRequest);
    x.MessageFormatter(exception => {
        // serialise your XML in here
    });
});

Moving forward

Having used this for a little while now, one suggestion has been to implement problem+json as the default content type to standardise the default output, which I'm seriously considering. I'm also in the process of building ASP.NET Core MVC compatibility so exceptions can result in views being rendered or requests being redirected to routes (such as a 404 page not found view).

For more information feel free to check out the GitHub page or try it out via NuGet.

Turbo charging your command line with ripgrep

Posted on Tuesday, 12 Sep 2017

In the past few years the command line has made a resurgence in the Windows world. With the .NET Core CLI now a first-class citizen and the Windows Subsystem for Linux making it easy to run Linux based tools on a Windows machine, it's clear that Microsoft are keen to boost the profile of the CLI amongst developers.

Yet Windows has always come up short on delivering a powerful grep experience on the command line; this is where a tool like ripgrep can help.

I first heard about ripgrep after a conversation on Twitter, where I learned that most people are actively using it without knowing it, as ripgrep is what powers Visual Studio Code's search functionality (see here).

As a heavy user of grep in my day-to-day workflow, the first thing that blew me away about ripgrep was its blazing speed. Having read the benchmarks it so proudly displays on its GitHub page, at first I was sceptical - but ripgrep flies through large recursive searches like no other grepping tool I've used.

Let's take a look.

A bit about ripgrep

Written in Rust, ripgrep is a tool that combines the usability of The Silver Searcher (a super fast ack clone) with the raw performance of GNU grep. In addition, ripgrep has first class support for Windows, Mac and Linux (available on their GitHub page), so for anyone who regularly works across multiple platforms and is looking to normalise their tool chain, it's well worth a look.

Some of ripgrep's features that sold it to me are:

  • It's crazily fast at searching large directories
  • Ripgrep won't search files already ignored by your .gitignore file (this can easily be overridden when needed - see the example just after this list).
  • Ignores binary or hidden files by default
  • Easy to search specific file types (making it great for searching for functions or references in code files)
  • Highlights matches in colour
  • Full unicode support
  • First class Windows, Mac and Linux support
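
As a quick illustration of that second point, relaxing the ignore rules when you do want to search everything is a single flag away (-u can be repeated to relax the filtering further):

$ rg --no-ignore foobar    # search files that .gitignore rules would normally exclude
$ rg -uu foobar            # additionally search hidden files and directories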

Let's take a look at how we can install ripgrep.

Installation

Mac

If you're on a Mac using Homebrew then installation is as easy as:

$ brew install ripgrep

Windows

  • Download the ripgrep executable from their releases page on GitHub
  • Put the executable in a familiar location (c:/tools for instance)
  • Add the aforementioned tools path to your PATH environment

Alternatively, if you're using Chocolatey then installation is as simple as:

$ choco install ripgrep

Linux

$ yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/repo/epel-7/carlwgeorge-ripgrep-epel-7.repo

$ yum install ripgrep

See the ripgrep GitHub page for more installation options.

Usage

Next, let's take a look at some of the use cases for ripgrep in our day-to-day scenarios.

Recursively search the contents of files in the current directory, respecting all .gitignore files and ignoring hidden files, directories and binary files:

$ rg hello


Search the contents of .html and .css files only for the word foobar using the type flag (-t):

$ rg -thtml -tcss foobar

Or return everything but JavaScript files using the type-not flag (-T):

$ rg -Tjs foobar

Return a list of all .css files:

$ rg -g '*.css' --files


More examples

ripgrep has a whole host of other searching options so I'd highly recommend checking out their GitHub page where they reference more examples.

Conclusion

Hopefully this post has given you a taste of how awesome ripgrep is and encouraged you to at least install it and give it a spin. If you're someone that spends a lot of time on the command line for day-to-day navigation then having a powerful grepping tool at your disposal, and getting into the habit of using it whenever you need to locate a file, really does help your workflow.

Now, go forth and grep with insane speed!

GraphiQL in ASP.NET Core

Posted on Thursday, 10 Aug 2017

Having recently attended a talk on GraphQL and read GitHub's glowing post about their choice to use GraphQL over REST for their API, I was interested in having a play to see what all the fuss was about. For those who aren't sure what GraphQL is or where it fits in the stack, let me give a brief overview.

What is GraphQL?

GraphQL is a query language (hence the QL in the name) created and open-sourced by Facebook that allows you to query an API over HTTP (or other protocols, for that matter). Let me demonstrate with a simple example:

Say you're consuming a REST API on the following resource identifier:

GET service.com/user/1

The response returned from this URI is the following JSON object:

{
    "Id": 1,
    "FirstName": "Richard",
    "Surname": "Hendricks",
    "Gender": "Male",
    "Age": 31,
    "Occupation": "Pied Piper CEO",
    "RoleId": 5,
    ...
}

Now, as a mobile app developer calling this service, you're aware of the bandwidth constraints users face when connected to a mobile network, so returning the whole JSON blob when you're only interested in the FirstName and Surname properties is wasteful. This is called over-fetching data. GraphQL solves this by letting you, as a consumer, dictate your data needs, as opposed to having them forced upon you by the service.

This is a fundamental requirement that REST doesn't address (in fairness to REST, it never set out to solve this problem, but as the internet has changed it's a problem that now exists).

This is where GraphQL comes in.

Using GraphQL we're given control as consumers to dictate what our data requirements are, so instead of calling the aforementioned URI we POST a query to a GraphQL endpoint (often /graphql) in the following shape:

{
    Id,
    FirstName,
    Surname
}

Our Web API powered GraphQL server fulfills the request, returning the following response:

{
    "Id": 1,
    "FirstName": "Richard",
    "Surname": "Hendrix"
}

GraphQL also solves under-fetching, which is best described as having to make multiple calls to join data (following the above example, retrieving the RoleId only to then call another endpoint to get the Role information). In GraphQL we could represent that with the following query, saving us an additional HTTP request:

{
    Id,
    FirstName,
    Surname,
    Role {
        RoleName
    }
}

The GraphQL query language includes a whole host of other functionality, including static type checking, query functions and the like, so I would recommend checking it out when you can (or stay tuned for a later post I'm in the process of writing where I demonstrate how to set it up in .NET Core).

So what is GraphiQL?

Now that you know what GraphQL is: GraphiQL (pronounced 'graphical') is a web-based, JavaScript-powered editor that allows you to query a GraphQL endpoint, taking full advantage of the static type checking and IntelliSense promised by GraphQL. You can consider it the Swagger of the GraphQL world.

In fact, I'd suggest taking a moment to try a live example of GraphiQL here and see how GraphQL's static type system helps you discover the data that's available to you via the documentation and IntelliSense. GitHub also allows you to query your GitHub activity via their example GraphiQL endpoint.

Introducing GraphiQL.NET

Traditionally if you wanted to set this up you'd need to configure a whole host of node modules and JavaScript files; however, given .NET Core's powerful middleware/pipeline approach, creating a GraphiQL middleware seemed like the obvious way to enable a GraphiQL endpoint.

Now you no longer need to worry about taking a dependency on Node or NPM in your ASP.NET Core solution and can instead add GraphiQL support via a simple middleware call using GraphiQL.NET (before continuing, I feel it's worth mentioning that all of the code is up on GitHub).

Setting up GraphiQL in ASP.NET Core

You can install GraphiQL.NET by copying and pasting the following command into your Package Manager Console within Visual Studio (Tools > NuGet Package Manager > Package Manager Console).

Install-Package graphiql

Alternatively you can install it using the .NET Core CLI using the following command:

dotnet add package graphiql

From there all you need to do is call the UseGraphiQl(); extension method within the Configure method in Startup.cs, ensuring you do it before your UseMvc(); registration.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseGraphiQl();

    app.UseMvc();
}

Now when you navigate to /graphql you should be greeted with the same familiar GraphiQL screen but without the hassle of having to add node or any NPM packages to your project as a dependency - nice!

The library is still version 1 so if you run into any issues then please do feel free to report them!

Injecting content into your head or body tags via dependency injection using ITagHelperComponent

Posted on Monday, 17 Jul 2017

Having been playing around with the ASP.NET Core 2.0 preview for a little while now, one cool feature I stumbled upon was the addition of the new ITagHelperComponent interface and how it can be used.

What problem does the ITagHelperComponent solve?

Pre-.NET Core 2.0, if you're using a library that comes bundled with static assets such as JavaScript or CSS, you'll know that in order to use the library you have to manually add script and/or link tags (referencing the files in your wwwroot folder) to your views. This is far from ideal: not only does it force users to jump through additional hoops, but it also runs the risk of introducing breakages when a user removes the library but forgets to remove the JavaScript references, or updates the library version but forgets to change the appropriate JavaScript reference.
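
To put that in concrete terms, the manual approach typically meant hard-coding something like the following into your layout (the file names here are purely illustrative):

<!-- _Layout.cshtml -->
<head>
    ...
    <link rel="stylesheet" href="~/lib/some-library/some-library.css" />
</head>
<body>
    ...
    <script src="~/lib/some-library/some-library.js"></script>
</body>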

This is where the ITagHelperComponent comes in; it allows you to inject content into the head or body of your application's web pages. Essentially, it's dependency injection for your JavaScript or CSS assets! All that's required of the user is that they register the dependency with their IoC container of choice within their Startup.cs file.

Enough talk, let's take a look at how it works. Hopefully a demonstration will clear things up.

Injecting JavaScript or CSS assets into the head or body tags

Imagine we have some JavaScript we'd like to include on each page. This could be one of the following:

  • A JavaScript and/or CSS library we'd like to use (Bootstrap, Pure etc)
  • Some database driven JavaScript code or value that needs to be included in the head of your page
  • A JavaScript file that's bundled with a library that our users need to include before the closing </body> tag.

In our case, we'll keep it simple - we need to include some database-driven JavaScript in our page in the form of some Google Analytics JavaScript.

Creating our JavaScript tag helper component

Looking at the contract of the ITagHelperComponent interface you'll see it's a simple one:

public interface ITagHelperComponent
{
    int Order { get; }
    void Init(TagHelperContext context);
    Task ProcessAsync(TagHelperContext context, TagHelperOutput output);
}

We could implement the interface ourselves, or we could lean on the existing TagHelperComponent base class and override only the properties and methods we require. We'll do the latter.

Let's start by creating our implementation which we'll call CustomerAnalyticsTagHelper:

// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent {}

For this example the only method we're concerned about is the ProcessAsync one, though we will touch on the Order property later.

Let's go ahead and implement it:

// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent
{
    private readonly ICustomerAnalytics _analytics;

    public CustomerAnalyticsTagHelper(ICustomerAnalytics analytics)
    {
        _analytics = analytics;
    }

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "body", StringComparison.Ordinal))
        {
            string analyticsSnippet = @"
            <script>
                (function (i, s, o, g, r, a, m) {
                    i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
                        (i[r].q = i[r].q || []).push(arguments)
                    }, i[r].l = 1 * new Date(); a = s.createElement(o),
                        m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
                })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
                ga('create', '" + _analytics.CustomerUaCode + @"', 'auto')
                ga('send', 'pageview');
            </script>";
            
            output.PostContent.AppendHtmlLine(analyticsSnippet);
        }

        return Task.CompletedTask;
    }
}

As you can see, the TagHelperContext argument gives us context around the tag we're inspecting; in this case we want to look for the body HTML element. If we wanted to drop JavaScript or CSS into the <head></head> tags then we'd inspect the tag name "head" instead.
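
For instance, a minimal sketch of a component that drops a stylesheet reference into the head instead (the href is purely illustrative) would look like this:

public class StylesheetTagHelperComponent : TagHelperComponent
{
    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "head", StringComparison.Ordinal))
        {
            // Append a link tag just before the closing </head> tag
            output.PostContent.AppendHtmlLine(@"<link rel=""stylesheet"" href=""/css/injected.css"" />");
        }

        return Task.CompletedTask;
    }
}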

The TagHelperOutput argument gives us access to a host of properties controlling where we can place content; these include:

  • PreElement
  • PreContent
  • Content
  • PostContent
  • PostElement
  • IsContentModified
  • Attributes

In this instance we're going to append our JavaScript after the content located within the <body> tag, placing it just before the closing </body> tag.

Dependency Injection in our tag helper

With dependency injection being baked into the ASP.NET Core framework, we're able to inject dependencies into our tag helper - in this case I'm injecting our database driven consumer UA (User Analytics) code.
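
For completeness, the ICustomerAnalytics dependency used above could be as simple as the following sketch (the interface shape is assumed for this post; in reality the UA code would come from a database or configuration store):

public interface ICustomerAnalytics
{
    string CustomerUaCode { get; }
}

public class CustomerAnalytics : ICustomerAnalytics
{
    // Hard-coded here for brevity; imagine this being loaded from a data store
    public string CustomerUaCode => "UA-123456789";
}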

Registering our tag helper with our IoC container

Now all that's left to do is register our tag helper with our IoC container of choice. In this instance I'm using the built-in ASP.NET Core one from the Microsoft.Extensions.DependencyInjection package.

// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<ICustomerAnalytics, CustomerAnalytics>(); // Data source containing UA code
    services.AddSingleton<ITagHelperComponent, CustomerAnalyticsTagHelper>(); // Our tag helper
    ...
}

Now, firing up our application, we can see our JavaScript has been injected into our HTML page without us needing to touch any of our .cshtml Razor files!

...
<body>
    ...
    <script>
        (function (i, s, o, g, r, a, m) {
            i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
                (i[r].q = i[r].q || []).push(arguments)
            }, i[r].l = 1 * new Date(); a = s.createElement(o),
                m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
        })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
        ga('create', 'UA-123456789', 'auto')
        ga('send', 'pageview');
    </script>
</body>
</html>

Ordering our output

If we need to include more than one script or script file in our output, we can lean on the Order property we saw earlier; overriding it allows us to specify the order of our output. Let's see how:

// JsLoggingTagHelper.cs

public class JsLoggingTagHelper : TagHelperComponent
{
    public override int Order => 1;

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "body", StringComparison.Ordinal))
        {
            const string script = @"<script src=""/jslogger.js""></script>";
            output.PostContent.AppendHtmlLine(script);
        }

        return Task.CompletedTask;
    }
}
// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent
{
    ...
    public override int Order => 2; // Set our AnalyticsTagHelper to appear after our logger
    ...
}
// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<ICustomerAnalytics, CustomerAnalytics>();
    services.AddSingleton<ITagHelperComponent, CustomerAnalyticsTagHelper>();
    services.AddSingleton<ITagHelperComponent, JsLoggingTagHelper>();
    ...   
}

When we launch our application we should see the following HTML output:

<script src="/jslogger.js"></script>
<script>
    (function (i, s, o, g, r, a, m) {
        i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
            (i[r].q = i[r].q || []).push(arguments)
        }, i[r].l = 1 * new Date(); a = s.createElement(o),
            m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
    })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
    ga('create', 'UA-123456789', 'auto')
    ga('send', 'pageview');
</script>
</body>
</html>

Conclusion

Hopefully this post has highlighted how powerful the recent changes to tag helpers are, and how the ITagHelperComponent interface allows us to inject content into our HTML without having to touch any of our Razor files. This means that as library authors we can ease integration for our users by simply asking them to register a type with their IoC container, and we can take care of the rest!

.NET Core solution management via the command line interface

Posted on Monday, 03 Jul 2017

One of the strengths boasted by .NET Core is its new command line interface (CLI for short). By now you're probably aware that Visual Studio, Rider, Visual Studio Code etc. shell out to the .NET Core CLI under the bonnet for most .NET Core related operations, so it makes sense that what you're able to do in your favourite IDE you're also able to do via the CLI.

With this in mind, only recently did I spend the time and effort to investigate how easy it was to create and manage a project solution via the CLI, including creating the solution structure, referencing projects along the way and adding them to .NET's .sln file.

It turns out it's incredibly easy and has instantly become my preferred way of managing solutions. Hopefully by the end of this post you'll arrive at the same conclusion too.

Benefits of using the CLI for solution management

So what are the benefits of using the CLI for solution management? Let's have a look:

  • Something that has always been a fiddly endeavour of UI interactions is now so much simpler via the CLI - what's more, you don't need to open your editor of choice if you want to create references or update a NuGet package.

  • Using the CLI for creating projects and solutions is particularly helpful if (like me) you work across multiple operating systems and want to normalise your tool chain.

  • Loading an IDE just to update a NuGet package seems unnecessary

Let's begin!

Creating our solution

So let's take a look at how we can create the following project structure using the .NET Core CLI.

piedpiper
└── src
    ├── piedpiper.domain
    ├── piedpiper.sln
    ├── piedpiper.tests
    └── piedpiper.website

First we'll create our solution (.sln) file. I've always preferred to create the solution file in the top-level source folder, but the choice is yours (just bear in mind you'll need to specify the right paths in the commands used throughout the rest of this post).

# /src/

$ dotnet new sln -n piedpiper

This will create a new sln file called piedpiper.sln.

Next we use the output parameter on the dotnet new <projecttype> command to create a project in a particular folder:

# /src/

$ dotnet new mvc -o piedpiper.website

This will create an ASP.NET Core MVC application in the piedpiper.website folder in the same directory. If we were to look at our folder structure thus far it looks like this:

# /src/

$ ls -la

piedpiper.sln
piedpiper.website

Next we can do the same for our domain and test projects:

# /src/

$ dotnet new classlib -o piedpiper.domain
$ dotnet new xunit -o piedpiper.tests

Adding our projects to our solution

At this point we've got a solution file with no projects referenced; we can verify this by calling the list command like so:

# /src/

$ dotnet sln list

No projects found in the solution.

Next we'll add our projects to our solution file. Once upon a time doing this involved opening Visual Studio then adding a reference to each project manually. Thankfully this can also be done via the .NET Core CLI.

Now we add each project with the following commands, referencing each project's .csproj file:

# /src/

$ dotnet sln add piedpiper.website/piedpiper.website.csproj
$ dotnet sln add piedpiper.domain/piedpiper.domain.csproj
$ dotnet sln add piedpiper.tests/piedpiper.tests.csproj

Note: If you're using a Linux/Unix based shell you can do this in a single command using a globbing pattern!

# /src/

$ dotnet sln add **/*.csproj

Project `piedpiper.domain/piedpiper.domain.csproj` added to the solution.
Project `piedpiper.tests/piedpiper.tests.csproj` added to the solution.
Project `piedpiper.website/piedpiper.website.csproj` added to the solution.

Now when we call list on our solution file we should get the following output:

# /src/

$ dotnet sln list

Project reference(s)
--------------------
piedpiper.domain/piedpiper.domain.csproj
piedpiper.tests/piedpiper.tests.csproj
piedpiper.website/piedpiper.website.csproj

So far so good!

Adding a project reference to a project

Next up we want to start adding project references, linking our domain library to our website and test projects via the dotnet add reference command:

# /src/

$ dotnet add piedpiper.tests reference piedpiper.domain/piedpiper.domain.csproj

Reference `..\piedpiper.domain\piedpiper.domain.csproj` added to the project.

Now if we view the contents of our test project file we'll see our domain library has been referenced:

# /src/

$ cat piedpiper.tests/piedpiper.tests.csproj 

<Project Sdk="Microsoft.NET.Sdk">

  ...

  <ItemGroup>
    <ProjectReference Include="..\piedpiper.domain\piedpiper.domain.csproj" />
  </ItemGroup>

</Project>

Next we'll do the same for our website project, running the same command against it:

# /src/

$ dotnet add piedpiper.website reference piedpiper.domain/piedpiper.domain.csproj
# /src/

$ cat piedpiper.website/piedpiper.website.csproj 

<Project Sdk="Microsoft.NET.Sdk">

  ...

  <ItemGroup>
    <ProjectReference Include="..\piedpiper.domain\piedpiper.domain.csproj" />
  </ItemGroup>

</Project>

At this point we're done!

If we navigate back to our root source folder and run the build command we should see everything build successfully:

$ cd ../

# /src/

$ dotnet build

Microsoft (R) Build Engine version 15.3.388.41745 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

    piedpiper.domain -> /Users/josephwoodward/Desktop/demo/src/piedpiper.domain/bin/Debug/netstandard2.0/piedpiper.domain.dll
    piedpiper.tests -> /Users/josephwoodward/Desktop/demo/src/piedpiper.tests/bin/Debug/netcoreapp2.0/piedpiper.tests.dll
    piedpiper.website -> /Users/josephwoodward/Desktop/demo/src/piedpiper.website/bin/Debug/netcoreapp2.0/piedpiper.website.dll

Build succeeded.

    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:08.08

Adding a NuGet package to a project or updating it

Before wrapping up, let's say we want to add a NuGet package to one of our projects; we can do this using the add package command.

First navigate to the project you want to add a NuGet package to:

# /src/

$ cd piedpiper.tests/

$ dotnet add package shouldly

info : Adding PackageReference for package 'shouldly'
...
log  : Installing Shouldly 2.8.3.

Optionally we could specify a version we'd like to install using the version argument:

$ dotnet add package shouldly -v 2.8.2

Updating a NuGet package

Updating a NuGet package to the latest version is just as easy: simply use the same command without the version argument:

$ dotnet add package shouldly

Conclusion

If you've managed to get this far then well done - hopefully by now you've realised how easy creating and managing a solution is using the new .NET Core command line interface.

One of the great powers of using the CLI is you can now turn creating the same project structure into a handy bash script which you could alias and reuse!

#!/bin/bash

echo "Enter project name, followed by [ENTER]:"
read projname

echo "Creating solution for $projname"

dotnet new sln -n $projname

dotnet new mvc -o $projname.website
dotnet new classlib -o $projname.domain
dotnet new xunit -o $projname.tests

echo "Adding projects to solution"
dotnet sln add **/*.csproj

echo "Referencing projects"
dotnet add $projname.website reference $projname.domain/$projname.domain.csproj
dotnet add $projname.tests reference $projname.domain/$projname.domain.csproj

Happy coding!

Tips on starting and running a .NET user group

Posted on Thursday, 22 Jun 2017

As someone that organises and runs the .NET South West user group in Bristol, I've had a number of people online and in person approach me expressing an interest in starting a .NET focused meet up but unsure where to start; so much so that I thought it would be good to summarise the challenges and hurdles of running a meet up in a succinct blog post.

.NET South West meet up

Running a user group isn't a walk in the park, but it's not hard or particularly time consuming either. Hopefully this post will provide some valuable information and reassurance to those looking to create and foster a local .NET focused community of people wishing to learn, share, expand their knowledge and meet local developers with similar passions and interests.

1. You can do it

The very first question that can play through your mind is whether you're capable of starting and running such a meet up. If you're having any self-doubts about whether you're knowledgeable enough to run a user group, or whether you have the confidence to organise it - don't.

Running a user group is an incredibly rewarding experience that starts off small and grows, and as it grows you grow with it. Everyone that attends a user group is there to learn, and that applies to the organiser(s) too. So don't let any hesitations or self-doubts get in your way.

2. Gauging the interest in a .NET user group

One of the first hurdles you face when starting a user group is trying to gauge the level of interest that exists in your local area.

I've found a great way to gauge interest is to simply create a group on the popular user group organising site meetup.com, informing people that you're interested in seeing what the level of interest is like. You can create an event with no date, set the title to "To be announced" and leave it active for a few months. Meetup.com notifies people with similar interests of the new user group, and over time you start to get people joining the group waiting for the first meet to be announced. In the meantime your meet up page has a forum where you can start conversations with new members, look for assistance or ask if anyone knows of a suitable venue.

This time is a great opportunity to get to know local developers before the meet up.

3. Having an online presence

When it comes to organising a meet up, websites like meetup.com make the whole process a breeze. The service isn't free (starting at $9.99 - see more here) but personally I would say it's worth it in terms of how much effort it saves you. MeetUp provides services such as:

  • Posting and announcing meet ups to members
  • Sending out regular emails to your user group
  • Increased meet up visibility to local developers on the platform
  • Semi-brandable pages (though this could be better)
  • The ability to add sponsors within your meet up page

If you are on a budget then there are free alternatives such as the free tier of EventBrite which you can link to from a website you could set up.

4. Many hands make light work

Starting a meet up requires a lot of work, but once the meet up is running the number of hours required to keep it ticking along reduces dramatically. That said, there are times where you may have personal commitments that make it difficult to focus on the meet up - so why not look to see if anyone else is interested in helping?

If you don't have any close friends or work colleagues that are interested in helping, you can mention that you're looking for help on the meet up page discussed previously. There's also nothing to stop you from talking to members once the meet up is under way to see if anyone is interested in helping out. If you do have people interested in helping then why not create a Slack channel for your group where you can stay organised.

5. The Venue

Next up is the venue; this is often the most challenging part, as most venues will require payment for a room. All is not lost though, as there are a few options available to you:

  • Look for sponsorship to cover the cost of the venue - some companies (such as recruitment companies) are often open to ideas of sponsoring meet ups in one way or another for publicity. Naturally this all depends on your stance around having recruitment companies at your meet up.

  • Approach software companies to see if they are interested in hosting the event. Often you'll find software companies are geared up for hosting meet ups and happy to do so in exchange for interacting with the community (and potentially saving them recruitment costs).

  • Small pubs - I know of a few meet up organisers who host at pubs in a back room as the venue are often aware that a few people stay behind to have a few drinks so it works in their favour too.

Ultimately you want to ensure you have consistency, so talk to the venue and make it clear that you're looking for a long-term solution.

6. Speakers

Once you've got your venue sorted, the next task you face (and this will be a regular one) is sourcing speakers. Luckily, finding speakers is often reasonably simple, and once your meet up becomes established you'll start to find speakers approaching you with interest in giving a talk. I would also recommend looking at other nearby meet ups for past speakers and making contact with them via Twitter. Networking at conferences is also a great way of finding potential speakers.

In addition to the aforementioned suggestions, Microsoft also have a handy Microsoft Evangelists page (UK only) for finding evangelists nearby who are often more than happy to travel to your user group to give a talk.

Finally, encourage attendees of your meet up to give talks. You're trying to foster a community, so try to drive engagement and ownership by opening up space for short 15 minute lightning talks.

7. Sponsorship / Competitions and Prizes

Once your user group is off the ground I would recommend reaching out to software companies to see if they provide sponsorship for meet ups in the shape of prize licences or extended trials - for instance, JetBrains are well known for their awesome community support programme which I'd highly recommend taking a look at.

Some companies require your meet up to be a certain size; others are more flexible on what they can provide, often being happy to ship swag such as stickers and t-shirts instead, which can be given away as prizes during your meet up (though if you're accepting swag from abroad then do be sure to clarify import tax so you don't get stung).

Swag and prizes aren't essential for a meet up, but they're something worth considering to spice things up a bit.

Go for it

Hopefully this post has given you some ideas if you are considering setting up a meet up. Organising and running a meet up is an extremely satisfying responsibility, and it's great seeing a community of developers coming together to share knowledge and learn from one another. So what are you waiting for? Go for it!

Retrospective of DDD 12 in Reading

Posted on Monday, 12 Jun 2017

Yesterday was my second time both attending and speaking at the DDD conference run out of the Microsoft offices in Reading, and as a tradition I like to finish the occasion by writing a short retrospective post highlighting the day.

Arrival

Living just two hours' drive from Reading I decided to drive up early in the morning with fellow Just Eater Stuart Lang who was also giving a talk. After arriving and grabbing our speaker polo shirts we headed to the speaker room to say hello to the other speakers - some of whom I know from previous conferences and through organising DDD South West.

Speakers' Room

I always enjoy spending time in the speakers' room. As a relative newcomer to speaking I find it's a great opportunity to get some tips from more experienced speakers as well as geek out about programming. In addition, I still had some preparation to do, so it was a nice quiet room in which to refine and tweak my talk slides.

Talk 1 - Goodbye REST; Hello GraphQL by Sandeep Singh

Even though I had preparation to take care of for my post-lunch talk, I was determined to attend Sandeep Singh's talk on GraphQL, as it's a technology I've heard lots about via podcasts and have been keen to learn more about. In addition, working at Just Eat, where we have a lot of distributed services that are extremely chatty over HTTP, I was interested to see if and where GraphQL could help.

Having met Sandeep Singh for the first time at DDD South West, he's clearly a knowledgeable guy, so I was expecting great things, and he delivered. The talk was very informative and by the end of it Sandeep had demonstrated the power of GraphQL (along with well-balanced considerations that need to be made) and answered the majority of questions I had forming in my notes as the talk progressed. It's definitely sparked my interest in GraphQL and I'm keen to start playing with it.

My talk - Building a better Web API architecture using CQRS

Having submitted two talks to DDD Reading (this and my Razor Deep Dive I delivered at DDD South West), the one that received the most votes was this talk, a topic and architectural style I've been extremely interested in for a number of years now (long before MediatR was a thing!).

Having spoken at DDD Reading before, this year my talk was held in a rather intimidating room called Chicago that seats up to 90, with a stage overlooking the attendees. All in all I was happy with the way the talk went; however, I did burn through my slides far faster than I did during practice. Luckily the attendees had plenty of questions, so I had plenty of opportunity to answer and expand on them in the remaining time. I'll chalk this down to experience and learn from it.

I must say that one of my concerns whilst preparing the talk was the split opinions around what CQRS really is, and how it differs from Bertrand Meyer's formulation of CQS (coincidentally, there was a healthy debate around the definition moments before my talk in the speakers' room between two speakers well-versed in the area!).

Talk 3 - Async in C# - The Good, the Bad and the Ugly by Stuart Lang

Having worked with Stuart for a year now and known him for slightly longer, it's clear that his knowledge of async/await is spot on and certainly far deeper than that of any other developer I've met. Having seen a slightly different version of Stuart's talk delivered internally at Just Eat I was familiar with the narrative; however, I was keen to attend this talk because C#'s async language construct is a deep area and one I'm interested in.

Overall the talk went really well, with a nice break in the middle allowing time for questions before moving on (something I may have to try in my talks going forward).

Conclusion

Overall DDD 12 was an awesome day. I'd love to have attended more talks and spent more time speaking with people, but being keen to deliver my talk to the best of my ability I had work to do. Nonetheless, after my talk was over it was great catching up with familiar faces and meeting new people (easily one of my favourite parts of a conference, as I'm a bit of a chatterbox!).

I'll close this post with a massive thanks to the event organisers (having helped organise DDD South West this year I have full appreciation for the hard work and time that goes into organising such an event), and also the sponsors - without them the conference would not have been possible.

Until next year!

Another day, another fantastic DDD (DDD South West 7)

Posted on Friday, 12 May 2017

Saturday 6th of May marked the day of another great DDD South West event. Having attended other DDD events around the UK I've always felt they had a special community feel to them, a feeling I've not felt at other conferences. This year's DDD South West event was particularly special to me not only because I was selected to speak at the conference, but because this year I was part of the organisation team.

This post is just a short summary of the highs and lows of organising the conference and the day itself.

Organising the conference

This year I was honoured to be part of the organisation team for DDD South West, and I loved every minute of it. Myself and the other organisers (there were 5 of us in total, some of whom have been organising the conference for well over 5 or 6 years!) would hold regular meetings via Skype, breaking down responsibilities such as sponsorship, catering, speaker related tasks and finance. Initially these meetings were about a month apart, but as the conference drew closer and the pressure started to set in, we would meet weekly.

During the organising of DDD South West I've gained a true appreciation for the amount of effort conference organisers (especially those that run non-profit events in their spare time, such as those I've had the pleasure of working with) put into organising an event for the community.

On the day everything went as expected with no hiccups, though as I was speaking on the day I was definitely a lot more stressed than I would have been otherwise. After the event we all headed over to the Just Eat offices for an after party, which I'll cover shortly.

For more information on the day, there are two fantastic write ups by Craig Phillips and Dan Clarke that I'd highly recommend reading.

ASP.NET Core Razor Deep Dive talk

Whilst organising DDD South West 7, I figured why not pile the stress on and submit a few sessions. Last year I caught the speaking bug and this year I was keen to continue to pursue it, so I submitted three sessions, each on a very different subject, and was quite surprised to see the ASP.NET Core Razor Deep Dive was selected. It's not the sexiest topic to talk about, but nonetheless it was a great opportunity to share some experience and give people information they can take away and directly apply in real life (something I always try to do when putting together a talk).

The talk itself was focused on the new Razor changes and features introduced in ASP.NET Core, why and where you'd use them. The topics included:

  • Razor as a view engine and its syntax
  • Tag helpers and how powerful they are
  • Creating your own tag helpers for some really interesting use cases - this part was especially great as I could see a lot of people in the audience having the "light bulb" moment. This brings me great joy as I know there's something they were able to take away from the talk.
  • Why Partial Views and Child Actions are limiting
  • View Components and how you can use them to create more modular, reusable views
  • Finally, the changes people can expect to see in Razor when ASP.NET Core 2.0 is released (ITagHelperComponents and Razor Pages)

Overall I was really happy with the talk and the turnout. The feedback I received was great, with lots of mentions of the word "engaging" (which as a relatively new speaker still trying to find his own style, is always positive to hear).

DDD South West 7 after party

Once all was done and dusted and the conference drew to a close, a large majority of us took a five-minute stroll over to the Just Eat offices for an after party where free pizza and beer were on offer for the attendees (thanks Just Eat!).

After a long day of ensuring the event was staying on track coupled with the stresses of talking, it was great to be able to unwind and enjoy spending time mingling with the various attendees and sponsors, all of whom made the event possible.

Bring on DDD South West 8!

C# 7 ValueTuple types and their limitations

Posted on Thursday, 20 Apr 2017

Having been experimenting with and reading lots about C# 7's tuples recently, I thought I'd summarise some of the more interesting aspects of this new C# language feature whilst highlighting some areas and limitations that may not be apparent when first using them.

Hopefully by the end of this post you'll have a firm understanding of the new feature and how it differs from the Tuple type introduced in .NET 4.

A quick look at tuple types as a language feature

Prior to C# 7, .NET's tuples were an awkward, somewhat retrofitted approach to what's a powerful language feature. As a result you don't see them being used as much as they are in other languages like Python or, to some extent, Go (which doesn't support tuples as such, but has many of the features they provide, such as multiple return values) - with this in mind it behoves me to briefly explain what tuples are and why you'd use them, for those that may not have touched them before.

So what are tuples and where would you use them?

The tuple type's main strength lies in allowing you to group values into a closely related data structure (much like creating a class to represent more than one value), which means they're particularly useful in cases such as returning more than one value from a method. For instance:

public class ValidationResult {
    public string Message { get; set; }
    public bool IsValid { get; set; }
}

var result = ValidateX(x);
if (!result.IsValid)
{
    Logger.Log(Error, result.Message);
}

Whilst there's nothing wrong with this example, sometimes we don't want to have to create a type to represent a set of data - we want types to work for us, not against us; this is where the tuple type's utility lies.

In fact, in languages such as Go (which allows multiple return values from a function) we see this pattern used extensively throughout the standard library.

bytes, err := ioutil.ReadFile("file.json")
if err != nil {
    log.Fatal(err)
}

Multiple return values can also stop you from needing to use the convoluted TryParse method pattern with out parameters.
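
Jumping ahead slightly to the C# 7 syntax covered below, here's a small sketch comparing the two approaches:

var input = "42";

// The classic TryParse pattern forces an out parameter on the caller
if (int.TryParse(input, out var parsed))
{
    Console.WriteLine(parsed); // 42
}

// A tuple-returning wrapper keeps the call site declarative instead
(bool Success, int Value) TryParseInt(string value)
    => (int.TryParse(value, out var result), result);

var (ok, number) = TryParseInt(input);
if (ok)
{
    Console.WriteLine(number); // 42
}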

Now we've got that covered and we're all on the same page, let's continue.

In the beginning there was System.Tuple<T1, T2, .., T7>

Verbosity
Back in .NET 4 we saw the appearance of the System.Tuple<T> class which introduced a verbose and somewhat awkward API:

var person = new Tuple<string, int>("John Smith", 43);
Console.WriteLine($"Name: {person.Item1}, Age {person.Item2}");
// Output: Name: John Smith, Age: 43

Alternatively there's a static factory method that cleared things up a bit:

var person = Tuple.Create("John Smith", 43);

But there was still room for improvement such as:

No named elements
One of the weaknesses of the System.Tuple type is that you have to refer to your elements as Item1, Item2 etc instead of by their 'named' version (allowing you to unpack a tuple and reference the properties directly) like you can in Python:

name, age = person_tuple
print name

Garbage collection pressure
In addition the System.Tuple type is a reference type, meaning you pay the penalty of a heap allocation, thus increasing pressure on the garbage collector.

public class Tuple<T1> : IStructuralEquatable, IStructuralComparable, IComparable

Nonetheless the System.Tuple type scratched an itch and solved a problem, especially if you owned the API.

C# 7 Tuples to the rescue

With the introduction of the System.ValueTuple type in C# 7, a lot of these problems have been solved (it's worth mentioning that if you want to use or play with the new tuple type you'll need to download the System.ValueTuple NuGet package).

Now in C# 7 you can do such things as:

Tuple literals

// This is awesome and really clears things up; we can even directly reference the named value!

var person = (Name: "John Smith", Age: 43);
Console.WriteLine(person.Name); // John Smith

Tuple (multiple) return types

(string, int) GetPerson()
{
    var name = "John Smith";
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Item1); // John Smith

Even named Tuple return types!

(string Name, int Age) GetPerson()
{
    var name = "John Smith";
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Name); // John Smith

If that wasn't enough you can also deconstruct types:

public class Person
{
    public string Name => "John Smith";
    public int Age => 43;

    public void Deconstruct(out string name, out int age)
    {
        name = Name;
        age = Age;
    }
}

...

var person = new Person();
var (name, age) = person;

Console.WriteLine(name); // John Smith

As you can see the System.ValueTuple greatly improves on the older version, allowing you to write far more declarative and succinct code.

It's a value type, baby!
In addition (if the name hadn't given it away!) C# 7's tuple type is now a value type, meaning there's no heap allocation and one less de-allocation to worry about when compacting the GC heap. This means the ValueTuple type can be used in more performance-critical code.

Now going back to our original example where we created a type to represent the return value of our validation method, we can delete that type (because deleting code is always a great feeling) and clean things up a bit:

var (message, isValid) = ValidateX(x);
if (!isValid)
{
    Logger.Log(Log.Error, message);
}

Much better! We've now got the same code without the need to create a separate type just to represent our return value.

C# 7 Tuple's limitations

So far we've looked at what makes the ValueTuple special, but in order to know the full story we should look at what limitations exist so we can make an educated decision on when and where to use them.

Let's take the same person tuple and serialise it to a JSON object. With our named elements we should expect to see an object that resembles our tuple.

var person = (Name: "John", Last: "Smith");
var result = JsonConvert.SerializeObject(person);

Console.WriteLine(result);
// {"Item1":"John","Item2":"Smith"}

Wait, what? Where have our keys gone?

To understand what's going on here we need to take a look at how ValueTuples work.

How the C# 7 ValueTuple type works

Let's take our GetPerson method example that returns a named tuple and check out the decompiled source. No need to install a decompiler for this; a really handy website called tryroslyn.azurewebsites.net will do everything we need.

// Our code
using System;
public class C {
    public void M() {
        var person = GetPerson();
        Console.WriteLine(person.Name + " is " + person.Age);
    }
    
    (string Name, int Age) GetPerson()
    {
        var name = "John Smith";
        var age = 32;
        return (name, age);
    }
}

You'll see that when de-compiled, the GetPerson method is simply syntactic sugar for the following:

// Our code de-compiled
public class C
{
    public void M()
    {
        ValueTuple<string, int> person = this.GetPerson();
        Console.WriteLine(person.Item1 + " is " + person.Item2);
    }
    [return: TupleElementNames(new string[] {
        "Name",
        "Age"
    })]
    private ValueTuple<string, int> GetPerson()
    {
        string item = "John Smith";
        int item2 = 32;
        return new ValueTuple<string, int>(item, item2);
    }
}

If you take a moment to look over the de-compiled source you'll see two areas of particular interest to us:

First of all, the references to our named elements in the Console.WriteLine() call have gone and been replaced with Item1 and Item2. What's happened to our named elements? Looking further down the code you'll see they've actually been pulled out and added via the TupleElementNames attribute.

...
[return: TupleElementNames(new string[] {
    "Name",
    "Age"
})]
...

This is because the ValueTuple type's named elements are erased by the compiler, meaning there's no runtime representation of them. In fact, if we were to view the IL (within the TryRoslyn website switch the Decompiled dropdown to IL), you'll see any mention of our named elements has completely vanished!

IL_0000: nop // Do nothing (No operation)
IL_0001: ldarg.0 // Load argument 0 onto the stack
IL_0002: call instance valuetype [System.ValueTuple]System.ValueTuple`2<string, int32> C::GetPerson() // Call method indicated on the stack with arguments
IL_0007: stloc.0 // Pop a value from stack into local variable 0
IL_0008: ldloc.0 // Load local variable 0 onto stack
IL_0009: ldfld !0 valuetype [System.ValueTuple]System.ValueTuple`2<string, int32>::Item1 // Push the value of field of object (or value type) obj, onto the stack
IL_000e: ldstr " is " // Push a string object for the literal string
IL_0013: ldloc.0 // Load local variable 0 onto stack
IL_0014: ldfld !1 valuetype [System.ValueTuple]System.ValueTuple`2<string, int32>::Item2 // Push the value of field of object (or value type) obj, onto the stack
IL_0019: box [mscorlib]System.Int32 // Convert a boxable value to its boxed form
IL_001e: call string [mscorlib]System.String::Concat(object, object, object) // Call method indicated on the stack with arguments
IL_0023: call void [mscorlib]System.Console::WriteLine(string) // Call method indicated on the stack with arguments
IL_0028: nop  // Do nothing (No operation)
IL_0029: ret  // Return from method, possibly with a value

So what does that mean to us as developers?

No reflection on named elements

The absence of named elements in the compiled source means it's not possible to retrieve those element names via reflection, which limits ValueTuple's utility.

This is because under the bonnet the compiler is erasing the named elements and reverting to the Item1 and Item2 properties, meaning our serialiser doesn't have access to the properties.
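
If you do need the names to survive serialisation, the simplest workaround (one sketch among several options) is to project the tuple onto an anonymous type or a dedicated class before handing it to the serialiser:

var person = (Name: "John", Last: "Smith");

// The anonymous type's property names exist at runtime, so they serialise as expected
var result = JsonConvert.SerializeObject(new { person.Name, person.Last });

Console.WriteLine(result);
// {"Name":"John","Last":"Smith"}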

I would highly recommend reading Marc Gravell's Exploring tuples as a library author post where he discusses a similar hurdle when trying to use tuples within Dapper.

No dynamic access to named elements

This also means that casting your tuple to a dynamic object results in the loss of the named elements. This can be witnessed by running the following example:

var person = (Name: "John", Last: "Smith");
var dynamicPerson = (dynamic)person;
Console.WriteLine(dynamicPerson.Name);

This results in the following RuntimeBinderException:

Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException: 'System.ValueTuple<string,string>' does not contain a definition for 'Name'
   at CallSite.Target(Closure , CallSite , Object )
   at CallSite.Target(Closure , CallSite , Object )
   at TupleDemo.Program.Main(String[] args) in /Users/josephwoodward/Dev/TupleDemo/Program.cs:line 16

Thanks to Daniel Crabtree's post for highlighting this!

No using named Tuples in Razor views either (unless they're declared in your view)

Naturally the name erasure in C# 7 tuples also means that you cannot use the names in your view from your view models. For instance:

public class ExampleViewModel {

    public (string Name, int Age) Person => ("John Smith", 30);

}
public class HomeController : Controller
{
    ...
    public IActionResult About()
    {
        var model = new ExampleViewModel();

        return View(model);
    }
}
// About.cshtml
@model TupleDemo3.Models.ExampleViewModel

<h1>Hello @Model.Person.Name</h1>

Results in the following error:

'ValueTuple<string, int>' does not contain a definition for 'Name' and no extension method 'Name' accepting a first argument of type 'ValueTuple<string, int>' could be found (are you missing a using directive or an assembly reference?)

Though switching the print statement to @Model.Person.Item1 outputs the result you'd expect.

Conclusion

That's enough about tuples for now. Some of the examples used in this post aren't approaches you'd use in real life, but hopefully they demonstrate some of the limitations of the new type and where you can and can't use C# 7's new ValueTuple type.