Latest Blog Posts

Going serverless with .NET Core, AWS Lambda and the Serverless framework

Posted on Wednesday, 08 Nov 2017

Recently I gave a talk titled 'Going serverless with AWS Lambda' where I briefly went through what serverless is, the architectural advantages it gives you and the trade-offs to consider. Halfway through the talk I went on to demonstrate the Serverless framework, and was surprised by the number of people currently experimenting with AWS Lambda or Azure Functions who had never heard of it - so much so that I thought I'd write a post demonstrating its value.

What is the Serverless framework and what problem does it aim to solve?

Serverless, which I'll refer to as the Serverless framework to avoid confusion, is a cloud provider agnostic toolkit designed to aid operations around building, managing and deploying serverless components, whether full-blown serverless architectures or disparate functions (or FaaS).

To give a more concrete example, the Serverless framework aims to provide developers with an interface that abstracts away the vendor-specific cloud APIs and configuration, whilst simultaneously providing additional tooling to test and deploy functions with ease - perfect for rapid feedback or for integrating into your CI/CD pipelines.

Let's take a look.

Getting started with .NET Core and the Serverless Framework

First of all we're going to need to install the Serverless framework:

$ npm install serverless -g

Next let's see what Serverless framework templates are currently available:

$ serverless create --help

Note: In addition to the serverless command line argument, sls is a nice shorthand equivalent, producing the same results:

$ sls create --help
Template for the service. Available templates:
"aws-nodejs",
"aws-python",
"aws-python3",
"aws-groovy-gradle",
"aws-java-maven",
"aws-java-gradle",
"aws-scala-sbt",
"aws-csharp",
"aws-fsharp",
"azure-nodejs",
"openwhisk-nodejs",
"openwhisk-python",
"openwhisk-swift",
"google-nodejs"

To create a project from the .NET Core template we use the --template option:

$ serverless create --template aws-csharp --name demo

Let's take a moment to look at the files created by the Serverless framework and go through the more noteworthy ones:

$ ls -la
.
..
.gitignore
Handler.cs
aws-csharp.csproj
build.cmd
build.sh
global.json
serverless.yml

Handler.cs
Opening Handler.cs reveals it contains the function that will be invoked in response to an event, such as a notification, an S3 update and so forth.

//Handler.cs

[assembly:LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
namespace AwsDotnetCsharp
{
    public class Handler
    {
       public Response Hello(Request request)
       {
           return new Response("Go Serverless v1.0! Your function executed successfully!", request);
       }
    }
    ...
}

serverless.yml
This is where the magic happens. The serverless.yml file is the schema that defines the configuration of your Lambda(s) (or Azure Functions) and how they interact with your wider architecture. Once configured, the Serverless framework generates a CloudFormation template from this file which AWS uses to provision the appropriate infrastructure.
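To give a rough idea of its overall shape, here's a minimal sketch of the top-level sections you'll typically find in the generated file for the aws-csharp template - the service name and artifact path below are illustrative placeholders rather than the exact generated values:

# serverless.yml (illustrative sketch)

service: demo

provider:
  name: aws
  runtime: dotnetcore1.0

package:
  artifact: bin/release/netcoreapp1.0/deploy-package.zip

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello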

global.json
Open global.json and you'll notice it's pinned to version 1.0.4 of the .NET Core SDK. This is because, at the time of writing, .NET Core 2.0 isn't supported by AWS Lambda, though Amazon have promised support is on its way.

{
  "sdk": {
    "version": "1.0.4"
  }
}

Now, let's go ahead and create our Lambda.

Creating our .NET Core Lambda

For the purpose of this demonstration we're going to create a Lambda that's reachable via HTTP. In order to do this we're going to need to stand up an API Gateway in front of it. Normally this would require logging into the AWS Console and manually configuring an API Gateway, so it's a perfect example of how the Serverless framework can take care of a lot of the heavy lifting.

Let's head over to our serverless.yml file and scroll down to the following section:

# serverless.yml
functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
#    events:
#      - http:
#          path: users/create
#          method: get
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill
#      - iot:
#          sql: "SELECT * FROM 'some_topic'"
#      - cloudwatchEvent:
#          event:
#            source:
#              - "aws.ec2"
#            detail-type:
#              - "EC2 Instance State-change Notification"
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp

#    Define function environment variables here
#    environment:
#      variable2: value2
...

This part of the serverless.yml file describes the various events that our Lambda should respond to. As we're going to be using API Gateway as our method of invocation we can remove a large portion of this for clarity, then uncomment the event and its properties pertaining to http:

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
   events:
     - http:
         path: users/create
         method: get

#    Define function environment variables here
#    environment:
#      variable2: value2
...

Creating our .NET Core C# Lambda

Because we're using HTTP as our method of invocation, we need to add the Amazon.Lambda.APIGatewayEvents NuGet package to our Lambda and reference the correct request and return types. We can do that using the following .NET Core CLI command:

$ dotnet add package Amazon.Lambda.APIGatewayEvents

Now let's open our Handler.cs file and update our Lambda to return the correct response type:

// Handler.cs - requires using System.Net, System.Collections.Generic,
// Amazon.Lambda.Core and Amazon.Lambda.APIGatewayEvents
public APIGatewayProxyResponse Hello(APIGatewayProxyRequest request, ILambdaContext context)
{
    // Log entries show up in CloudWatch
    context.Logger.LogLine("Example log entry\n");

    var response = new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.OK,
        Body = "{ \"Message\": \"Hello World\" }",
        Headers = new Dictionary<string, string> {{ "Content-Type", "application/json" }}
    };

    return response;
}

Now we're set. Let's move on to deploying our Lambda.

Registering an account on AWS Lambda

If you're reading this then I assume you already have an account with AWS; if not, you're going to need to head over to their registration page and sign up.

Setting AWS credentials in Serverless framework

In order to enable the Serverless framework to create our Lambdas and the accompanying infrastructure around them, we're going to need to set up our AWS credentials. The Serverless framework documentation does a good job of explaining how to do this, but for those who already know how to generate keys in AWS, you can set your credentials via the following command:

serverless config credentials --provider aws --key <Your Key> --secret <Your Secret>

Build and deploy our .NET Core Lambda

Now we're good to go!

Let's verify our setup by deploying our Lambda; this will give us an opportunity to see just how rapid the feedback cycle can be when using the Serverless framework.

At this point, if we weren't using the Serverless framework we'd have to manually package our Lambda into a .zip file with a specific structure, then log into AWS to upload the zip and create our infrastructure (in this instance, the API Gateway in front of our Lambda). But as we're using the Serverless framework, it'll take care of all of the heavy lifting.

First let's build our .NET Core Lambda:

$ sh build.sh

or if you're on Windows:

$ build.cmd

Next we'll deploy it.

Deploying Lambdas using the Serverless framework is performed using the deploy argument. In this instance we'll set the output to verbose using the -v flag so we can see what the Serverless framework is doing:

$ serverless deploy -v

Once completed you should see output similar to the following:

$ serverless deploy -v

Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
CloudFormation - UPDATE_IN_PROGRESS - 
...
CloudFormation - UPDATE_COMPLETE - AWS::CloudFormation::Stack - demo-dev
Serverless: Stack update finished...
Service Information
service: demo
stage: dev
region: us-east-1
api keys:
  None
endpoints:
  GET - https://b2353kdlcc.execute-api.us-east-1.amazonaws.com/dev/users/create
functions:
  hello: demo-dev-hello

Stack Outputs
HelloLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:082958828786:function:demo-dev-hello:2
ServiceEndpoint: https://b2353kdlcc.execute-api.us-east-1.amazonaws.com/dev
ServerlessDeploymentBucketName: demo-dev-serverlessdeploymentbucket-1o4sd9lppvgfv

Now if we were to log into our AWS account and navigate to the CloudFormation page in the us-east-1 region (see the console output), we'd see that the Serverless framework has taken care of all of the heavy lifting in spinning our stack up.

Let's navigate to the endpoint address returned in the console output which is where our Lambda can be reached.

If all went as expected we should be greeted with a successful response, awesome!

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 28
Connection: keep-alive
Date: Wed, 08 Nov 2017 02:51:33 GMT
x-amzn-RequestId: bb076390-c42f-11e7-89fc-6fcb7a11f609
X-Amzn-Trace-Id: sampled=0;root=1-5a027135-bc15ce531d1ef45e3eed7a9b
X-Cache: Miss from cloudfront
Via: 1.1 3943e81340bd903a74d536bc9599c3f3.cloudfront.net (CloudFront)
X-Amz-Cf-Id: ZDHCvVSR1DAPVUfrL8bU_IuWk3aMoAotdRKBjUIor16VcBPkIiNjNw==

{
  "Message": "Hello World"
}

In addition to invoking our Lambda manually via an HTTP request, we could also invoke it using the following Serverless framework command, where the -l flag will return any log output:

$ serverless invoke -f hello -l

{
    "statusCode": 200,
    "headers": {
        "Content-Type": "application/json"
    },
    "body": "{ \"Message\": \"Hello World\" }",
    "isBase64Encoded": false
}
--------------------------------------------------------------------
START RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3 Version: $LATEST
Example log entry

END RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3
REPORT RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3	Duration: 17.02 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 34 MB

Making modifications to our Lambda

At this point, if we were to make any further code changes we'd have to re-run the build script (build.sh or build.cmd depending on your platform), followed by the Serverless framework's deploy function command:

$ serverless deploy function -f hello

However if we needed to modify the serverless.yml file then we'd have to deploy our infrastructure changes via the deploy command:

$ serverless deploy -v

The difference being the former is far faster, as it only deploys the source code, whereas the latter will update your CloudFormation stack to reflect the changes made in your serverless.yml configuration.

Command recap

So, let's recap on the more important commands we've used:

Create our Lambda:

$ serverless create --template aws-csharp --name demo

Deploy our infrastructure and code (you must have built your Lambda beforehand using one of the build scripts):

$ serverless deploy -v

Or just deploy the changes to our hello function (again, we need to have built our Lambda as above):

$ serverless deploy function -f hello

At this point we can invoke our Lambda, where the -l flag determines whether we want to include log output:

$ serverless invoke -f hello -l

If our functions were written in Python or Node.js then we could optionally use the invoke local command; however this isn't available for .NET Core.

$ serverless invoke local -f hello

Once finished with our demo function we can clean up after ourselves using the remove command:

$ serverless remove

Adding more Lambdas

Before wrapping up, imagine we wanted to add more Lambdas to our project. To do this we can simply add another function to the functions section of our serverless.yml configuration file.

From this:

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello
    events:
     - http:
         path: users/create
         method: get

To this:

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello
    events:
     - http:
         path: users/create
         method: get
  world:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler2::World
    events:
     - http:
         path: users/
         method: get

At this point all we'd need to do is create a new Handler class (for the sake of this demonstration I called it Handler2.cs) and make sure we set the handler property in our serverless.yml configuration appropriately. We'd also need to name our new Handler2 function World, to match the handler address in our serverless configuration.
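To illustrate (and this is just a minimal sketch mirroring the shape of the existing Hello handler, with a placeholder response body and the same usings as Handler.cs), Handler2.cs might look something like this:

// Handler2.cs (illustrative sketch)

namespace AwsDotnetCsharp
{
    public class Handler2
    {
        public APIGatewayProxyResponse World(APIGatewayProxyRequest request, ILambdaContext context)
        {
            return new APIGatewayProxyResponse
            {
                StatusCode = (int)HttpStatusCode.OK,
                Body = "{ \"Message\": \"Hello from World\" }",
                Headers = new Dictionary<string, string> {{ "Content-Type", "application/json" }}
            };
        }
    }
}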

As the additional function will require its own set of infrastructure we would need to run the build script and then use the following command to regenerate our stack:

$ serverless deploy -v

Once deployed, we're able to navigate to our second function just as we did our first.

We can also deploy our functions independently of one another by supplying the appropriate function name when executing the deploy function command:

$ serverless deploy function -f world

Conclusion

Hopefully this post has given you an idea as to how the Serverless framework can help you develop and manage your functions, whether using Azure, AWS, Google or any of the other providers supported.

If you're interested in learning more about the Serverless framework then I'd highly recommend checking out their documentation, which is plentiful and very well written.

REST Client for VS Code, an elegant alternative to Postman

Posted on Wednesday, 18 Oct 2017

For some time now I've been a huge proponent of Postman. Working in an environment with a large number of remote services meant Postman's ease of generating requests, its ability to manage collections, view historic requests and so forth made it my go-to tool for hand-crafted HTTP requests. However there have always been features I felt were missing; one such feature was the ability to copy and paste a raw RFC 2616 compliant HTTP request (including request method, headers and body) directly into Postman and fire it off without needing to manually tweak the request. This led me to a discussion on Twitter, where Darrel Miller recommended I check out the REST Client extension for Visual Studio Code.

REST Client for Visual Studio Code

After installing REST Client, the first thing I noticed was how elegant it is. Simply create a new tab, paste in your raw HTTP request (ensuring the tab's Language Mode is set to either HTTP or Plaintext - more on this later) and in no time at all you'll see a "Send Request" button appear above your HTTP request, allowing you to execute the request as is; no further modifications are required to tell REST Client how to parse or format it.

Features

To give you a firm grasp of why you should consider adding REST Client to your tool chain, here are a few of the features that particularly stood out to me, organised in an easily consumable list format, because we all like lists:

No BS request building

The simplest form of HTTP request you can send is to paste in a normal HTTP GET URL like so:

https://example.com/comments/1

Note: You can either paste your requests into a Plaintext window, where you'll need to highlight the request and press the Send Request keyboard shortcut (Ctrl+Alt+R for Windows, or Cmd+Alt+R for macOS), or set the tab's Language Mode to HTTP, where a "Send Request" button will appear above the HTTP request.

If you want more control over your request then a raw HTTP request will do:

POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    "name": "sample",
    "time": "Wed, 21 Oct 2015 18:27:50 GMT"
}

Once loaded you'll see the response appear in a separate pane. A nice detail that I really liked is the ability to hover my cursor over the request timer and get a breakdown of duration details, including times surrounding Socket, DNS, TCP, First Byte and Download.

Saving requests as collections for later use is as simple as a plain text .http file

Following the theme of low-friction elegance, it's nice that requests saved for later use (or checked into your component's source control) are stored as simple plain text documents with an .http file extension.

Breaking down requests

One of the gripes I had with Postman was requests separated by tabs. If I had a number of requests I was working with, they'd quickly get lost amongst the number of tabs I tend to have open.

REST Client doesn't suffer the same fate, as requests can be grouped in your documents and separated by comments: three or more hash characters (#) are treated as delimiters between requests by REST Client.
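As a quick illustration (using placeholder URLs), a single .http file containing two requests separated by a ### delimiter might look like this:

GET https://example.com/comments/1 HTTP/1.1

###

POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    "name": "sample"
}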

Environments and Variables

REST Client has a concept of Environments and Variables, meaning if you work with different environments (ie QA, Staging and Production), you can easily switch between environments configured in the REST Client settings (see below), changing the set of variables configured without having to modify the requests.

Environments

"rest-client.environmentVariables": {
    "local": {
        "host": "localhost",
        "token": "test token"
    },
    "production": {
        "host": "example.com",
        "token": "product token"
    }
}

Variables

Variables, on the other hand, are defined directly in your document and can be referenced throughout it.

@host = localhost:5000
@token = Bearer e975b15aa477ee440417ea069e8ef728a22933f0

GET https://{{host}}/api/comments/1 HTTP/1.1
Authorization: {{token}}

It's not Electron

I have nothing against Electron, but it's known to be a bit of a resource hog, so much so that I rarely leave Postman open between sessions, whereas I've always got VS Code open (one Electron process is enough), meaning it's far easier to dip into to test a few requests.

Conclusion

This post is just a brief overview of some of the features in REST Client. If you're open to trying an alternative to your current HTTP request generation tool then I'd highly recommend you check it out; you can read more about it on the REST Client GitHub page.

Global Exception Handling in ASP.NET Core Web API

Posted on Wednesday, 20 Sep 2017

Note: Whilst this post is targeted towards Web API, it's not unique to Web API and can be applied to any framework running on the ASP.NET Core pipeline.

For anyone that uses a command-dispatcher library such as MediatR, Magneto or Brighter (to name a few), you'll know that the pattern encourages you to push your domain logic down into a domain library via a handler, encapsulating your app or API's behaviours such as retrieving an event, like so:

public async Task<IActionResult> Get(int id)
{
    var result = await _mediator.Send(new EventDetail.Query(id));
    return Ok(result);
}

Continuing with the event theme above, within your handler (or in the pipeline before it) you'll take care of all of your validation, throwing an exception if an argument is invalid (in this case, an ArgumentException).
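To make that concrete, here's a minimal sketch of what such a handler might look like, assuming a recent version of MediatR and hypothetical EventDetail.Query and EventViewModel types (the validation rule itself is just an example):

// EventDetailHandler.cs (illustrative sketch - assumes MediatR and hypothetical
// EventDetail.Query / EventViewModel types)

public class EventDetailHandler : IRequestHandler<EventDetail.Query, EventViewModel>
{
    public Task<EventViewModel> Handle(EventDetail.Query query, CancellationToken cancellationToken)
    {
        // Validation lives in the handler (or in a pipeline behaviour before it)
        if (query.Id <= 0)
            throw new ArgumentException("Id must be a positive integer", nameof(query.Id));

        // ...fetch the event from the data store and map it to a view model
        return Task.FromResult(new EventViewModel());
    }
}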

Now when it comes to handling that exception you're left having to explicitly catch it from each action method:

public async Task<string> Get(int id)
{
    try {
        var result = await _mediator.Send(new EventDetail.Query(id));
        return Ok(result);
    } catch (ArgumentException){
        return BadRequest();
    }
}

Whilst this is perfectly acceptable, I'm always looking at ways I can reduce boilerplate code, and what happens if an exception is thrown somewhere else in the HTTP pipeline? This is why I created Global Exception Handler for ASP.NET Core.

What is Global Exception Handler?

Available via NuGet or GitHub, Global Exception Handler lets you configure an exception handling convention within your Startup.cs file, which will catch any of the exceptions specified, outputting the appropriate error response and status code.

Not just for Web API or MVC

Whilst it's possible to use Global Exception Handler with Web API or MVC, it's actually framework agnostic, meaning that as long as the framework runs (or can run) on the ASP.NET Core pipeline (such as BotWin or Nancy) then it should work.

Let's take a look at how we can use it alongside Web API (though the configuration will be the same regardless of framework).

How do I use Global Exception Handler for an ASP.NET Core Web API project?

To configure Global Exception Handler, you call it via the UseWebApiGlobalExceptionHandler extension method in your Configure method, specifying the exception(s) you wish to handle and the resulting status code. In this instance an ArgumentException should translate to a 400 (Bad Request) status code:

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseWebApiGlobalExceptionHandler(x =>
        {
            x.ForException<ArgumentException>().ReturnStatusCode(HttpStatusCode.BadRequest);
        });

        app.UseMvc();
    }
}

Now when our MediatR pipeline throws an ArgumentException we no longer need to explicitly catch and handle it in every controller action:

public async Task<IActionResult> Get(int id)
{
    // This throws an ArgumentException
    var result = await _mediator.Send(new EventDetail.Query(id));
    ...
}

Instead our global exception handler will catch the exception and handle it according to our convention, resulting in the following JSON output:

{
    "error": {
        "status": 400,
        "message": "Invalid arguments supplied"
    }
}

This saves us in the following three scenarios:

  • You no longer have to explicitly catch exceptions per method
  • Those times you forgot to add exception handling will be caught
  • Enables you to catch any exceptions further up the HTTP pipeline and propagate to a configured result

Not happy with the error format?

If you're not happy with the default error format then it can be changed in one of two places.

First you can set a global error format via the MessageFormatter method:

Global formatter

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseWebApiGlobalExceptionHandler(x =>
        {
            x.ForException<ArgumentException>().ReturnStatusCode(HttpStatusCode.BadRequest);
            x.MessageFormatter(exception => JsonConvert.SerializeObject(new {
                error = new {
                    message = "Something went wrong",
                    statusCode = exception.HttpStatusCode
                }
            }));
        });

        app.UseMvc();
    }
}

Exception specific formatter

Alternatively you can specify a custom message per exception caught, which will override the global one demoed above:

app.UseWebApiGlobalExceptionHandler(x =>
{
    x.ForException<ArgumentException>().ReturnStatusCode(HttpStatusCode.BadRequest).UsingMessageFormatter(
        exception => JsonConvert.SerializeObject(new
        {
            error = new
            {
                message = "Oops, something went wrong"
            }
        }));
    x.MessageFormatter(exception => "This formatter will be overridden when an ArgumentException is thrown");
});

Resulting in the following 400 response:

{
    "error": {
        "message": "Oops, something went wrong"
    }
}

Content type

By default Global Exception Handler is set to output the application/json content type, however this can be overridden for those who prefer XML or an alternative format. This can be done via the ContentType property:

app.UseWebApiGlobalExceptionHandler(x =>
{
    x.ContentType = "text/xml";
    x.ForException<ArgumentException>().ReturnStatusCode(HttpStatusCode.BadRequest);
    x.MessageFormatter(exception => {
        // serialise your XML in here
    });
});

Moving forward

Having used this for a little while now, one suggestion was to implement problem+json as the default content type, standardising the default output, which I'm seriously considering. I'm also in the process of building ASP.NET Core MVC compatibility so exceptions can result in views being rendered or requests being redirected to routes (such as a 404 page not found view).

For more information feel free to check out the GitHub page or try it out via NuGet.

Turbo charging your command line with ripgrep

Posted on Tuesday, 12 Sep 2017

In the past few years the command line has made a resurgence in the Windows world. With the .NET Core CLI now a first class citizen and the Windows Subsystem for Linux making it easy to run Linux based tools on a Windows machine, it's clear that Microsoft are keen to boost the profile of the CLI amongst developers.

Yet Windows has always come up short on delivering a powerful grep experience for searching via the command line; this is where a tool like ripgrep can help.

Having first heard about ripgrep via a conversation on Twitter, I was made aware that most people are actively using it without knowing it, as ripgrep is what powers Visual Studio Code's search functionality! (see here)

As someone that's a heavy user of grep in my day-to-day workflow, the first thing that blew me away with ripgrep was its blazing fast speed. Having read the benchmarks it so proudly displays on its GitHub page, at first I was sceptical - but ripgrep flies on large recursive searches like no other grepping tool I've used.

Let's take a look.

A bit about ripgrep

Written in Rust, ripgrep is a tool that combines the usability of The Silver Searcher (a super fast ack clone) with the raw performance of GNU grep. In addition, ripgrep also has first class support for Windows, Mac and Linux (available on their GitHub page), so for anyone who regularly works across multiple platforms and is looking to normalise their toolchain it's well worth a look.

Some of ripgrep's features that sold it to me are:

  • It's crazily fast at searching large directories
  • Ripgrep won't search files already ignored by your .gitignore file (this can easily be overridden when needed).
  • Ignores binary or hidden files by default
  • Easy to search specific file types (making it great for searching for functions or references in code files)
  • Highlights matches in colour
  • Full unicode support
  • First class Windows, Mac and Linux support

Let's take a look at how we can install ripgrep.

Installation

Mac

If you're on a Mac using Homebrew then installation is as easy as:

$ brew install ripgrep

Windows

  • Download the ripgrep executable from their releases page on GitHub
  • Put the executable in a familiar location (c:/tools for instance)
  • Add the aforementioned tools path to your PATH environment variable

Alternatively if you're using Chocolatey then installation is as simple as:

$ choco install ripgrep

Linux

$ yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/repo/epel-7/carlwgeorge-ripgrep-epel-7.repo

$ yum install ripgrep

See the ripgrep GitHub page for more installation options.

Usage

Next, let's take a look at some of the use cases for ripgrep in our day-to-day scenarios.

Recursively search the contents of files in the current directory, respecting all .gitignore files, ignoring hidden files and directories, and skipping binary files:

$ rg hello


Search the contents of .html and .css files only for the word foobar using the type flag (-t):

$ rg -thtml -tcss foobar

Or return everything except JavaScript files using the type-not flag (-T):

$ rg -Tjs foobar

Return a list of all .css files:

$ rg -g *.css --files


More examples

ripgrep has a whole host of other searching options so I'd highly recommend checking out their GitHub page where they reference more examples.

Conclusion

Hopefully this post has given you a taste of how awesome ripgrep is and encouraged you to at least install it and give it a spin. If you're someone that spends a lot of time on the command line for day-to-day navigation, then having a powerful grepping tool at your disposal, and getting into the habit of using it whenever you need to locate a file, really does help your workflow.

Now, go forth and grep with insane speed!

GraphiQL in ASP.NET Core

Posted on Thursday, 10 Aug 2017

Having recently attended a talk on GraphQL and read GitHub's glowing post surrounding their choice to use GraphQL over REST for their API, I was interested in having a play to see what all of the fuss was about. For those that aren't sure what GraphQL is or where it fits in the stack, let me give a brief overview.

What is GraphQL?

GraphQL is a query language (hence the QL in the name) created and open-sourced by Facebook that allows you to query an HTTP (or other protocol, for that matter) based API. Let me demonstrate with a simple example:

Say you're consuming a REST API on the following resource identifier:

GET service.com/user/1

The response returned from this URI is the following JSON object:

{
    "Id": 1,
    "FirstName": "Richard",
    "Surname": "Hendricks",
    "Gender": "Male",
    "Age": 31,
    "Occupation": "Pied Piper CEO",
    "RoleId": 5,
    ...
}

Now, as a mobile app developer calling this service, you're aware of the bandwidth constraints users face when connected to a mobile network, so returning the whole JSON blob when you're only interested in the FirstName and Surname properties is wasteful. This is called over-fetching data. GraphQL solves this by letting you, the consumer, dictate your data needs, as opposed to having them forced upon you by the service.

This is a fundamental requirement that REST doesn't address (in fairness to REST, it never set out to solve this problem; however, as the internet has changed, it's a problem that now exists).

This is where GraphQL comes in.

Using GraphQL we're given control as consumers to dictate what our data requirements are, so instead of calling the aforementioned URI we POST a query to a GraphQL endpoint (often /graphql) in the following shape:

{
    Id,
    FirstName,
    Surname
}

Our Web API powered GraphQL server fulfills the request, returning the following response:

{
    "Id": 1,
    "FirstName": "Richard",
    "Surname": "Hendrix"
}

This also applies to under-fetching, which can best be described as having to make multiple calls to join data (following the above example, retrieving the RoleId only to then call another endpoint to get the Role information). In GraphQL's case we could represent that with the following query, which would save us an additional HTTP request:

{
    Id,
    FirstName,
    Surname,
    Role {
        RoleName
    }
}

The GraphQL query language includes a whole host of other functionality including static type checking, query functions and the like so I would recommend checking it out when you can (or stay tuned for a later post I'm in the process of writing where I demonstrate how to set it up in .NET Core).
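For example, queries can also take arguments; a hypothetical query for a specific user (reusing the field names from the earlier examples) might look like this:

{
    user(id: 1) {
        FirstName,
        Surname
    }
}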

So what is GraphiQL?

Now you know what GraphQL is, GraphiQL (pronounced 'graphical') is a web-based JavaScript powered editor that allows you to query a GraphQL endpoint, taking full advantage of the static type checking and intellisense promised by GraphQL. You can consider it the Swagger of the GraphQL world.

In fact, I'd suggest taking a moment to go and try a live example of GraphiQL here and see how GraphQL's static type system can help you discover the data that's available to you via the documentation and intellisense. GitHub also allow you to query your GitHub activity via their example GraphiQL endpoint too.

Introducing GraphiQL.NET

Traditionally, if you wanted to set this up you'd need to configure a whole host of Node modules and JavaScript files. However, given .NET Core's powerful middleware/pipeline approach, creating a GraphiQL middleware seemed like the obvious way to enable a GraphiQL endpoint.

Now you no longer need to worry about taking a dependency on Node or NPM in your ASP.NET Core solution, and can instead add GraphiQL support via a simple middleware call using GraphiQL.NET (before continuing, I feel it's worth mentioning that all of the code is up on GitHub).

Setting up GraphiQL in ASP.NET Core

You can install GraphiQL.NET by copying and pasting the following command into your Package Manager Console within Visual Studio (Tools > NuGet Package Manager > Package Manager Console).

Install-Package graphiql

Alternatively you can install it using the .NET Core CLI using the following command:

dotnet add package graphiql

From there, all you need to do is call the UseGraphiQl() extension method within the Configure method in Startup.cs, ensuring you do so before your UseMvc() registration.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseGraphiQl();

    app.UseMvc();
}

Now when you navigate to /graphql you should be greeted with the same familiar GraphiQL screen, but without the hassle of having to add Node or any NPM packages to your project as a dependency - nice!

The library is still version 1 so if you run into any issues then please do feel free to report them!

Injecting content into your head or body tags via dependency injection using ITagHelperComponent

Posted on Monday, 17 Jul 2017

Having been playing around with the ASP.NET Core 2.0 preview for a little while now, one cool feature I stumbled upon was the addition of the new ITagHelperComponent interface and its use.

What problem does the ITagHelperComponent solve?

Prior to .NET Core 2.0, if you were using a library that came bundled with static assets such as JavaScript or CSS, you had to manually add script and/or link tags (including a reference to the files in your wwwroot folder) to your views. This is far from ideal, as not only does it force users to jump through additional hoops, but it also runs the risk of introducing breaking changes when a user removes the library and forgets to remove the JavaScript references, or updates the library version but forgets to change the appropriate JavaScript reference.

This is where the ITagHelperComponent comes in; it allows you to inject content into the head or body of your application's web pages. Essentially, it's dependency injection for your JavaScript or CSS assets! All that's required of the user is that they register the dependency with their IoC container of choice within their Startup.cs file.

Enough talk, let's take a look at how it works. Hopefully a demonstration will clear things up.

Injecting JavaScript or CSS assets into the head or body tags

Imagine we have some JavaScript we'd like to include on each page, this could be from either:

  • A JavaScript and/or CSS library we'd like to use (Bootstrap, Pure etc)
  • Some database driven JavaScript code or value that needs to be included in the head of your page
  • A JavaScript file that's bundled with a library that our users need to include before the closing </body> tag.

In our case we'll keep it simple - we need to include some database-driven JavaScript in our page, in the form of a Google Analytics snippet.

Creating our JavaScript tag helper component

Looking at the contract of the ITagHelperComponent interface you'll see it's a simple one:

public interface ITagHelperComponent{
    int Order { get; }
    void Init(TagHelperContext context);
    Task ProcessAsync(TagHelperContext context, TagHelperOutput output);
}

We could implement the interface ourselves, or we could lean on the existing TagHelperComponent base class and override only the properties and methods we require. We'll do the latter.

Let's start by creating our implementation which we'll call CustomerAnalyticsTagHelper:

// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent {}

For this example the only method we're concerned about is the ProcessAsync one, though we will touch on the Order property later.

Let's go ahead and implement it:

// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent
{
    private readonly ICustomerAnalytics _analytics;

    public CustomerAnalyticsTagHelper(ICustomerAnalytics analytics)
    {
        _analytics = analytics;
    }

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "body", StringComparison.Ordinal))
        {
            string analyticsSnippet = @"
            <script>
                (function (i, s, o, g, r, a, m) {
                    i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
                        (i[r].q = i[r].q || []).push(arguments)
                    }, i[r].l = 1 * new Date(); a = s.createElement(o),
                        m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
                })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
                ga('create', '" + _analytics.CustomerUaCode + @"', 'auto')
                ga('send', 'pageview');
            </script>";
            
            output.PostContent.AppendHtmlLine(analyticsSnippet);
        }

        return Task.CompletedTask;
    }
}

As you can see, the TagHelperContext argument gives us context around the tag we're inspecting; in this case we want to look for the body HTML element. If we wanted to drop JavaScript or CSS into the <head></head> tags then we'd inspect the tag name "head" instead.

The TagHelperOutput argument gives us access to a host of properties around where we can place content, these include:

  • PreElement
  • PreContent
  • Content
  • PostContent
  • PostElement
  • IsContentModified
  • Attributes

In this instance we're going to append our JavaScript after the content located within the <body> tag, placing it just before the closing </body> tag.
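As an aside, here's a minimal sketch of the "head" case mentioned above - a tag helper component that injects a stylesheet link into the <head> tag (the stylesheet path is a hypothetical placeholder):

// LibraryStylesheetTagHelper.cs (illustrative sketch)

public class LibraryStylesheetTagHelper : TagHelperComponent
{
    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "head", StringComparison.Ordinal))
        {
            // Appended after the existing head content, just before the closing </head> tag
            output.PostContent.AppendHtmlLine(@"<link rel=""stylesheet"" href=""/css/my-library.css"" />");
        }

        return Task.CompletedTask;
    }
}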

Dependency Injection in our tag helper

With dependency injection being baked into the ASP.NET Core framework, we're able to inject dependencies into our tag helper - in this case I'm injecting our database-driven customer UA (Universal Analytics) code.
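The ICustomerAnalytics dependency isn't shown above, so here's a minimal sketch of what it might look like - in reality CustomerUaCode would be loaded from a database rather than hard-coded:

// ICustomerAnalytics.cs (illustrative sketch)

public interface ICustomerAnalytics
{
    string CustomerUaCode { get; }
}

public class CustomerAnalytics : ICustomerAnalytics
{
    // Hard-coded for brevity; in reality this would come from a data store
    public string CustomerUaCode => "UA-123456789";
}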

Registering our tag helper with our IoC container

Now all that's left to do is register our tag helper with our IoC container of choice. In this instance I'm using the built-in ASP.NET Core container from the Microsoft.Extensions.DependencyInjection package.

// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<ICustomerAnalytics, CustomerAnalytics>(); // Data source containing UA code
    services.AddSingleton<ITagHelperComponent, CustomerAnalyticsTagHelper>(); // Our tag helper
    ...
}

Now, firing up our application, we can see our JavaScript has been injected into our HTML page without us needing to touch any of our .cshtml Razor files!

...
<body>
    ...
    <script>
        (function (i, s, o, g, r, a, m) {
            i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
                (i[r].q = i[r].q || []).push(arguments)
            }, i[r].l = 1 * new Date(); a = s.createElement(o),
                m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
        })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
        ga('create', 'UA-123456789', 'auto')
        ga('send', 'pageview');
    </script>
</body>
</html>

Ordering our output

If we needed to include more than one script or script file in our output, we can lean on the Order property we saw earlier; overriding it allows us to specify the order of our output. Let's see how we can do this:

// JsLoggingTagHelper.cs

public class JsLoggingTagHelper : TagHelperComponent
{
    public override int Order => 1;

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "body", StringComparison.Ordinal))
        {
            const string script = @"<script src=""/jslogger.js""></script>";
            output.PostContent.AppendHtmlLine(script);
        }

        return Task.CompletedTask;
    }
}
// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent
{
    ...
    public override int Order => 2; // Set our AnalyticsTagHelper to appear after our logger
    ...
}
// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<ICustomerAnalytics, CustomerAnalytics>();
    services.AddSingleton<ITagHelperComponent, CustomerAnalyticsTagHelper>();
    services.AddSingleton<ITagHelperComponent, JsLoggingTagHelper>();
    ...   
}

When we launch our application we should see the following HTML output:

<script src="/jslogger.js"></script>
<script>
    (function (i, s, o, g, r, a, m) {
        i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
            (i[r].q = i[r].q || []).push(arguments)
        }, i[r].l = 1 * new Date(); a = s.createElement(o),
            m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
    })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
    ga('create', 'UA-123456789', 'auto')
    ga('send', 'pageview');
</script>
</body>
</html>

Conclusion

Hopefully this post has highlighted how powerful the recent changes to tag helpers are, and how using the ITagHelperComponent interface allows us to inject content into our HTML without having to touch any view files. This means that as a library author we can ease integration for our users by simply asking them to register a type with their IoC container, and we can take care of the rest!

.NET Core solution management via the command line interface

Posted on Monday, 03 Jul 2017

One of the strengths boasted by .NET Core is its new command line interface (CLI for short), and by now you're probably aware that Visual Studio, Rider, Visual Studio Code etc shell out to the .NET Core CLI under the bonnet for most .NET Core related operations, so it makes sense that what you're able to do in your favourite IDE you're also able to do via the CLI.

With this in mind, only recently did I spend the time and effort to investigate how easy it was to create and manage a project solution via the CLI, including creating the solution structure, referencing projects along the way and adding them to .NET's .sln file.

It turns out it's incredibly easy and has instantly become my preferred way of managing solutions. Hopefully by the end of this post you'll arrive at the same conclusion too.

Benefits of using the CLI for solution management

So what are the benefits of using the CLI for solution management? Let's have a look:

  • Something that has always been a fiddly endeavour of UI interactions is now so much simpler via the CLI - what's more, you don't need to open your editor of choice if you want to create references or update a NuGet package.

  • Using the CLI for creating projects and solutions is particularly helpful if (like me) you work across multiple operating systems and want to normalise your tool chain.

  • Loading an IDE just to update a NuGet package seems unnecessary

Let's begin!

Creating our solution

So let's take a look at how we can create the following project structure using the .NET Core CLI.

piedpiper
└── src
    ├── piedpiper.domain
    ├── piedpiper.sln
    ├── piedpiper.tests
    └── piedpiper.website

First we'll create our solution (.sln) file. I've always preferred to create the solution file in the top level source folder, but the choice is yours (just bear in mind to specify the right path in the commands used throughout the rest of this post).

# /src/

$ dotnet new sln -n piedpiper

This will create a new sln file called piedpiper.sln.

Next we use the output parameter on the dotnet new <projecttype> command to create a project in a particular folder:

# /src/

$ dotnet new mvc -o piedpiper.website

This will create an ASP.NET Core MVC application in the piedpiper.website folder in the same directory. If we were to look at our folder structure thus far it looks like this:

# /src/

$ ls -la

piedpiper.sln
piedpiper.website

Next we can do the same for our domain and test projects:

# /src/

$ dotnet new classlib -o piedpiper.domain
$ dotnet new xunit -o piedpiper.tests

Adding our projects to our solution

At this point we've got a solution file that has no projects referenced; we can verify this by calling the list command like so:

# /src/

$ dotnet sln list

No projects found in the solution.

Next we'll add our projects to our solution file. Once upon a time doing this involved opening Visual Studio then adding a reference to each project manually. Thankfully this can also be done via the .NET Core CLI.

Now we add each project with the following commands, referencing each project's .csproj file:

# /src/

$ dotnet sln add piedpiper.website/piedpiper.website.csproj
$ dotnet sln add piedpiper.domain/piedpiper.domain.csproj
$ dotnet sln add piedpiper.tests/piedpiper.tests.csproj

Note: If you're using a Linux/Unix based shell you can do this in a single command using a globbing pattern!

# /src/

$ dotnet sln add **/*.csproj

Project `piedpiper.domain/piedpiper.domain.csproj` added to the solution.
Project `piedpiper.tests/piedpiper.tests.csproj` added to the solution.
Project `piedpiper.website/piedpiper.website.csproj` added to the solution.

Now when we call list on our solution file we should get the following output:

# /src/

$ dotnet sln list

Project reference(s)
--------------------
piedpiper.domain/piedpiper.domain.csproj
piedpiper.tests/piedpiper.tests.csproj
piedpiper.website/piedpiper.website.csproj

So far so good!

Adding a project reference to a project

Next up we want to start adding project references to our project, linking our domain library to our website and test library via the dotnet add reference command:

# /src/

$ dotnet add piedpiper.tests reference piedpiper.domain/piedpiper.domain.csproj

Reference `..\piedpiper.domain\piedpiper.domain.csproj` added to the project.

Now if you were to view the contents of your test project we'd see our domain library has now been referenced:

# /src/

$ cat piedpiper.tests/piedpiper.tests.csproj 

<Project Sdk="Microsoft.NET.Sdk">

  ...

  <ItemGroup>
    <ProjectReference Include="..\piedpiper.domain\piedpiper.domain.csproj" />
  </ItemGroup>

</Project>

Next we'll do the same for our website project, running the same command but this time referencing the website project:

# /src/

$ dotnet add piedpiper.website reference piedpiper.domain/piedpiper.domain.csproj
# /src/

$ cat piedpiper.website/piedpiper.website.csproj 

<Project Sdk="Microsoft.NET.Sdk">

  ...

<ItemGroup>
    <ProjectReference Include="..\piedpiper.domain\piedpiper.domain.csproj" />
  </ItemGroup>

</Project>

At this point we're done!

If we navigate back to our root source folder and run the build command we should see everything build successfully:

$ cd ../

# /src/

$ dotnet build

Microsoft (R) Build Engine version 15.3.388.41745 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

    piedpiper.domain -> /Users/josephwoodward/Desktop/demo/src/piedpiper.domain/bin/Debug/netstandard2.0/piedpiper.domain.dll
    piedpiper.tests -> /Users/josephwoodward/Desktop/demo/src/piedpiper.tests/bin/Debug/netcoreapp2.0/piedpiper.tests.dll
    piedpiper.website -> /Users/josephwoodward/Desktop/demo/src/piedpiper.website/bin/Debug/netcoreapp2.0/piedpiper.website.dll

Build succeeded.

    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:08.08

Adding a NuGet package to a project or updating it

Before wrapping up, let's say we wanted to add a NuGet package to one of our projects; we can do this using the add package command.

First navigate to the project you want to add a NuGet package to:

# /src/

$ cd piedpiper.tests/

$ dotnet add package shouldly

info : Adding PackageReference for package 'shouldly'
...
log  : Installing Shouldly 2.8.3.

Optionally we could specify a version we'd like to install using the version argument:

$ dotnet add package shouldly -v 2.8.2

Updating a NuGet package

Updating a NuGet package to the latest version is just as easy, simply use the same command without the version argument:

dotnet add package shouldly

Conclusion

If you've managed to get this far then well done, hopefully by now you've realised how easy creating and managing a solution is using the new .NET Core command line interface.

One of the great powers of using the CLI is you can now turn creating the same project structure into a handy bash script which you could alias and reuse!

#!/bin/bash

echo "Enter project name, followed by [ENTER]:"
read projname

echo "Creating solution for $projname"

dotnet new sln -n $projname

dotnet new mvc -o $projname.website
dotnet new classlib -o $projname.domain
dotnet new xunit -o $projname.tests

echo "Adding projects to solution"
dotnet sln add **/*.csproj

echo "Referencing projects"
dotnet add $projname.website reference $projname.domain/$projname.domain.csproj
dotnet add $projname.tests reference $projname.domain/$projname.domain.csproj

Happy coding!

Tips on starting and running a .NET user group

Posted on Thursday, 22 Jun 2017

As someone that organises and runs the .NET South West user group in Bristol, I've had a number of people online and in person approach me expressing an interest in starting a .NET focused meet up but unsure of where to start; so much so that I thought it would be good to summarise the challenges and hurdles of running a meet up in a succinct blog post.

.NET South West meet up

Running a user group isn't a walk in the park, but it's not hard or particularly time consuming either. Hopefully this post will provide some valuable information and reassurance to those looking to create and foster a local .NET focused community of people wishing to learn, share and expand their knowledge, and to meet local developers with similar passions and interests.

1. You can do it

The very first question that can play on your mind is whether you're capable of starting and running such a meet up. If you're having any self-doubts about whether you're knowledgeable enough to run a user group, or whether you have the confidence to organise it - don't.

Running a user group is an incredibly rewarding experience that starts off small and grows, and as it grows you grow with it. Everyone that attends a user group is there to learn, and that applies to the organiser(s) too. So don't let any hesitations or self-doubts get in your way.

2. Gauging the interest in a .NET user group

One of the first hurdles you face when starting a user group is trying to gauge the level of interest that exists in your local area.

I've found a great way to gauge interest is to simply create a group on the popular user group organising site meetup.com, informing people that you're interested in seeing what the level of interest is like. You can create an event with no date, set the title to "To be announced", and leave it active for a few months. Meetup.com notifies people with similar interests of the new user group, and over time you'll start to get people joining the group waiting for the first meet to be announced. In the meantime, your meet up page has a forum where you can start conversations with some of the new members, look for assistance, or ask if anyone knows of a suitable venue.

This time is a great opportunity to get to know local developers before the meet up.

3. Having an online presence

When it comes to organising a meet up, websites like meetup.com make the whole process a breeze. The service isn't free (starting at $9.99 - see more here) but personally I would say it's worth it in terms of how much effort it saves you. MeetUp provides services such as:

  • Posting and announcing meet ups to members
  • Sending out regular emails to your user group
  • Increases meet up visibility to local developers on the platform
  • Semi-brandable pages (though this could be better)
  • Ability to add and link to sponsors within your meet up page

If you are on a budget then there are free alternatives such as the free tier of EventBrite which you can link to from a website you could set up.

4. Many hands make light work

Starting a meet up requires a lot of work, but once the meet up is running the number of hours required to keep it ticking along dramatically reduces. That said, there are times where you may have personal commitments that make it difficult to focus on the meet up - so why not look to see if anyone else is interested in helping?

If you don't have any close friends or work colleagues that are interested in helping you can mention you're looking for help on your meet up page discussed previously. There's also nothing to stop you from talking to members once the meet up is under way to see if anyone is interested in helping out. If you do have people interested in helping then why not create a Slack channel for your group where you can stay organised.

5. The Venue

Next up is the venue, this is often the most challenging part as most venues will require payment for a room. All is not lost though as there are a few options available to you:

  • Look for sponsorship to cover the cost of the venue - some companies (such as recruitment companies) are often open to ideas of sponsoring meet ups in one way or another for publicity. Naturally this all depends on your stance around having recruitment companies at your meet up.

  • Approach software companies to see if they are interested in hosting the event. Often you'll find software companies are geared up for hosting meet ups and happy to do so in exchange for interacting with the community (and potentially saving them recruitment costs).

  • Small pubs - I know of a few meet up organisers who host at pubs in a back room as the venue are often aware that a few people stay behind to have a few drinks so it works in their favour too.

Ultimately you want to ensure you have consistency, so talk to the venue and make it clear that you're looking for a long-term arrangement.

6. Speakers

Once you've got your venue sorted, the next task you face (and this will be a regular one) is sourcing speakers. Luckily, finding speakers is often reasonably simple, and once your meet up becomes established you'll start to find speakers approaching you with interest in giving a talk. I would also recommend looking at other nearby meet ups for past speakers and making contact with them via Twitter. Networking at conferences is also a great way of finding potential speakers.

In addition to the aforementioned suggestions, Microsoft also have a handy Microsoft Evangelists page (UK only) for finding evangelists nearby who are often more than happy to travel to your user group to give a talk.

Finally, encourage attendees of your meet up to give talks. You're trying to foster a community, so try to drive engagement and ownership by opening up space for short 15 minute lightning talks.

7. Sponsorship / Competitions and Prizes

Once your user group is off the ground I would recommend reaching out to software companies to see if they provide sponsorship for meet ups in the shape of prize licences or extended trials - for instance, JetBrains are well known for their awesome community support programme which I'd highly recommend taking a look at.

Some companies require your meet up to be a certain size, while others are more flexible on what they can provide, often being happy to ship swag such as stickers and t-shirts instead, which can be given away as prizes during your meet up (though if you're accepting swag from abroad then do be sure to clarify import tax so you don't get stung).

Swag and prizes aren't essential for a meet up, but it's something worth considering to spice things up a bit.

Go for it

Hopefully this post has given you some ideas if you are considering setting up a meet up. Organising and running a meet up is an extremely satisfying responsibility, and it's great seeing a community of developers coming together to share knowledge and learn from one another. So what are you waiting for? Go for it!

Retrospective of DDD 12 in Reading

Posted on Monday, 12 Jun 2017

Yesterday was my second time both attending and speaking at the DDD conference run out of the Microsoft offices in Reading, and as is tradition I like to finish the occasion by writing a short retrospective post highlighting the day.

Arrival

Living just a two-hour drive from Reading, I decided to drive up early in the morning with fellow Just Eater Stuart Lang, who was also giving a talk. After arriving and grabbing our speaker polo shirts we headed to the speaker room to say hello to the other speakers - some of whom I know from previous conferences and through organising DDD South West.

Speakers' Room

I always enjoy spending time in the speakers' room. As a relative newcomer to speaking, I find it's a great opportunity to get tips from more experienced speakers as well as geek out about programming. In addition, I still had some preparation to do, so it was a quiet place where I could refine and tweak my talk slides.

Talk 1 - Goodbye REST; Hello GraphQL by Sandeep Singh

Even though I had preparation to take care of for my post-lunch talk, I was determined to attend Sandeep Singh's talk on GraphQL, as it's a technology I've heard lots about via podcasts and have been keen to learn more about. In addition, working at Just Eat, where we have a lot of distributed services that are extremely chatty over HTTP, I was interested to see if and where GraphQL could help.

Having met Sandeep for the first time at DDD South West, it was clear he's a knowledgeable guy, so I was expecting great things, and he delivered. The talk was very informative, and by the end Sandeep had demonstrated the power of GraphQL (along with the well-balanced considerations that need to be made) and answered the majority of the questions forming in my notes as the talk progressed. It's definitely sparked my interest in GraphQL and I'm keen to start playing with it.

My talk - Building a better Web API architecture using CQRS

Having submitted two talks to DDD Reading (this and my Razor Deep Dive I delivered at DDD South West), the one that received the most votes was this talk, a topic and architectural style I've been extremely interested in for a number of years now (long before MediatR was a thing!).

Having spoken at DDD Reading before, this year my talk was held in a rather intimidating room called Chicago, which seats up to 90 and has a stage overlooking the attendees. All in all I was happy with the way the talk went; however, I did burn through my slides far faster than I did during practice. Luckily the attendees had plenty of questions, so I had the opportunity to answer and expand on them with the remaining time. I'll chalk this down to experience and learn from it.

I must say that one of my concerns whilst preparing the talk was the split opinions around what CQRS really is, and how it differs from Bertrand Meyer's formulation of CQS (coincidentally, there was a healthy debate around the definition moments before my talk in the speakers' room between two speakers well-versed in the area!).

Talk 3 - Async in C# - The Good, the Bad and the Ugly by Stuart Lang

Having worked with Stuart for a year now and known him for slightly longer, it's clear that his knowledge of async/await is spot on and certainly far deeper than that of any other developer I've met. Having seen a slightly different version of Stuart's talk delivered internally at Just Eat, I was familiar with the narrative; however, I was keen to attend this talk because C#'s async language construct is a deep area and one I'm interested in.

Overall the talk went really well, with a nice break in the middle allowing time for questions before moving on (something I may have to try in my own talks going forward).

Conclusion

Overall DDD 12 was an awesome day. I'd love to have attended more talks and spent more time speaking with people, but keen to deliver my talk to the best of my ability, I had work to be done. Nonetheless, after my talk was over it was great catching up with familiar faces and meeting new people (easily one of my favourite parts of a conference, as I'm a bit of a chatterbox!).

I'll close this post with a massive thanks to the event organisers (having helped organise DDD South West this year I have full appreciation for the hard work and time that goes into organising such an event), and also the sponsors - without them the conference would not have been possible.

Until next year!

Another day, another fantastic DDD (DDD South West 7)

Posted on Friday, 12 May 2017

Saturday 6th of May marked the day of another great DDD South West event. Having attended other DDD events around the UK I've always felt they had a special community feel to them, a feeling I've not felt at other conferences. This year's DDD South West event was particularly special to me not only because I was selected to speak at the conference, but because this year I was part of the organisation team.

This post is just a short summary of the highs and lows of organising the conference and the day itself.

Organising the conference

This year I was honoured to be part of the organisation team for DDD South West, and I loved every minute of it. The other organisers and I (there were five of us in total, some of whom have been organising the conference for well over five or six years!) would hold regular meetings via Skype, breaking down responsibilities such as sponsorship, catering, speaker-related tasks and finance. Initially these meetings were about a month apart, but as the conference drew closer and the pressure started to set in, we would meet weekly.

During the organising of DDD South West I've gained a true appreciation for the amount of effort conference organisers (especially those who run non-profit events in their spare time, such as those I've had the pleasure of working with) put into organising an event for the community.

On the day everything went to plan with no hiccups, though as I was also speaking I was definitely a lot more stressed than I would have been otherwise. After the event we all headed over to the Just Eat offices for an after party, which I'll cover shortly.

For more information on the day, there are two fantastic write ups by Craig Phillips and Dan Clarke that I'd highly recommend reading.

ASP.NET Core Razor Deep Dive talk

Whilst organising DDD South West 7, I figured why not pile the stress on and submit a few sessions. Last year I caught the speaking bug and this year I was keen to continue pursuing it, so I submitted three sessions on very different subjects and was quite surprised to see the ASP.NET Core Razor Deep Dive was selected. It's not the sexiest topic to talk about, but nonetheless it was a great opportunity to share some experience and give people information they can take away and directly apply in real life (something I always try to do when putting together a talk).

The talk itself was focused on the new Razor changes and features introduced in ASP.NET Core, and why and where you'd use them. The topics included:

  • Razor as a view engine and its syntax
  • Tag helpers and how powerful they are
  • Creating your own tag helpers for some really interesting use cases (see the short sketch after this list) - this part was especially great as I could see a lot of people in the audience having the "light bulb" moment, which brought me great joy as I knew there was something they would be able to take away from the talk.
  • Why Partial Views and Child Actions are limiting
  • View Components and how you can use them to create more modular, reusable views
  • Finally, the changes people can expect to see in Razor when ASP.NET Core 2.0 is released (ITagHelperComponents and Razor Pages)
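
To give a flavour of the custom tag helper topic, here's a minimal sketch of the kind of tag helper you can build in ASP.NET Core. The names (EmailLinkTagHelper, the <email-link> element and its address attribute) are purely illustrative and not taken from the talk:

// EmailLinkTagHelper.cs - illustrative example only

using Microsoft.AspNetCore.Razor.TagHelpers;

namespace MyApp.TagHelpers
{
    // Targets <email-link address="..."></email-link> elements in Razor views
    public class EmailLinkTagHelper : TagHelper
    {
        // Bound from the address="" attribute on the element
        public string Address { get; set; }

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagName = "a"; // replace <email-link> with an anchor tag
            output.Attributes.SetAttribute("href", $"mailto:{Address}");
            output.Content.SetContent(Address); // use the address as the link text
        }
    }
}

Once registered in _ViewImports.cshtml with @addTagHelper *, MyApp (the assembly name here is assumed), writing <email-link address="support@example.com"></email-link> in a view renders a mailto anchor - a small example, but it hints at why this part of the talk resonated.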

Overall I was really happy with the talk and the turnout. The feedback I received was great, with lots of mentions of the word "engaging" (which as a relatively new speaker still trying to find his own style, is always positive to hear).

DDD South West 7 after party

Once all was done and dusted and the conference drew to a close, a large majority of us took a five-minute stroll over to the Just Eat offices for an after party, where free pizza and beer were on offer for the attendees (thanks, Just Eat!).

After a long day of ensuring the event was staying on track coupled with the stresses of talking, it was great to be able to unwind and enjoy spending time mingling with the various attendees and sponsors, all of whom made the event possible.

Bring on DDD South West 8!