Executing JavaScript inside of .NET Core using JavaScriptServices

Posted on Wednesday, 28 Sep 2016

Recently we were lucky enough to have Steve Sanderson speak at .NET South West, a Bristol-based .NET meetup I help organise. His talk, titled SPAs (Single Page Applications) on ASP.NET Core, featured a whole host of impressive tools and APIs he's been developing at Microsoft, all aimed at helping developers build single page applications (using frameworks such as Angular, Knockout and React) on the ASP.NET Core platform.

As Steve demonstrated all of these amazing APIs (including server-side rendering of Angular 2/React applications, and Angular 2 validation integrated with .NET Core MVC's validation), the question on the tip of everyone's tongue was "How's he doing this?!".

When the opportunity finally arose, Steve demonstrated what I think is one of the coolest parts of the talk - the JavaScriptServices middleware - the topic of this blog post.

Before continuing, if you develop single page apps in either Angular, React or Knockout then I'd highly recommend you check out the talk, which can also be found here.

What is JavaScriptServices?

JavaScriptServices is a .NET Core middleware library that plugs into the .NET Core pipeline and uses Node to execute JavaScript (naturally, this also includes Node modules) at runtime. This means that in order to use JavaScriptServices you have to have Node installed on the host machine.

How does it work and what application does it have? Let's dive in and take a look!

Setting up JavaScriptServices

Before we continue, it's worth mentioning that it looks like the package is currently going through a rename (from NodeServices to JavaScriptServices) - so you'll notice the API and NuGet package is referenced NodeServices, yet I'm referring to JavaScriptServices throughout. Now that that's out of the way, let's continue!

First of all, as mentioned above, JavaScriptServices relies on Node being installed on the host machine, so if you don't have Node installed then head over to NodeJs.org to download and install it. If you've already got Node installed then you're good to continue.

As I alluded to earlier, setting up the JavaScriptServices middleware is as easy as setting up any other piece of middleware in the new .NET Core framework. Simply include the JavaScriptServices NuGet package in your solution:

Install-Package Microsoft.AspNetCore.NodeServices -Pre

Then reference it in your Startup.cs file's ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    services.AddNodeServices();
}

Now we have the following interface at our disposal for calling Node modules:

public interface INodeServices : IDisposable
{
    Task<T> InvokeAsync<T>(string moduleName, params object[] args);
    Task<T> InvokeAsync<T>(CancellationToken cancellationToken, string moduleName, params object[] args);

    Task<T> InvokeExportAsync<T>(string moduleName, string exportedFunctionName, params object[] args);
    Task<T> InvokeExportAsync<T>(CancellationToken cancellationToken, string moduleName, string exportedFunctionName, params object[] args);
}


Basic Usage

Now we've got JavaScriptServices set up, let's start with a simple use case and run through how we can execute some trivial JavaScript in our application and capture the output.

First we'll begin by creating a simple JavaScript file containing a Node module that returns a greeting message:

// greeter.js
module.exports = function (callback, firstName, surname) {

    var greet = function (firstName, surname) {
        return "Hello " + firstName + " " + surname;
    }

    callback(null, greet(firstName, surname));
}
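To see the contract NodeServices relies on, note that the exported function receives a Node-style error-first callback as its first argument, followed by the arguments passed from .NET. We can simulate that call in plain Node before wiring anything up (a hypothetical harness for illustration, not something NodeServices requires):

```javascript
// The same greeter module function, defined inline so we can exercise it:
// NodeServices supplies the callback and forwards the .NET arguments.
var greeter = function (callback, firstName, surname) {
    var greet = function (firstName, surname) {
        return "Hello " + firstName + " " + surname;
    };
    callback(null, greet(firstName, surname));
};

// Simulate the call NodeServices makes on our behalf; the second callback
// argument is what InvokeAsync<string> resolves to on the .NET side.
greeter(function (error, result) {
    console.log(result); // "Hello Joseph Woodward"
}, "Joseph", "Woodward");
```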

Next, we inject an instance of INodeServices into our controller and invoke our Node module by calling `InvokeAsync<T>` where T is our module's return type (a string in this instance).

public class DemoController : Controller
{

    private readonly INodeServices _nodeServices;

    public DemoController(INodeServices nodeServices)
    {
        _nodeServices = nodeServices;
    }

    public async Task<IActionResult> Index()
    {
        string greetingMessage = await _nodeServices.InvokeAsync<string>("./scripts/js/greeter", "Joseph", "Woodward");

        ...
    }

}


Whilst this is a simple example, hopefully it's demonstrated how easy the process is and given you an idea of how powerful this can potentially be. Now let's go one step further.

Taking it one step further - transpiling ES6/ES2015 to ES5, including source mapping files

Whilst front end task runners such as Grunt and Gulp have their place, what if we were writing ES6 code and didn't want to have to go through the hassle of setting up a task runner just to transpile our ES2015 JavaScript?

What if we could transpile our JavaScript at runtime in our ASP.NET Core application? Wouldn't that be cool? Well, we can do just that with JavaScriptServices!

First we need to include a few Babel packages to transpile our ES6 code down to ES5. So let's go ahead and create a package.json in the root of our solution and install the listed packages by executing npm install at the same level as our newly created package.json file.

{
    "name": "nodeservicesexamples",
    "version": "0.0.0",
    "dependencies": {
        "babel-core": "^6.7.4",
        "babel-preset-es2015": "^6.6.0"
    }
}

Now all we need to do is register the NodeServices service in the ConfigureServices method of our Startup.cs class:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddNodeServices();
    ...
}

After this we want to create the Node module that will invoke the Babel transpiler - this will also inline the source maps.

// /node/transpilation.js

var fs = require('fs');
var babelCore = require('babel-core');

module.exports = function(cb, physicalPath, requestPath) {
    var originalContents = fs.readFileSync(physicalPath);
    var result = babelCore.transform(originalContents, {
        presets: ['es2015'],
        sourceMaps: 'inline',
        sourceFileName: '/sourcemapped' + requestPath
    });

    cb(null, result.code);
}

Now comes the interesting part. On every request we want to check whether the HTTP request being made is for a file with a .js extension. If it is, then we want to pass its contents to our JavaScriptServices instance to transpile it from ES6/ES2015 down to ES5, then finish off by writing the transpiled JavaScript to the response.

At this point it's only fair to say that if you were doing this in production then you'd probably want some form of output caching to prevent the same files being transpiled on every request - but hopefully the following example is enough to give you an idea of what it would look like:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, INodeServices nodeServices)
{
    ...
    
    // Dynamically transpile any .js files under the '/js/' directory
    app.Use(next => async context => {
        var requestPath = context.Request.Path.Value;
        if (requestPath.StartsWith("/js/") && requestPath.EndsWith(".js")) {
            var fileInfo = env.WebRootFileProvider.GetFileInfo(requestPath);
            if (fileInfo.Exists) {
                var transpiled = await nodeServices.InvokeAsync<string>("./node/transpilation.js", fileInfo.PhysicalPath, requestPath);
                await context.Response.WriteAsync(transpiled);
                return;
            }
        }

        await next.Invoke(context);
    });

    ...
}

Here, all we're doing is checking the path of every request to see if it falls within the /js/ folder and ends with .js. Any matches are then checked to see if the file exists on disk before being passed to the transpilation.js module we created earlier. The transpilation module runs the contents of the file through Babel and returns the output to JavaScriptServices, which writes it to our application's response and short-circuits the rest of the pipeline; any non-matching requests are passed on to the next handler.
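Since transpilation.js is just a Node module, one way to add the output caching mentioned earlier is an in-memory map inside the module itself, keyed on the physical path. The following is an illustrative sketch (not part of JavaScriptServices; a production version would also invalidate entries when the file changes):

```javascript
// Illustrative sketch of output caching for transpiled files: wrap a
// transpile function so each physical path is only transpiled once and
// subsequent requests are served from memory.
function createTranspileCache(transpile) {
    var cache = {};
    return function (physicalPath, requestPath) {
        if (!cache.hasOwnProperty(physicalPath)) {
            cache[physicalPath] = transpile(physicalPath, requestPath);
        }
        return cache[physicalPath];
    };
}

// Demonstrate with a stand-in transpiler; the real module would call
// babelCore.transform here instead.
var calls = 0;
var cachedTranspile = createTranspileCache(function (physicalPath) {
    calls++;
    return "/* transpiled " + physicalPath + " */";
});

cachedTranspile("/js/example.js", "/js/example.js");
cachedTranspile("/js/example.js", "/js/example.js");
console.log(calls); // 1
```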

Now that's all set up, let's go ahead and give it a whirl. We'll create a simple ES2015 JavaScript class in the wwwroot/js/ folder and reference it within our view in a script tag.

// wwwroot/js/example.js

class Greeter {
    getMessage(name){
        return "Hello " + name + "!";
    }
}

var greeter = new Greeter();
console.log(greeter.getMessage("World"));
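For reference, the ES5 that Babel emits for this class is roughly equivalent to the hand-written constructor-function form below (an approximation for illustration, not Babel's exact output):

```javascript
// Roughly what the ES2015 class compiles down to in ES5: a constructor
// function with its methods attached to the prototype.
function Greeter() {}

Greeter.prototype.getMessage = function (name) {
    return "Hello " + name + "!";
};

var greeter = new Greeter();
console.log(greeter.getMessage("World")); // "Hello World!"
```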

Now, when we load our application and navigate to our example.js file via your browser's dev tools, you should see it's been transpiled to ES5!

Conclusion

Hopefully this post has given you enough of an understanding of the JavaScriptServices package to demonstrate how powerful the library really is. With the abundance of Node modules available, there's all sorts of functionality you can build into your application, or your application's build process. Have fun!

Building rich client side apps using Angular 2 talk at DDD 11 in Reading

Posted on Saturday, 03 Sep 2016

This weekend I had the pleasure of speaking in front of this friendly bunch (pictured) at DDD 11, a .NET focused developers' conference - this year hosted out of Microsoft's Reading offices.

My talk topic? Building Rich client-side applications using Angular 2.

As a regular speaker at meet ups and the occasional podcast, this year I've been keen to step it up and move into the conference space. Speaking is something that I love doing, and DDD 11 was a great opportunity that I didn't want to miss.

Having spotted a tweet a number of weeks ago announcing that DDD was accepting talk proposals, I quickly submitted a few talks I had up my sleeve - with Building rich client-side apps using Angular 2 being the talk that received enough votes to earn me a speaking slot - yay!

My talk was one of the last talks of the day so I had to contend with a room full of sleepy heads, tired after a long day of networking and programming related sessions (myself included). With this in mind I decided it would be best to step up the energy levels with a hope of keeping people more engaged.

Overall I think the talk went well (though in hindsight I could have slowed down a little!) and received what felt like a good reception. It was great to have people in the audience (some of whom are speakers themselves) approach me afterwards to say they enjoyed the talk.

Angular 2 CLI interview with .NET Rocks

Posted on Tuesday, 23 Aug 2016

Recently I once again had the pleasure of talking with Carl Franklin and Richard Campbell on the .NET Rocks show - this time about the Angular 2 command line interface (for those that remember, my last appearance was about shortcuts and productivity gains in Visual Studio). With all that's been happening with Angular 2 it was great to have the opportunity to spend a bit of time talking about the tooling around the framework and some of the features the Angular 2 CLI affords us.

So if you've been looking at Angular 2 recently then be sure to check out show 1339 and leave any feedback you may have.

Integration testing your ASP.NET Core middleware using TestServer

Posted on Sunday, 31 Jul 2016

Lately I've been working on a piece of middleware that simplifies temporarily or permanently redirecting a URL from one path to another, whilst easily expressing the permanency of the redirect by sending the appropriate 301 or 302 HTTP Status Code to the browser.

If you've ever re-written a website and had to ensure old, expired URLs didn't result in 404 errors and lost traffic then you'll know what a pain this can be when dealing with a large number of expired URLs. Thankfully using .NET Core's new middleware approach, this task becomes far easier - so much so that I've wrapped it into a library I intend to publish to NuGet:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    app.UseRequestRedirect(r => r.ForPath("/32/old-path.html").RedirectTo("/32/new-path").Permanently());
    app.UseRequestRedirect(r => r.ForPath("/33/old-path.html").RedirectTo("/33/new-path").Temporarily());
    app.UseRequestRedirect(r => r.ForPath("/34/old-path.html").RedirectTo(DeferredQueryDbForPath).Temporarily());
    ...
}

private string DeferredQueryDbForPath(string oldPath){
    /* Query database for new path only if old path is hit */
    return newPath;
}

Whilst working on this middleware I was keen to add some integration tests for a more complete range of coverage. After a bit of digging I realised that doing so is actually really simple, thanks to the TestServer class available as part of the Microsoft.AspNetCore.TestHost package.

What is Test Server?

TestServer is a lightweight and configurable host server designed solely for testing purposes. Its ability to create and serve test requests without the need for a real web host is its true value, making it perfect for testing middleware libraries (amongst other things!) that take a request, act upon it and eventually return a response. In the case of the aforementioned middleware, the response we'll be testing will be a status code informing the browser that the page has moved, along with the destination where the page is now located.

TestServer Usage

As mentioned above, you'll find the TestServer class within the Microsoft.AspNetCore.TestHost package, so first of all you'll need to add it to your test project either by using the following NuGet command:

Install-Package Microsoft.AspNetCore.TestHost

Or by updating your project.json file directly:

"dependencies": {
    ...
    "Microsoft.AspNetCore.TestHost": "1.0.0",
    ...
},

Once the NuGet package has downloaded we're ready to start creating our tests.

After creating our test class, the first thing we need to do is configure an instance of WebHostBuilder, ensuring we add our configured middleware to the pipeline. Once configured, we create our instance of TestServer, which then bootstraps our test server based on the supplied WebHostBuilder configuration.

[Fact]
public async Task Should_Redirect_Permanently()
{
    // Arrange
    var builder = new WebHostBuilder()
        .Configure(app => {
            app.UseRequestRedirect(r => r.ForPath("/old/").RedirectTo("/new/").Permanently());
        }
    );

    var server = new TestServer(builder);

    // Act
    ...
}

Next we need to manually create a new HTTP Request, passing the parameters required to exercise our middleware. In this instance, using the redirect middleware, all I need to do is create a new GET request to the path outlined in my arrange code snippet above. Once created we simply pass our newly created HttpRequestMessage to our configured instance of TestServer.

// Act
var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/old/");
var responseMessage = await server.CreateClient().SendAsync(requestMessage);

// Assert
...

Now all that's left is to assert our test using the response we received from our TestServer.SendAsync() method call. In the example below I'm using the assertion library Shouldly to assert that the correct status code is emitted and the correct path (/new/) is returned in the Location header.

// Assert
responseMessage.StatusCode.ShouldBe(HttpStatusCode.MovedPermanently);
responseMessage.Headers.Location.ToString().ShouldBe("/new/");

The complete test will look like this:

public class IntegrationTests
{
    [Fact]
    public async Task Should_Redirect_Permanently()
    {
        // Arrange
        var builder = new WebHostBuilder()
            .Configure(app => {
                app.UseRequestRedirect(r => r.ForPath("/old/").RedirectTo("/new/").Permanently());
            }
        );

        var server = new TestServer(builder);

        // Act
        var requestMessage = new HttpRequestMessage(new HttpMethod("GET"), "/old/");
        var responseMessage = await server.CreateClient().SendAsync(requestMessage);

        // Assert
        responseMessage.StatusCode.ShouldBe(HttpStatusCode.MovedPermanently);
        responseMessage.Headers.Location.ToString().ShouldBe("/new/");
    }
}

Conclusion

In this post we looked at how simple it is to test middleware using the TestServer class. Whilst the above example is quite trivial, hopefully it provides you with enough of an understanding of how you can start writing integration or functional tests for your own middleware.

Proxying HTTP requests in ASP.NET Core using Kestrel

Posted on Saturday, 02 Jul 2016

A bit of a short post this week, but hopefully one that will save some of you a bit of googling!

Recently I've been doing a lot of ASP.NET Core MVC and in one project I've been working on I have an admin login area. My first instinct was to create the login section as an Area within the main application project.

Creating a login area in the same application would work, but other than sharing an /admin/ path it really was a completely separate application: it has different concerns and a completely different UI (in this instance an Angular 2 application talking to a back-end API). For these reasons, creating the admin section as an MVC Area just felt wrong - so I began to look at what Kestrel could offer in terms of proxying requests to another application. This way I could keep my user-facing website as one project and the administration area as another, allowing them to grow independently of one another.

Whilst proxying requests is possible in IIS, I'd like the option of hosting the application across various platforms, so I was keen to see what Kestrel had to offer.

Enter the ASP.NET Core Proxy Middleware!

After a little digging it came as no surprise that there was some middleware that made proxying requests a breeze. The middleware approach in ASP.NET Core lends itself to such a task, and setup was so simple that I felt it merited a blog post.

After installing the Microsoft.AspNetCore.Proxy NuGet package via the "Install-Package Microsoft.AspNetCore.Proxy" command, all I had to do was hook the proxy middleware up to my pipeline using the MapWhen method within my application's Startup class:

// Startup.cs
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...

    app.MapWhen(IsAdminPath, builder => builder.RunProxy(new ProxyOptions
    {
        Scheme = "http",
        Host = "localhost",
        Port = "8081"
    }));

    ...

}

private static bool IsAdminPath(HttpContext httpContext)
{
    return httpContext.Request.Path.Value.StartsWith(@"/admin/", StringComparison.OrdinalIgnoreCase);
}

As you can see, all I'm doing is passing a method that checks whether the path begins with /admin/.

Once set up, all you need to do is configure your second application (in this instance my admin application) to listen on the configured port. You can do this within the Program class via the UseUrls extension method:

// Program.cs
public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseUrls("http://localhost:8081")
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

Now, if you start up your application and navigate to /admin/ (or whatever path you've specified) the request should be proxied to your secondary application!

Happy coding!

Fira Mono - An exceptional programming font, and now with (optional) ligatures

Posted on Friday, 17 Jun 2016

I've always been a fan of customising my IDE or text editor of choice, and one such customisation (and often the first thing I do after installing an editor) is setting up my font of choice, which has long been Google's Droid Sans.

Recently, however, I was introduced to a rather delightful-looking typeface called Fira Mono that's been designed by Mozilla specifically for their Firefox OS.

At first I didn't think much of the font, but the more I saw it being used the more it grew on me. Eventually I decided to download it and give it a try, and having used it now for a number of days I've no intention of switching back to Droid Sans.

Fira Mono in Visual Studio Code:

Fira Mono in Visual Studio:

Would you like ligatures with that?

If you're a developer who likes your programming fonts with ligatures, then there's a version of the font available called Fira Code.

Downloading Fira Mono

The font itself is open source, so if you're interested in giving it a try then download it via Font Squirrel here. Once it's extracted to your fonts directory, load up Visual Studio (or restart it so Visual Studio can load the font), go to Tools > Options > Environment > Fonts and Colors and select it from the Font dropdown.

Having experimented with various font sizes (in Visual Studio only), font size 9 appears to work really well with Fira Mono.

As mentioned above, if you'd like the ligatures then head over here to download them.

Select by camel case - the greatest ReSharper setting you never knew

Posted on Monday, 06 Jun 2016

One of ReSharper's most powerful features is the sheer number of additional shortcuts it adds to Visual Studio, and out of the arsenal of shortcuts available my most used have to be the ones that enable me to modify and manipulate code with as few wasted keystrokes as possible. Ultimately, this boils down to the following two shortcuts:

1: Expand/Shrink Selection (CTRL + ALT + Right to expand, CTRL + ALT + Left to shrink)
This shortcut enables you to expand a selection by scope, meaning pressing CTRL + ALT + Right will start by highlighting the code your cursor is over, then the line, then the function scope, then the class and namespace. Check out the following gif to see an example - be sure to watch the selected area!

Expand/Shrink Selection:

2: Increase selection by word (CTRL + Shift + Right to expand, CTRL + Shift + Left to shrink)

A few of you will probably notice that this shortcut isn't really a ReSharper shortcut - and you'd be right. But nonetheless, once harnessed, increase/decrease selection by word is extremely powerful when it comes to renaming variables, methods, classes etc., and will serve you well if mastered.

Where am I going with this?

Whilst the aforementioned shortcuts are excellent tools to add to your shortcut toolbox, one thing I always wished they would do was expand the selection by camel case, allowing me to highlight words with more precision and save the additional key presses when renaming the latter part of a variable. For instance, instead of highlighting the whole word in one go (say, the word ProductService), it would first highlight the word Product, followed by Service after the second key press.
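The camel-hump behaviour described above boils down to splitting an identifier at its uppercase boundaries. A rough sketch of that splitting logic (illustrative only, not ReSharper's actual implementation):

```javascript
// Split an identifier into its "camel humps" - the sub-words that start
// at each uppercase letter.
function camelHumps(identifier) {
    return identifier.split(/(?=[A-Z])/).filter(function (part) {
        return part.length > 0;
    });
}

console.log(camelHumps("ProductService")); // [ 'Product', 'Service' ]
```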

Having wanted this for some time now, I was pleasantly surprised when I stumbled across a ReSharper setting that enables just this. It can be enabled by going to ReSharper > Options > Environment > Editor > Editor Behaviour and selecting the Use CamelHumps checkbox.

The problem I've found when enabling this is that the setting overwrites the default behaviour of CTRL + ALT + Left / Right. Whilst this may be fine for some, I would rather have the ability to choose when to highlight by word and when to highlight by camel case. Luckily you can do just that via the ReSharper_HumpPrev and ReSharper_HumpNext commands that are available for binding in Visual Studio.

To do this do the following:

  1. Open the Visual Studio Options window from Tools > Options
  2. Expand Environment and scroll down to Keyboard
  3. Map the two commands ReSharper_HumpNext and ReSharper_HumpPrev to the key mappings you wish (E.g. ALT+Right Arrow and ALT+Left Arrow) by selecting the command from the list and entering the key mapping in the Press shortcut keys text box, then click Assign.

Now, with Use CamelHumps enabled and my shortcut keys customised, I can choose between the default selection by word, or extended selection by camel case - giving me even more code-editing precision!

Social authentication via Google in ASP.NET Core MVC

Posted on Sunday, 29 May 2016

Lately I've been re-developing my blog and moving it to .NET Core MVC. While doing so I decided to change authentication methods to take advantage of Google's OAuth API, as I didn't want the hassle of managing usernames and passwords.

Initially, I started looking at the SimpleAuthentication library - but quickly realised ASP.NET Core already provided support for third party authentication providers via the `Microsoft.AspNet.Authentication` library.

Having implemented cookie-based authentication I thought I'd take a moment to demonstrate how easy it is with ASP.NET Core's new middleware functionality.

Let's get started.

Sign up for Google Auth Service

Before we start, we're going to need to register our application with the Google Developers Console and create a Client ID and Client Secret (which we'll use later in this demonstration).

  1. To do this go to Google's developer console and click "Create Project". Enter your project name (in this instance it's called BlogAuth) then click Create.
  2. Next we need to enable authentication with Google's social API (Google+). Within the Overview page click the Google+ API link located under Social API and click Enable once the Google+ page has loaded.
  3. At this point you should see a message informing you that we need to create our credentials. To do this click the Credentials link on the left hand side, or the Go to Credentials button.
  4. Go across to the OAuth Consent Screen and enter a name of the application you're setting up. This name is visible to the user when authenticating. Once done, click Save.
  5. At this point we need to create our ClientId and ClientSecret, so go across to the Credentials tab and click Create Credentials and select OAuth client ID from the dropdown then select Web Application.
  6. Now we need to enter our app details. Enter an app name (used for recognising the app within Google Developer Console) and enter your domain into the Authorized JavaScript origins. If you're developing locally then enter your localhost address into this field including port number.
  7. Next enter the return path into the Authorized redirect URIs field. This is a callback path that Google will use to set the authorisation cookie. In this instance we'll want to enter http://<domain>:<port>/signin-google (where domain and port are the values you entered in step 6).
  8. Once done click Create.
  9. You should now be greeted with a screen displaying your Client ID and Client Secret. Take a note of these as we'll need them shortly.

Once you've stored your Client ID and Secret somewhere you're safe to close the Google Developer Console window.

Authentication middleware setup

With our Client ID and Client Secret in hand, our next step is to set up authentication within our application. Before we start, we first need to import the authentication packages into our solution via NuGet (the cookie and Google providers moved under the Microsoft.AspNetCore.Authentication.* namespace in RC2):

// RC1
Install-Package Microsoft.AspNet.Authentication

// RC2
Install-Package Microsoft.AspNetCore.Authentication.Cookies
Install-Package Microsoft.AspNetCore.Authentication.Google

Once installed it's time to hook it up to ASP.NET Core's pipeline within your solution's `Startup.cs` file.

First we need to register our authentication scheme with ASP.NET within the `ConfigureServices` method:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    ...
    
    // Add authentication middleware and inform .NET Core MVC what scheme we'll be using
    services.AddAuthentication(options => options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme);
        
    ...
}


public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{

    ...
    // Adds a cookie-based authentication middleware to application
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        LoginPath = "/account/login",
        AuthenticationScheme = "Cookies",
        AutomaticAuthenticate = true,
        AutomaticChallenge = true
    });

    // Plugin Google Authentication configuration options
    app.UseGoogleAuthentication(new GoogleOptions
    {
        ClientId = "your_client_id",
        ClientSecret = "your_client_secret",
        Scope = { "email", "openid" }
    });
    
    ...

}

In terms of configuring our ASP.NET Core MVC application to use Google for authentication - we're done! (yes, it's that easy, thanks to .NET Core MVC's middleware approach). 

All that's left to do now is to plumb in our UI and controllers.

Setting up our controller

First, let's go ahead and create the controller that we'll use to authenticate our users. We'll call this controller AccountController:

public class AccountController : Controller
{
    [HttpGet]
    public IActionResult Login()
    {
        return View();
    }
    
    public IActionResult External(string provider)
    {
        var authProperties = new AuthenticationProperties
        {
            // Specify where to return the user after successful authentication with Google
            RedirectUri = "/account/secure"
        };

        return new ChallengeResult(provider, authProperties);
    }
    
    [Authorize]
    public IActionResult Secure()
    {
        // Yay, we're secured! Any unauthenticated access to this action will be redirected to the login screen
        return View();
    }

    public async Task<IActionResult> LogOut()
    {
        await HttpContext.Authentication.SignOutAsync("Cookies");

        return RedirectToAction("Index", "Homepage");
    }
}

Now that we've created the AccountController we'll use to authenticate users, we also need to create views for the Login and Secure controller actions. Please be aware that these are rather basic and simply serve to demonstrate the process of logging in via Google authentication.

// Login.cshtml

<h2>Login</h2>
<a href="/account/external?provider=Google">Sign in with Google</a>

// Secure.cshtml

<h2>Secured!</h2>
This page can only be accessed by authenticated users.

Now, if we fire up our application, head to the /login/ page and click Sign in with Google, we should be taken to the Google account authentication screen. Once we click Continue we should be automatically redirected back to our /secure/ page as expected!

ASP.NET Core tag helpers - with great power comes great responsibility

Posted on Monday, 09 May 2016

I recently watched a Build 2016 talk by N. Taylor Mullen in which Taylor demonstrated the power of ASP.NET Core MVC's new tag helpers. Whilst I've been keeping up to date with the changes and improvements being made to Razor, there were a couple of times my jaw dropped as Taylor covered points that were completely new to me. These points really highlighted how powerful the Razor engine is becoming - but as Ben Parker said in Spider-Man, "With great power comes great responsibility".

This post serves as a review of how powerful the Razor tag engine is, but also a warning of potential pitfalls you may encounter as your codebase grows.

The power of ASP.NET Core MVC's tag engine

For those of you that haven't been keeping up to date with the changes in ASP.NET Core MVC, one of the new features included within Razor is Tag Helpers. At their essence, tag helpers allow you to replace Razor's jarring syntax with a more natural, HTML-like syntax. If we take a moment to compare a tag helper to the equivalent HTML helper you'll see the difference (remember, you can still use the HTML helpers; tag helpers do not replace them and will happily work side by side in the same view).

// Before - HTML Helpers
@Html.ActionLink("Click me", "MyAction", "MyController", null, new { @class = "my-css-classname", data_my_attr = "my-attribute" })

// After - Tag Helpers
<a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</a>

Whilst both of these will output the same HTML, it's clear to see how much more natural the tag helper syntax looks and feels. In fact, with the data- prefix being optional when using data attributes in HTML, you could easily mistake the tag helper for plain HTML (more on this later).

Building your own tag helpers

It goes without saying that we're able to create our own tag helpers, and this is where they get extremely powerful. Let's create a tag helper from start to finish. The following example is trivial, but if you stick with me hopefully you'll see why I chose it as we near the end. So let's begin by creating a tag helper that automatically adds a link-juice preserving rel="nofollow" attribute to outbound links:

public class NoFollowTagHelper : TagHelper
{
    // Public properties become available on our custom tag as attributes.
    public string Href { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "a"; // Specify our output tag name
        output.TagMode = TagMode.StartTagAndEndTag; // The type of tag we wish to create

        output.Attributes.SetAttribute("href", Href);

        // Only add rel="nofollow" to absolute URLs pointing away from our own
        // domain; relative links such as /about are left untouched.
        if (Href.StartsWith("http") && !Href.Contains("josephwoodward.co.uk"))
        {
            output.Attributes.SetAttribute("rel", "nofollow");
        }

        base.Process(context, output);
    }
}

Before continuing, it's worth noting that our class name (NoFollowTagHelper) is what determines our custom tag helper's element name: Razor will strip the TagHelper suffix if present, then add hyphens before the uppercase characters and lowercase the string - so NoFollowTagHelper becomes no-follow.
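The naming convention can be approximated with a small, framework-free sketch (this mirrors the behaviour described above rather than Razor's actual internals):

```csharp
using System;
using System.Text.RegularExpressions;

class TagNameDemo
{
    // Approximates Razor's default convention: strip the "TagHelper"
    // suffix, then kebab-case the remaining class name.
    public static string ToTagName(string className)
    {
        const string suffix = "TagHelper";
        if (className.EndsWith(suffix))
            className = className.Substring(0, className.Length - suffix.Length);

        // Insert a hyphen before each uppercase letter (except the first),
        // then lowercase the whole string.
        return Regex.Replace(className, "(?<!^)[A-Z]", "-$0").ToLowerInvariant();
    }

    static void Main()
    {
        Console.WriteLine(ToTagName("NoFollowTagHelper"));    // no-follow
        Console.WriteLine(ToTagName("ImageLoaderTagHelper")); // image-loader
    }
}
```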

Before we can use our tag helper we need to tell Razor where to find it.

Loading our tag helper

To load our tag helper we need to add it to our _ViewImports.cshtml file. The sole purpose of _ViewImports.cshtml is to reference the assemblies used by our views, saving us from littering each view with assembly references. Using _ViewImports.cshtml we can do this in one place, much like we used to register custom HTML helpers in the Web.config in previous versions of ASP.NET MVC.

// _ViewImports.cshtml
@using TagHelperDemo
@using TagHelperDemo.Models
@using TagHelperDemo.ViewModels.Account
@using TagHelperDemo.ViewModels.Manage
@using Microsoft.AspNet.Identity
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, TagHelperDemo" @* Reference the assembly containing our tag helper here *@

The asterisk will load all tag helpers within the TagHelperDemo assembly. If you wish to only load a single tag helper you can specify it like so:

// _ViewImports.cshtml
...
@addTagHelper "ImageLoaderTagHelper, TagHelperDemo"

Using our tag helper

Now that we've created our tag helper and referenced it, any <no-follow> elements will be transformed into anchor links, with a rel="nofollow" attribute added when the href points to an external domain:

<!-- Our custom tag helper -->
<no-follow href="http://outboundlink.com">Thanks for visiting</no-follow>
<no-follow href="/about">About</no-follow>

<!-- The transformed output -->
<a href="http://outboundlink.com" rel="nofollow">Thanks for visiting</a>
<a href="/about">About</a>

But wait! There's more!

Ok, so creating custom no-follow tags isn't ideal and is quite silly when we can just type normal HTML, so let's go one step further. With the new tag helper syntax you can actually transform normal HTML tags too! Let's demonstrate this awesomeness by modifying our nofollow tag helper:

[HtmlTargetElement("a", Attributes = "href")]
public class NoFollowTagHelper : TagHelper
{
    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        var href = output.Attributes["href"].Value.ToString();

        // Only rewrite absolute URLs pointing away from our own domain.
        if (href.StartsWith("http") && !href.Contains("josephwoodward.co.uk"))
        {
            output.Attributes.SetAttribute("rel", "nofollow");
        }

        base.Process(context, output);
    }
}

As you'll see, we've removed some redundant code and added the HtmlTargetElement attribute. This attribute is what allows us to target existing HTML elements and add additional functionality. Now, if we look at our rendered output, ALL of our anchors have been processed by our NoFollowTagHelper class, but only those with outbound links have been transformed:

<!-- Before -->
<a href="http://outboundlink.com">Thanks for visiting</a>
<a href="/about">About</a>

<!-- After -->
<a href="http://outboundlink.com" rel="nofollow">Thanks for visiting</a>
<a href="/about">About</a>

We've retrospectively changed our HTML output without needing to go through our codebase! For those of you that have worked on large applications and needed to create some kind of consistency between views, you'll hopefully understand how powerful this can be and the potential use cases for it. In fact, this is exactly how ASP.NET Core resolves the tilde (~/) in an img src path - see for yourself here.

Moving on

So far we've spent the duration of this blog post talking about how powerful ASP.NET Core MVC's Tag Helpers are, but with great power comes great responsibility - so let's take a moment to look at the downsides of tag helpers and ways we can prevent potential pitfalls as we use them.

The Responsibility

They look just like HTML
 
When the ASP.NET team first revealed tag helpers to the world there were mixed reactions over the syntax. The power of Tag Helpers was clear, but some people felt that the blurring of lines breaks the separation of concerns between HTML and Razor. The following comments, taken from Scott Hanselman's ASP.NET 5 (vNext) Work in Progress - Exploring TagHelpers post, demonstrate the feelings of some:

What are the design goals for this feature? I see less razor syntax, but just exchanged for more non-standard html-like markup. I'm still fond of the T4MVC style of referencing controller actions.

This seems very tough to get on-board with. My biggest concern is how do we easily discern between which more obscure attributes are "TagHelper" related vs which ones are part of the HTML spec? When my company hires new devs, I can't rely on the fact that they would realize that "action" is a "server-side" attribute, but then things like "media" and "type" are HTML ... not to mention how hard it would be to tell the difference if I'm trying to maintain code where folks have mixed server-side/html attributes.

This lack of distinction between HTML and Razor quickly becomes apparent when you open a view file in a text editor that doesn't support the Tag Helper syntax highlighting that Visual Studio does. Can you spot what's HTML and what's a tag helper in the following screenshot?

The solution

Luckily there is a solution to help people discern between HTML and Razor, and that's to force prefixes using the @tagHelperPrefix declaration.

By adding the @tagHelperPrefix declaration to the top of your view file you're able to force prefixes on all of the tags within that current view:

// Index.cshtml
@tagHelperPrefix "helper:"

...

<div>
    <helper:a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</helper:a>
</div>

With a tagHelperPrefix declaration specified in a page, any tag helper that isn't prefixed with the specified prefix will be completely ignored by the Razor engine and passed through to the output as plain markup (note: you also have to prefix the closing tag):

// Index.cshtml - specify helper prefix
@tagHelperPrefix "helper:"

...

<div>
    <!-- Missing the helper: prefix, so Razor ignores it -->
    <a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</a>
</div>
<!-- Output without the prefix - the asp- attributes are emitted verbatim rather than processed: -->
<div>
    <a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</a>
</div>

One problem that may arise from this solution is that you may forget to add the prefix declaration to your view file. To combat this you can add the prefix declaration to the _ViewImports.cshtml file (which we talked about earlier). As all views automatically inherit from _ViewImports.cshtml, our prefix rule will cascade down through the rest of our views. As you'd expect, this change will force all tag helpers to require your prefix - even the native ASP.NET Core MVC ones, including any anchor or image tags that feature a tilde.
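For example, moving the declaration into _ViewImports.cshtml alongside the @addTagHelper directives from earlier means every view picks it up:

```
// _ViewImports.cshtml
@tagHelperPrefix "helper:"
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, TagHelperDemo"
```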

Unexpected HTML behaviour

Earlier in this article, the second version of NoFollowTagHelper demonstrated how we can harness the power of Tag Helpers to transform any HTML element. Whilst the ability to perform such transformations is extremely powerful, we're effectively taking HTML, a simple markup language with very little native behaviour, and giving it this new power.

Let me try and explain.

If you were to copy and paste a page of HTML into a .html file and look at it, you'd be confident that there's no magic going on in the markup - effectively, what you see is what you get. Now rename that page to .cshtml and put it in an ASP.NET Core MVC application with a number of tag helpers that don't require a prefix, and you no longer have that same confidence. This lack of confidence creates uncertainty in what was once static HTML. It's the same problem you have when performing DOM manipulation using JavaScript, which is why I prefer to prefix selectors targeted by JavaScript with 'js-', making it clear to the reader that the element is being touched by JavaScript (as opposed to selecting DOM elements by the classes used for styling purposes).

To demonstrate this with an example, what does the following HTML element do?

<img src="/profile.jpg" alt="Profile Picture" />

In any other context it would be a simple profile picture served from the root of your application. Given the lack of class or id attributes, you'd be fairly confident there's no JavaScript targeting it either.

With Tag Helpers added to the equation this HTML may not be what you expect. With an image tag helper registered against the img element, when rendered it could actually become the following:

<img src="http://cdn.com/profile.mobile.jpg" alt="Profile Picture" />
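The rewrite shown above can be sketched as a plain function (the CDN host and .mobile suffix are invented for illustration, just as in the example output):

```csharp
using System;
using System.IO;

class ImageRewriteDemo
{
    // A hypothetical transformation an image tag helper might apply:
    // prepend a CDN host and insert a device-specific suffix before the
    // file extension. Both values here are invented for illustration.
    public static string RewriteSrc(string src)
    {
        var extension = Path.GetExtension(src); // ".jpg"
        var withoutExtension = src.Substring(0, src.Length - extension.Length);
        return "http://cdn.com" + withoutExtension + ".mobile" + extension;
    }

    static void Main()
    {
        Console.WriteLine(RewriteSrc("/profile.jpg")); // http://cdn.com/profile.mobile.jpg
    }
}
```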

Ultimately, the best way to avoid this unpredictable behaviour is to ensure you use prefixes, or at least to ensure your team is clear as to which tag helpers are and are not active.

On closing

Hopefully this post has been valuable in highlighting the good, the bad and the potentially ugly consequences of Tag Helpers in your codebase. They're an extremely powerful addition to ASP.NET Core MVC, but as with all things, there is potential to shoot yourself in the foot.

Capturing IIS / ASP.NET traffic in Fiddler

Posted on Monday, 25 Apr 2016

Recently, whilst debugging an issue I needed to capture the traffic being sent from my local application to an external RESTful web service. In this instance, I needed to see the contents of a JWT token being passed to the service to verify some of the data. Thankfully, Fiddler by Telerik is just the tool for the job.

What is Fiddler?

Fiddler is a super powerful, free web debugging proxy tool created by the guys and girls at Telerik. Once launched, Fiddler will capture all incoming and outgoing traffic from your machine, allowing you to analyse traffic, manipulate HTTP (and HTTPS!) requests and perform a whole host of traffic based operations. It's a fantastic tool for debugging and if you don't have it I'd highly recommend you take a look at it. Did I say it's 100% free too?

Capturing ASP.NET / IIS Traffic

By default, Fiddler is configured to register itself as the system proxy for Microsoft Windows Internet Services (WinInet) - the HTTP layer used by IE (and other browsers), Microsoft Office, and many other products. Whilst this default configuration is suitable for the majority of your debugging, if you wish to capture traffic from IIS (which bypasses WinInet), you'll need to re-route your IIS traffic through Fiddler by modifying your application's Web.config.

Step 1: Update your Web.config

To do this, simply open your Web.config and add the following snippet of code after the <configSections> element.

<system.net>
    <defaultProxy enabled="true">
        <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="False"/>
    </defaultProxy>
</system.net>

Step 2: Configure Fiddler to use the same port

Now that we've routed our IIS traffic through port 8888, we have to configure Fiddler to listen on the same port. To do this, simply open Fiddler, go to Tools > Fiddler Options > Connections, then change the "Fiddler listens on port" setting to 8888.

Now if you fire up your application you'll start to see your requests stacking up in Fiddler ready for your inspection.

Happy debugging!

About Me

I’m a .NET focused software engineer currently working at Just Eat in Bristol, a regular speaker at user groups (and the occasional podcast), an active member within the .NET community, OSS contributor and co-organiser of the .NET South West meet up.

Having worked professionally in the software/web industry for over 10 years across a variety of frameworks and languages, I’ve found my real interests lie in good coding practices and writing maintainable software, regardless of the language.


Feel free to follow me on any of the channels below.

 
 