Latest Blog Posts

Setting up Google OAuth in ASP.NET Core MVC

Posted on Sunday, 29 May 2016

Lately I've been re-developing my blog and moving it to .NET Core MVC. As I've been doing this I decided to change authentication methods to take advantage of Google's OAuth API, as I didn't want the hassle of managing usernames and passwords.

Initially, I started looking at the SimpleAuthentication library - but quickly realised ASP.NET Core already provided support for third party authentication providers via the `Microsoft.AspNet.Authentication` library.

Having implemented cookie based authentication I thought I'd take a moment to demonstrate how easy it is with ASP.NET's new middleware functionality.

Let's get started.

Sign up for Google Auth Service

Before we start, we're going to need to register our application with the Google Developer's Console and create a Client Id and Client Secret (which we'll use later on in this demonstration).

  1. To do this go to Google's developer console and click "Create Project". Enter your project name (in this instance it's called BlogAuth) then click Create.
  2. Next we need to enable authentication with Google's social API (Google+). Within the Overview page click the Google+ API link located under Social API and click Enable once the Google+ page has loaded.
  3. At this point you should see a message informing you that we need to create our credentials. To do this click the Credentials link on the left hand side, or the Go to Credentials button.
  4. Go across to the OAuth Consent Screen and enter the name of the application you're setting up. This name is visible to the user when authenticating. Once done, click Save.
  5. At this point we need to create our ClientId and ClientSecret, so go across to the Credentials tab and click Create Credentials and select OAuth client ID from the dropdown then select Web Application.
  6. Now we need to enter our app details. Enter an app name (used for recognising the app within Google Developer Console) and enter your domain into the Authorized JavaScript origins. If you're developing locally then you're fine to enter your localhost address into this field. Next enter the return path into the Authorized redirect URIs field. This is the path users will be redirected to after authentication with Google. Once done click Create.
  7. You should now be greeted with a screen displaying your Client ID and Client Secret. Take a note of these as we'll need them shortly.

Once you've stored your Client ID and Secret somewhere you're safe to close the Google Developer Console window.

Authentication middleware setup

With our Client ID and Client Secret in hand, our next step is to set up authentication within our application. Before we start, we first need to import the `Microsoft.AspNet.Authentication` package into our solution via NuGet using the following command (depending on your version you may also need the Microsoft.AspNet.Authentication.Cookies and Microsoft.AspNet.Authentication.Google packages, which contain the cookie and Google middleware used below):

Install-Package Microsoft.AspNet.Authentication

Once installed it's time to hook it up to ASP.NET Core's pipeline within your solution's `Startup.cs` file.

First we need to register our authentication scheme with ASP.NET within the `ConfigureServices` method:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    ...
    
    // Add authentication middleware and inform .NET Core MVC what scheme we'll be using
    services.AddAuthentication(options => options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme);
        
    ...
}

 

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{

    ...
    // Adds a cookie-based authentication middleware to application
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        LoginPath = "/account/login",
        AuthenticationScheme = "Cookies",
        AutomaticAuthenticate = true,
        AutomaticChallenge = true
    });

    // Plugin Google Authentication configuration options
    app.UseGoogleAuthentication(new GoogleOptions
    {
        ClientId = "your_client_id",
        ClientSecret = "your_client_secret",
        Scope = { "email", "openid" }
    });
    
    ...

}

In terms of configuring our ASP.NET Core MVC application to use Google for authentication - we're done! (yes, it's that easy, thanks to .NET Core MVC's middleware approach). 

All that's left to do now is to plumb in our UI and controllers.

Setting up our controller

First, let's go ahead and create the controller that we'll use to authenticate our users. We'll call this controller AccountController:

public class AccountController : Controller
{
    [HttpGet]
    public IActionResult Login()
    {
        return View();
    }
    
    public IActionResult External(string provider)
    {
        var authProperties = new AuthenticationProperties
        {
            // Specify where to return the user after successful authentication with Google
            RedirectUri = "/homepage/secure"
        };

        return new ChallengeResult(provider, authProperties);
    }
    
    [Authorize]
    public IActionResult Secure()
    {
        // Yay, we're secured! Any unauthenticated access to this action will be redirected to the login screen
        return View();
    }

    public async Task<IActionResult> LogOut()
    {
        await HttpContext.Authentication.SignOutAsync("Cookies");

        return RedirectToAction("Index", "Homepage");
    }
}

Now that we've created the AccountController we'll use to authenticate users, we also need to create views for the Login and Secure controller actions. Please be aware that these are rather basic and are simply a means to demonstrate the process of logging in via Google authentication.

// Login.cshtml

<h2>Login</h2>
<a href="/account/external?provider=Google">Sign in with Google</a>

// Secure.cshtml

<h2>Secured!</h2>
This page can only be accessed by authenticated users.

Now, if we fire up our application, head to the /account/login page and click Sign in with Google, we should be taken to the Google account authentication screen. Once we click Continue we should be automatically redirected back to our /secure/ page as expected!
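As a quick follow-on, once the user is signed in their Google profile details are exposed as claims on the current ClaimsPrincipal. Here's a minimal sketch of our Secure action reading them (this assumes the Google middleware has populated the standard name and email claims, and requires System.Linq and System.Security.Claims):

[Authorize]
public IActionResult Secure()
{
    // The authenticated user's details are available via the controller's User property
    string name = User.Identity.Name;
    string email = User.Claims.FirstOrDefault(c => c.Type == ClaimTypes.Email)?.Value;

    return View();
}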

ASP.NET Core tag helpers - with great power comes great responsibility

Posted on Monday, 09 May 2016

I recently watched a Build 2016 talk by N. Taylor Mullen where Taylor demonstrated the power of ASP.NET Core MVC's new tag helpers. Whilst I've been keeping up to date with the changes and improvements being made to Razor, there were a couple of times my jaw dropped as Taylor talked about points that were completely new to me. These points really highlighted how powerful the Razor engine is becoming - but as Ben Parker said in Spider-Man, "With great power comes great responsibility".

This post serves as a review of how powerful the Razor tag engine is, but also a warning of potential pitfalls you may encounter as your codebase grows.

The power of ASP.NET Core MVC's tag engine

For those of you that haven't been keeping up to date with the changes in ASP.NET Core MVC, one of the new features included within Razor is Tag Helpers. At their essence, tag helpers allow you to replace Razor's jarring syntax with a more natural HTML-like syntax. If we take a moment to compare a new tag helper to the equivalent HTML helper you'll see the difference (remember, you can still use the HTML helpers; tag helpers do not replace them and will happily work side by side in the same view).

// Before - HTML Helpers
@Html.ActionLink("Click me", "MyAction", "MyController", null, new { @class = "my-css-classname", data_my_attr = "my-attribute" })

// After - Tag Helpers
<a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</a>

Whilst both of these will output the same HTML, it's clear to see how much more natural the tag helper syntax looks and feels. In fact, with the data- prefix being optional when using data attributes in HTML, you could mistake the tag helper for plain HTML (more on this later).

Building your own tag helpers

It goes without saying that we're able to create our own tag helpers, and this is where they get extremely powerful. Let's create a tag helper from start to finish. The following example is trivial, but if you stick with me hopefully you'll see why I chose it as we near the end. So let's begin by creating a tag helper that automatically adds a link-juice-preserving rel="nofollow" attribute to outbound links:

public class NoFollowTagHelper : TagHelper
{
    // Public properties become available on our custom tag as attributes.
    public string Href { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "a"; // Specify our tag output name
        output.TagMode = TagMode.StartTagAndEndTag; // The type of tag we wish to create

        output.Attributes["href"] = Href;
        if (!output.Attributes["href"].Value.ToString().Contains("josephwoodward.co.uk"))
        {
            output.Attributes["rel"] = "nofollow";
        }

        base.Process(context, output);
    }
}

Before continuing, it's worth noting that our derived class name (NoFollowTagHelper) is what becomes our custom tag helper's name; Razor will insert hyphens between the uppercase characters and then lowercase the string. It will also remove the word TagHelper from the class name if it exists.
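To illustrate the convention with the two tag helper names referenced in this post:

public class NoFollowTagHelper : TagHelper { }    // usable in a view as <no-follow>
public class ImageLoaderTagHelper : TagHelper { } // usable in a view as <image-loader>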

Before we can use our tag helper we need to tell Razor where to find it.

Loading our tag helper

To load our tag helper we need to add it to our `_ViewImports.cshtml` file. The _ViewImports file's sole purpose is to hold view-related references in one place, saving us from littering our views with references to assemblies - much like we used to register custom HTML helpers in the Web.config in previous versions of ASP.NET MVC.

// _ViewImports.cshtml
@using TagHelperDemo
@using TagHelperDemo.Models
@using TagHelperDemo.ViewModels.Account
@using TagHelperDemo.ViewModels.Manage
@using Microsoft.AspNet.Identity
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, TagHelperDemo" // Reference our assembly containing our tag helper here.

The asterisk will load all tag helpers within the TagHelperDemo assembly. If you wish to only load a single tag helper you can specify it like so:

// _ViewImports.cshtml
...
@addTagHelper "ImageLoaderTagHelper, TagHelperDemo"

Using our tag helper

Now that we've created our tag helper and referenced it, any <no-follow> elements will be transformed into nofollow anchor links if the href points to an external domain:

// Our custom tag helper
<no-follow href="http://outboundlink.com">Thanks for visiting</no-follow>
<no-follow href="/about">About</no-follow>

<!-- The transformed output -->
<a href="http://outboundlink.com" rel="nofollow">Thanks for visiting</a>
<a href="/about">About</a>

But wait! There's more!

Ok, so creating custom no-follow tags isn't ideal and is quite silly when we can just type normal HTML, so let's go one step further. With the new tag helper syntax you can actually transform normal HTML tags too! Let's demonstrate this awesomeness by modifying our nofollow tag helper:

[HtmlTargetElement("a", Attributes = "href")]
public class NoFollowTagHelper : TagHelper
{
    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        var href = output.Attributes["href"];
        if (!href.Value.ToString().Contains("josephwoodward.co.uk"))
        {
            output.Attributes["rel"] = "nofollow";
        }

        base.Process(context, output);
    }
}

As you can see, we've removed some redundant code and added the HtmlTargetElement attribute. This attribute is what allows us to target existing HTML elements and add additional functionality. Now, if we look at our Razor code, ALL of our anchors have been processed by our NoFollowTagHelper class and only those with outbound links have been transformed:

<!-- Before -->
<a href="http://outboundlink.com">Thanks for visiting</a>
<a href="/about">About</a>

<!-- After -->
<a href="http://outboundlink.com" rel="nofollow">Thanks for visiting</a>
<a href="/about">About</a>

We've retrospectively changed the output of our HTML without needing to go through our codebase! For those that have worked on large applications and needed to create some kind of consistency between views, you'll hopefully understand how powerful this can be and the potential use cases for it. In fact, this is exactly how ASP.NET resolves the tilde (~/) in an img src path - see for yourself here.
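To illustrate the idea (the rendered path below assumes an application hosted under a /myapp base path):

<!-- In the view -->
<img src="~/images/profile.jpg" alt="Profile" />

<!-- Rendered output - the tilde is resolved to the application's base path -->
<img src="/myapp/images/profile.jpg" alt="Profile" />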

Moving on

So far we've spent the duration of this blog post talking about how powerful ASP.NET Core MVC's Tag Helpers are, but with great power comes great responsibility - so let's take a moment to look at the downsides of tag helpers and ways we can prevent potential pitfalls as we use them.

The Responsibility

They look just like HTML
 
When the ASP.NET team first revealed tag helpers to the world there were mixed reactions over the syntax. The power of Tag Helpers was clear, but some people felt the blurring of lines between HTML and Razor breaks the separation of concerns between the two. The following comments, taken from Scott Hanselman's ASP.NET 5 (vNext) Work in Progress - Exploring TagHelpers post, demonstrate the feelings of some:

What are the design goals for this feature? I see less razor syntax, but just exchanged for more non-standard html-like markup. I'm still fond of the T4MVC style of referencing controller actions.

This seems very tough to get on-board with. My biggest concern is how do we easily discern between which more obscure attributes are "TagHelper" related vs which ones are part of the HTML spec? When my company hires new devs, I can't rely on the fact that they would realize that "action" is a "server-side" attribute, but then things like "media" and "type" are HTML ... not to mention how hard it would be to tell the difference if I'm trying to maintain code where folks have mixed server-side/html attributes.

This lack of distinction between HTML and Razor quickly becomes apparent when you open a view file in a text editor that doesn't support the syntax highlighting of Tag Helpers that Visual Studio does. Can you spot what's HTML and what's a tag helper in the following screenshot?

The solution

Luckily there is a solution to help people discern between HTML and Razor, and that's to force prefixes using the @tagHelperPrefix declaration.

By adding the @tagHelperPrefix declaration to the top of your view file you're able to force prefixes on all of the tags within that current view:

// Index.cshtml
@tagHelperPrefix "helper:"

...

<div>
    <helper:a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</helper:a>
</div>

With a tagHelperPrefix declaration specified in a page, any tag helper that isn't prefixed with the specified prefix will be completely ignored by the Razor engine and emitted as plain, unprocessed markup (note: you also have to prefix the closing tag):

// Index.cshtml - specify helper prefix
@tagHelperPrefix "helper:"

...

<div>
    <!-- Haven't specified the helper: prefix -->
    <a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</a>
</div>

<!-- Output without the prefix - the asp- attributes are left untouched: -->
<div>
    <a asp-controller="MyController" asp-action="MyAction" class="my-css-classname" my-attr="my-attribute">Click me</a>
</div>

One problem that may arise from this solution is that you may forget to add the prefix declaration to your view file. To combat this you can add the prefix declaration to the _ViewImports.cshtml file (which we talked about earlier). As all views automatically inherit from _ViewImports.cshtml, our prefix rule will cascade down through the rest of our views. As you'd expect, this change will force all tag helpers to require your prefix - even the native ASP.NET MVC tag helpers, including any anchor or image tags that feature a tilde.
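For instance, building on the _ViewImports.cshtml file from earlier, the declaration would sit alongside our @addTagHelper directives:

// _ViewImports.cshtml
@tagHelperPrefix "helper:"
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, TagHelperDemo"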

Unexpected HTML behaviour

Earlier in this article, the second version of NoFollowTagHelper demonstrated how we can harness the power of Tag Helpers to transform any HTML element. Whilst the ability to perform such transformations is an extremely powerful one, we're effectively taking HTML, a simple markup language with very little native functionality, and giving it this new power.

Let me try and explain.

If you were to copy and paste a page of HTML into a .html file and look at it, you'd be confident that there's no magic going on with the markup - effectively what you see is what you get. Now rename that HTML page to .cshtml and put it in an ASP.NET Core MVC application with a number of tag helpers that don't require a prefix and you'll no longer have that same confidence. This lack of confidence can create uncertainty in what was once static HTML. It's the same problem you have when performing DOM manipulations using JavaScript, which is why I prefer to prefix selectors targeted by JavaScript with 'js-', making it clear to the reader that the element is being touched by JavaScript (as opposed to selecting DOM elements by classes used for styling purposes).

To demonstrate this with an example, what does the following HTML element do?

<img src="/profile.jpg" alt="Profile Picture" />

In any other context it would be a simple profile picture served from the root of your application. Considering the lack of classes or id attributes, you'd be fairly confident there's no JavaScript targeting it either.

With Tag Helpers added to the equation this HTML isn't what you expect. When rendered it actually becomes the following:

<img src="http://cdn.com/profile.mobile.jpg" alt="Profile Picture" />
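To make the point concrete, here's a minimal sketch of the kind of tag helper that could be responsible - entirely hypothetical, with an invented class name and CDN URL - registered via _ViewImports.cshtml somewhere in the project:

// Hypothetical example - silently rewrites every img src to a CDN-hosted mobile variant
[HtmlTargetElement("img", Attributes = "src")]
public class CdnImageTagHelper : TagHelper
{
    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        var src = output.Attributes["src"].Value.ToString();
        output.Attributes["src"] = "http://cdn.com" + src.Replace(".jpg", ".mobile.jpg");

        base.Process(context, output);
    }
}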

Ultimately, the best way to avoid this unpredictable behaviour is to ensure you use prefixes, or at the very least ensure your team is clear as to which tag helpers are active.

On closing

Hopefully this post has been valuable in highlighting the good, the bad and the potentially ugly consequences of Tag Helpers in your codebase. They're an extremely powerful addition to the .NET framework, but as with all things - there is potential to shoot yourself in the foot.

Capturing IIS / ASP.NET traffic in Fiddler

Posted on Monday, 25 Apr 2016

Recently, whilst debugging an issue I needed to capture the traffic being sent from my local application to an external RESTful web service. In this instance, I needed to see the contents of a JWT token being passed to the service to verify some of the data. Thankfully, Fiddler by Telerik is just the tool for the job.

What is Fiddler?

Fiddler is a super powerful, free web debugging proxy tool created by the guys and girls at Telerik. Once launched, Fiddler will capture all incoming and outgoing traffic from your machine, allowing you to analyse traffic, manipulate HTTP (and HTTPS!) requests and perform a whole host of traffic based operations. It's a fantastic tool for debugging and if you don't have it I'd highly recommend you take a look at it. Did I say it's 100% free too?

Capturing ASP.NET / IIS Traffic

By default, Fiddler is configured to register itself as the system proxy for Microsoft Windows Internet Services (WinInet) - the HTTP layer used by IE (and other browsers), Microsoft Office, and many other products. Whilst this default configuration is suitable for the majority of your debugging, if you wish to capture traffic from IIS (which bypasses WinInet), you'll need to re-route your IIS traffic through Fiddler by modifying your application's Web.config.

Step 1: Update your Web.config

To do this, simply open your Web.config and add the following snippet of code after the <configSections> element.

<system.net>
    <defaultProxy enabled="true">
        <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="False"/>
    </defaultProxy>
</system.net>

Step 2: Configure Fiddler to use the same port

Now that we've routed our IIS traffic through port 8888, we have to configure Fiddler to listen on the same port. To do this simply open Fiddler, go to Tools > Fiddler Options > Connections, then change the port listed within the "Fiddler listens on port" setting to 8888.

Now if you fire up your application you'll start to see your requests stacking up in Fiddler ready for your inspection.

Happy debugging!

Publishing your first NuGet package in 5 easy steps

Posted on Friday, 15 Apr 2016

So you've just finished writing a small .NET library for a one-off project and you pause for a moment and think "I should stick this on NuGet - others may find this useful!". You know what NuGet is and how it works, but having never published a package before you're unsure what to do and are short on time. If this is the case then hopefully this post will help you out and show you just how painless creating your first NuGet package is.

Let's begin!

Step 1: Download the NuGet command line tool

First, you'll need to download the NuGet command line tool. You can do this by going here and downloading the latest version (beneath the Windows x86 Commandline heading).

Step 2: Add the NuGet executable to your PATH system variables

Now we've downloaded the NuGet executable we want to add it to our PATH system variables. At this point, you could simply reference the executable directly - but before long you'll be wanting to contribute more of your libraries and adding it to your PATH system variables will save you more work in the long run.

If you already know how to add PATH variables then jump to Step 3, if not then read on.

Adding the nuget command to your PATH system variables

First, move the NuGet.exe you downloaded to a suitable location (I store mine in C:/NuGet/). Now, right-click My Computer (This PC if you're on Windows 10) and select Properties, click "Advanced System Settings", then click the "Environment Variables" button located within the Advanced tab. From here double-click the PATH variable in the top panel and create a new entry by adding the path to the directory that contains your NuGet.exe file (in this instance it's C:/NuGet/).

Now, if all's done right you should be able to open a Command Prompt window, type "nuget" and you'll be greeted with a list of NuGet commands.

Step 3: Create a Package.nuspec configuration file

In short, a .nuspec file is an XML-based configuration file that describes our NuGet package and its contents. For further reading on the role of the .nuspec file see the nuspec reference on nuget.org.

To create a .nuspec package manifest file, let's go to the root of our project we wish to publish (that's where I prefer to keep my .nuspec file as it can be added to source control) and open a Command Prompt window (Tip: Typing "cmd" in the folder path of your explorer window will automatically open a Command Prompt window pointing to the directory).

Now type "nuget spec" into your Command Prompt window. If all goes well you should be greeted with a message saying "Created 'Package.nuspec' successfully", and you should now see a Package.nuspec file in your project folder.

Let's take a moment to look inside of our newly created Package.nuspec file. It should look a little like below:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Package</id>
    <version>1.0.0</version>
    <authors>Joseph</authors>
    <owners>Joseph</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package description</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2016</copyright>
    <tags>Tag1 Tag2</tags>
    <dependencies>
      <dependency id="SampleDependency" version="1.0" />
    </dependencies>
  </metadata>
</package>

As you can see, all of the parameters are pretty self-explanatory (see the docs here if you have any uncertainties about what a setting does). The only one you may have a question about is the dependencies node - this is simply a list of the dependencies (and their versions) that your NuGet package relies on (see here for more info).

Now we've generated our NuGet configuration file, let's take a moment to fill it in.

Once you're done, your manifest file should look a little like the one below. The next step is to reference the files to be packaged. In the following example, you'll see that I've referenced the Release .dll file (take a look at the documentation here for more file options). You may also notice that I've removed the <dependencies> node, as my package doesn't have any additional dependencies.

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Slugity</id>
    <version>1.0.2</version>
    <title>Slugity</title>
    <authors>Joseph Woodward</authors>
    <owners>Joseph Woodward</owners>
    <licenseUrl>https://raw.githubusercontent.com/JosephWoodward/SlugityDotNet/master/LICENSE</licenseUrl>
    <projectUrl>https://github.com/JosephWoodward/SlugityDotNet</projectUrl>
    <iconUrl>https://raw.githubusercontent.com/JosephWoodward/SlugityDotNet/release/assets/logo_128x128.png</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Slugity is a simple, configuration based class library that's designed to create search engine friendly URL slugs</description>
    <language>en-US</language>
    <releaseNotes>Initial release of Slugity.</releaseNotes>
    <copyright>Copyright 2016</copyright>
    <tags>Slug, slug creator, slug generator, url, url creator</tags>
  </metadata>
  <files>
    <file src="bin\Release\Slugity.dll" target="lib\NET40" />
  </files>
</package>

Step 4: Creating your NuGet package

Once your Package.nuspec file has been filled in, let's create our NuGet package! To do this simply run the following command, replacing the path to the .csproj file with your own.

nuget pack Slugity/Slugity.csproj -Prop Configuration=Release -Verbosity detailed

If you have the time, take a moment to look over the various pack command options in the NuGet documentation.

Once the above command has been run, if all has gone well then you should see a YourPackageName.nupkg file appear in your project directory. This is our NuGet package that we can now submit to nuget.org!

Step 5: Submitting your NuGet package to nuget.org

We're almost there! Now all we need to do is head over to nuget.org and submit our package. If you're not already registered at nuget.org then you'll need to do so (see the "Register / Sign in" link at the top right of the homepage). Next, go to your Account page and click "Upload a package". Now all you need to do is upload your .nupkg package and verify the package details that nuget.org will use for your package listing!

Congratulations on submitting your first NuGet package!

I hope this guide has been helpful to you, and as always, if you have any questions then leave them in the comments!

Tips for learning on the job

Posted on Tuesday, 29 Mar 2016

Today, whilst out walking the dog I was listening to an episode of Developer Tea titled "Learning on the Job". As someone that's always looking for smarter ways of learning, this episode was of particular interest to me. In fact, it got me thinking - what tips would I recommend for learning on the job?

I find learning on the job interesting for a few reasons. First of all, you spend a good portion of time at your computer programming. This very fact means you have plenty of opportunities to improve your knowledge and understanding of the language or framework you're using. However, you also have to balance this opportunity with the fact that you're working and have things to do and deadlines to meet. These external pressures mean you have to be smart about how you learn.

One of the strengths of a good developer is their ability to learn on the fly, and we should always be looking for opportunities to learn.

With this in mind, this post is a collection of what I've found to be effective ways of learning on the job, whilst not letting it get in the way of your work.

REPLs, Fiddlers and online editors are your friends

REPLs are a great tool for learning. Within my browser's bookmark toolbar I have a whole host of online REPLs and fiddles for different languages, such as C#, TypeScript and JavaScript.

If I'm researching a problem or happen to come across some code that I'm struggling to understand, I'll often take a moment to launch the appropriate REPL or fiddle and experiment with the output to make sure I fully understand what the code is doing. These can be an invaluable tool for testing that base library function you've never used before, checking the output of an algorithm you've found, or learning what your transpiled TypeScript code will look like. In fact, some online tools such as .NET Fiddle also have the ability to decompile code all the way down to the IL/MSIL level - enabling you to gain an even greater understanding of what your code is doing.

Here are a few of the online tools I would recommend: .NET Fiddle for C#, the TypeScript Playground for TypeScript, and JSFiddle for JavaScript.

This aptly brings me onto my next tip.

Don't just copy and paste that code

Above I talked about REPLs and online fiddles. Use them! We've all copied and pasted some code from Stack Overflow to get something working. Take a moment to figure out exactly WHY the code works and what it's doing. Copying and pasting code that you don't understand is both dangerous and a wasted opportunity to learn something new. If you're too busy then bookmark it, or better yet - some of the editors listed above, such as .NET Fiddle, allow you to save snippets of code for later.

Write it down in a notepad

This tip I've found particularly effective and goes beyond learning whilst on the job, but I'll expound on this in a moment.

If you stumble across one of those bits of information that makes you think "Wow, I didn't know that. I should remember that!", take a moment and write it down in a notepad. The process of writing it down helps commit your new-found knowledge to your long-term memory. Another tip for further improving retention is to revisit some of your notes a couple of hours, or days, later - repetition is a proven way to improve the retention of such information.

Having a notebook on you is also useful when reading books. If you happen to read a sentence or paragraph worth remembering, take a moment to write it down in your own words in your notebook. I guarantee you'll be able to recall the information later that day far more clearly than you would have had you continued reading.

Share it with a co-worker or summarise it in a quick tweet

If you've just solved a problem or stumbled across a valuable piece of information then share it with your nearest co-worker. The very fact that you're talking about it or explaining the concept will help engrain it into your long-term memory. You also have the added benefit of sharing knowledge with others.

If no-one's around then you could always talk it through with your rubber duck, or summarise your new knowledge in a quick tweet.

Stack Overflow

Stack Overflow is another great tool for learning on the job. If you're a registered user then you should take full advantage of the ability to favourite answers or questions. Doing so enables you to review them in more detail at a more suitable time.

Pocket and other alternatives

Read-it-later services like Pocket are another powerful tool when it comes to learning on the job, and go a step beyond simple bookmarking.

If, whilst struggling with a problem, you happen to come across an interesting article but don't have the time to read it in its entirety, why not add it to your personal Pocket list and fill in the knowledge gap at a later date whilst you're waiting for the bus? Many apps like Pocket can automatically sync bookmarks with your smartphone for offline viewing, making them perfect for commuting.

I hope people find these tips valuable. I'm always interested in hearing other recommendations for learning on the job so if there's anything I've not mentioned then please feel free to add it to the comments!

Easy slug generation with Slugity

Posted on Friday, 18 Mar 2016

This year, as outlined in my goals for 2016 post, I've been keen to turn random bits of code I've written across a variety of projects into fully-fledged open-source libraries that can be downloaded via NuGet. My first library aims to tackle a minor problem faced by any developer working on public-facing web applications - generating search engine friendly URL slugs.

Introducing Slugity - The highly configurable C# slug generator

Whilst creating Slugity, my main focus was configurability. People, myself included, have opinions on how a URL should be formatted. Some people like the camel-case URLs that frameworks such as ASP.NET MVC encourage, others dogmatically favour lowercase slugs. Some people like to strip stop words from their slugs in order to shorten the overall URL length whilst retaining keyword density. Having jumped from pattern to pattern across a variety of projects myself, I was keen to develop Slugity to cater to these needs.

Having been working on Slugity for a number of days now, below are the configuration options included in its first release:

  • Customisable slug text case (CamelCase, LowerCase, Ignore)
  • String Separator
  • Slug's maximum length
  • Optional replacement characters
  • Stop Words

Slugity setup and configuring

Given that slugs aren't the most complicated strings to get right, setting up Slugity is a breeze. It comes with a default configuration that should be suitable for the majority of its users, myself included - but it's simple enough to configure if the default settings don't meet your requirements.

Using default configuration options:

var slugity = new Slugity();
string slug = slugity.GenerateSlug("A <span style=\"font-weight: bold\">customisable</span> slug generation library");

Console.WriteLine(slug);
//Output: a-customisable-slug-generation-library

Configuring Slugity:

If Slugity's default configuration doesn't meet your requirements then configuration is easy:

var configuration = new SlugityConfig
{
    TextCase = TextCase.LowerCase,
    StringSeparator = '_',
    MaxLength = 60
};

configuration.ReplacementCharacters.Add("eat", "munch on");

var slugity = new Slugity(configuration);
string slug = slugity.GenerateSlug("I like to eat lettuce");

Console.WriteLine(slug);
//Output: i_like_to_munch_on_lettuce

Moving forward I'd like to add the ability to swap out the classes that are responsible for formatting and cleaning the slug.

The code is currently up on GitHub and can be viewed here. I'm in the process of finishing off the last bits and pieces and then getting it added to NuGet, so stay tuned.

Update

The library has recently been added to NuGet and can be downloaded here, or via the following NuGet command:

Install-Package Slugity

OzCode Review - The magical extension that takes the pain out of debugging

Posted on Thursday, 10 Mar 2016

TLDR; Yes, OzCode is easily worth the money. It doesn't take you long to realise that the amount of time it saves you in debugging quickly pays for itself.

Debugging code can be both hard and time-consuming. Thankfully, the guys over at OzCode have been working hard to battle the debugging time sink-hole by developing OzCode, a magically intelligent debugging extension for Visual Studio.

Remembering back to when I first installed OzCode (when it was in its beta stage!), it took all of a few minutes to see that the OzCode team were onto a winner. I clearly remember entering into debug mode by pressing F5 and watching my Visual Studio code window light up with a host of new debugging tools. Tools that I've been crying out for in Visual Studio (or other editors for that matter!) for a long time. In fact, OzCode has changed the way I debug altogether.

What does OzCode cost, and is it worth it?

Before continuing, let's go over the price. OzCode is a paid-for Visual Studio extension, coming in at an extremely reasonable $79 (£55) for a personal licence. Is it worth the money? Definitely.

I'm always happy to pay for good software and OzCode is no exception. Suffice it to say, it doesn't take long to realise the value it provides and how quickly it pays for itself in the time it saves you. Don't believe me? Go try the free 30-day trial and see for yourself. I purchased it as soon as it was out of public beta as it was clear it was worth the money.

It's also worth mentioning that OzCode provide free licences for open-source projects and Microsoft MVPs.

So without further ado, let's take a deeper look at OzCode and some of the features that make it one of the first extensions I install when setting up a new development machine, and why I think you'll love it.

Magic Glance

OzCode's Magic Glance feature is by far the biggest change to how you'll debug your code, and probably the first thing you'll notice when entering debug mode. Normally, when you're trying to find out the value of a variable or property within Visual Studio you need to hover over the variable(s) you'd like to inspect. This is less than perfect, as the more variables you have to inspect, the more time you'll spend hovering over them to keep track of what's changing. This is where OzCode's Magic Glance feature steps in.

With OzCode installed and Magic Glance at your disposal, you're treated to a helpful display of variable/parameter values without the requirement to hover over them. This also helps you build a better mental model and holistic view of the data you're debugging.

Reveal

Collections (or any derivative of an array) receive an even larger dosage of OzCode love with the Reveal feature. If you've ever had to view specific properties within a collection then you'll know it's not the most pleasant experience.

OzCode simplifies the process of reviewing data in a collection by providing you with the ability to mark properties within a collection as favourites (via a toggleable star icon), which makes them instantly visible above the variable.

Search

Linked closely to the aforementioned Reveal function, Search adds the additional benefit of being able to search a collection for a particular value. To do this simply inspect the collection and enter the value you're searching for. OzCode will then filter the results that match your input. By default OzCode will perform a shallow search on your collection (I'd imagine this is for performance purposes) - however, if your object graph is deeper then you can easily drill down further.

Investigate

If you've ever had to split a complex expression up in order to see the values whilst debugging then OzCode's Investigate feature will be music to your ears. When reaching a complex (or simple) if statement, OzCode adds visual aids to the expressions to indicate whether they're true or false.

Quick Attach to a process

OzCode's Quick Attach to a process feature is easily one of my most used features - and I'll explain why in a second. Depending on your project type, you either have to enter debugging mode by pressing F5, or by manually attaching your build to a process. With OzCode, they've greatly simplified this process, and as a keyboard shortcut junkie - they've also provided shortcuts to make it even faster.

Using shortcuts to attach to a process quickly became my de facto way of entering debugging mode, and it saves me a tonne of time. Friends and colleagues are often impressed at how quickly I'm able to enter debugging mode, to which I explain that it's all thanks to OzCode. Debugging is now a single keypress away and I love it! (as opposed to multiple clicks, or waiting for Visual Studio to launch my application in a new browser window)

Loads more features

These are just a few of my favourite features that easily make OzCode worth the money.

If you want to see a full list of features then I'd recommend taking a look over OzCode's features page.

Features include:

  • Predict the Future
  • Live Coding
  • Conditional Breakpoints
  • When Set... Break
  • Exceptions Trails
  • Predicting Exceptions
  • Trace
  • Foresee
  • Compare
  • Custom Expressions
  • Quick Actions
  • Show All Instances

Future of OzCode

The future of OzCode is looking bright. With code analysis tools now able to leverage the Roslyn platform, I can't wait to see what wizardry the OzCode guys come up with next. For a sneak peek at some of the great features on the horizon I would definitely recommend this recent OzCode webinar. If you've only got a few minutes then I would highly recommend checking out the 28-minute mark and the LINQ visualisation.

Disclaimer:
I am in no way associated with OzCode. I'm just an extremely happy user of a great Visual Studio extension that I purchased when it was first released and still use to this day.


If you have any questions about this review or OzCode itself then feel free to ask!

The Ajax response object pattern

Posted on Sunday, 21 Feb 2016

For some time now I've been using what I like to call the Ajax response object pattern to great success across a variety of projects, enough so that I thought it merited its own blog post.

The Ajax response object pattern is an incredibly simple pattern to implement, but goes a long way to help promote a consistent API to handling most Ajax responses, and hopefully by the end of this post you'll be able to see the value in implementing such an approach.

Before we dig into what the Ajax response object pattern is and how to implement it, let's take a moment to look at the problem it aims to solve.

The common way to handle Ajax responses

Typically you'll see a variation of the following approach to handling an asynchronous JavaScript response (the important part here is the handling of the response rather than how we created it), and whilst the code may vary slightly (the example used is a trivial implementation), hopefully it will give you an idea as to what we're trying to do.

// The back end
public class ProfileController : Controller
{

    ...

    [HttpGet]
    public ActionResult GetProfile(int profileId)
    {
        var userProfile = this.profileService.GetUserProfile(profileId);

        return View(new ProfileViewModel(userProfile));
    }
}

// The Javascript / JQuery
$.ajax({
    type: "GET",
    url: "/Profile/GetProfile/" + $('#userProfileId').val(),
    dataType: "html",
    success: function (data, textStatus, jqXHR) {
        $('#profileContainer').html(data);
    },
    error: function (jqXHR, textStatus, errorThrown) {
        // Correctly handle error
    }
});

As you can see from the above code, all we're doing is creating an asynchronous JavaScript request to our back end to retrieve a profile based on the profile id provided. The DOM is then updated with the response via the call to jQuery's html() method.

What's wrong with this approach?

Nothing. There's nothing wrong with this approach; it is, in fact, a perfectly acceptable way to perform and handle Ajax requests/responses - however, there is room for improvement. To see what can be improved, ask yourself the following questions:

  1. What does our code tell us about the state of the ajax response?
    Judging by our code, we can tell that our HTTP request to our GetProfile controller action was successful, as our success method is being invoked - however what does this REALLY tell us about the state of our response? How do we know that our payload is in fact a user profile? After all, our success method (or our status code of 200 OK) simply tells us that the server successfully responded to our HTTP request - it doesn't tell us that our domain validation conditions within our service were met.
     
  2. Is it easy to reason with?
    When programming we should always aim to write succinct, clear code. Code that leaves no room for ambiguity and removes any guesswork. Does the above solution do this?
     
  3. How could we pass additional context to our Javascript?
    As we're returning an HTML response to be rendered to the user, what happens if we wanted to pass additional context to the Javascript such as a notification of an error, or some data we wish to display to the user via a modal dialog?
     
  4. Are we promoting any kind of consistency?
    What happens if our next Ajax response returns a Json object? That will need to be handled in a completely different way. If we have multiple developers working on a project then they'll each probably implement ajax responses in various ways.

A better approach - let's model our Ajax response

Object-oriented programming is all about modelling. Wikipedia says it best:

Object-oriented programming (OOP) is a programming language model organized around objects rather than "actions" and data rather than logic.

To us mere mortals (as opposed to computers), modelling behaviour/data makes it easier to grasp and reason with - this premise is what helped make the object-oriented programming paradigm popular in the early to mid-1990s. Looking back over our previous example, are we really modelling our response?

When we start passing values around (such as HTML in the previous example), we're missing out on the benefits gained from creating a model around our expected behaviour. So, instead of simply returning a plain HTML response or a JSON object containing just our data, let's try and model our HTTP response and see what benefits it will bring us. This is where the Ajax response object pattern can help.

The Ajax response object pattern

Implementation of the Ajax response object pattern is simple. To resolve the concerns and questions raised above, we simply need to model our Ajax response, allowing us to add additional context to our asynchronous HTTP response which we can then reason with within our JavaScript.

The following example is the object I tend to favour when applying the Ajax response object pattern - but your implementation may vary depending on what you're doing.

public class AjaxResponse
{
    public bool Success { get; set; }

    public string ErrorMessage { get; set; }

    public string RedirectUrl { get; set; }

    public object ResponseData { get; set; }

    public static AjaxResponse CreateSuccessfulResult(object responseData)
    {
        return new AjaxResponse
        {
            Success = true,
            ResponseData = responseData
        };
    }
}

We can then use the AjaxResponse object to model our HTTP response, ending up with something like the following:

[HttpGet]
public JsonResult GetProfile(int profileId)
{
    var response = new AjaxResponse();

    try
    {
        var userProfile = this.profileService.GetUserProfile(profileId);

        response.Success = true;
        response.ResponseData = RenderPartialViewToString("Profile", new ProfileViewModel(userProfile));
    }
    catch (RecordNotFoundException exception)
    {
        response.Success = false;
        response.ErrorMessage = exception.Message;
    }

    return Json(response, JsonRequestBehavior.AllowGet);
}
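As an aside, the success branch above could equally use the static factory method we defined on AjaxResponse, keeping the construction logic in one place:

// Equivalent to setting Success and ResponseData manually
response = AjaxResponse.CreateSuccessfulResult(
    RenderPartialViewToString("Profile", new ProfileViewModel(userProfile)));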

Earlier we were rendering the view and returning just the HTML payload in the response; now we render the view to a string and pass it to the ResponseData property. This way we can make use of the additional properties, such as whether the response the user is expecting was successful - and if not, we can supply an error message. Because our ResponseData property is of the base object type, we can use it to store any type, including Json.

Below is an implementation of the RenderPartialViewToString method I often create in a base controller when writing ASP.NET MVC applications.

public class BaseController : Controller
{
    protected string RenderPartialViewToString(string viewName, object model)
    {
        if (string.IsNullOrEmpty(viewName))
        {
            viewName = this.ControllerContext.RouteData.GetRequiredString("action");
        }
                
        this.ViewData.Model = model;
        using (var stringWriter = new StringWriter())
        {
            ViewEngineResult partialView = ViewEngines.Engines.FindPartialView(this.ControllerContext, viewName);
            ViewContext viewContext = new ViewContext(this.ControllerContext, partialView.View, this.ViewData, this.TempData, (TextWriter)stringWriter);
            partialView.View.Render(viewContext, (TextWriter)stringWriter);
            return stringWriter.GetStringBuilder().ToString();
        }
    }
}

Now that we've modelled our response we have the ability to provide the response consumer far more context, and this context enables us to better reason with our server response. In the cases where we may need to perform any kind of front end action based on the outcome of the response, we can easily do so.

// The Javascript / JQuery
$.ajax({
    type: "GET",
    url: "/Profile/GetProfile/" + $('#userProfileId').val(),
    dataType: "json",
    success: function (data, textStatus, jqXHR) {
        if (data.Success === true) {
            $('#profileContainer').html(data.ResponseData);
        } else {
            Dialog.showDialog(data.ErrorMessage);
        }
    },
    error: function (jqXHR, textStatus, errorThrown) {
        ...
    }
});

Additionally, if the approach is used throughout a team then you're promoting a consistent API that you can build around. This makes the development of general error handlers far easier. What's more, if you're using TypeScript in your codebase then you can continue to leverage the benefits by casting your response to a TypeScript implementation of the AjaxResponse class and gain all the IntelliSense and tooling support goodness that comes with TypeScript.

// TypeScript implementation
export interface IAjaxResponse {
    Success: boolean;
    ErrorMessage: string;
    RedirectUrl: string;
    ResponseData: any;
}

That's all for now. Thoughts and comments welcome!

Adding search to your website with Azure Search

Posted on Wednesday, 10 Feb 2016

As traffic for my blog has started to grow I've become increasingly keen to implement some kind of search facility to allow visitors to search for content. As most of you will probably know, search isn't an easy problem - in fact, search is an extremely hard problem - which is why I was keen to look at some of the existing search providers out there.

Azure's search platform has been on my radar for a while now, and having recently listened to an MS Dev Show interview with one of the developers behind Azure Search I was keen to give it a go.

This article serves as a high-level overview of how to get up and running with the Azure Search service on your website or blog, with a relatively simple implementation that could use some fleshing out. Before we begin it's also worth mentioning that Azure's Search service has some really powerful search capabilities that are beyond the scope of this article, so I'd highly recommend checking out the documentation.

Azure Search Pricing Structure

Before we begin, let us take a moment to look at the Azure Search pricing:

One of the great things about Azure Search is that it has a free tier that's more than suitable for low to medium traffic websites. What's odd though is the sudden climb in price and specification for the next tier up - which I assume is because, whilst it's been available for a year, it's still relatively new; hopefully we'll see more options become available moving forward, but only time will tell. As it stands the free tier is more than what I need, so let us continue!

Setting up Azure Search - Step 1: Creating our search service

Before we begin we've got to create our Azure Search service within Azure's administration portal.

Note: If you've got the time then the Azure Search service documentation goes through this step in greater detail, so if you get stuck then feel free to refer to it here.

Enter your service name (in this instance I used the name "jwblog"), select your subscription, resource group and location, and select the free pricing tier.

Setting up Azure Search - Step 2: Configuring our service

Now that we've created our free search service, next we have to configure it. To do this click the Add Index button within the top Azure Search menu.

Add an index (see image above)

Once we've created our service, next we need to decide which content we're going to make searchable and reflect that within our search indexes. Azure Search will then index that content, allowing it to be searched. We can do this either programmatically via the Azure Search API, or via the Portal. Personally I'd rather do it up front in the Portal (though there's nothing to stop you doing it in code), so for now let's do it via the Azure Portal. In the instance of this post, we want to index our blog post title and blog post content, as this is what we want the user to be able to search.

Setting retrievable content  (see image above)

When configuring our Azure Search profile we can specify what content we want to mark as retrievable. In this instance, as I plan on only showing the post title on the search result page, I will mark the post title as retrievable content. As the search result page will also need to link to the actual blog post, I will mark the blog post id as retrievable too.

Now we've set up our profile and configured it, let's get on with some coding!

Setting up Azure Search - Step 3: Implementing our search - the interface

Because it's always a good idea to program to an interface rather than an implementation, and because we may want to index other content moving forward, we'll create a simple search provider interface, avoiding the use of any Azure Search specific references.

public interface ISearchProvider<T> where T : class
{
    IEnumerable<TResult> SearchDocuments<TResult>(string searchText, string filter, Func<T, TResult> mapper);

    void AddToIndex(T document);
}

If you take a moment to look at the SearchDocuments method signature:

IEnumerable<TResult> SearchDocuments<TResult>(string searchText, string filter, Func<T, TResult> mapper);

You'll see that we take a Func of type T and return a TResult - this will allow us to map our search index class (which we'll create next) to a data transfer object, but more on this later. You're also able to provide search filters to give your website richer searching capabilities.

Next we want to create our blog post search index class that will contain all of the properties required to index our content. As we're creating a generic interface to our search provider, we'll create a dedicated BlogPostSearchIndex class - an approach which also gives us the scope to easily index other content such as pages (in which case we'd create a BlogPageSearchIndex).

What's important to note about the BlogPostSearchIndex class below is that its properties have the same names and types as the fields we configured earlier within Step 2.

// BlogPostSearchIndex.cs

[SerializePropertyNamesAsCamelCase]
public class BlogPostSearchIndex
{
    public BlogPostSearchIndex(int postId, string postTitle, string postBody)
    {
        // Document index needs to be a unique string
        IndexId = "blogpost" + postId.ToString();
        PostId = postId;
        PostTitle = postTitle;
        PostBody = postBody;
    }

    // Properties must remain public as they'll be used for automatic data binding
    public string IndexId { get; set; }

    public int PostId { get; set; }

    public string PostTitle { get; set; }

    public string PostBody { get; set; }

    public override string ToString()
    {
        return $"IndexId: {IndexId}\tPostId: {PostId}\tPostTitle: {PostTitle}\tPostBody: {PostBody}";
    }
}

Now that we've created the interface to our search provider we'll go ahead and work on the implementation.

Setting up Azure Search - Step 4: Implementing our search - the implementation

At this point we're now ready to start working on the implementation of our search functionality, so we'll create an AzureSearchProvider class that implements our ISearchProvider interface and start fleshing out our search.

Before we begin it's worth being aware that Azure's search service does provide a RESTful API that you can consume to manage and query indexes; however, as you'll see below, I've opted to use their SDK.

// AzureSearchProvider.cs

public class AzureSearchProvider<T> : ISearchProvider<T> where T : class
{
    private readonly SearchServiceClient _searchServiceClient;
    private const string Index = "blogpost";

    public AzureSearchProvider(SearchServiceClient searchServiceClient)
    {
        _searchServiceClient = searchServiceClient;
    }

    public IEnumerable<TResult> SearchDocuments<TResult>(string searchText, string filter, Func<T, TResult> mapper)
    {
        SearchIndexClient indexClient = _searchServiceClient.Indexes.GetClient(Index);

        var searchParameters = new SearchParameters();
        if (!string.IsNullOrEmpty(filter)) searchParameters.Filter = filter;

        // Search the index, then map each matching document to the caller's result type
        DocumentSearchResponse<T> response = indexClient.Documents.Search<T>(searchText, searchParameters);
        return response.Results.Select(result => mapper(result.Document)).ToList();
    }

    public void AddToIndex(T document)
    {
        if (document == null)
            throw new ArgumentNullException(nameof(document));

        SearchIndexClient indexClient = _searchServiceClient.Indexes.GetClient(Index);

        try
        {
            // No need to create an UpdateIndex method as we use MergeOrUpload action type here.
            IndexBatch<T> batch = IndexBatch.Create(IndexAction.Create(IndexActionType.MergeOrUpload, document));
            indexClient.Documents.Index(batch);
        }
        catch (IndexBatchException e)
        {
            Console.WriteLine("Failed to Index some of the documents: {0}",
                string.Join(", ", e.IndexResponse.Results.Where(r => !r.Succeeded).Select(r => r.Key)));
        }
    }
}

The last part of our implementation is to register our search provider with our IoC container of choice, to ensure that our AzureSearchProvider class and its dependency (Azure's SearchServiceClient class) can be resolved. The SearchServiceClient constructor requires our credentials and search service name as arguments, so we'll configure those there too.

In this instance I'm using StructureMap, so if you're using a different container you'll need to adjust your IoC configuration accordingly.

public class DomainRegistry : Registry
{
    public DomainRegistry()
    {
        ...

        this.For<SearchServiceClient>().Use(() => new SearchServiceClient("jwblog", new SearchCredentials("your search administration key")));
        this.For(typeof(ISearchProvider<>)).Use(typeof(AzureSearchProvider<>));

        ...
    }
}

At this point all we need to do is add our administration key, which we can get from the Azure Portal under the Keys setting within the search service blade we used to configure our search service.

Setting up Azure Search - Step 6: Indexing our content

Now that all of the hard work is out of the way and our search service is configured, we need to index our content. Currently our Azure Search service is an empty container with nothing to search, so in the context of a blog we need to ensure that whenever we add or edit a blog post, the corresponding search document stored within Azure Search is added or updated. To do this we go to our blog's controller action that's responsible for creating a blog post and index our content there.

Below is a rough example of how it would look. I'm a huge fan of a library called MediatR for delegating my requests, but the below should be enough to give you an idea of how we'd implement indexing our content. We also need to do the same when updating a blog post, to ensure our search indexes stay up to date with any modified content.

public class BlogPostController : Controller
{
    private readonly ISearchProvider<BlogPostSearchIndex> _searchProvider;

    public BlogPostController(ISearchProvider<BlogPostSearchIndex> searchProvider)
    {
        this._searchProvider = searchProvider;
    }

    [HttpPost]
    public ActionResult Create(BlogPostAddModel model)
    {
        ...

        // Add your blog post to the database; its id forms the basis of the
        // search document's key (the BlogPostSearchIndex constructor builds IndexId from it)

        this._searchProvider.AddToIndex(new BlogPostSearchIndex(model.Id, model.Title, model.BlogPost));

        ...
    }

    [HttpPost]
    public ActionResult Update(BlogPostEditModel model)
    {
        ...
        // As we're using Azure Search's MergeOrUpload index action we can simply call AddToIndex() when updating
        this._searchProvider.AddToIndex(new BlogPostSearchIndex(model.Id, model.Title, model.BlogPost));

        ...
    }

}

Now that we're indexing our content we'll move onto querying it.

Setting up Azure Search - Step 7: Querying our indexes

As Azure's Search service is built on top of the Lucene query parser (Lucene is a well-known open-source search library), we have a rich query syntax at our disposal, covering the likes of the following (a short sketch of a few of these follows the list):

  • Field-scoped queries
  • Fuzzy search
  • Proximity search
  • Term boosting
  • Regular expression search
  • Wildcard search
  • Syntax fundamentals
  • Boolean operators
  • Query size limitations
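Most of the above require the full Lucene query syntax rather than the default simple syntax, which you opt into via the search parameters. Here's a minimal sketch calling the SDK directly (outside our provider abstraction) with a few illustrative query strings:

// Sketch: opting into the full Lucene query syntax to unlock fuzzy,
// wildcard and field-scoped queries. The query strings are illustrative.
var parameters = new SearchParameters { QueryType = QueryType.Full };
SearchIndexClient indexClient = searchServiceClient.Indexes.GetClient("blogpost");

// Fuzzy search: matches terms within an edit distance of 1 of "azure"
var fuzzy = indexClient.Documents.Search<BlogPostSearchIndex>("azure~1", parameters);

// Wildcard search: matches "search", "searching", "searches" and so on
var wildcard = indexClient.Documents.Search<BlogPostSearchIndex>("search*", parameters);

// Field-scoped query with term boosting: title matches score higher than body matches
var boosted = indexClient.Documents.Search<BlogPostSearchIndex>("postTitle:azure^2 postBody:azure", parameters);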

To query our search index all we need to do is call our generic SearchDocuments<TResult> method and map our search index object to a view model/DTO like so:

private IEnumerable<BlogPostSearchResult> QuerySearchDocuments(string keyword)
{
    IEnumerable<BlogPostSearchResult> result = _searchProvider.SearchDocuments(keyword, string.Empty, document => new BlogPostSearchResult
    {
        Id = document.PostId,
        Title = document.PostTitle
    });

    return result;
}

At this point you have one of two options: you can either retrieve the indexed text (providing you marked it as retrievable earlier in Step 2) and display that in your search results, or you can return the ID and query your database for the relevant posts based on those blog post IDs. Naturally the latter introduces an otherwise unnecessary database call, so consider your options. Personally, as my links include a filename, I prefer to treat the posts in my database as the source of truth and check that the posts exist and are published, so I'm happy to incur that extra database call.

public IEnumerable<BlogPostItem> Handle(BlogPostSearchResultRequest message)
{
    if (string.IsNullOrWhiteSpace(message.Keyword))
        throw new ArgumentNullException(nameof(message.Keyword));

    List<int> postIds = QuerySearchDocuments(message.Keyword).Select(x => x.Id).ToList();

    // Get blog posts from database based on IDs
    return GetBlogPosts(postIds);
}

Setting up Azure Search - Wrapping it up

Now that we're querying our indexes and retrieving the associated blog posts from the database, all that's left to do is output our list of blog posts to the user.

I'm hoping you've found this general overview useful. As mentioned at the beginning of the post, this is a high-level overview and implementation of what is a powerful search service. From here I'd highly recommend you take the next step and look at how you can start to tweak your search results via means such as scoring profiles and some of the features provided by Lucene.
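As a taster, a scoring profile that boosts title matches might look something like the sketch below. I've written it against the Microsoft.Azure.Search.Models types as I understand them at the time of writing, so treat the property names as illustrative rather than definitive:

// Sketch: a scoring profile that weights title matches above body matches.
// Attach it to the index definition before creating/updating the index.
using System.Collections.Generic;
using Microsoft.Azure.Search.Models;

var boostTitle = new ScoringProfile
{
    Name = "boostTitle",
    TextWeights = new TextWeights
    {
        Weights = new Dictionary<string, double>
        {
            { "postTitle", 2.0 }, // matches in the title score twice as highly
            { "postBody", 1.0 }
        }
    }
};

// e.g. definition.ScoringProfiles = new[] { boostTitle };

Searches can then opt in by naming the profile in their search parameters.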

Happy coding!

Personal targets and goals for 2016

Posted on Monday, 01 Feb 2016

Around this time last year I set out a list of goals and targets for myself in my Personal targets and goals for 2015 blog post which, with 2015 coming to a close, I reflected on last month in my Reflecting on 2015 post.

I feel formally setting yourself yearly goals is a great way to focus on a set of specific, well-thought-out targets, as opposed to spending the year moving from one subject to another with no real aim. So without further ado, here are my personal targets for 2016.

Goal 1: Speak at more conferences and meet-ups

Those of you that know me will know that when I start talking about programming and software development it's hard to shut me up. Software and web development is a subject I've been hugely passionate about from a young age, and since starting this blog I've found sharing knowledge and documenting progress a great way to learn, memorise, and get involved in the community. This desire to talk about software has ultimately led me to start speaking at local meet-ups (including a lunchtime lightning talk at DDD South West), which I've really enjoyed. This year I'd like to continue to pursue this by speaking at larger events and meet-ups that are further afield. I already plan on submitting a few talks to this year's DDD South West event, so we'll see if they get accepted.

Goal 2: Start an Exeter based .NET User Group

Living in a rather quiet and rural part of the United Kingdom has its pros and cons, and whilst Somerset is a great place to live, it suffers from a lack of meet-ups, specifically .NET ones. This is something I'm hoping to rectify by starting up a .NET user group.

Whilst I don't live in Exeter, it's the closest place where I feel there will be enough interest for setting up a .NET specific user group, so I'm currently in the process of looking for a location on the outskirts of the city to make it easier for commuters living in some of the nearby towns and cities.

Goal 3: Continue contributing to the .NET OSS ecosystem

One of last year's goals was to contribute to more open-source libraries, and whilst I felt I made good progress in achieving this goal, I'm keen to continue working in this area. Not only do I want to continue to contribute to existing projects, but I'm also keen to help others get involved in contributing to projects. It's a really rewarding activity that can help develop your skill set as a software developer, and with this in mind I've been thinking of a few satellite sites that will help people get started.

I'm also thinking about a few talks on how someone can get started and the lessons you can learn by contributing to open-source software.

In addition to the above, I'm also keen on contributing some of my own libraries to the .NET open-source ecosystem. More often than not I've found myself creating a few classes to encapsulate certain behaviour or abstract a third-party API, yet I've never turned them into their own library. This year I'm keen to take that extra step and turn them into fully fledged libraries that can be downloaded via NuGet.

Bonus Goal: F#

Having been watching the F# community closely for some time now I'm really interested in what it has to offer, so this year I'm considering jumping in and committing myself to learning it and becoming proficient in it. I've got a few side projects I plan on working on that I feel will be a great place to use F# so we'll see how that goes.

Conclusion

This concludes my personal targets for 2016. Whilst it's not an exhaustive list of what I'll be focusing on, it's certainly a list I wish to have made some progress on by the end of the year. I shall see how it goes and keep people updated.

About Me

I'm a software and web application developer living in Somerset in the UK, and I eat/sleep software and web development. Programming has been a passion of mine from a young age, and I consider myself extremely lucky that I'm able to do what I love as a profession. If I'm not with my family then you'll find me at my keyboard writing some form of code.
