Latest Blog Posts

Capturing IIS / ASP.NET traffic in Fiddler

Posted on Monday, 25 Apr 2016

Recently, whilst debugging an issue I needed to capture the traffic being sent from my local application to an external RESTful web service. In this instance, I needed to see the contents of a JWT token being passed to the service to verify some of the data. Thankfully, Fiddler by Telerik is just the tool for the job.

What is Fiddler?

Fiddler is a super powerful, free web debugging proxy tool created by the guys and girls at Telerik. Once launched, Fiddler captures all incoming and outgoing traffic from your machine, allowing you to analyse traffic, manipulate HTTP (and HTTPS!) requests, and perform a whole host of traffic-based operations. It's a fantastic tool for debugging and if you don't have it I'd highly recommend you take a look at it. Did I say it's 100% free too?

Capturing ASP.NET / IIS Traffic

By default, Fiddler is configured to register itself as the system proxy for Microsoft Windows Internet Services (WinInet) - the HTTP layer used by IE (and other browsers), Microsoft Office, and many other products. Whilst this default configuration is suitable for the majority of your debugging, if you wish to capture traffic from IIS (which bypasses WinInet) you'll need to re-route your IIS traffic through Fiddler by modifying your application's Web.config.

Step 1: Update your Web.config

To do this, simply open your Web.config and add the following snippet of code after the <configSections> element.

<system.net>
    <defaultProxy enabled="true">
        <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="False"/>
    </defaultProxy>
</system.net>

Step 2: Configure Fiddler to use the same port

Now that we've routed our IIS traffic through port 8888, we need to make sure Fiddler is listening on the same port. To do this, simply open Fiddler, go to Tools > Fiddler Options, then check that the "Fiddler listens on port" setting is set to 8888 (this is Fiddler's default port, so in most cases it will be already).

Now if you fire up your application you'll start to see your requests stacking up in Fiddler ready for your inspection.

Happy debugging!

Publishing your first NuGet package in 5 easy steps

Posted on Friday, 15 Apr 2016

So you've just finished writing a small .NET library for a one-off project and you pause for a moment and think "I should stick this on NuGet - others may find this useful!". You know what NuGet is and how it works, but having never published a package before you're unsure what to do and are short on time. If this is the case then hopefully this post will help you out and show you just how painless creating your first NuGet package is.

Let's begin!

Step 1: Download the NuGet command line tool

First, you'll need to download the NuGet command line tool. You can do this by going here and downloading the latest version (beneath the Windows x86 Commandline heading).

Step 2: Add the NuGet executable to your PATH system variables

Now that we've downloaded the NuGet executable, we want to add it to our PATH system variables. At this point you could simply reference the executable directly - but before long you'll want to publish more of your libraries, and adding it to your PATH system variables will save you work in the long run.

If you already know how to add PATH variables then jump to Step 3, if not then read on.

Adding the nuget command to your PATH system variables

First, move the NuGet.exe you downloaded to a suitable location (I store mine in C:/NuGet/). Now, right-click My Computer (This PC if you're on Windows 10). Click "Advanced System Settings" then click the "Environment Variables" button located within the Advanced tab. From here double-click the PATH variable in the top panel and create a new entry by adding the path to the directory that contains your NuGet.exe file (in this instance it's C:/NuGet/).

Now, if all's done right you should be able to open a Command Prompt window, type "nuget" and you'll be greeted with a list of NuGet commands.
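If you'd prefer to do this from the command line instead of the System Properties dialog, the equivalent is sketched below for the Windows Command Prompt (this assumes you saved NuGet.exe to C:\NuGet - adjust the path to match your own):

```shell
:: Append the NuGet directory to your user PATH variable
:: (adjust C:\NuGet if you stored NuGet.exe elsewhere)
setx PATH "%PATH%;C:\NuGet"

:: setx only affects NEW windows - open a fresh Command Prompt, then verify:
where nuget
```

Note that setx writes the variable for your user account only, and the change won't be visible in any Command Prompt windows that are already open.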

Step 3: Create a Package.nuspec configuration file

In short, a .nuspec file is an XML-based configuration file that describes our NuGet package and its contents. For further reading on the role of the .nuspec file see the nuspec reference on nuget.org.

To create a .nuspec package manifest file, let's go to the root of the project we wish to publish (that's where I prefer to keep my .nuspec file, as it can then be added to source control) and open a Command Prompt window (Tip: typing "cmd" into the folder path of an Explorer window will automatically open a Command Prompt pointing at that directory).

Now type "nuget spec" into your Command Prompt window. If all goes well you should be greeted with a message saying "Created 'Package.nuspec' successfully". If so, you should now see a Package.nuspec file in your project folder.

Let's take a moment to look inside of our newly created Package.nuspec file. It should look a little like below:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Package</id>
    <version>1.0.0</version>
    <authors>Joseph</authors>
    <owners>Joseph</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package description</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2016</copyright>
    <tags>Tag1 Tag2</tags>
    <dependencies>
      <dependency id="SampleDependency" version="1.0" />
    </dependencies>
  </metadata>
</package>

As you can see, all of the parameters are pretty self-explanatory (see the docs here if you have any uncertainties about what each setting does). The only one you may have a question about is the dependencies node - this is simply a list of the dependencies, and their versions, that your NuGet package may have (see here for more info).

Now we've generated our NuGet configuration file, let's take a moment to fill it in.

Once you're done, your manifest file should look a little like the one below. The next step is to reference the files to be packaged. In the following example you'll see that I've referenced the Release .dll file (take a look at the documentation here for more file options). You may also notice I've removed the <dependencies> node, as my package doesn't have any additional dependencies.

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Slugity</id>
    <version>1.0.2</version>
    <title>Slugity</title>
    <authors>Joseph Woodward</authors>
    <owners>Joseph Woodward</owners>
    <licenseUrl>https://raw.githubusercontent.com/JosephWoodward/SlugityDotNet/master/LICENSE</licenseUrl>
    <projectUrl>https://github.com/JosephWoodward/SlugityDotNet</projectUrl>
    <iconUrl>https://raw.githubusercontent.com/JosephWoodward/SlugityDotNet/release/assets/logo_128x128.png</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Slugity is a simple, configuration based class library that's designed to create search engine friendly URL slugs</description>
    <language>en-US</language>
    <releaseNotes>Initial release of Slugity.</releaseNotes>
    <copyright>Copyright 2016</copyright>
    <tags>Slug, slug creator, slug generator, url, url creator</tags>
  </metadata>
  <files>
    <file src="bin\Release\Slugity.dll" target="lib\NET40" />
  </files>
</package>

Step 4: Creating your NuGet package

Once your Package.nuspec file has been filled in, let's create our NuGet package! To do this simply run the following command, replacing the path to the .csproj file with your own.

nuget pack Slugity/Slugity.csproj -Prop Configuration=Release -Verbosity detailed

If you have the time, take a moment to look over the various pack command options in the NuGet documentation.

Once the above command has been run, if all has gone well then you should see a YourPackageName.nupkg file appear in your project directory. This is our NuGet package that we can now submit to nuget.org!

Step 5: Submitting your NuGet package to nuget.org

We're almost there! Now all we need to do is head over to nuget.org and submit our package. If you're not already registered at nuget.org then you'll need to do so (see the "Register / Sign in" link at the top right of the homepage). Next, go to your Account page and click "Upload a package". Now all you need to do is upload your .nupkg package and verify the package details that nuget.org will use for your package listing!
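Alternatively, if you'd rather stay on the command line, the upload can also be done with NuGet's push command. The sketch below assumes the package file generated in Step 4 and an API key copied from your nuget.org account page (YOUR_API_KEY is a placeholder):

```shell
:: Publish the package generated in Step 4 to nuget.org
:: YOUR_API_KEY is a placeholder - use the key from your nuget.org account page
nuget push Slugity.1.0.2.nupkg YOUR_API_KEY -Source https://www.nuget.org/api/v2/package
```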

Congratulations on submitting your first NuGet package!

I hope this guide has been helpful to you, and as always, if you have any questions then leave them in the comments!

Tips for learning on the job

Posted on Tuesday, 29 Mar 2016

Today, whilst out walking the dog I was listening to an episode of Developer Tea titled "Learning on the Job". As someone that's always looking for smarter ways of learning, this episode was of particular interest to me. In fact, it got me thinking - what tips would I recommend for learning on the job?

I find learning on the job interesting for a few reasons. First of all, you spend a good portion of time at your computer programming. This very fact means you have plenty of opportunities to improve your knowledge and understanding of the language or framework you're using. However, you also have to balance this opportunity with the fact that you're working and have things to do and deadlines to meet. These external pressures mean you have to be smart about how you learn.

One of the strengths of a good developer is their ability to learn on the fly, and we should always be looking for opportunities to learn.

With this in mind, this post is a collection of what I've found to be effective ways of learning on the job without letting it get in the way of your work.

REPLs, Fiddlers and online editors are your friends

REPLs are a great tool for learning. Within my browser's bookmark toolbar I have a whole host of online REPLs and Fiddlers for different languages, such as C#, TypeScript and JavaScript.

If I'm researching a problem or happen to come across some code that I'm struggling to understand, I'll often take a moment to launch the appropriate REPL or Fiddler and experiment with the output to make sure I fully understand what the code is doing. These can be an invaluable tool for testing that base library function you've never used before, checking the output of an algorithm you've found, or learning what your transpiled TypeScript code will look like. In fact, some online tools such as .NET Fiddle also have the ability to decompile code all the way down to the IL/MSIL level - enabling you to gain an even greater understanding of what your code is doing.

Here are a few of the online tools I would recommend: .NET Fiddle for C#, the TypeScript Playground for TypeScript, and JSFiddle for JavaScript.

This aptly brings me onto my next tip.

Don't just copy and paste that code

Above I talked about REPLs and online Fiddlers. Use them! We've all copied and pasted code from Stack Overflow to get something working. Take a moment to figure out exactly WHY the code works and what it's doing. Copying and pasting code that you don't understand is both dangerous and a wasted opportunity to learn something new. If you're too busy then bookmark it, or better yet - some of the editors listed above, such as .NET Fiddle, allow you to save snippets of code for later.

Write it down in a notepad

This tip I've found particularly effective, and it goes beyond learning whilst on the job - but I'll expand on that in a moment.

If you stumble across one of those bits of information that makes you think "Wow, I didn't know that. I should remember that!", take a moment and write it down in a notepad. The process of writing it down helps commit your newfound knowledge to your long-term memory. Another tip for further improving retention is to revisit some of your notes a couple of hours, or days, later - repetition is another proven way to improve the retention of such information.

Having a notebook on you is also useful when reading books. If you happen to read a sentence or paragraph that strikes you, take a moment to write it down in your own words in your notebook. I guarantee you'll be able to recall the information later that day far more clearly than you would have had you continued reading.

Share it with a co-worker or summarise it in a quick tweet

If you've just solved a problem or stumbled across a valuable piece of information then share it with your nearest co-worker. The very fact that you're talking about it or explaining the concept will help engrain it into your long-term memory. You also have the added benefit of sharing knowledge with others.

If no-one's around then you could always talk it through with your rubber duck, or summarise your new knowledge in a quick tweet.

Stack Overflow

Stack Overflow is another great tool for learning on the job. If you're a registered user then you should take full advantage of the ability to favourite answers or questions. Doing so enables you to review them in more detail at a more suitable time.

Pocket and other alternatives

Bookmarking services like Pocket are another powerful tool when it comes to learning on the job, and they go a step beyond simple bookmarking.

If, whilst struggling with a problem, you happen to come across an interesting article but don't have the time to read it in its entirety, why not add it to your personal Pocket list and fill in the knowledge gap at a later date - whilst you're waiting for the bus, for instance. Many apps like Pocket can automatically sync bookmarks with your smartphone for offline viewing, making them perfect for commuting.

I hope people find these tips valuable. I'm always interested in hearing other recommendations for learning on the job so if there's anything I've not mentioned then please feel free to add it to the comments!

Easy slug generation with Slugity

Posted on Friday, 18 Mar 2016

This year, as outlined in my goals for 2016 post, I've been keen to turn random bits of code I've written across a variety of projects into fully-fledged open-source libraries that can be downloaded via NuGet. My first library aims to tackle a minor problem faced by any developer working on public-facing web applications - generating search engine friendly URL slugs.

Introducing Slugity - The highly configurable C# slug generator

Whilst creating Slugity, my main focus was configurability. People, myself included, have opinions on how a URL should be formatted. Some like the camel-case URLs that frameworks such as ASP.NET MVC encourage, others dogmatically favour lowercase slugs. Others still like to strip stop words from their slugs in order to shorten the overall URL length whilst retaining keyword density. Having jumped from pattern to pattern across a variety of projects myself, I was keen to develop Slugity to cater to all of these needs.

Having worked on Slugity for a number of days now, below are the configuration options included in its first release:

  • Customisable slug text case (CamelCase, LowerCase, Ignore)
  • String Separator
  • Slug's maximum length
  • Optional replacement characters
  • Stop Words

Slugity setup and configuring

Given that slugs aren't the most complicated strings to get right, setting up Slugity is a breeze. It comes with a default configuration that should be suitable for the majority of its users, myself included - but it's simple enough to configure if the default settings don't meet your requirements.

Using default configuration options:

var slugity = new Slugity();
string slug = slugity.GenerateSlug("A <span style=\"font-weight: bold\">customisable</span> slug generation library");

Console.WriteLine(slug);
//Output: a-customisable-slug-generation-library

Configuring Slugity:

If Slugity's default configuration doesn't meet your requirements then configuration is easy:

var configuration = new SlugityConfig
{
    TextCase = TextCase.LowerCase,
    StringSeparator = '_',
    MaxLength = 60
};

configuration.ReplacementCharacters.Add("eat", "munch on");

var slugity = new Slugity(configuration);
string slug = slugity.GenerateSlug("I like to eat lettuce");

Console.WriteLine(slug);
//Output: i_like_to_munch_on_lettuce

Moving forward, I'd like to add the ability to replace the formatting classes responsible for the cleaning and formatting of the slug.

The code is currently up on GitHub and can be viewed here. I'm in the process of finishing off the last bits and pieces and then getting it added to NuGet, so stay tuned.

Update

The library has recently been added to NuGet and can be downloaded here, or via the following NuGet command:

Install-Package Slugity

OzCode Review - The magical extension that takes the pain out of debugging

Posted on Thursday, 10 Mar 2016

TL;DR: Yes, OzCode is easily worth the money. It doesn't take long to realise that the amount of time it saves you debugging quickly pays for itself.

Debugging code can be both hard and time-consuming. Thankfully, the guys over at OzCode have been working hard to battle the debugging time sink-hole by developing OzCode, a magically intelligent debugging extension for Visual Studio.

Remembering back to when I first installed OzCode (when it was in its beta stage!), it took all of a few minutes to see that the OzCode team were onto a winner. I clearly remember entering into debug mode by pressing F5 and watching my Visual Studio code window light up with a host of new debugging tools. Tools that I've been crying out for in Visual Studio (or other editors for that matter!) for a long time. In fact, OzCode has changed the way I debug altogether.

What does OzCode cost, and is it worth it?

Before continuing, let's go over the price. OzCode is a paid-for Visual Studio extension, coming in at an extremely reasonable $79 (£55) for a personal licence. Is it worth the money? Definitely.

I'm always happy to pay for good software and OzCode is no exception. Suffice it to say, it doesn't take long to realise the value it provides and how quickly it pays for itself in the time it saves you. Don't believe me? Go try the free 30-day trial and see for yourself. I purchased it as soon as it was out of public beta as it was clear it was worth the money.

It's also worth mentioning that OzCode provide free licences for open-source projects and Microsoft MVPs.

So without further ado, let's take a deeper look at OzCode and some of the features that make it one of the first extensions I install when setting up a new development machine, and why I think you'll love it.

Magic Glance

OzCode's Magic Glance feature is by far the biggest change to how you'll debug your code, and probably the first thing you'll notice when entering debug mode. Normally, when you're trying to find out the value of a variable or property within Visual Studio you need to hover over the variable(s) you'd like to inspect. This is less than ideal, as the more variables you have to inspect, the more time you'll spend hovering over them to keep track of what's changing. This is where OzCode's Magic Glance feature steps in.

With OzCode installed and Magic Glance at your disposal, you're treated to a helpful display of variable/parameter values without the requirement to hover over them. This also helps you build a better mental model and holistic view of the data you're debugging.

Reveal

Collections (or any derivative of an array) receive an even larger dose of OzCode love with the Reveal feature. If you've ever had to view specific properties within a collection then you'll know it's not the most pleasant experience.

OzCode simplifies the process of reviewing data in a collection by providing you with the ability to mark properties within a collection as favourites (via a toggleable star icon), which makes them instantly visible above the variable.

Search

Linked closely to the aforementioned Reveal function, Search adds the additional benefit of being able to search a collection for a particular value. To do this, simply inspect the collection and enter the value you're searching for; OzCode will then filter the collection down to the items that match your input. By default OzCode performs a shallow search on your collection (I'd imagine for performance reasons) - however, if the value you're after sits deeper in the object graph, you can easily drill down further.

Investigate

If you've ever had to split a complex expression up in order to see its values whilst debugging then OzCode's Investigate feature will be music to your ears. When reaching a complex (or simple) if statement, OzCode adds visual aids to each expression to indicate whether it evaluates to true or false.

Quick Attach to a process

OzCode's Quick Attach to a process feature is easily one of my most used - and I'll explain why in a second. Depending on your project type, you either enter debugging mode by pressing F5, or by manually attaching your build to a process. OzCode greatly simplifies this process, and as a keyboard shortcut junkie I'm pleased to say they've also provided shortcuts to make it even faster.

Attaching to a process via these shortcuts quickly became my de facto way of entering debugging mode, and it saves me a tonne of time. Friends and colleagues are often impressed at how quickly I'm able to enter debugging mode, to which I explain that it's all thanks to OzCode. Debugging is now a single keypress away (as opposed to multiple clicks, or waiting for Visual Studio to launch my application in a new browser window) and I love it!

Loads more features

These are just a few of my favourite features that easily make OzCode worth the money.

If you want to see a full list of features then I'd recommend taking a look over OzCode's features page.

Features include:

  • Predict the Future
  • Live Coding
  • Conditional Breakpoints
  • When Set... Break
  • Exceptions Trails
  • Predicting Exceptions
  • Trace
  • Foresee
  • Compare
  • Custom Expressions
  • Quick Actions
  • Show All Instances

Future of OzCode

The future of OzCode is looking bright. With code analysis tools now able to leverage the Roslyn platform, I can't wait to see what wizardry the OzCode team come up with next. For a sneak peek at some of the great features on the horizon I'd definitely recommend this recent OzCode webinar. If you've only got a few minutes then I'd highly recommend you check out the LINQ visualisation at the 28-minute mark.

Disclaimer:
I am in no way associated with OzCode. I'm just an extremely happy user of a great Visual Studio extension that I purchased when it was first released and still use to this day.


If you have any questions about this review or OzCode itself then feel free to ask!

The Ajax response object pattern

Posted on Sunday, 21 Feb 2016

For some time now I've been using what I like to call the Ajax response object pattern to great success across a variety of projects, enough so that I thought it merited its own blog post.

The Ajax response object pattern is an incredibly simple pattern to implement, but it goes a long way towards promoting a consistent API for handling most Ajax responses - hopefully by the end of this post you'll see the value in implementing such an approach.

Before we dig into what the Ajax response object pattern is and how to implement it, let's take a moment to look at the problem it aims to solve.

The common way to handle Ajax responses

Typically you'll see a variation of the following approach to handling an asynchronous JavaScript response (the important part here is the handling of the response rather than how we created it). Whilst the code may vary slightly (the example used is a trivial implementation), hopefully it gives you an idea of what we're trying to do.

// The back end
public class ProfileController : Controller
{

    ...

    [HttpGet]
    public ActionResult GetProfile(int profileId)
    {
        var userProfile = this.profileService.GetUserProfile(profileId);

        return View(new ProfileViewModel(userProfile));
    }
}
// The Javascript / JQuery
$.ajax({
    type: "GET",
    url: "/Profile/GetProfile/" + $('#userProfileId').val(),
    dataType: "html",
    success: function (data, textStatus, jqXHR) {
        $('#profileContainer').html(data);
    },
    error: function (jqXHR, textStatus, errorThrown) {
        // Correctly handle error
    }
});

As you can see from the above code, all we're doing is creating an asynchronous JavaScript request to our back end to retrieve a profile based on the profile id provided. The DOM is then updated with the response via a call to jQuery's html() method.

What's wrong with this approach?

Nothing. There's nothing wrong with this approach; it's in fact a perfectly acceptable way to perform and handle Ajax requests and responses - however, there is room for improvement. To see what can be improved, ask yourself the following questions:

  1. What does our code tell us about the state of the ajax response?
    Judging by our code, we can tell that our HTTP request to our GetProfile controller action was successful, as our success method is being invoked - however, what does this REALLY tell us about the state of our response? How do we know that our payload is in fact a user profile? After all, our success method (or our status code of 200 OK) simply tells us that the server successfully responded to our HTTP request - it doesn't tell us that the domain validation conditions within our service were met.
     
  2. Is it easy to reason with?
    When programming we should always aim to write succinct, clear code - code that leaves no room for ambiguity and removes any guesswork. Does the above solution do this?
     
  3. How could we pass additional context to our Javascript?
    As we're returning an HTML response to be rendered to the user, what happens if we want to pass additional context to the JavaScript, such as notification of an error, or some data we wish to display to the user via a modal dialog?
     
  4. Are we promoting any kind of consistency?
    What happens if our next Ajax response returns a JSON object? That will need to be handled in a completely different way. If we have multiple developers working on a project then they'll each probably implement Ajax responses in different ways.

A better approach - let's model our Ajax response

Object-orientated programming is all about modelling. Wikipedia says it best: 

Object-oriented programming (OOP) is a programming language model organized around objects rather than "actions" and data rather than logic.

To us mere mortals (as opposed to computers), modelled behaviour/data is easier to grasp and reason about - this premise is what helped make the object-oriented programming paradigm popular in the early to mid-1990s. Looking back over our previous example, are we really modelling our response?

When we simply pass values around (such as the HTML in the previous example), we miss out on the benefits gained from creating a model around our expected behaviour. So, instead of returning a plain HTML response or a JSON object containing just our data, let's try to model our HTTP response and see what benefits it brings us. This is where the Ajax response object pattern can help.

The Ajax response object pattern

Implementing the Ajax response object pattern is simple. To resolve the concerns and questions raised above, we simply need to model our Ajax response, allowing us to add additional context to our asynchronous HTTP response which we can then reason about within our JavaScript.

The following example is the object I tend to favour when applying the Ajax response object pattern - but your implementation may vary depending on what you're doing.

public class AjaxResponse
{
    public bool Success { get; set; }

    public string ErrorMessage { get; set; }

    public string RedirectUrl { get; set; }

    public object ResponseData { get; set; }

    public static AjaxResponse CreateSuccessfulResult(object responseData)
    {
        return new AjaxResponse
        {
            Success = true,
            ResponseData = responseData
        };
    }
}

We can then use the AjaxResponse object to model our HTTP response to something like the following:

[HttpGet]
public JsonResult GetProfile(int profileId)
{
    var response = new AjaxResponse();

    try
    {
        var userProfile = this.profileService.GetUserProfile(profileId);

        response.Success = true;
        response.ResponseData = RenderPartialViewToString("Profile", new ProfileViewModel(userProfile));
    }
    catch (RecordNotFoundException exception)
    {
        response.Success = false;
        response.ErrorMessage = exception.Message;
    }

    return Json(response, JsonRequestBehavior.AllowGet);
}

Earlier we were rendering the view and returning just the HTML payload in the response; now we render the view to a string and pass it to the ResponseData property. This way we can make use of the additional properties, such as whether the response the user is expecting was successful - and if not, we can supply an error message. Because our ResponseData property is of the base object type, we can use it to store any type, including JSON.

Below is an implementation of the RenderPartialViewToString method I often create in a base controller when writing ASP.NET MVC applications.

public class BaseController : Controller
{
    protected string RenderPartialViewToString(string viewName, object model)
    {
        if (string.IsNullOrEmpty(viewName))
        {
            viewName = this.ControllerContext.RouteData.GetRequiredString("action");
        }
                
        this.ViewData.Model = model;
        using (var stringWriter = new StringWriter())
        {
            ViewEngineResult partialView = ViewEngines.Engines.FindPartialView(this.ControllerContext, viewName);
            ViewContext viewContext = new ViewContext(this.ControllerContext, partialView.View, this.ViewData, this.TempData, (TextWriter)stringWriter);
            partialView.View.Render(viewContext, (TextWriter)stringWriter);
            return stringWriter.GetStringBuilder().ToString();
        }
    }
}

Now that we've modelled our response, we're able to provide the response's consumer with far more context, which enables us to better reason about our server response. In cases where we need to perform some kind of front-end action based on the outcome of the response, we can easily do so.

// The Javascript / JQuery
$.ajax({
    type: "GET",
    url: "/Profile/LoadProfile/" + $('#userProfileId').val(),
    dataType: "json",
    success: function (ajaxResponse, textStatus, jqXHR) {
        if (ajaxResponse.Success === true) {
            $('#profileContainer').html(ajaxResponse.ResponseData);
        } else {
            Dialog.showDialog(ajaxResponse.ErrorMessage);
        }
    },
    error: function (jqXHR, textStatus, errorThrown) {
        ...
    }
});

Additionally, if this approach is used throughout a team then you're promoting a consistent API that you can build around. This makes the development of general error handlers far easier. What's more, if you're using TypeScript in your codebase then you can continue to leverage the benefits by casting your response to a TypeScript implementation of the AjaxResponse class, gaining all of the IntelliSense and tooling support goodness that comes with TypeScript.

// TypeScript implementation
export interface IAjaxResponse {
    Success: boolean;
    ErrorMessage: string;
    RedirectUrl: string;
    ResponseData: any;
}

That's all for now. Thoughts and comments welcome!

Adding search to your website with Azure Search

Posted on Wednesday, 10 Feb 2016

As traffic to my blog has started to grow I've become increasingly keen to implement some kind of search facility to allow visitors to search for content. As most of you will probably know, search isn't an easy problem - in fact it's an extremely hard one - which is why I was keen to look at some of the existing search providers out there.

Azure's search platform has been on my radar for a while now, and having recently listened to an MS Dev Show interview with one of the developers behind Azure Search I was keen to give it a go.

This article serves as a high level overview of how to get up and running with the Azure Search service on your website or blog, with a relatively simple implementation that could use some fleshing out. Before we begin it's also worth mentioning that Azure's Search service has some really powerful search capabilities that are beyond the scope of this article, so I'd highly recommend checking out the documentation.

Azure Search Pricing Structure

Before we begin, let us take a moment to look at the Azure Search pricing:

One of the great things about Azure Search is that it has a free tier that's more than suitable for low to medium traffic websites. What's odd, though, is the sudden climb in price and specification for the next tier up. I assume this is because, whilst it's been available for a year, the service is still relatively new, so hopefully we'll see more options appear over time - only time will tell. As it stands the free tier is more than I need, so let us continue!

Setting up Azure Search - Step 1: Creating our search service

Before we begin we've got to create our Azure Search service within Azure's administration portal.

Note: If you've got the time then the Azure Search service documentation goes through this step in greater detail, so if you get stuck then feel free to refer to it here.

Enter your service name (in this instance I used the name "jwblog"), select your subscription, resource group and location, and choose the free pricing tier.

Setting up Azure Search - Step 2: Configuring our service

Now that we've created our free search service, next we have to configure it. To do this click the Add Index button within the top Azure Search menu.

Add an index (see image above)

Once we've created our service, we next need to decide which content we're going to make searchable and reflect that within our search indexes; Azure Search will then index that content, allowing it to be searched. We can do this either programmatically via the Azure Search API, or via the Portal. Personally I'd rather do it up front in the Portal, but there's nothing to stop you doing it in code - for now let's use the Azure Portal. In the case of this post, we want to index our blog post title and blog post content, as this is what we want the user to be able to search.

Setting retrievable content  (see image above)

When configuring our Azure Search index we can specify which content we want to mark as retrievable. As I plan on showing only the post title on the search result page, I will mark the post title as retrievable content. The search result page will also need to link to the actual blog post, so I will mark the blog post id as retrievable too.

Now that we've set up our service and configured it, let's get on with some coding!

Setting up Azure Search - Step 3: Implementing our search - the interface

Because it's always a good idea to program to an interface rather than an implementation, and because we may want to index other content moving forward, we'll create a simple search provider interface, avoiding the use of any Azure Search specific references.

public interface ISearchProvider<T> where T : class
{
    IEnumerable<TResult> SearchDocuments<TResult>(string searchText, string filter, Func<T, TResult> mapper);

    void AddToIndex(T document);
}

If you take a moment to look at the SearchDocuments method signature:

IEnumerable<TResult> SearchDocuments<TResult>(string searchText, string filter, Func<T, TResult> mapper);

You'll see that we take a Func of type T and return a TResult - this will allow us to map our search index class (which we'll create next) to a data transfer object - but more on this later. You can also supply search filters to give your website richer searching capabilities.
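As a rough sketch of the mapping idea in isolation (TypeScript is used here for brevity; the document and DTO shapes are illustrative stand-ins, not Azure Search types):

```typescript
// Toy stand-ins for a search index document and the DTO we map it to.
interface BlogPostDoc { postId: number; postTitle: string; postBody: string; }

// The provider runs the search, then hands each matching document to a
// caller-supplied mapper - consumers never depend on the index class directly.
function searchDocuments<TResult>(
    docs: BlogPostDoc[],
    searchText: string,
    mapper: (doc: BlogPostDoc) => TResult
): TResult[] {
    const term = searchText.toLowerCase();
    return docs
        .filter(d => d.postBody.toLowerCase().includes(term) ||
                     d.postTitle.toLowerCase().includes(term))
        .map(mapper);
}
```

A caller can then project straight to whatever shape its view needs, e.g. `searchDocuments(docs, "azure", d => ({ id: d.postId, title: d.postTitle }))`.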

Setting up Azure Search - Step 4: Implementing our search - the search index class

Next we want to create our blog post search index class that will contain all of the properties required to index our content. Because our search provider interface is generic, we can create a BlogPostSearchIndex class now while keeping the scope to easily index other content such as pages later (in which case we'd create a BlogPageSearchIndex).

What's important to note about the BlogPostSearchIndex class below is that its property names and types match the columns we configured earlier within Step 2.

// BlogPostSearchIndex.cs

[SerializePropertyNamesAsCamelCase]
public class BlogPostSearchIndex
{
    public BlogPostSearchIndex(int postId, string postTitle, string postBody)
    {
        // Document index needs to be a unique string
        IndexId = "blogpost" + postId.ToString();
        PostId = postId;
        PostTitle = postTitle;
        PostBody = postBody;
    }

    // Properties must remain public as they'll be used for automatic data binding
    public string IndexId { get; set; }

    public int PostId { get; set; }

    public string PostTitle { get; set; }

    public string PostBody { get; set; }

    public override string ToString()
    {
        return $"IndexId: {IndexId}\tPostId: {PostId}\tPostTitle: {PostTitle}\tPostBody: {PostBody}";
    }
}

Now that we've created the interface to our search provider we'll go ahead and work on the implementation.

Setting up Azure Search - Step 5: Implementing our search - the implementation

At this point we're ready to start working on the implementation of our search functionality, so we'll create an AzureSearchProvider class that implements our ISearchProvider interface and start fleshing out our search.

Before we begin, it's worth being aware that Azure's search service also provides a RESTful API that you can consume to manage and query indexes; however, as you'll see below, I've opted to use their SDK.

// AzureSearchProvider.cs

public class AzureSearchProvider<T> : ISearchProvider<T> where T : class
{
    private readonly SearchServiceClient _searchServiceClient;
    private const string Index = "blogpost";

    public AzureSearchProvider(SearchServiceClient searchServiceClient)
    {
        _searchServiceClient = searchServiceClient;
    }

    public IEnumerable<TResult> SearchDocuments<TResult>(string searchText, string filter, Func<T, TResult> mapper)
    {
        SearchIndexClient indexClient = _searchServiceClient.Indexes.GetClient(Index);

        var sp = new SearchParameters();
        if (!string.IsNullOrEmpty(filter)) sp.Filter = filter;

        DocumentSearchResponse<T> response = indexClient.Documents.Search<T>(searchText, sp);
        return response.Results.Select(result => mapper(result.Document)).ToList();
    }

    public void AddToIndex(T document)
    {
        if (document == null)
            throw new ArgumentNullException(nameof(document));

        SearchIndexClient indexClient = _searchServiceClient.Indexes.GetClient(Index);

        try
        {
            // No need to create an UpdateIndex method as we use MergeOrUpload action type here.
            IndexBatch<T> batch = IndexBatch.Create(IndexAction.Create(IndexActionType.MergeOrUpload, document));
            indexClient.Documents.Index(batch);
        }
        catch (IndexBatchException e)
        {
            Console.WriteLine("Failed to Index some of the documents: {0}",
                string.Join(", ", e.IndexResponse.Results.Where(r => !r.Succeeded).Select(r => r.Key)));
        }
    }
}

The last part of our implementation is to configure our search provider with our IoC container of choice, ensuring that our AzureSearchProvider.cs class and its dependency (Azure's SearchServiceClient class) can be resolved. Azure's SearchServiceClient constructor requires our credentials and search service name as arguments, so we'll configure them there too.

In this instance I'm using StructureMap; if you're using a different container you'll need to adjust your IoC configuration accordingly.

public class DomainRegistry : Registry
{
    public DomainRegistry()
    {
        ...

        this.For<SearchServiceClient>().Use(() => new SearchServiceClient("jwblog", new SearchCredentials("your search administration key")));
        this.For(typeof(ISearchProvider<>)).Use(typeof(AzureSearchProvider<>));

        ...
    }
}

At this point all we need to do is add our administration key which we can get from the Azure Portal under the Keys setting within the search service blade we used to configure our search service.

Setting up Azure Search - Step 6: Indexing our content

Now that all of the hard work is out of the way and our search service is configured, we need to index our content. Currently our Azure Search service is an empty container with no content to index, so in the context of a blog we need to ensure that when we add or edit a blog post the search document stored within Azure Search is either added or updated. To do this we need to go to our blog's controller action that's responsible for creating a blog post and index our content.

Below is a rough example of how it would look. I'm a huge fan of a library called MediatR for delegating my requests, but the below should be enough to give you an idea of how we'd implement indexing our content. We would also need to do the same thing when updating our blog posts, to ensure our search indexes stay up to date with any modified content.

public class BlogPostController : Controller
{
    private readonly ISearchProvider<BlogPostSearchIndex> _searchProvider;

    public BlogPostController(ISearchProvider<BlogPostSearchIndex> searchProvider)
    {
        this._searchProvider = searchProvider;
    }

    [HttpPost]
    public ActionResult Create(BlogPostAddModel model)
    {
        ...

        // Add your blog post to the database and use its Id for the search index

        this._searchProvider.AddToIndex(new BlogPostSearchIndex(model.Id, model.Title, model.BlogPost));

        ...
    }

    [HttpPost]
    public ActionResult Update(BlogPostEditModel model)
    {
        ...
        // As we're using Azure Search's MergeOrUpload index action we can simply call AddToIndex() when updating
        this._searchProvider.AddToIndex(new BlogPostSearchIndex(model.Id, model.Title, model.BlogPost));

        ...
    }

}

Now that we're indexing our content we'll move onto querying it.

Setting up Azure Search - Step 7: Querying our indexes

As Azure's Search service is built on top of the Lucene query parser (Lucene is a well-known open-source search library), we have a variety of ways we can query our content, including:

  • Field-scoped queries
  • Fuzzy search
  • Proximity search
  • Term boosting
  • Regular expression search
  • Wildcard search
  • Syntax fundamentals
  • Boolean operators
  • Query size limitations
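For a flavour of what these look like, here are some illustrative queries written in the full Lucene syntax (the field name postTitle is from our index; check the Azure Search documentation for the exact syntax and the query type it requires):

```
postTitle:azure         field-scoped: match "azure" only in the postTitle field
serach~                 fuzzy: tolerates the misspelling of "search"
"azure search"~5        proximity: both terms within five words of each other
azure^2 search          term boosting: documents matching "azure" score higher
azur*                   wildcard search
/jo[eh]n/               regular expression search
azure AND NOT bing      boolean operators
```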

To query our search index, all we need to do is call our generic SearchDocuments method and map our search index object to a view model/DTO like so:

private IEnumerable<BlogPostSearchResult> QuerySearchDocuments(string keyword)
{
    IEnumerable<BlogPostSearchResult> result = _searchProvider.SearchDocuments(keyword, string.Empty, document => new BlogPostSearchResult
    {
        Id = document.PostId,
        Title = document.PostTitle
    });

    return result;
}

At this point you have two options: you can either retrieve the indexed text (providing you marked it as retrievable earlier in Step 2) and display that in your search results, or you can return the ID and query your database for the relevant post based on that blog post ID. Naturally the latter introduces an otherwise unnecessary database call, so consider your options. Personally, as my links include a filename, I prefer to treat the posts in my database as the source of truth and check that the posts exist and are published, so I'm happy to incur that extra database call.

public IEnumerable<BlogPostItem> Handle(BlogPostSearchResultRequest message)
{
    if (string.IsNullOrWhiteSpace(message.Keyword))
        throw new ArgumentNullException(nameof(message.Keyword));

    List<int> postIds = QuerySearchDocuments(message.Keyword).Select(x => x.Id).ToList();

    // Get blog posts from database based on IDs
    return GetBlogPosts(postIds);
}

Setting up Azure Search - Wrapping it up

Now that we're querying our indexes and retrieving the associated blog posts from the database, all we have left to do is output our list of blog posts to the user.

I'm hoping you've found this general overview useful. As mentioned at the beginning of the post this is a high level overview and implementation of what's a powerful search service. At this point I would highly recommend you take the next step and look at how you can start to tweak your search results via means such as scoring profiles and some of the features provided by Lucene.

Happy coding!

Personal targets and goals for 2016

Posted on Monday, 01 Feb 2016

Around this time last year I set out a list of goals and targets for myself in my Personal targets and goals for 2015 blog post which, with 2015 coming to a close, I reflected on last month in my Reflecting on 2015 post.

I feel formally setting yourself yearly goals is a great way to focus on a set of specific, well-thought-out targets, as opposed to spending the year moving from one subject to another with no real aim. So without further ado, here are my personal targets for 2016.

Goal 1: Speak at more conferences and meet-ups

Those of you that know me will know that when I start talking about programming and software development it's hard to shut me up. Software and web development is a subject I've been hugely passionate about from a young age, and since starting this blog I've found sharing knowledge and documenting progress a great way to learn, memorise and get involved in the community. This desire to talk about software has ultimately led me to start speaking at local meet-ups (including a lunchtime lightning talk at DDD South West), which I've really enjoyed. This year I'd like to continue to pursue this by speaking at larger events and meet-ups that are further afield. I already plan on submitting a few talks to this year's DDD South West event, so we'll see if they get accepted.

Goal 2: Start an Exeter based .NET User Group

Living in a rather quiet and rural part of the United Kingdom has its pros and cons, and whilst Somerset is a great place to live, it suffers from a lack of meet-ups, specifically .NET ones - something I'm hoping to rectify by starting up a .NET user group.

Whilst I don't live in Exeter, it's the closest place where I feel there will be enough interest for setting up a .NET specific user group, so I'm currently in the process of looking for a location on the outskirts of the city to make it easier for commuters living in some of the nearby towns and cities.

Goal 3: Continue contributing to the .NET OSS ecosystem

One of last year's goals was to contribute to more open-source libraries, and whilst I felt I made good progress in achieving this goal I'm keen to continue working in this area. Not only do I want to continue to contribute to existing projects, but I'm also keen to help others get involved in contributing to projects. It's a really rewarding activity that can really help develop your skill set as a software developer. With this in mind, I've been thinking of a few satellite sites that will help people get started.

I'm also thinking about a few talks on how someone can get started and the lessons you can learn by contributing to open-source software.

In addition to the above, I'm also keen on contributing some of my own libraries to the .NET open-source ecosystem. More often than not I've found myself creating a few classes to encapsulate certain behaviour or abstract a third party API, yet I've never turned them into their own library. This year I'm keen to take that extra step and publish a fully fledged library that can be downloaded via NuGet.

Bonus Goal: F#

Having been watching the F# community closely for some time now I'm really interested in what it has to offer, so this year I'm considering jumping in and committing myself to learning it and becoming proficient in it. I've got a few side projects I plan on working on that I feel will be a great place to use F# so we'll see how that goes.

Conclusion

This concludes my personal targets for 2016. Whilst it's not an exhaustive list of what I will be focusing on, it's certainly a list I wish to have made some progress on by the end of the year. I shall see how it goes and keep people updated.

Angular 2 based Piano Note Training game side project

Posted on Sunday, 24 Jan 2016

With Angular 2 hitting beta I decided to take it for a test drive. As a huge (and vocal!) fan of TypeScript I was keen to see what the Angular team had done with it, and was really interested to see how Angular 2's component based approach made writing Angular applications different from the first Angular.

The application

After playing with Angular 2 for a few evenings and liking what I was seeing, I decided I wanted to build something more real life than a throw-away To-Do list. Don't get me wrong, there's nothing wrong with building to-do lists, they're a great way to get to grips with a framework, but nothing beats a real world project - with this in mind I decided to cross a side project off of my "apps to build" list and build a piano note trainer (source code available on GitHub).

As someone who is currently learning to play the piano and isn't a great sight reader, I've been keen to develop a note trainer that records your progress over time to give you indicators as to just how well your sight reading is progressing.

So far the application is going well and I'm extremely impressed with Angular 2's component based approach (heavily inspired by React if I'm not mistaken - but more on that in a moment). At the time of writing, the application generates random notes and draws them to the HTML5 canvas whilst monitoring the user's input to see if they press the appropriate key on the CSS based piano (credit to Taufik Nurrohman for the piano - it looks great and has saved me a tonne of time!). If the user presses the correct key then the application tells them so and moves on to another note. If they press the wrong key, the application lets them know and waits for them to try again.

As I continue to build the piano note trainer I'm finding Angular 2 feels more and more intuitive, and whilst Angular 2 is structurally different from Angular 1, it still feels quite similar in many ways - bar the absence of controllers and the dreaded scope object.

Angular 2's component based approach feels really nice, because we've learned over time that composition over inheritance is often the best way to build software.

One of my main gripes with Angular 1 is the $scope object that was passed throughout an application and all too easily became a dumping ground of functions and data. This often inadvertently resulted in controllers taking on dependencies and quickly becoming tightly coupled to one another. In contrast, the component based approach in Angular 2 naturally encourages encapsulated building blocks that contain the behaviour, HTML and CSS specific to the component and its role - the component then exposes a strict surface to its consuming components. This model follows the all-important "composition over inheritance" approach and allows you to build your application out of a set of easily testable units.

For instance, if you were to take the following screenshot of the application in its current state and break it down into the various components it looks like this:

Overall I've been really happy with Angular 2 so far and can't wait to see what tooling we start to see appear for it now that it's using TypeScript. I can't help but feel 2016 is going to be a big year for Angular.

The source code for the application is available on my GitHub profile and, once finished, I plan on submitting it to builtwithangular2.com. I'm hoping to have a live link once the application has reached a point where I'm happy for it to be used (I've still got to add sharp and flat notes, which are causing some issues). In the meantime, feel free to give it a try via my GitHub profile.

An often overlooked reason why you should be on Stack Overflow

Posted on Saturday, 16 Jan 2016

Stack Overflow is a fantastic resource for developers of all abilities. Gone are the days of having to trawl through blog posts, mailing lists or forums looking for solutions to issues.

These days the answers to the majority of a developer's day-to-day questions are just a few clicks away, and often rank highly on Google for the relevant search terms, making them even easier to find. And whilst the majority of us developers use it more than a few times a day, I'm always quietly surprised when I see a developer browsing Stack Overflow looking for, or at, a solution to a problem they're having, yet they don't have an account (it's fairly easy to spot when someone doesn't have an account from a distance, as advertisements are visible between the answers).

When you ask someone who regularly uses Stack Overflow why they don't have an account, the response you usually get is "I usually find the answer to my question so have no need to ask", or "I don't plan on answering questions" - which are perfectly valid reasons. But as developers, when have we ever had an issue, fixed it, and then remembered the solution the next time we ran into the same problem - days, weeks, months or even years later? I'd be fairly safe in saying we've all had problems or questions we've needed to look up on more than one occasion. This is where Stack Overflow can be a huge help (not to mention you'll be giving back to the community).

Leaving yourself upvote breadcrumbs to your technical issues

Next time you're on Stack Overflow I would encourage you to take a moment to create an account, even if you don't wish to fill out your profile - and the next time you find an answer to the problem you're facing, simply upvote it. It takes a split second, but that split second can save you minutes or potentially hours over your programming career. I can't count the number of times I've had a technical issue, gone searching for a solution on Stack Overflow and noticed I'd upvoted an answer a few months or years earlier.

Ultimately, isn't that one of the things we enjoy doing as developers - looking for ways to make difficult things simple? Next time, make it easier to see the signal through the noise by upvoting the solutions to your problems.

About Me

I'm a software and web application developer living in Somerset within the UK and I eat/sleep software and web development. Programming has been a passion of mine from a young age and I consider myself extremely lucky that I am able to do what I love doing as profession. If I’m not found with my family then you’ll see me at my keyboard writing some form of code.
