<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Joseph Woodward&#39;s Blog</title><link>https://josephwoodward.co.uk/rss</link><description>Go, Software Engineering and Distributed Systems</description><managingEditor> (Joseph Woodward)</managingEditor><pubDate>Mon, 20 Apr 2026 08:57:39 +0000</pubDate><item><title>Reduce allocations and improve comparison performance with the new unique package in Go 1.23</title><link>https://josephwoodward.co.uk/2024/08/performance-improvements-unique-package-go-1-23</link><description>A new addition to the Go standard library is interning via the unique package, allowing developers to potentially reduce allocations and improve comparison performance.</description><content:encoded><![CDATA[<p>Having been playing around with the upcoming <a href="https://tip.golang.org/doc/go1.23">Go 1.23 release</a> (which at the time of writing is still in draft but due for release this month), I was interested to understand the new <code>unique</code> package and the problems it aims to solve. Below is a write-up of that investigation, which I&rsquo;m hoping will prove useful to others interested in learning more.</p>

<h2 id="interning-and-go">Interning and Go</h2>

<p>Interning, a term originally <a href="https://en.wikipedia.org/wiki/Interning_(computer_science)">introduced by Lisp</a>, is the process of storing only one copy of a value in memory and sharing a unique reference to it, instead of allocating multiple copies and wasting memory. For instance, the Go compiler already performs interning of string constants at compile time: instead of allocating multiple identical strings, it allocates a single instance of the string and shares a reference to it, as demonstrated in the snippet below. (<a href="https://go.dev/play/p/upPazo3Arkk">Go Playground here</a>)</p>

<pre><code class="language-go">package main

import (
	&quot;fmt&quot;
	&quot;reflect&quot;
	&quot;unsafe&quot;
)

const greet1 = &quot;hello world&quot;
const greet2 = &quot;hello world&quot;

func main() {
	a := greet1
	p := (*reflect.StringHeader)(unsafe.Pointer(&amp;a))
	fmt.Println(&quot;a address:&quot;, p.Data)

	b := greet2
	p2 := (*reflect.StringHeader)(unsafe.Pointer(&amp;b))
	fmt.Println(&quot;b address:&quot;, p2.Data)
}
</code></pre>

<pre><code class="language-bash">$ go run main.go

a address: 4310983661
b address: 4310983661
</code></pre>

<p>Before Go 1.23, interning runtime values has only been available via <a href="https://github.com/josharian/intern">third-party packages</a>. However, as of Go 1.23 interning is now included in the standard library via the <a href="https://pkg.go.dev/unique@master#Handle">new <code>unique</code> package</a>.</p>

<h2 id="the-unique-package-in-go-1-23">The <code>unique</code> Package in Go 1.23</h2>

<p>Interning (or &lsquo;canonicalizing values&rsquo;) can now be performed using the <code>unique</code> package via its <code>Handle</code> type, which acts as a globally unique identity for any provided (<a href="https://go.dev/blog/comparable">comparable</a>) value, meaning two handles compare equal if and only if the values used to create them would compare equal.</p>

<pre><code class="language-go">type Handle[T comparable] struct {}
func (h Handle[T]) Value() T {}
func Make[T comparable](value T) Handle[T] {}
</code></pre>

<p>Internally the <code>Handle</code> type is backed by a concurrency-safe data structure (an <a href="https://db.in.tum.de/~leis/papers/ART.pdf">adaptive radix tree</a>, to be precise, chosen because it can grow and shrink more efficiently than a map) that acts as a read-through cache: values not already present are stored, and a <code>Handle[T]</code> is returned. The handle <strong>is designed to be passed around as a value or field in place of the underlying value</strong>, saving you additional allocations and making comparisons cheaper, as you&rsquo;ll only incur a pointer comparison rather than a full value comparison.</p>

<h4 id="couldn-t-you-achieve-the-same-behaviour-with-a-map-of-unique-values">Couldn&rsquo;t you achieve the same behaviour with a map of unique values?</h4>

<p>Sure, the same interning behaviour could be achieved with a custom map to reduce duplicate allocations; however, handling garbage collection efficiently isn&rsquo;t possible. The <code>unique</code> package uses a notion of &ldquo;weak references&rdquo;, made possible by access to runtime internals, which lets the garbage collector clean up unused entries in a single cycle, something a hand-rolled solution can&rsquo;t do.</p>

<h2 id="how-to-use-the-unique-package">How to use the <code>unique</code> package</h2>

<p>Let&rsquo;s take a look at a trivial example, inspired by the <code>net/netip</code> package, which actually uses the <code>unique</code> package for efficient handling of IPv6 zone names.</p>

<pre><code class="language-go">package main

import (
	&quot;unique&quot;
)

type addrDetail struct {
	isV6   bool
	zoneV6 string
}

func main() {
	h1 := unique.Make(addrDetail{isV6: true, zoneV6: &quot;2001:0db8:0001:0000:0000:0ab9:C0A8:0102&quot;})

	// the value already exists in the underlying map, so no new canonical
	// copy is stored; Make returns the existing handle
	h2 := unique.Make(addrDetail{isV6: true, zoneV6: &quot;2001:0db8:0001:0000:0000:0ab9:C0A8:0102&quot;})

	if h1 == h2 {
		println(&quot;addresses are equal&quot;)
	}
	
	// Value() returns a copy of the underlying value (i.e. a different memory address)
	println(h1.Value().zoneV6) 
}
</code></pre>

<h2 id="handle-comparison-performance">Handle Comparison Performance</h2>

<p>Earlier I mentioned that, along with reducing unnecessary allocations through deduplication of values, the <code>unique</code> package can also reduce the cost of comparisons. This is especially evident when comparing large strings, structs with string fields, or arrays, where the comparison is reduced to a simple pointer comparison. Let&rsquo;s take a look.</p>

<p>Below we have a benchmark that repeats a comma-separated string of IPv6 zones and then performs a string comparison on the two identical copies, one benchmark without wrapping the string in a <code>Handle</code> type, the other with.</p>

<pre><code class="language-go">package main

import (
	&quot;strings&quot;
	&quot;testing&quot;
	&quot;unique&quot;
)

func BenchmarkStringCompareSmall(b *testing.B)  { benchStringComparison(b, 10) }
func BenchmarkStringCompareMedium(b *testing.B) { benchStringComparison(b, 100) }
func BenchmarkStringCompareLarge(b *testing.B)  { benchStringComparison(b, 1000000) }

func BenchmarkCanonicalisingSmall(b *testing.B)  { benchCanonicalising(b, 10) }
func BenchmarkCanonicalisingMedium(b *testing.B) { benchCanonicalising(b, 100) }
func BenchmarkCanonicalisingLarge(b *testing.B)  { benchCanonicalising(b, 1000000) }

func benchStringComparison(b *testing.B, count int) {
	s1 := strings.Repeat(&quot;2001:0db8:0001:0000:0000:0ab9:C0A8:0102,&quot;, count)
	s2 := strings.Repeat(&quot;2001:0db8:0001:0000:0000:0ab9:C0A8:0102,&quot;, count)
	b.ResetTimer()
	for n := 0; n &lt; b.N; n++ {
		if s1 != s2 {
			b.Fatal()
		}
	}
	b.ReportAllocs()
}

func benchCanonicalising(b *testing.B, count int) {
	s1 := unique.Make(strings.Repeat(&quot;2001:0db8:0001:0000:0000:0ab9:C0A8:0102,&quot;, count))
	s2 := unique.Make(strings.Repeat(&quot;2001:0db8:0001:0000:0000:0ab9:C0A8:0102,&quot;, count))
	b.ResetTimer()
	for n := 0; n &lt; b.N; n++ {
		if s1 != s2 {
			b.Fatal()
		}
	}
	b.ReportAllocs()
}
</code></pre>

<h3 id="benchmark-results">Benchmark Results</h3>

<p>Let&rsquo;s run the benchmark and check out the results:</p>

<pre><code class="language-bash">$ go test -run='^$' -bench=.

goos: darwin
goarch: arm64
pkg: go-experiment
cpu: Apple M1
BenchmarkStringCompareSmall-8     	116581837	        9.392 ns/op	      0 B/op	      0 allocs/op
BenchmarkStringCompareMedium-8    	14944300	       80.15 ns/op	      0 B/op	      0 allocs/op
BenchmarkStringCompareLarge-8     	903	               1296028 ns/op	      0 B/op	      0 allocs/op

BenchmarkCanonicalisingSmall-8    	1000000000	        0.3132 ns/op	      0 B/op	      0 allocs/op
BenchmarkCanonicalisingMedium-8   	1000000000	        0.3140 ns/op	      0 B/op	      0 allocs/op
BenchmarkCanonicalisingLarge-8    	1000000000	        0.3128 ns/op	      0 B/op	      0 allocs/op

PASS
ok  	go-experiment	5.596s
</code></pre>

<p>Running the benchmarks we can see that for the canonicalised version the nanoseconds per operation (ns/op) stays consistently low whether we&rsquo;re comparing a string of 10 repetitions or 1,000,000, whereas the non-canonicalised comparison grows with the size of the string. Note that the one-off cost of interning each string via <code>unique.Make</code> is paid before <code>b.ResetTimer()</code>, so it isn&rsquo;t reflected in these numbers.</p>

<h3 id="conclusion">Conclusion</h3>

<p>Hopefully, this post has given you a good understanding of what the <code>unique</code> package is and the problems it aims to solve. It&rsquo;s another great addition to Go&rsquo;s already solid standard library, and if you run into use cases like those discussed above, you&rsquo;ll now hopefully feel equipped to reach for interning.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 02 Aug 2024 23:04:01 +0000</pubDate></item><item><title>Providing context to cancellations in Go 1.20 with the new context WithCause API</title><link>https://josephwoodward.co.uk/2023/01/context-cancellation-cause-with-cancel-cause</link><description>A new addition to the context package in Go 1.20 is the ability to provide the cause of a context&#39;s cancellation. In this post we&#39;ll take a look at this new API and how to use it.</description><content:encoded><![CDATA[<p>Having been experimenting with some of the changes making their way into Go 1.20, there was one change I felt was worthy of its own post. This change is the <a href="https://github.com/golang/go/issues/51365">addition of APIs for writing and reading cancellation causes</a> in the standard library&rsquo;s <code>context</code> package. Let&rsquo;s have a look&hellip;</p>

<h2 id="the-problem">The Problem</h2>

<p>Prior to Go 1.20, out of the box options for understanding the reason for a context&rsquo;s cancellation were limited to two exported errors from the <code>context</code> package, <code>context.DeadlineExceeded</code> or <code>context.Canceled</code>. Though rather simplistic, below is a typical way these errors are commonly checked:</p>

<pre><code class="language-go">func main() {
    // Pass a context with a timeout to tell a blocking function that it
    // should abandon its work after the timeout elapses.
    timeoutDuration := 3 * time.Second
    ctx, cancel := context.WithTimeout(context.Background(), timeoutDuration)
    defer cancel()

    // block until context is timed out
    &lt;-ctx.Done()

    switch ctx.Err() {
        case context.DeadlineExceeded:
            fmt.Println(&quot;context timeout exceeded&quot;)
        case context.Canceled:
            fmt.Println(&quot;context cancelled by force&quot;)
    }
   
   // output:
   // context timeout exceeded
}
</code></pre>

<p>For a lot of cases this solution is sufficient; however, there are occasions where you wish to provide more &ldquo;context&rdquo; as to <em>why</em> the context was canceled, in the form of a custom error that developers can introspect via the <code>errors.Is/As</code> functions to determine, for instance, whether the error is retryable.</p>

<p>Similarly, when a context is explicitly cancelled by invoking the <code>context.CancelFunc</code> (assigned to the <code>cancel</code> variable in the demonstration above), there&rsquo;s no way to communicate whether the cancellation was due to an error.</p>

<p>This is the problem the following two accepted proposals aim to address with the addition of a new WithCause API to Go&rsquo;s <code>context</code> package, available in Go 1.20: <a href="https://github.com/golang/go/issues/51365">first</a> by adding <code>context.WithCancelCause</code>, followed by a <a href="https://github.com/golang/go/issues/56661">follow-on proposal</a> adding <code>WithDeadlineCause</code> and <code>WithTimeoutCause</code>.</p>

<p>Let&rsquo;s take a look.</p>

<h2 id="providing-context-to-your-context-cancellation">Providing context to your context cancellation</h2>

<h3 id="explicit-cancellation-via-withcancelcause">Explicit cancellation via <code>WithCancelCause</code></h3>

<p>As demonstrated below, using this new <code>WithCancelCause</code> API available in Go 1.20 allows us to pass a custom error type when cancelling the context. This is then stored on the <code>context</code> for retrieval:</p>

<pre><code class="language-go">package main

import (
    &quot;context&quot;
    &quot;errors&quot;
    &quot;fmt&quot;
)

var ErrTemporarilyUnavailable = fmt.Errorf(&quot;service temporarily unavailable&quot;)

func main() {
    ctx, cancel := context.WithCancelCause(context.Background())

    // operation failed, let's notify the caller by cancelling the context
    cancel(ErrTemporarilyUnavailable)

    switch ctx.Err() {
    case context.Canceled:
        fmt.Println(&quot;context cancelled by force&quot;)
    }

    // get the cause of cancellation, in this case the ErrTemporarilyUnavailable error
    err := context.Cause(ctx)

    if errors.Is(err, ErrTemporarilyUnavailable) {
        fmt.Printf(&quot;cancellation reason: %s&quot;, err)
    }
    
    // cancellation reason: service temporarily unavailable
}
</code></pre>

<p>As an API author we&rsquo;re now able to provide our consumers with more information as to why the context was canceled, for instance, communicating whether the operation is safe to retry or not.</p>

<h3 id="timed-cancellation-with-withdeadlinecause-and-withtimeoutcause">Timed cancellation with <code>WithDeadlineCause</code> and <code>WithTimeoutCause</code></h3>

<p>In addition to the forced cancellation demonstrated above, Go 1.20 also brings the same capability to timed cancellations via <code>WithDeadlineCause</code> and <code>WithTimeoutCause</code>. The main difference is that the timed variants require you to specify the cause when creating the context, rather than when invoking the returned <code>CancelFunc</code>. Let&rsquo;s take a look:</p>

<pre><code class="language-go">...

var ErrFailure = fmt.Errorf(&quot;request took too long&quot;)

func main() {
    timeout := 2 * time.Second
    ctx, cancel := context.WithTimeoutCause(context.Background(), timeout, ErrFailure)
    defer cancel()

    // wait for the context to timeout
    &lt;-ctx.Done()

    switch ctx.Err() {
    case context.DeadlineExceeded:
        fmt.Printf(&quot;operation could not complete: %s&quot;, context.Cause(ctx))
    }
        
    // operation could not complete: request took too long 
}
</code></pre>

<p>And as you&rsquo;d expect, given the similarities between <code>WithDeadline</code> and <code>WithTimeout</code>, the same rule applies to <code>WithDeadlineCause</code>: it takes the cancellation cause when creating the context:</p>

<pre><code class="language-go">...

var ErrFailure = fmt.Errorf(&quot;request took too long&quot;)

func main() {
    deadline := time.Now().Add(2 * time.Second)
    ctx, cancel := context.WithDeadlineCause(context.Background(), deadline, ErrFailure)
    defer cancel()

    // wait for the context to timeout
    &lt;-ctx.Done()

    switch ctx.Err() {
    case context.DeadlineExceeded:
        fmt.Printf(&quot;operation could not complete: %s&quot;, context.Cause(ctx))
    }
        
    // operation could not complete: request took too long 
}
</code></pre>

<h2 id="wrapping-up">Wrapping up</h2>

<p>Hopefully you found this post enlightening and that the trivial examples adequately demonstrated the problems the new WithCause additions to the <code>context</code> package in Go 1.20 aim to solve. If you have the time I&rsquo;d highly recommend looking over the <a href="https://github.com/golang/go/issues/26356">original issue</a> that led to this proposal and change, and of course the discussions around the <a href="https://github.com/golang/go/issues/51365">proposal itself</a>.</p>

<p>See you next time!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 04 Jan 2023 23:04:01 +0000</pubDate></item><item><title>Playing With Slog, the Proposed Structured Logging Package for the Go Standard Library</title><link>https://josephwoodward.co.uk/2022/11/slog-structured-logging-proposal</link><description>Currently going through Go&#39;s proposal process, the slog package aims to introduce structured logging as a first-class citizen within the Go standard library. In this post we take a look at the proposal, the package and what it does.</description><content:encoded><![CDATA[<p>For a while now I&rsquo;ve been keeping a close eye on the <a href="https://go.googlesource.com/proposal/+/master/design/56345-structured-logging.md">structured logging proposal</a> making its way through Go&rsquo;s language proposal process. Last month an experimental version of the proposal was released under the <code>golang.org/x/exp</code> package (see <a href="https://pkg.go.dev/golang.org/x/exp/slog">documentation here</a>), and I decided to experiment with it and summarise my learnings in a blog post for others wanting to learn more about this proposed addition to the Go standard library.</p>

<h2 id="what-is-the-structured-logging-proposal">What is the Structured Logging Proposal?</h2>

<p>From a high level (see the <a href="https://go.googlesource.com/proposal/+/master/design/56345-structured-logging.md">proposal itself</a> for a more in-depth look) the proposal aims to address the current gap that exists in Go&rsquo;s standard library by adding structured logging with levels, residing in a new package with the import path <code>log/slog</code>.</p>

<h3 id="first-what-is-structured-logging">First, what is Structured Logging?</h3>

<p>Logs have traditionally been aimed at being human readable. These days it&rsquo;s common to want to index them to enable better filtering and searching, which can be done by formatting the log output in a way that makes it machine-readable. This process is called structured logging.</p>

<p>For example, if you&rsquo;ve used Go&rsquo;s existing log package then the following code would output a log that looks like the output below:</p>

<pre><code class="language-go">log.Printf(&quot;processed %d items in %s&quot;, 23, time.Since(time.Now()))
</code></pre>

<pre><code>// Not structured, boo :(
2022/11/10 02:53:02 processed 23 items in 83ns
</code></pre>

<p>Its lack of structure makes parsing the output difficult and potentially costly. A better, <em>structured</em> alternative would be the following which adds keys and associated values to the output making parsing it programmatically far easier and more reliable.</p>

<pre><code>// Structured!
time=2022/11/10 02:53:02 msg=&quot;processed item&quot; size=23 duration=83ns level=info
</code></pre>

<p>Now the logs have a structure associated with them, it&rsquo;s much easier to encode them as other data formats such as JSON:</p>

<pre><code class="language-json">{&quot;time&quot;:&quot;2022-11-13T03:38:37.737821Z&quot;,&quot;level&quot;:&quot;INFO&quot;,&quot;msg&quot;:&quot;processed items&quot;,&quot;size&quot;:23,&quot;duration&quot;:83}
</code></pre>

<h2 id="providing-a-general-interface-for-logging-in-go">Providing a general interface for logging in Go</h2>

<p>A wide variety of structured logging packages exist within the Go ecosystem (<a href="https://github.com/sirupsen/logrus">Logrus</a>, <a href="https://github.com/uber-go/zap">Zap</a> and <a href="https://github.com/rs/zerolog">Zerolog</a> to name a few), however this diverse API landscape can make supporting logging in a provider-agnostic way challenging, often requiring abstractions to avoid coupling your implementation to any given logging package. This proposal would help alleviate this pain by providing a common interface in the standard library that service owners and package authors alike could depend on.</p>

<p>That&rsquo;s enough talk for now, let&rsquo;s take a look at the proposed package in a bit more depth.</p>

<h2 id="logging-via-the-slog-package">Logging via the <code>slog</code> package</h2>

<h3 id="a-bare-bones-example">A Bare Bones Example</h3>

<p>Following on from the items processed logging theme above (which is common through examples in the proposal) the <code>slog</code> proposal aims to introduce the following API. We&rsquo;ll focus on the setup at first and dive into the log writing part later.</p>

<pre><code class="language-go">package main

import (
     &quot;os&quot;
     &quot;time&quot;

     &quot;golang.org/x/exp/slog&quot;
)

func main() {
    logger := slog.New(slog.NewTextHandler(os.Stdout))
    logger.Info(&quot;processed items&quot;,
        &quot;size&quot;, 23,
        &quot;duration&quot;, time.Since(time.Now()))
}

// time=2022-11-10T03:56:11.768Z level=INFO msg=&quot;processed items&quot; size=23 duration=42ns
</code></pre>

<p>As demonstrated, upon creating a new instance of the <code>slog</code> logger you can pass a type of <a href="https://pkg.go.dev/golang.org/x/exp/slog#Handler"><code>Handler</code></a>: an interface whose responsibility is to handle log records produced by a <a href="https://pkg.go.dev/golang.org/x/exp/slog#Logger"><code>Logger</code></a> struct type (returned when calling <code>slog.New(...)</code>). The <code>Handler</code> interface is a key piece of the extensibility enabled by the <code>slog</code> package and one we&rsquo;ll touch on towards the end of this post.</p>

<p>In this instance we&rsquo;ve used the <a href="https://pkg.go.dev/golang.org/x/exp/slog#TextHandler"><code>TextHandler</code></a> for writing the log output as structured text, but a <a href="https://pkg.go.dev/golang.org/x/exp/slog#JSONHandler"><code>JSONHandler</code></a> also exists for formatting your structured logs to JSON.</p>

<pre><code class="language-go">...

logger := slog.New(slog.NewJSONHandler(os.Stdout))
logger.Info(&quot;processed items&quot;,
    &quot;size&quot;, 23,
    &quot;duration&quot;, time.Since(time.Now()))
    
// {&quot;time&quot;:&quot;2022-11-13T04:08:19.805393Z&quot;,&quot;level&quot;:&quot;INFO&quot;,&quot;msg&quot;:&quot;processed items&quot;,&quot;size&quot;:23,&quot;duration&quot;:84}
</code></pre>

<p>At this point it&rsquo;s also worth highlighting that you can set a logger as the default via the <code>SetDefault</code> method call. This does two things. First, it sets the logger as the default used when logging via the <code>slog</code> package directly, so both of these calls would output the same log statements:</p>

<pre><code class="language-go">...

func main() {
    logger := slog.New(&amp;LogrusHandler{
        logger: logrus.StandardLogger(),
    })
    slog.SetDefault(logger)

    logger.Error(&quot;something went wrong&quot;, net.ErrClosed,
        &quot;status&quot;, http.StatusInternalServerError)
    slog.Error(&quot;something went wrong&quot;, net.ErrClosed,
        &quot;status&quot;, http.StatusInternalServerError)
}

// {&quot;err&quot;:&quot;use of closed network connection&quot;,&quot;level&quot;:&quot;error&quot;,&quot;msg&quot;:&quot;something went wrong&quot;,&quot;status&quot;:500,&quot;time&quot;:&quot;2022-11-24T22:31:06Z&quot;}
</code></pre>

<p>Secondly, using <code>slog.SetDefault()</code> will also set the logger as the default logger for the <code>log</code> package. So function calls such as the following will log the expected output:</p>

<pre><code class="language-go">...
func main() {
    logger := slog.New(&amp;LogrusHandler{
        logger: logrus.StandardLogger(),
    })
    slog.SetDefault(logger)

    log.Print(&quot;something went wrong&quot;)
    log.Fatalln(&quot;something went wrong&quot;)
}

// {&quot;level&quot;:&quot;info&quot;,&quot;msg&quot;:&quot;something went wrong&quot;,&quot;time&quot;:&quot;2022-11-24T22:33:57Z&quot;}
// {&quot;level&quot;:&quot;info&quot;,&quot;msg&quot;:&quot;something went wrong&quot;,&quot;time&quot;:&quot;2022-11-24T22:33:57Z&quot;}
// exit status 1
</code></pre>

<h3 id="logging">Logging</h3>

<p>Logging via the <code>slog</code> package can be performed in one of two ways: either via the loosely typed API seen above (designed for brevity at the cost of additional allocations), or via a strongly typed, more performance-focused alternative seen below. Both approaches output the log in the same format.</p>

<pre><code class="language-go">...

// designed for brevity at the sacrifice of additional allocations
logger.Info(&quot;processed items&quot;,
    &quot;size&quot;, 23,
    &quot;duration&quot;, time.Since(time.Now()))

// designed for performance
logger.LogAttrs(slog.InfoLevel, &quot;processed items&quot;,
    slog.Int(&quot;size&quot;, 23),
    slog.Duration(&quot;duration&quot;, time.Since(time.Now())))
</code></pre>

<h3 id="storing-and-fetching-via-context-context">Storing and fetching via <code>context.Context</code></h3>

<p>The <code>slog</code> logger also provides an API enabling you to store and fetch the logger via a <code>context</code>. Note that in the example below, <code>FromContext</code> will return a default logger if none exists within the context.</p>

<pre><code class="language-go">...

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout))

    ctx := context.Background()
    
    // save logger to context.Context
    ctx = slog.NewContext(ctx, logger)
    
    // retrieve logger from context after propagating it throughout your API,
    // If no logger exists then a default logger is returned
    logger = slog.FromContext(ctx)
    logger.Info(&quot;processed items&quot;,
        &quot;size&quot;, 23,
        &quot;duration&quot;, time.Since(time.Now()),
    )
}

</code></pre>

<p>Whilst propagating objects via a <code>context</code> is somewhat of a contentious point amongst Go developers (and one <a href="https://github.com/golang/go/issues/56345#issuecomment-1289842738">questioned</a> and discussed on the initial proposal), scoping a logger to a request is a common pattern when logging and, as <a href="https://github.com/golang/go/issues/56345#issuecomment-1289940433">one user commented</a>, one advocated within the OTEL (OpenTelemetry) <a href="https://opentelemetry.io/docs/reference/specification/logs/#log-correlation">specification</a>, so personally I don&rsquo;t have an issue with it in this case.</p>

<p>It becomes especially powerful when using the <code>With</code> function (<a href="https://pkg.go.dev/golang.org/x/exp/slog#Logger.With">see docs</a>) to attach request level attributes to the returned logger before propagating it further down the call stack like so:</p>

<pre><code class="language-go">func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout))
    http.HandleFunc(&quot;/hello&quot;, func(w http.ResponseWriter, r *http.Request) {
        // subsequent log entries within this context will contain these fields
        l := logger.
            With(&quot;path&quot;, r.URL.Path).
            With(&quot;user-agent&quot;, r.UserAgent())

        // propagate the logger with the above fields via the request context
        ctx := slog.NewContext(r.Context(), l)

        handleRequest(w, r.WithContext(ctx))
    })

    http.ListenAndServe(&quot;:8080&quot;, nil)
}

func handleRequest(w http.ResponseWriter, r *http.Request) {
    l := slog.FromContext(r.Context())

    l.Info(&quot;handling request&quot;,
        &quot;status&quot;, http.StatusOK)
    
    w.Write([]byte(&quot;Hello World&quot;))
}
</code></pre>

<p>The above example prints the following log output, allowing you to attach the necessary fields to all log lines. This could include a request/trace ID or any other vital request level information.</p>

<pre><code class="language-json">{
    &quot;time&quot;:&quot;2022-11-16T00:48:43.169545Z&quot;,
    &quot;level&quot;:&quot;INFO&quot;,
    &quot;msg&quot;:&quot;handling request&quot;,
    &quot;path&quot;:&quot;/hello&quot;,
    &quot;user-agent&quot;:&quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36&quot;,
    &quot;status&quot;:200
}
</code></pre>

<h3 id="log-handler">Log Handler</h3>

<p>At the beginning of this post we looked at how the logger takes an initial dependency on a <code>slog.Handler</code>, for instance when creating a new <code>TextHandler</code>. Let&rsquo;s spend some time looking at what a handler is and how it provides extensibility to the <code>slog</code> logger.</p>

<p>The <code>Handler</code> is an interface with four methods:</p>

<pre><code class="language-go">type Handler interface {
    Enabled(Level) bool
    Handle(r Record) error
    WithAttrs(attrs []Attr) Handler
    WithGroup(name string) Handler
}
</code></pre>

<p>Other than the <code>Enabled(Level)</code> method (used to report whether the handler supports the given log level), the most interesting method is <code>Handle(Record)</code>, which is invoked every time a log entry is written and receives a <code>Record</code> argument providing all the relevant information for that log entry.</p>

<p>By implementing this interface we can create handlers that perform tasks such as batching log entries to send over HTTP, or adapters that integrate an existing logging package with slog, making our application or package logging-library agnostic.</p>

<h3 id="building-a-logrus-adapter-for-the-slog-logger">Building a Logrus adapter for the <code>slog</code> logger</h3>

<p>Let&rsquo;s take a look at what that might look like as we create a <code>slog</code> handler that integrates with <code>Logrus</code>, a popular open source logging library. It&rsquo;s not production ready but hopefully it gives you an idea of what a custom handler implementation would look like:</p>

<pre><code class="language-go">type LogrusHandler struct {
    logger *logrus.Logger
}

func (h *LogrusHandler) Enabled(_ slog.Level) bool {
    // support all logging levels
    return true
}

func (h *LogrusHandler) Handle(rec slog.Record) error {
    fields := make(map[string]interface{}, rec.NumAttrs())

    rec.Attrs(func(a slog.Attr) {
        fields[a.Key] = a.Value.Any()
    })

    entry := h.logger.WithFields(fields)

    switch rec.Level {
        case slog.DebugLevel:
            entry.Debug(rec.Message)
        case slog.InfoLevel:
            entry.Info(rec.Message)
        case slog.WarnLevel:
            entry.Warn(rec.Message)
        case slog.ErrorLevel:
            entry.Error(rec.Message)
    }

    return nil
}

func (h *LogrusHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    // not implemented for brevity
    return h
}

func (h *LogrusHandler) WithGroup(name string) slog.Handler {
    // not implemented for brevity
    return h
}
</code></pre>

<p>And to use the <code>slog</code> handler we can do something like the following:</p>

<pre><code class="language-go">package main

import (
    &quot;net&quot;
    &quot;net/http&quot;
    &quot;os&quot;

    &quot;github.com/sirupsen/logrus&quot;
    &quot;golang.org/x/exp/slog&quot;
)

func init() {
    // set up logrus
    logrus.SetFormatter(&amp;logrus.JSONFormatter{})
    logrus.SetOutput(os.Stdout)
    logrus.SetLevel(logrus.DebugLevel)
}

func main() {
    // integrate Logrus with the slog logger
    logger := slog.New(&amp;LogrusHandler{
        logger: logrus.StandardLogger(),
    })
    slog.SetDefault(logger)

    // start logging via the slog API
    slog.Error(&quot;something went wrong&quot;, net.ErrClosed,
        &quot;status&quot;, http.StatusInternalServerError)
}
</code></pre>

<p>Now, whenever we log via the <code>slog</code> logger, our log records will still be handled via Logrus but we&rsquo;ll no longer need to depend on the Logrus API in our library and we&rsquo;ll see the following output:</p>

<pre><code class="language-json">{&quot;err&quot;:&quot;use of closed network connection&quot;,&quot;level&quot;:&quot;error&quot;,&quot;msg&quot;:&quot;something went wrong&quot;,&quot;status&quot;:500,&quot;time&quot;:&quot;2022-11-19T00:07:51Z&quot;}
</code></pre>

<h3 id="conclusion">Conclusion</h3>

<p>Hopefully this post has given you some insight into the <code>slog</code> package proposal, its goals and its potential. Personally I&rsquo;m hoping the proposal gets accepted as it&rsquo;d be great to have structured logging as a first-class citizen in the standard library, but only time will tell.</p>

<p>If you&rsquo;re interested in learning more then be sure to take a look at the <a href="https://go.googlesource.com/proposal/+/master/design/56345-structured-logging.md">proposal over here</a> and the further discussions <a href="https://github.com/golang/go/discussions/54763">here</a>.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 24 Nov 2022 23:04:01 +0000</pubDate></item><item><title>Global Error Handling via Middleware with Go&#39;s Gin Framework</title><link>https://josephwoodward.co.uk/2022/06/global-error-handling-gin-middleware</link><description>In this post we&#39;ll take a look at how we can utilise middleware to simplify error handling when using the popular Gin router for Go.</description><content:encoded><![CDATA[<p>A common, repetitive theme when building HTTP APIs is the process of translating application level errors into the appropriate HTTP status code. For resources that can return a number of status codes this tends to involve checking the type of the <code>err</code> value then returning the necessary status code. Couple this will the number of resources presented by your application and you&rsquo;ve got a lot of error handling that could, in my opinion, be better solved in a centralised manner as a cross-cutting concern via middleware. This is where <a href="https://github.com/JosephWoodward/gin-errorhandling">Gin Error Handling Middleware</a> comes in, a simple Gin middleware library I wrote.</p>

<p>Handling errors as part of your HTTP API&rsquo;s middleware enables the following benefits:</p>

<ul>
<li>Centralised location for handling errors</li>
<li>Reduce boilerplate &lsquo;error to response&rsquo; mappings in your request handlers or controller actions</li>
<li>Helps protect yourself from inadvertently revealing errors to API consumers</li>
</ul>

<h2 id="a-trivial-example">A Trivial Example</h2>

<p>Let&rsquo;s take a look at how we can use the Gin Error Handling Middleware package to utilise a more centralised approach to error handling.</p>

<p>First add the package to your application:</p>

<pre><code>go get github.com/JosephWoodward/gin-errorhandling
</code></pre>

<p>Let&rsquo;s take a look at a bare-bones example where we register a custom <code>NotFoundError</code>, mapping it to a 404 (<code>http.StatusNotFound</code>) error.</p>

<pre><code class="language-go">...

var (
    NotFoundError = fmt.Errorf(&quot;resource could not be found&quot;)
)

func main() {
    r := gin.Default()

    // Register middleware and specify your errors to handle
    r.Use(
        ErrorHandler(
            Map(NotFoundError).ToStatusCode(http.StatusNotFound),
        ))

    r.GET(&quot;/test&quot;, func(c *gin.Context) {
        // Return our typed error via Gin's Error function
        c.Error(NotFoundError)
    })

    r.Run()
}
</code></pre>

<p>Now when you make an HTTP request to <code>/test</code> you should see the following response:</p>

<pre><code class="language-http">HTTP/1.1 404 Not Found
Date: Wed, 16 Feb 2022 04:08:50 GMT
Content-Length: 0
Connection: close
</code></pre>

<h3 id="going-further">Going further</h3>

<p>So far we&rsquo;ve seen how we can use the Gin error handling middleware to translate a custom error into a specific status code. If we still want control over the response body returned to the consumer (we want to return a custom error to the caller, for instance) then we can use the <code>ToResponse</code> function, which accepts a function literal that receives the <code>gin.Context</code> and the error as arguments:</p>

<pre><code class="language-go">...

var (
    NotFoundError = fmt.Errorf(&quot;resource could not be found&quot;)
)

func main() {
    r := gin.Default()
    r.Use(
        ErrorHandler(
            Map(NotFoundError).ToResponse(func(c *gin.Context, err error) {
                c.Status(http.StatusNotFound)
                c.Writer.Write([]byte(err.Error()))
            }),
        ))

    r.GET(&quot;/ping&quot;, func(c *gin.Context) {
        c.Error(NotFoundError)
    })

    r.Run()
}
</code></pre>

<p>In this case the caller receives the following HTTP response to their request to <code>/ping</code></p>

<pre><code class="language-http">HTTP/1.1 404 Not Found
Date: Wed, 16 Feb 2022 04:21:37 GMT
Content-Length: 27
Content-Type: text/plain; charset=utf-8
Connection: close

resource could not be found
</code></pre>

<h3 id="on-closing">On Closing</h3>

<p>Hopefully this post has adequately demonstrated the advantages of handling errors via middleware using the Gin Error Handling Middleware library. Whilst still in its infancy, hopefully the library will prove helpful.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 01 Jun 2022 23:04:01 +0000</pubDate></item><item><title>Automate testing of poor network conditions with Shopify&#39;s Toxiproxy in Go</title><link>https://josephwoodward.co.uk/2021/12/automate-testing-poor-network-conditions-shopifys-toxiproxy</link><description>You build things differently when you expect them to fail, and in a complex distributed system that communicates across networks failures should be expected. To combat this Shopify build Toxiproxy, a tool for simulating a wide range of network conditions. In this post we look at how we can leverage Toxiproxy to introduce network failures into our tests, so we can better defend against them.</description><content:encoded><![CDATA[<h2 id="what-is-toxiproxy">What is Toxiproxy?</h2>

<p>Toxiproxy is a TCP Proxy built by Shopify and written in Go designed for simulating a wide range of network conditions.</p>

<p>Built specifically to work in testing, continuous integration and development environments, Toxiproxy&rsquo;s strength comes from the fact that it provides a dynamic API that allows you to easily add, remove and configure the type of network conditions you want to test against on the fly.</p>

<p>Toxiproxy comes in two parts:</p>

<ol>
<li>A TCP Proxy written in Go, this acts as a server for proxying TCP traffic between the host machine and the upstream service you wish to simulate network conditions against.</li>
<li>A client used for communicating with the Toxiproxy server over HTTP. There are a <a href="https://github.com/Shopify/toxiproxy#clients">wide range of Toxiproxy clients available</a> from .NET, Ruby and PHP to Rust, Elixir and Haskell.</li>
</ol>

<h2 id="let-s-get-started">Let&rsquo;s get started</h2>

<h3 id="starting-the-toxiproxy-server">Starting the Toxiproxy server</h3>

<p>Before we can begin we need to download and run the Toxiproxy server. You can either run the server in a Docker container (really helpful for automated tests) or directly via an executable.</p>

<p>Head over to the <a href="https://github.com/Shopify/toxiproxy#1-installing-toxiproxy">Toxiproxy installation section</a> to find your preferred approach.</p>

<p>Once running you should see the following output in your terminal:</p>

<pre><code>$ INFO[0000] API HTTP server starting                      host=localhost port=8474 version=git
</code></pre>

<p>With the Toxiproxy server now running, let&rsquo;s take a look at writing our first test.</p>

<h3 id="writing-our-first-tests-using-toxiproxy">Writing our first tests using Toxiproxy</h3>

<p>In this example I&rsquo;m going to use the <code>TestMain(m *testing.M)</code> function for setting up and tearing down Toxiproxy.</p>

<p>Looking at the setup code below you&rsquo;ll see I first connect to the Toxiproxy server (in our case running on port 8474 as per the above terminal output), then add my first proxy. A proxy is a service you wish to control network conditions for such as PostgreSQL, a Redis cache or an HTTP API.</p>

<p>When creating the client you&rsquo;ll notice we inform Toxiproxy to listen on <code>localhost:5050</code>. This instructs Toxiproxy to listen on the aforementioned address and proxy any network traffic from port <code>:5050</code> to whatever is listening on port <code>:8080</code> (in this case a simple web service)</p>

<pre><code class="language-go">package main

import (
	toxiproxy &quot;github.com/Shopify/toxiproxy/v2/client&quot;
	&quot;os&quot;
	&quot;testing&quot;
)

var client *toxiproxy.Client

func TestMain(m *testing.M) {
    upstreamService := &quot;localhost:8080&quot;
    listen := &quot;localhost:5050&quot;

    // Connect to the proxy server and create a proxy which we'll configure in individual test methods.
    client = toxiproxy.NewClient(&quot;localhost:8474&quot;)
    proxy, err := client.CreateProxy(&quot;upstream_api&quot;, listen, upstreamService)
    if err != nil {
        panic(err)
    }

    // Clean up the proxy once all tests have run
    defer proxy.Delete()

    os.Exit(m.Run())
}
</code></pre>

<p>Next we&rsquo;ll add our test method to simulate latency on the upstream request (Toxiproxy enables you to introduce latency on either the upstream or downstream network traffic). We do this by adding what Toxiproxy calls a &ldquo;Toxic&rdquo;.</p>

<p><strong>What&rsquo;s a Toxic?</strong></p>

<p>A Toxic is a network condition such as latency, bandwidth limitation, connection reset etc. There are <a href="https://github.com/Shopify/toxiproxy#toxics">a number of toxics available out of the box</a> but you can even create your own if desired.</p>

<pre><code class="language-go">func TestSlowConnection(t *testing.T) {

    // Arrange

    // Use the client to get our proxy configured on test setup
    proxy, err := client.Proxy(&quot;upstream_api&quot;)
    if err != nil {
        t.Fatalf(&quot;fetching proxy 'upstream_api': %v&quot;, err)
    }

    // Add a 'toxic' to introduce 3 seconds latency into the upstream request 
    proxy.AddToxic(&quot;latency&quot;, &quot;latency&quot;, &quot;upstream&quot;, 1.0, toxiproxy.Attributes{
        &quot;latency&quot;: 3000,
    })

    // Remove the toxic after use for test isolation
    defer proxy.RemoveToxic(&quot;latency&quot;)

    // Act
    start := time.Now()

    // Make our request to our dependent service
    resp, err := http.Get(&quot;http://&quot; + proxy.Listen)
    if err != nil {
        t.Fatalf(&quot;request failed: %v&quot;, err)
    }
    defer resp.Body.Close()

    // Assert
    if time.Since(start) &lt; 3000*time.Millisecond {
        t.Fatalf(&quot;request completed sooner than expected&quot;)
    }

    if resp.StatusCode != http.StatusOK {
        t.Fatalf(&quot;wanted 200, got %v&quot;, resp.StatusCode)
    }
}
</code></pre>

<p>As you&rsquo;ll see from the test code above, first we get the proxy (named upstream_api) we wish to introduce network issues into, then we add the latency toxic that introduces 3000 milliseconds of latency. We then defer the removal of the toxic to ensure it doesn&rsquo;t affect subsequent tests using that proxy, and finish up by asserting that our request took the expected length of time to respond.</p>

<h3 id="wrapping-up">Wrapping up</h3>

<p>Hopefully the above example has provided you with a good introduction to Toxiproxy and how you can use it to create automated tests that verify expected behaviour of services under various network conditions.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sun, 05 Dec 2021 00:18:01 +0000</pubDate></item><item><title>Integration testing your Polly policies with HttpClient Interception</title><link>https://josephwoodward.co.uk/2020/07/integration-testing-polly-policies-httpclient-interception</link><description></description><content:encoded><![CDATA[<p>In my <a href="http://josephwoodward.co.uk/2020/05/stubbing-remote-http-api-services-httpclient-interception">previous post</a> I introduced <a href="https://github.com/justeat/httpclient-interception">HttpClientInterception</a>, a hugely versatile .NET library that I&rsquo;ve had great success with. In the post I demonstrated how you can use it to <a href="/2020/05/stubbing-remote-http-api-services-httpclient-interception">stub remote HTTP service dependencies</a> within your application to ease testing. Towards the end of the post I demonstrated how it integrated with <code>IHttpClientFactory</code> via the <code>IHttpMessageHandlerBuilderFilter</code> interface, enabling you to apply additional <code>HttpMessageHandler</code>s to your HttpClients generated via <code>IHttpClientFactory</code>.</p>

<p>In this post I want to go further by demonstrating how you can use this same approach to integration test your Polly policies and how they integrate with the <code>HttpClient</code>.</p>

<p>Note: If you haven&rsquo;t read my previous post then I would recommend doing so as this post leans a lot on content discussed in it.</p>

<h2 id="integration-testing-your-polly-policies-with-httpclient">Integration testing your Polly policies with HttpClient</h2>

<p>One of the more common approaches to configuring your Polly policies with an HttpClient is via the <code>Microsoft.Extensions.Http.Polly</code> package.</p>

<p>This package builds on top of the <code>IHttpClientBuilder</code> interface, allowing you to chain Polly policies with the <code>AddPolicyHandler</code> extension method, like so:</p>

<script src="https://gist.github.com/JosephWoodward/81b86b1bac6dc4886b30068745eefb44.js"></script>

<p>The <code>.AddPolicyHandler</code> method takes an async Polly Policy and creates an <code>HttpMessageHandler</code> that gets added to the <code>HttpClient</code> message handler pipeline:</p>

<script src="https://gist.github.com/JosephWoodward/51e35481796e72636e94046185e3e276.js"></script>

<h2 id="integration-testing-a-polly-retry-policy">Integration testing a Polly Retry Policy</h2>

<p>Using this same approach we can create our HttpClientInterception configuration, but this time use the <code>WithInterceptionCallback(Action&lt;HttpRequestMessage&gt; action)</code> method, which is executed after every matched request, to perform operations such as counting the number of retries made (this will include the initial request):</p>

<script src="https://gist.github.com/JosephWoodward/c45d88033dadbd341ece7250da136cca.js"></script>

<h2 id="conclusion">Conclusion</h2>

<p>Hopefully this and the previous post have demonstrated just how versatile HttpClientInterception can be, from stubbing simple requests from dependencies to integration testing Polly policies as shown above.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 30 Jul 2020 00:18:01 +0000</pubDate></item><item><title>Effectively stubbing remote HTTP service dependencies with HttpClient Interception in C#</title><link>https://josephwoodward.co.uk/2020/05/stubbing-remote-http-api-services-httpclient-interception</link><description></description><content:encoded><![CDATA[<p>Recently I&rsquo;ve been reading the new <a href="https://www.amazon.co.uk/dp/B0859PF5HB">Software Engineering at Google book</a> where I&rsquo;ve found the chapters on testing to be particularly interesting. One paragraph that especially resonated with my own personal experiences with testing was the following excerpt:</p>

<p><img src="https://pbs.twimg.com/media/EV9CZOFWoAA976h?format=jpg&amp;name=medium" alt="Excerpt from the Software Engineering at Google book" /></p>

<p>This chapter got me thinking a lot about my preference for testing units of behaviour as opposed to implementation details, and as someone that works with a lot of services that communicate via HTTP there&rsquo;s one particular library that&rsquo;s helped me time and time again, so much so that I feel it deserves its own post.</p>

<h2 id="introducing-httpclient-interception">Introducing HttpClient Interception</h2>

<p>From a high level, <a href="https://github.com/justeat/httpclient-interception">HttpClient Interception</a> is a .NET Standard library, written in C# that&rsquo;s designed to intercept server-side HTTP dependencies.</p>

<p>Prior to learning about it the two patterns I most frequently used in order to stub responses from an upstream API were either to create a custom <code>HttpMessageHandler</code> (which would return desired responses/assertions on a given input), or create an interface over an <code>HttpClient</code>. Both approaches tend to be painful (the former being difficult to mock due to lack of an interface and the latter somewhat the same).</p>

<p>HttpClient Interception, the brainchild of <a href="https://twitter.com/martin_costello">Martin Costello</a>, aims to alleviate these pains making <a href="https://en.wikipedia.org/wiki/Black-box_testing">Black Box testing</a> code that depends on one or more external HTTP APIs much less intrusive.</p>

<h2 id="let-s-take-a-look">Let&rsquo;s take a look</h2>

<p>HttpClient Interception offers a convenient builder pattern over returning a desired response for a given input. Let&rsquo;s take a look at what a simple test would look like this:</p>

<script src="https://gist.github.com/JosephWoodward/f865a11e995ae261c783a95787b6d2f2.js"></script>

<p>In this example we create an instance of <code>HttpRequestInterceptionBuilder</code> then specify the request we wish to match against and the response we&rsquo;d like to return. We then register the builder against our <code>HttpClientInterceptionOptions</code> type and use it to create an instance of an <code>HttpClient</code>.</p>

<p>Now any requests that match our builder&rsquo;s parameters will return the response defined (a <code>500</code> response code) without the request leaving the <code>HttpClient</code>.</p>

<h2 id="missing-registrations">Missing Registrations</h2>

<p>At this point it&rsquo;s worth calling out the <code>ThrowOnMissingRegistration</code> property.</p>

<p>Setting <code>ThrowOnMissingRegistration</code> to <code>true</code> ensures no request leaves an <code>HttpClient</code> created or modified by HttpClient Interception. This is helpful for ensuring you have visibility over unmatched requests, otherwise leaving <code>ThrowOnMissingRegistration</code> as false could mean your request gets handled by the real upstream service, causing a false positive or having a negative impact on your test without you being aware of it.</p>

<p>Naturally it all depends on your use case; in some instances letting unmatched requests through may actually be the desired behaviour, so some consideration is required.</p>

<h2 id="matching-multiple-requests">Matching multiple requests</h2>

<p>If you want to stub the response to more than one HTTP request you can register multiple builders like so:</p>

<script src="https://gist.github.com/JosephWoodward/7b7c1b751a8c570658b5afdad4d55709.js"></script>

<p>Here I&rsquo;ve registered two separate endpoints on the same <code>HttpClientInterceptorOptions</code> object, which is then used to create our <code>HttpClient</code>. Now a request to either of the two registered endpoints will result in either a Not Found (404) or an Internal Server Error (500) response.</p>

<h2 id="creating-an-httpmessagehandler">Creating an HttpMessageHandler</h2>

<p>Not all libraries will enable you to use a custom HttpClient; some may only offer you the ability to add a new <code>HttpMessageHandler</code>. Thankfully HttpClient Interception can also create an <code>HttpMessageHandler</code> that can be passed to the <code>HttpClient</code>.</p>

<p>For example:</p>

<script src="https://gist.github.com/JosephWoodward/f309039e18912f1b041d086b36257d20.js"></script>

<h2 id="using-httpclientinterception-with-asp-net-core-and-ihttpclientfactory">Using HttpClientInterception with ASP.NET Core and IHttpClientFactory.</h2>

<p>Now we&rsquo;ve seen what HttpClient Interception can offer, let&rsquo;s take a look at how we could use it to test an ASP.NET Core application that depends on an upstream API. This example is taken straight from the <a href="https://github.com/justeat/httpclient-interception/tree/master/samples">Sample application</a> within the HttpClient Interception repository.</p>

<p>Imagine we have an application that makes a remote call to GitHub’s API via Refit. This is what the controller action might look like:</p>

<script src="https://gist.github.com/JosephWoodward/4b2f4abdbabc58ca63dbc53f634b3df2.js"></script> 

<p>To test this we can utilise HttpClient Interception to stub the response from GitHub&rsquo;s API with the following fixture (based on the common <a href="https://docs.microsoft.com/en-us/aspnet/core/test/integration-tests">WebApplicationFactory approach documented here</a>).</p>

<p>Notice that when overriding <code>ConfigureWebHost</code> we&rsquo;re taking advantage of the <code>IHttpMessageHandlerBuilderFilter</code> interface to add an additional <code>HttpMessageHandler</code> (created by HttpClient Interception) to any <code>HttpClient</code> created via the IHttpClientFactory API.</p>

<p>It&rsquo;s worth noting that you can also register the <code>HttpMessageHandler</code> using the <code>ConfigurePrimaryHttpMessageHandler</code> (see <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.dependencyinjection.httpclientbuilderextensions.configureprimaryhttpmessagehandler?view=dotnet-plat-ext-3.1">the docs here</a>) method that lives on the <code>IHttpClientBuilder</code> interface.</p>

<script src="https://gist.github.com/JosephWoodward/558ef565ae664f60acd365948ea55a20.js"></script>

<p>Now we can easily control the responses from the GitHub API from within our tests, without having to mock any internal implementation details of our application.</p>

<script src="https://gist.github.com/JosephWoodward/171c89097d1f2c4b2cdca562b3451a10.js"></script>

<h2 id="on-closing">On Closing</h2>

<p>Hopefully this post has demonstrated just how helpful and versatile HttpClient Interception can be and how it can create maintainable tests that don&rsquo;t break the moment you change an implementation detail. I&rsquo;d highly recommend you check it out as this post only scratches the surface of what it can do.</p>

<p>I&rsquo;d like to close this post with a <a href="https://stackoverflow.com/questions/153234/how-deep-are-your-unit-tests/153565#153565">quote from Kent Beck</a>:</p>

<blockquote>
<p>&ldquo;I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence.&rdquo;</p>
</blockquote>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 21 May 2020 02:55:53 +0000</pubDate></item><item><title>Consider Chesterton&#39;s Fence Principle Before Changing That Code</title><link>https://josephwoodward.co.uk/2020/04/software-the-chestertons-fence-principle</link><description></description><content:encoded><![CDATA[<p>Let&rsquo;s begin with a short story.</p>

<p>The sun is shining and you&rsquo;re walking through a wood when you happen upon quiet, desolate road. As you walk along the road you encounter an rather peculiar placed fence running from one side of the road to the other. &ldquo;<em>This fence has no use</em>&rdquo; you mutter under your breath as you remove it.</p>

<p>Removing the fence could very well be the correct thing to do, nonetheless you have fallen victim to the <a href="https://www.chesterton.org/taking-a-fence-down/">principle of Chesterton&rsquo;s Fence</a>. Before removing something we should first seek to understand the reasoning and rationale for why it was there in the first place:</p>

<blockquote>
<p>&ldquo;<em>In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.</em>&rdquo;</p>
</blockquote>

<p>As software developers we love writing code. Coupled with our tendency to want to prove ourselves we are often quick to identify code that we hastily deem as &ldquo;bad&rdquo; or &ldquo;wrong&rdquo;, especially when diving into an old codebase.</p>

<p>What we sometimes forget is that software isn&rsquo;t designed and built in a bubble; it&rsquo;s a by-product of a complex series of interacting events and factors.</p>

<p>Instead of jumping to conclusions we should first seek to control our impulse to simplify the things around us and be more cognizant of possible context or history by trying to understand why the &lsquo;fence is positioned across the road&rsquo; in the first place. This is especially true for code that looks unusual or out of place. You may actually discover that the unusual approach has a valid reason for why it&rsquo;s designed the way it is.</p>

<p>Similarly, the <a href="https://www.skybrary.aero/index.php/Toolkit:Systems_Thinking_for_Safety/Principle_2._Local_Rationality">Local Rationality Principle from Systems Thinking</a>, when summarised, states:</p>

<blockquote>
<p><em>&ldquo;People do things that make sense to them given their goals, understanding of the situation and focus of attention at that time. Work needs to be understood from the local perspectives of those doing the work&rdquo;</em></p>
</blockquote>

<p>By going the extra mile you can gain a much deeper understanding of why the code or system you&rsquo;re working on is built the way it is, what led the developer to do something a particular way, or what conditions they were under that led them to choose a particular approach, because learning is not just about understanding new things, but also about understanding the design decisions and trade-offs of existing things.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 02 Apr 2020 00:54:10 +0000</pubDate></item><item><title>A couple of nice Tuple use cases</title><link>https://josephwoodward.co.uk/2020/03/couple-nice-tuple-use-cases</link><description></description><content:encoded><![CDATA[<p>C# has had a Tuple type (<code>System.Tuple</code>) for sometime now, however a re-occuring theme was it generally lacked the utility that tuples in languages with native support had. This partly due to the fact that it was really just a class utilising generics with no real language or runtime support, resulting in its adoption being rather limited.</p>

<p>Since C# 7 and the introduction of the <code>ValueTuple</code> type (freeing up some GC cycles) and <a href="https://docs.microsoft.com/en-us/dotnet/csharp/deconstruct#deconstructing-a-tuple">tuple deconstruction</a>, things seem to have changed and I&rsquo;ve noticed more and more libraries and developers utilising them, myself included.</p>

<p>With that in mind I&rsquo;d like to highlight a couple of creative use cases I&rsquo;ve seen people using that I&rsquo;d never considered before:</p>

<h2 id="tuple-constructors">Tuple constructors</h2>

<p>I first learned about this pattern at a recent <a href="https://twitter.com/jonskeet">Jon Skeet</a> talk I attended at the <a href="https://www.meetup.com/dotnetsouthwest/events/267685744/">.NET South West</a> meet up I co-organise. Jon was demoing some new C# 8 language features and stopped to talk about how he&rsquo;d adopted this particular pattern.</p>

<p>Given the following constructor we can make this a little shorter without losing much readability using tuples:</p>

<script src="https://gist.github.com/JosephWoodward/e8c583b575256f3fe0985d9d91e31cf0.js"></script>

<script src="https://gist.github.com/JosephWoodward/46e7aa26dce21e1760ea02133b5c5d92.js"></script>

<p>When we look at the lowered version of the code using a tool such as <a href="https://sharplab.io/">SharpLab</a>, we can see that the tupled version is lowered to exactly the same code as the original, which means there&rsquo;s no cost to using this pattern.</p>

<script src="https://gist.github.com/JosephWoodward/a674ffa9c438a311e5585f32aef4bf43.js"></script>

<h2 id="tuple-methods">Tuple Methods</h2>

<p>As you might expect, you can also use the same approach for methods. Again this incurs no additional cost:</p>

<script src="https://gist.github.com/JosephWoodward/be595f36ae31e81d81551cdfb97a44ec.js"></script>

<h2 id="tuple-deconstructor-loops">Tuple / Deconstructor loops</h2>

<p>Another pattern I stumbled upon is using tuples within a loop. For instance given the following dictionary of people, due to <code>KeyValuePair</code> now <a href="https://github.com/dotnet/runtime/blob/master/src/libraries/System.Private.CoreLib/src/System/Collections/Generic/KeyValuePair.cs#L75">featuring a Deconstruct method</a>, we can unpack the contents in a few ways.</p>

<p>Traditionally we&rsquo;d have done this:</p>

<script src="https://gist.github.com/JosephWoodward/4bdd7732e374bdde171d8749bafa0ab7.js"></script>

<p>With the cost efficiency of the <code>ValueTuple</code> type we can start to get creative:</p>

<script src="https://gist.github.com/JosephWoodward/da7d6ad1a64cf8b581efec84ee7214c7.js"></script>

<p>At this point we can go one further by adding a Deconstruct method in our <code>Person</code> class, allowing us to deconstruct the properties into a tuple:</p>

<script src="https://gist.github.com/JosephWoodward/b0afa3a37aacb11c02d52d59b55ac0e0.js"></script>

<h3 id="benchmarking-the-above-patterns">Benchmarking the above patterns</h3>

<p>For peace of mind, let&rsquo;s benchmark the aforementioned patterns with <a href="https://benchmarkdotnet.org/">BenchmarkDotNet</a> to see what impact it has:</p>

<script src="https://gist.github.com/JosephWoodward/44d816ffad5be1569fec03bff64cabfb.js"></script>

<pre><code>|                Method |     Mean |   Error |  StdDev |  Gen 0 | Gen 1 | Gen 2 | Allocated |
|---------------------- |---------:|--------:|--------:|-------:|------:|------:|----------:|
|                  List | 142.9 ns | 2.92 ns | 6.17 ns | 0.0439 |     - |     - |     184 B |
| DeconstructionToTuple | 138.8 ns | 2.21 ns | 2.07 ns | 0.0439 |     - |     - |     184 B |
</code></pre>

<p>As you can see they&rsquo;re pretty much equal in terms of cost, with the added benefit of a more expressive syntax.</p>

<h3 id="conclusion">Conclusion</h3>

<p>Hopefully the patterns highlighted above are as new to you as they were to me, and have sparked some more interest in the utility of the <code>ValueTuple</code> type, especially when used in conjunction with <code>KeyValuePair</code> and/or the <code>Deconstruct</code> method. I can only assume we&rsquo;ll see more patterns like this emerge in time as <a href="https://github.com/dotnet/csharplang/blob/master/proposals/records.md">Record types</a> make their way into C#.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 16 Mar 2020 00:38:08 +0000</pubDate></item><item><title>Chaos Engineering your .NET applications using Simmy</title><link>https://josephwoodward.co.uk/2020/01/chaos-engineering-your-dot-net-application-simmy</link><description></description><content:encoded><![CDATA[<p>One package I&rsquo;ve been using with great success recently is <a href="https://github.com/Polly-Contrib/Simmy">Simmy</a>, so much so that I feel it deserves its very own blog post.</p>

<h2 id="what-is-simmy-and-why-you-should-use-it">What is Simmy and why you should use it?</h2>

<p>Simmy is a fault-injection library that integrates with Polly, the popular .NET transient-fault-handling library. Its name comes from the <a href="https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116">Simian Army toolset</a>, a suite of tools created by engineers at Netflix who recognised that designing a fault tolerant architecture wasn&rsquo;t enough - you have to exercise it, normalising failure to ensure your system can handle it when it inevitably happens.</p>

<p>This idea isn&rsquo;t new. Experts in the resiliency engineering field such as Dr. Richard Cook have published multiple papers around this topic whilst studying high-risk sectors (such as traffic control and health care). He summarises failure quite nicely in his <a href="https://www.youtube.com/watch?v=3ZP98stDUf0">Working at the Center of the Cyclone talk</a> when he said:</p>

<blockquote>
<p>&ldquo;<em>You build things differently when you expect them to fail. Failure is normal, the failed state is the normal state</em>&rdquo;.</p>
</blockquote>

<p>Dr. Richard Cook (amongst others in the resiliency engineering industry) proposes that in a complex system, failure is the normal state and it&rsquo;s humans that build mechanisms (whether sociological, organisational or technical) to ensure systems continue to operate. Without human intervention, failure will happen.</p>

<p>Dr. Richard Cook&rsquo;s paper <a href="https://web.mit.edu/2.75/resources/random/How%20Complex%20Systems%20Fail.pdf">How Complex Systems Fail</a> goes further by saying:</p>

<blockquote>
<p>&ldquo;<em>The high consequences of failure lead over time to the construction of multiple layers of defence against failure. These defences include obvious technical components (e.g. backup systems, ‘safety’ features of equipment) and human components (e.g. training, knowledge) but also a variety of organisational, institutional, and regulatory defences (e.g. policies and procedures, certification, work rules, team training). The effect of these measures is to provide a series of shields that normally divert operations away from accidents.</em>&rdquo;</p>
</blockquote>

<p>This shift in perspective is one of the tenets of Chaos Engineering and why Netflix built the Simian Army - to regularly introduce failure in a safe, controlled manner, forcing their engineers to consider and handle those failures as part of regular, everyday work. So when those failures do happen, they won&rsquo;t even notice.</p>

<p>With that in mind let&rsquo;s take a look at how we can use Simmy to regularly test our transient fault-handling mechanisms such as timeouts, circuit breakers and graceful degradations.</p>

<h2 id="simmy-in-action">Simmy in Action</h2>

<p>If you&rsquo;re familiar with Polly then Simmy won&rsquo;t take long to pick up: Simmy&rsquo;s fault injection behaviours are Polly policies, so everything fits together nicely and instantly feels familiar.</p>

<p>Let&rsquo;s take a look at Simmy&rsquo;s current failure modes.</p>

<h3 id="simmy-s-failure-modes">Simmy&rsquo;s Failure Modes</h3>

<p>At the time of writing Simmy offers the following types of failure injection policies:</p>

<p><strong>Fault Policy</strong></p>

<p>A fault policy can inject exceptions, or substitute results. This gives you the ability to control the type of result that can be returned.</p>

<p>For instance, the following example causes the chaos policy to throw a <code>SocketException</code> with a probability of 5% when enabled.</p>

<pre><code class="language-csharp">var fault = new SocketException(errorCode: 10013);

var policy = MonkeyPolicy.InjectFault(
	fault, 
	injectionRate: 0.05, 
	enabled: () =&gt; isEnabled()  
	);
</code></pre>
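<p>Fault policies aren&rsquo;t limited to throwing exceptions - they can also substitute the result of a call. Here&rsquo;s a sketch using Simmy&rsquo;s result-injection API (<code>MonkeyPolicy.InjectResult</code>; exact signatures may differ between versions):</p>

<pre><code class="language-csharp">// Substitute a canned HTTP 500 response in 5% of executions
// instead of letting the real call return its result.
var fakeResult = new HttpResponseMessage(HttpStatusCode.InternalServerError);

var policy = MonkeyPolicy.InjectResult&lt;HttpResponseMessage&gt;(
    fakeResult,
    injectionRate: 0.05,
    enabled: () =&gt; isEnabled());
</code></pre>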

<p><strong>Latency Policy</strong></p>

<p>Like Netflix&rsquo;s Latency Monkey, a latency policy enables you to inject latency into executions such as remote calls before the calls are made.</p>

<pre><code class="language-csharp">var policy = MonkeyPolicy.InjectLatency(
	latency: TimeSpan.FromSeconds(5),
	injectionRate: 0.1,
	enabled: () =&gt; isEnabled()
	);
</code></pre>

<p><strong>Behaviour Policy</strong></p>

<p>Simmy&rsquo;s Behaviour Policy enables you to invoke any custom behaviour within your system (such as restarting a VM, or executing a custom call or script) before a call is placed.</p>

<p>For instance:</p>

<pre><code class="language-csharp">var policy = MonkeyPolicy.InjectBehaviour(
	behaviour: () =&gt; KillDockerContainer(), 
	injectionRate: 0.05,
	enabled: () =&gt; isEnabled()
	);
</code></pre>

<h3 id="a-trivial-example">A trivial example</h3>

<p>Once we&rsquo;ve defined our chaos policy we can use the standard Polly APIs to wrap our existing Polly policies.</p>

<pre><code class="language-csharp">var latencyPolicy = MonkeyPolicy.InjectLatency(
	latency: TimeSpan.FromSeconds(5),
	injectionRate: 0.5,
	enabled: () =&gt; isEnabled()
	);

...

PolicyWrap policies = Policy.WrapAsync(timeoutPolicy, latencyPolicy);
</code></pre>

<p>In the example above we introduce 5 seconds of latency into 50% of our calls which, depending on our timeout policy, will force timeouts within our application, making us more aware of how our application handles them.</p>

<p>Registering the Simmy policy as the innermost policy means it&rsquo;ll be invoked just before the outbound call.</p>
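<p>To make that ordering concrete, here&rsquo;s a sketch pairing a timeout policy with the latency policy above (the timeout value and HTTP call are illustrative, and I&rsquo;m assuming Simmy&rsquo;s async variant <code>InjectLatencyAsync</code>):</p>

<pre><code class="language-csharp">var timeoutPolicy = Policy.TimeoutAsync(TimeSpan.FromSeconds(2));

var latencyPolicy = MonkeyPolicy.InjectLatencyAsync(
    latency: TimeSpan.FromSeconds(5),
    injectionRate: 0.5,
    enabled: () =&gt; isEnabled());

// Policies are supplied outermost first, so the chaos policy is the
// innermost layer and its injected delay is subject to the timeout.
var policies = Policy.WrapAsync(timeoutPolicy, latencyPolicy);

var response = await policies.ExecuteAsync(
    () =&gt; httpClient.GetAsync(&quot;/orders&quot;));
</code></pre>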

<h3 id="using-polly-registries">Using Polly registries</h3>

<p>When implementing Simmy&rsquo;s chaos policies you can use the aforementioned <code>WrapAsync</code> method, but you&rsquo;ll probably want to introduce them with as little change to your existing policy code as possible. This is why I&rsquo;d recommend using Polly&rsquo;s <code>PolicyRegistry</code> type: you can configure your policies as normal, then easily wrap them with your Simmy policies:</p>

<pre><code class="language-csharp">// Startup.cs

var policyRegistry = services.AddPolicyRegistry();
policyRegistry[&quot;LatencyPolicy&quot;] = GetLatencyPolicy();

...

if (env.IsDevelopment())
{
    // Wrap every policy in the policy registry in Simmy chaos injectors.
    var registry = app.ApplicationServices.GetRequiredService&lt;IPolicyRegistry&lt;string&gt;&gt;();
    registry?.AddChaosInjectors();
}
</code></pre>
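<p>It&rsquo;s worth noting that <code>AddChaosInjectors</code> isn&rsquo;t part of Simmy itself - it&rsquo;s an extension method you write yourself to decorate your registered policies with chaos policies. A minimal sketch of what such an extension could look like (the policy key, latency and injection rate here are illustrative):</p>

<pre><code class="language-csharp">public static class ChaosRegistryExtensions
{
    public static IPolicyRegistry&lt;string&gt; AddChaosInjectors(
        this IPolicyRegistry&lt;string&gt; registry)
    {
        // Fetch the policy configured in Startup and re-register it
        // wrapped around a latency-injecting chaos policy.
        var existing = registry.Get&lt;IAsyncPolicy&gt;(&quot;LatencyPolicy&quot;);

        var chaos = MonkeyPolicy.InjectLatencyAsync(
            latency: TimeSpan.FromSeconds(5),
            injectionRate: 0.1,
            enabled: () =&gt; true);

        registry[&quot;LatencyPolicy&quot;] = existing.WrapAsync(chaos);
        return registry;
    }
}
</code></pre>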

<h2 id="testing-failure-modes-within-integration-or-end-to-end-style-tests">Testing failure modes within integration or end-to-end style tests</h2>

<p>Prior to learning about Simmy I used to test Polly policies via unit tests, and in a lot of cases this is fine. However, when you want to exercise these policies as part of an integration or end-to-end style test - to understand how your application handles slow-running requests or timeouts - things can get a little trickier, as those conditions can be hard to replicate under automated tests.</p>

<p>Using Simmy&rsquo;s Chaos policies we&rsquo;re now able to invoke those behaviours from the outside which is where Simmy really shines.</p>

<h2 id="taking-it-further">Taking it further</h2>

<p>Regularly and reliably testing some of these fault-tolerance patterns in a real world environment (such as staging, or dare I say it - Production!) can be challenging. If you&rsquo;re on a platform such as Kubernetes then introducing failure into your system can be trivial, as there are plenty of tools that can do this for you. For anyone else this can be a challenge. Simmy has opened the door to making this so much easier, so much so that at one point we had it enabled in our staging environment to introduce a healthy amount of failure into our application.</p>

<p>As <a href="https://twitter.com/adrianco">Adrian Cockcroft</a> said in his <a href="https://qconsf.com/system/files/presentation-slides/qconsf2019-adrian-cockcroft-managing-failure-modes-in-microservice-architectures.pdf">Managing Failure Modes in Microservice
Architectures talk</a>:</p>

<blockquote>
<p>&ldquo;<em>If we change the name from chaos engineering to continuous resilience, will you let us do it all the time in production?</em>&rdquo;</p>
</blockquote>

<h3 id="wrapping-it-up">Wrapping it up</h3>

<p>Hopefully this post has demonstrated the value of Simmy. Chaos Engineering is a really interesting area and it&rsquo;s great to see tools like this appearing to support running experiments in your .NET applications. Hopefully over the coming years we&rsquo;ll start to see more tooling emerge, enabling us to introduce failure scenarios into our applications as part of regular work.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 03 Jan 2020 23:07:53 +0000</pubDate></item><item><title>Managing your .NET Core SDK versions with the .NET Install SDK Global Tool</title><link>https://josephwoodward.co.uk/2019/09/dotnet-core-install-sdk-global-tool</link><description></description><content:encoded><![CDATA[<p>During my day to day programming activities I regularly find myself switching between various .NET Core solutions. At work we have a multitude of .NET Core microservices/microsites and at home I enjoy contributing to OSS projects/reading other people&rsquo;s code. As I&rsquo;m switching between these solutions I regularly come across a project that uses a <code>global.json</code> file to define the version of the .NET Core SDK required.</p>

<p><strong>What is the global.json file?</strong></p>

<p>For those that might not be familiar with the global.json file, here&rsquo;s an excerpt from the docs:</p>

<blockquote>
<p>The global.json file allows you to define which .NET Core SDK version is used when you run .NET Core CLI commands. Selecting the .NET Core SDK is independent from specifying the runtime your project targets. The .NET Core SDK version indicates which versions of the .NET Core CLI tools are used. In general, you want to use the latest version of the tools, so no global.json file is needed.</p>
</blockquote>

<p>You can <a href="https://docs.microsoft.com/en-us/dotnet/core/tools/global-json">read more about it here</a>, so let&rsquo;s continue.</p>

<p>Upon stumbling on a project with a global.json file I&rsquo;d go through the manual process of locating the correct version of the SDK to download then installing it. After a number of times doing this, as most developers would, I decided to remove the friction by creating a .NET Core global tool to automate this process.</p>

<p>The result is the <a href="https://github.com/JosephWoodward/InstallSdkGlobalTool/">.NET Core Install SDK global tool</a>. You can also find it on <a href="https://www.nuget.org/packages/InstallSdkGlobalTool/">NuGet here</a>.</p>

<p><strong>Note</strong>: Before I continue, a huge thanks to <a href="https://github.com/slang25">Stuart Lang</a> who was running into the same frustrations, noticed I&rsquo;d started this tool and contributed a tonne towards it.</p>

<h2 id="net-core-install-sdk-global-tool">.NET Core Install SDK Global Tool</h2>

<p>If you want to give it a try you can install the global tool on your machine by running the following command (assuming you&rsquo;ve got .NET Core installed).</p>

<pre><code class="language-bash">$ dotnet tool install -g installsdkglobaltool
</code></pre>

<p>Once installed the global tool has the following features:</p>

<h3 id="install-net-core-sdk-based-on-global-json-file">Install .NET Core SDK based on global.json file</h3>

<p>If you navigate to a folder with a <code>global.json</code> file in it and run the following command:</p>

<pre><code class="language-bash">$ dotnet install-sdk
</code></pre>

<p>The global tool will check the contents of the <code>global.json</code> file, download the defined version of the .NET Core SDK and then start its installation.</p>
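<p>For reference, a typical <code>global.json</code> the tool would act on looks like this (the version number is purely illustrative):</p>

<pre><code class="language-json">{
  &quot;sdk&quot;: {
    &quot;version&quot;: &quot;2.2.401&quot;
  }
}
</code></pre>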

<h3 id="install-the-latest-preview-of-the-net-core-sdk">Install the latest preview of the .NET Core SDK</h3>

<p>It&rsquo;s always fun playing with the latest preview releases of the .NET Core SDK so to save you time finding the latest version you can simply run:</p>

<pre><code class="language-bash">$ dotnet install-sdk --latest-preview
</code></pre>

<p>This will download and start the installation of the latest preview version of the .NET Core SDK.</p>

<h2 id="is-this-suitable-for-build-ci-environments">Is this suitable for build/CI environments?</h2>

<p>No, certainly not at this moment in time. This global tool has been built with a consumer focus, so it does not install the SDK in a headless fashion. Instead it launches the installer and still gives you the same control you&rsquo;re used to (such as choosing the install location).</p>

<p>If you&rsquo;re interested in what the whole experience looks like then check out the video below:</p>

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/JcgiAOGaaa4" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

<p>Until next time!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 03 Sep 2019 17:16:52 +0000</pubDate></item><item><title>Approval Testing your Open API/Swagger Documents</title><link>https://josephwoodward.co.uk/2019/08/approval-testing-open-api-swagger-documents</link><description></description><content:encoded><![CDATA[<p>The team I work in at Just Eat have a number of HTTP based APIs which are consumed by other components, some internally, others externally. Like a lot of people building HTTP based APIs we use Swagger for both developer experimentation and documentation of the endpoints exposed. This is done via the Swagger UI, which uses an Open API document (formerly known as a Swagger document) that describes the endpoints and resources in JSON form.</p>

<p>As our APIs evolve it&rsquo;s imperative that we don&rsquo;t unintentionally break any of these public contracts as this would cause headaches for consumers of our service.</p>

<p>What we need is visibility of intentional or unintentional changes to this contract, this is where approval testing has helped.</p>

<h2 id="what-is-approval-testing">What is approval testing?</h2>

<p>Approval testing is a method of testing that not many are familiar with. If I were to hypothesise why this is, I&rsquo;d say it&rsquo;s because of its narrow application, especially when contrasted with other more common forms of testing.</p>

<h3 id="assertion-testing">Assertion testing</h3>

<p>We&rsquo;re all familiar with assertion based testing - you order your test in an arrange, act, assert flow where you first &lsquo;arrange&rsquo; the test, then execute the action (act) then &lsquo;assert&rsquo; the output.</p>

<p>For instance:</p>

<pre><code class="language-csharp">// Arrange
var child1 = 13;
var child2 = 22;

// Act
var age = calculateAge(child1, child2);

// Assert
age.ShouldBe(35);
</code></pre>

<h3 id="approval-testing">Approval testing</h3>

<p>Approval testing follows the same pattern but with one difference: instead of asserting the expected return value for a given set of input parameters, you &lsquo;approve&rsquo; the output.</p>

<p>This slight shift changes your perspective on what a failing test means. In other words, with Approval Testing the failed test doesn&rsquo;t prove something has broken but instead flags that the given output differs from the previous approved output and needs approving.</p>

<p>Actions speak louder than words so let&rsquo;s take a look at how we can apply this method of testing to our Open API documents in order to gain visibility of changes to the contract.</p>

<h2 id="test-setup">Test Setup</h2>

<p>In this example I&rsquo;ll use a simple .NET Core based API that has Swagger set up with Swagger UI. The test will use the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.Testing">Microsoft.AspNetCore.Mvc.Testing</a> package for running our API in-memory. If you&rsquo;re not familiar with testing this way and want to try this yourself, be sure to check out the <a href="https://docs.microsoft.com/en-us/aspnet/core/test/integration-tests?view=aspnetcore-2.2">docs</a>.</p>

<p>First let&rsquo;s take a look at our application:</p>

<h3 id="our-asp-net-core-application">Our ASP.NET Core Application</h3>

<pre><code class="language-csharp">// ProductController.cs 

[Route(&quot;api/[controller]&quot;)]
[ApiController]
public class ProductController : ControllerBase
{
    [HttpGet]
    public ActionResult&lt;IEnumerable&lt;ProductViewModel&gt;&gt; Get()
    {
        return new List&lt;ProductViewModel&gt;
        {
            new ProductViewModel {Id = 1, Name = &quot;Product 1&quot;},
            new ProductViewModel {Id = 2, Name = &quot;Product 2&quot;},
            new ProductViewModel {Id = 3, Name = &quot;Product 3&quot;}
        };
    }
}
</code></pre>

<pre><code class="language-csharp">// Program.cs

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }
    
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =&gt;
        WebHost.CreateDefaultBuilder(args)
        .UseStartup&lt;Startup&gt;();
}

</code></pre>

<pre><code class="language-csharp">// ProductViewModel.cs

public class ProductViewModel
{
    public string Name { get; set; }

    public int Id { get; set; }

    public string Description { get; set; }
}
</code></pre>

<pre><code class="language-csharp">// Startup.cs

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddSwaggerGen(c =&gt;
        {
            c.SwaggerDoc(&quot;v1&quot;, new OpenApiInfo { Title = &quot;My API&quot;, Version = &quot;v1&quot; });
        });
        
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        ...
        
        app.UseSwagger();
        app.UseSwaggerUI(c =&gt;
        {
            c.SwaggerEndpoint(&quot;/swagger/v1/swagger.json&quot;, &quot;My API V1&quot;);
        });
    }
}
</code></pre>

<p>If we were to launch our API and go to <code>/swagger/v1/swagger.json</code> we should see a JSON document that includes information about the resources our API exposes.</p>
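<p>To give a feel for its shape, a heavily trimmed, illustrative excerpt of that document for the <code>ProductController</code> above might look something like this (the exact output depends on your Swashbuckle version):</p>

<pre><code class="language-json">{
  &quot;openapi&quot;: &quot;3.0.1&quot;,
  &quot;info&quot;: { &quot;title&quot;: &quot;My API&quot;, &quot;version&quot;: &quot;v1&quot; },
  &quot;paths&quot;: {
    &quot;/api/Product&quot;: {
      &quot;get&quot;: {
        &quot;responses&quot;: {
          &quot;200&quot;: { &quot;description&quot;: &quot;Success&quot; }
        }
      }
    }
  }
}
</code></pre>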

<p>Now we have a working API, let&rsquo;s look at our test setup.</p>

<h2 id="our-test-setup">Our Test Setup</h2>

<p>As mentioned above, our test will use the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.Testing">Microsoft.AspNetCore.Mvc.Testing</a> package for running our API in-memory. Why do we need to run it in-memory? Hold that thought and read on.</p>

<p>We&rsquo;ll use a simple xunit fixture class that uses <code>WebApplicationFactory&lt;T&gt;</code> to run our API in memory.</p>

<pre><code class="language-csharp">// ApiFixture.cs

public class ApiFixture
{
    private readonly WebApplicationFactory&lt;Startup&gt; _fixture;

    public ApiFixture()
    {
        _fixture = new WebApplicationFactory&lt;Startup&gt;();
    }

    public HttpClient Create()
    {
        return _fixture.CreateClient();
    }
}
</code></pre>

<p>Now that we&rsquo;ve got our API and infrastructure set up, this is where the magic happens. Below is a simple test that will perform the following:</p>

<ol>
<li>Run our ASP.NET Core API in memory and allow the tests to call it via an HttpClient.</li>
<li>Make a GET request to the Open API document (Swagger docs) and parse the content into a <code>string</code>.</li>
<li>Call <code>.ShouldMatchApproved()</code> on the string content.</li>
</ol>

<p>What is <code>ShouldMatchApproved()</code>? Read on&hellip;</p>

<pre><code class="language-csharp">// OpenApiDocumentTests.cs

public class OpenApiDocumentTests : IClassFixture&lt;ApiFixture&gt;
{
    private readonly ApiFixture _apiFixture;

    public OpenApiDocumentTests(ApiFixture apiFixture)
    {
        _apiFixture = apiFixture;
    }
    
    [Fact]
    public async Task DocumentHasChanged()
    {
        // Arrange
        var client = _apiFixture.Create();

        // Act
        var response = await client.GetAsync(&quot;/swagger/v1/swagger.json&quot;);
        var content = await response.Content.ReadAsStringAsync();
        
        // Assert
        content.ShouldMatchApproved();
    }
}
</code></pre>

<h2 id="shouldly-and-shouldmatchapproved">Shouldly and <code>.ShouldMatchApproved()</code></h2>

<p><a href="https://github.com/shouldly/shouldly">Shouldly is an open-source assertion testing framework</a> that I frequently use (and as a result, contribute to) that simplifies test assertions.</p>

<p>One assertion method Shouldly provides that sets it apart from others is the <code>.ShouldMatchApproved()</code> extension which hangs off of .NET&rsquo;s <code>string</code> type. This extension method enables us to easily apply approval based testing to any string. There are other libraries such as <a href="https://github.com/approvals/ApprovalTests.Net">ApprovalTests.Net</a> that support more complex use cases, but for this example Shouldly will suffice.</p>

<h3 id="how-does-shouldmatchapproved-work">How does <code>.ShouldMatchApproved()</code> work?</h3>

<p>Upon executing your test <code>.ShouldMatchApproved()</code> performs the following steps:</p>

<ol>
<li><p>First it checks to see if an approved text file lives in the expected location on disk.</p></li>

<li><p>If the aforementioned file exists Shouldly will diff the contents of the file (containing the expected) against the input (the actual) of the test.</p></li>

<li><p>If <code>ShouldMatchApproved</code> detects a difference between the expected and actual values then it will scan common directories for a supported diff tool (such as Beyond Compare, Win Merge etc, or even VS Code).</p></li>

<li><p>If it finds a supported diff tool it will automatically launch it, prompting you to <em>approve</em> the diffs and save the approved copy to the location described in Step 1.</p></li>

<li><p>Once approved and saved you can rerun the test and it will pass.</p></li>
</ol>

<p>Any changes to your API from this point on will result in a test pass, providing no difference is detected between your test case and the approval file stored locally. If a change is detected then the above process starts again.</p>
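<p>By default the approval file is created alongside the test class, named after the test (roughly <code>OpenApiDocumentTests.DocumentHasChanged.approved.txt</code> in our case). The location and diffing behaviour can be customised via the configuration builder overload - for example (the method names below are from Shouldly&rsquo;s configuration builder and are worth verifying against the version you&rsquo;re using):</p>

<pre><code class="language-csharp">content.ShouldMatchApproved(c =&gt; c
    .SubFolder(&quot;Approvals&quot;)            // keep .approved.txt files together
    .NoDiff()                          // skip launching a diff tool (handy on CI)
    .WithDiscriminator(&quot;SwaggerV1&quot;));  // distinguish multiple approvals per test
</code></pre>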

<p>One of the powerful aspects of this method of testing is that the approval file is committed to source control, meaning those diffs to the contract are visible to anyone reviewing your changes, whilst also keeping a history of changes to the public contract.</p>

<h3 id="demo">Demo</h3>

<p>Alongside <a href="https://github.com/JosephWoodward/OpenApiApprovalTesting">the source code to the demonstration in this post and the below video</a>, I&rsquo;ve published a quick demonstration on YouTube which you can take a look at. The video starts by first creating our approved document; I then go on to make a change to a model exposed via the Open API document, approve that change, then rerun the test again.</p>

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/2ZkeEnDuK6E" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 28 Aug 2019 14:45:15 +0000</pubDate></item><item><title>You can stop mocking ILogger</title><link>https://josephwoodward.co.uk/2019/06/you-can-stop-mocking-ilogger</link><description></description><content:encoded><![CDATA[<p>Just a quick post, nothing technically challenging but hopefully valuable to some none the less.</p>

<p>It all started a couple of days ago when I found myself browsing through Microsoft&rsquo;s Logging Abstractions library source code. For those that aren&rsquo;t familiar with this library, the aim of it (as the name suggests) is to provide a set of abstractions over the top of common APIs that have emerged around logging over time (take <code>ILogger</code> for instance).</p>

<p>As I was reading the source code I noticed the library contains an implementation of the common <code>ILogger</code> interface called <code>NullLogger</code>. <em>&ldquo;Great!&rdquo;</em> I thought, <em>&ldquo;I really don&rsquo;t like mocking and now I don&rsquo;t need to mock ILogger anymore!&rdquo;</em></p>

<p>Initially I thought this was old news and I&rsquo;d somehow missed the memo, so I swiftly moved on. But <a href="https://twitter.com/joe_mighty/status/1128260978502787073">given the number of likes and retweets my tweet about the discovery</a> received, I thought it would be valuable to write a short post as a nod to its existence for anyone who hasn&rsquo;t seen it yet.</p>

<h2 id="logger-instance">Logger.Instance</h2>

<p>As you can see, <a href="https://github.com/aspnet/Extensions/blob/master/src/Logging/Logging.Abstractions/src/NullLogger.cs">the implementation of the class</a> is very straightforward - it does nothing at all&hellip;</p>

<pre><code class="language-csharp">    public class NullLogger : ILogger
    {
        public static NullLogger Instance { get; } = new NullLogger();

        private NullLogger()
        {
        }

        public IDisposable BeginScope&lt;TState&gt;(TState state)
        {
            return NullScope.Instance;
        }

        public bool IsEnabled(LogLevel logLevel)
        {
            return false;
        }

        public void Log&lt;TState&gt;(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func&lt;TState, Exception, string&gt; formatter)
        {
        }
    }
</code></pre>

<p>However it does mean that if the project I&rsquo;m working on depends on this library (which most projects on .NET Core will be doing) I can start to use instances of <code>NullLogger</code> instead of mocking <code>ILogger</code>.</p>

<p>It doesn&rsquo;t stop at <code>ILogger</code> either; there are also implementations of <code>ILogger&lt;T&gt;</code> in there (unsurprisingly called <code>NullLogger&lt;T&gt;</code>) and <code>NullLoggerFactory</code>.</p>
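<p>To illustrate, here&rsquo;s what a test might look like using <code>NullLogger&lt;T&gt;</code> in place of a mock (the <code>OrderService</code> class is purely illustrative):</p>

<pre><code class="language-csharp">using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;

public class OrderService
{
    private readonly ILogger&lt;OrderService&gt; _logger;

    public OrderService(ILogger&lt;OrderService&gt; logger) =&gt; _logger = logger;

    public bool Process()
    {
        _logger.LogInformation(&quot;Processing order&quot;);
        return true;
    }
}

public class OrderServiceTests
{
    [Fact]
    public void ProcessesOrder()
    {
        // No mocking framework needed - the null logger silently discards logs.
        var service = new OrderService(NullLogger&lt;OrderService&gt;.Instance);

        Assert.True(service.Process());
    }
}
</code></pre>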

<p>Great, thanks Microsoft!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sat, 01 Jun 2019 09:55:30 +0000</pubDate></item><item><title>Subcutaneous Testing in ASP.NET Core</title><link>https://josephwoodward.co.uk/2019/03/subcutaneous-testing-asp-net-core</link><description></description><content:encoded><![CDATA[<p>Having successfully applied Subcutaneous testing to a number of projects I was interested to see what options were available in ASP.NET Core. In this post we&rsquo;ll touch on what Subcutaneous testing is, its trade offs and how we can perform such tests in ASP.NET Core.</p>

<h2 id="what-is-subcutaneous-testing">What is Subcutaneous Testing?</h2>

<p>I first learnt about Subcutaneous testing from Matt Davies and Rob Moore&rsquo;s excellent <a href="https://www.youtube.com/watch?v=pls1Vk_bw_Y">Microtesting - How We Set Fire To The Testing Pyramid While Ensuring Confidence</a> talk at NDC Sydney. The talk introduces its viewers to various testing techniques and libraries you can use to ensure speed and confidence in your testing strategy whilst avoiding the crippling test brittleness that often ensues.</p>

<p>The term subcutaneous means &ldquo;<em>situated or applied under the skin</em>&rdquo;, which translated to an application would mean just under the UI layer.</p>

<p>Why would you want to test under the UI layer?</p>

<p>As <a href="https://martinfowler.com/bliki/SubcutaneousTest.html">Martin Fowler highlights in his post on Subcutaneous testing</a>, such a means of testing is especially useful when trying to perform functional tests where you want to exercise end-to-end behaviour whilst avoiding some of the difficulties associated with testing via the UI itself.</p>

<p>Martin goes on to say (emphasis is my own, which we&rsquo;ll touch on later):</p>

<blockquote>
<p>Subcutaneous testing can avoid difficulties with hard-to-test presentation technologies and usually is much faster than testing through the UI. <strong>The big danger is that, unless you are a firm follower of keeping all useful logic out of your UI, subcutaneous testing will leave important behaviour out of its test</strong>.</p>
</blockquote>

<p>Let&rsquo;s take a look at an example&hellip;</p>

<h2 id="subcutaneous-testing-an-asp-net-core-application">Subcutaneous testing an ASP.NET Core application</h2>

<p>In previous non-core versions of ASP.NET MVC I used to use a tool called <a href="http://fluentmvctesting.teststack.net/">FluentMvcTesting</a>, however with the advent of .NET Core I was keen to see what options were available to create subcutaneous tests whilst leaning on some of the primitives that exist in bootstrapping one&rsquo;s application.</p>

<p>This investigation ultimately led me to the solution we&rsquo;ll discuss shortly via the new <code>.AddControllersAsServices()</code> extension method that can be called via the <code>IMvcBuilder</code> interface.</p>

<p>It&rsquo;s always nice to understand what these APIs are doing so let&rsquo;s take a moment to look under the covers of the <code>AddControllersAsServices</code> method:</p>

<pre><code class="language-csharp">public static IMvcBuilder AddControllersAsServices(this IMvcBuilder builder)
{
    var feature = new ControllerFeature();
    builder.PartManager.PopulateFeature(feature);
    foreach (Type type in feature.Controllers.Select(c =&gt; c.AsType()))
    {
        builder.Services.TryAddTransient(type, type);
    }
    
    builder.Services.Replace(
        ServiceDescriptor.Transient&lt;IControllerActivator, ServiceBasedControllerActivator&gt;());
    
    return builder;
}
</code></pre>

<p>Looking at the source code it appears that the <code>AddControllersAsServices()</code> method populates the <code>ControllerFeature.Controllers</code> collection with a list of controllers via the <code>ControllerFeatureProvider</code> (which is indirectly invoked via the <code>PopulateFeature</code> call). The <code>ControllerFeatureProvider</code> then loops through all the &ldquo;parts&rdquo; (classes within your solution) looking for classes that are controllers. It does this, among a few other things such as checking to see if the type is public, by looking for anything ending with the string &ldquo;Controller&rdquo;.</p>

<p>Once the controllers in your application are added to the collection within the <code>ControllerFeature.Controllers</code> collection, they&rsquo;re then registered as transient services within .NET Core&rsquo;s IOC container (<code>IServiceCollection</code>).</p>

<p>What does this mean for us and Subcutaneous testing? Ultimately it means we can resolve our chosen controller from the IOC container, and in doing so it will resolve any of the controller&rsquo;s dependencies that are also registered in the container, such as services, repositories etc.</p>

<h3 id="putting-it-together">Putting it together</h3>

<p>First we&rsquo;ll need to call the <code>AddControllersAsServices()</code> extension method as part of the <code>AddMvc</code> method chains in <code>Startup.cs</code>:</p>

<pre><code class="language-csharp">// Startup.cs
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        services.AddMvc().AddNewtonsoftJson().AddControllersAsServices();
    }
}
</code></pre>

<p>Alternatively if you&rsquo;ve no reason to resolve controllers directly from your IOC container other than for testing you might prefer to configure it as part of your test infrastructure. A common pattern to do this is to move the MVC builder related method calls into a virtual method so we can override it and call the base method like so:</p>

<pre><code class="language-csharp">// Startup.cs
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        ConfigureMvcBuilder(services);
    }

    public virtual IMvcBuilder ConfigureMvcBuilder(IServiceCollection services)
        =&gt; services.AddMvc().AddNewtonsoftJson();
}
</code></pre>

<p>Now all that&rsquo;s left to do is create a derived version of <code>Startup.cs</code> which we&rsquo;ll call <code>TestStartup</code> then call <code>AddControllersAsServices()</code> after setting up MVC.</p>

<pre><code class="language-csharp">// TestStartup.cs
public class TestStartup : Startup
{
    public TestStartup(IConfiguration configuration) : base(configuration)
    {
    }

    public override IMvcBuilder ConfigureMvcBuilder(IServiceCollection serviceCollection)
    {
        var services = base.ConfigureMvcBuilder(serviceCollection);
        return services.AddControllersAsServices();
    }
}
</code></pre>

<p>We can now start to resolve controllers from the container using the <code>.Host.Services.GetService&lt;T&gt;()</code> method. Putting it in a simple test fixture would look like this:</p>

<pre><code class="language-csharp">// SubcutaneousTestFixture.cs
public class SubcutaneousTestFixture
{
    private TestServer _server;

    public T ResolveController&lt;T&gt;() where T : Controller
        =&gt; _server.Host.Services.GetService&lt;T&gt;();

    public SubcutaneousTestFixture Run&lt;T&gt;() where T : Startup
    {
        var webHostBuilder = WebHost.CreateDefaultBuilder();
        webHostBuilder.UseStartup&lt;T&gt;();
        webHostBuilder.UseContentRoot(Directory.GetCurrentDirectory());

        _server = new TestServer(webHostBuilder);

        return this;
    }
}
</code></pre>

<p>Which can be invoked like so within our test:</p>

<pre><code class="language-csharp">// NewsDeletionTests.cs
public class NewsDeletionTests : IClassFixture&lt;SubcutaneousTestFixture&gt;
{
    private readonly SubcutaneousTestFixture _fixture;

    public NewsDeletionTests(SubcutaneousTestFixture fixture)
    {
        _fixture = fixture.Run&lt;TestStartup&gt;();
    }

    ...
}
</code></pre>

<p>Now if we want to test our controller actions (using a trivial example here) we can do so without going through much of the effort associated with end to end testing, such as using Selenium:</p>

<pre><code class="language-csharp">//DeleteNewsController.cs
[HttpPost]
public async Task&lt;IActionResult&gt; Delete(DeleteNews.Command command)
{
    try
    {
        await _mediator.Send(command);
    }
    catch (ValidationException e)
    {
        return View(new ManageNewsViewModel
        {
            Errors = e.Errors.ToList()
        });
    }

    return RedirectToRoute(RouteNames.NewsManage);
}
</code></pre>

<pre><code class="language-csharp">// DeleteNewsTests.cs
[Fact]
public async Task UserIsRedirectedAfterSuccessfulDeletion()
{
    // Arrange
    var controller = _fixture.ResolveController&lt;DeleteNewsController&gt;();

    // Act
    var result = (RedirectToRouteResult) await controller.Delete(new DeleteNews.Command { Id = 1 });

    // Assert
    result.RouteName.ShouldBe(RouteNames.NewsManage);
}

[Fact]
public async Task ErrorsAreReturnedToUser()
{
    // Arrange
    var controller = _fixture.ResolveController&lt;DeleteNewsController&gt;();

    // Act
    var result = (ViewResult) await controller.Delete(new DeleteNews.Command { Id = 1 });
    var viewModel = result.Model as ManageNewsViewModel;

    // Assert
    viewModel?.Errors.Count.ShouldBe(3);
}
</code></pre>

<p>At this point we&rsquo;ve managed to successfully exercise end-to-end behaviour in our application whilst avoiding the difficulties often associated with hard-to-test UI technologies. At the same time we&rsquo;ve successfully avoided mocking out dependencies, so our tests won&rsquo;t start breaking when we modify implementation details.</p>

<h3 id="there-are-no-silver-bullets">There are no silver bullets</h3>

<p>Testing is an imperfect science and there&rsquo;s rarely a one-size-fits-all solution. As Fred Brooks put it, &ldquo;<em>there are no silver bullets</em>&rdquo;, and this approach is no exception - it has its limitations which, depending on how you architect your application, could affect its viability. Let&rsquo;s see what they are:</p>

<ul>
<li><p>No HTTP requests<br>
As you may have noticed, there&rsquo;s no HTTP request, which means anything that relies on <code>HttpContext</code> won&rsquo;t be populated.</p></li>

<li><p>Request filters<br>
No HTTP request also means any global filters you may have will not be invoked.</p></li>

<li><p><code>ModelState</code> is not set<br>
As we&rsquo;re not generating an HTTP request, you&rsquo;ll notice <code>ModelState</code> validation is not performed and would require mocking. This may or may not be a problem for you. Personally, as a fan of <a href="https://github.com/jbogard/MediatR">MediatR</a> or <a href="https://github.com/shaynevanasperen/Magneto">Magneto</a> coupled with FluentValidation, my validation gets pushed down into my domain layer. Doing this also means I don&rsquo;t have to lean on validating my input models using yucky attributes.</p></li>
</ul>

<h3 id="conclusion">Conclusion</h3>

<p>Hopefully this post has given you a brief insight into what Subcutaneous Testing is, the trade-offs one has to make and an approach you could use to test your applications in a similar manner. There are a few approaches out there, but on the occasions where I wish to test a behaviour inside a controller this one does the trick.</p>

<p>Ultimately Subcutaneous Testing will enable you to test large parts of your application, but on its own it may still leave you short of the confidence you&rsquo;d need to push code into production; this is where you can fill in the gaps with tests that exercise the UI.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 14 Mar 2019 21:41:17 +0000</pubDate></item><item><title>The myth of the right tool for the job</title><link>https://josephwoodward.co.uk/2019/01/myth-of-right-tool-for-the-job</link><description></description><content:encoded><![CDATA[<p>The phrase &ldquo;<em>the right tool for the job</em>&rdquo; is one we&rsquo;ve all heard in software development and we&rsquo;ve all most likely said it at some point. However, when you stop and think about what the phrase actually means you begin to realise it&rsquo;s quite a problematic one: it makes too many assumptions. One could go as far as to say it has the potential to be quite a detrimental way to justify the use of one tool over alternatives.</p>

<p>This post takes a look at what those five seemingly innocent words really mean, and hopefully by the end you&rsquo;ll reconsider using them in future, or at the very least be a little more conscious about their use.</p>

<h3 id="making-assumptions-in-a-world-of-variability">Making assumptions in a world of variability</h3>

<p>Often when you hear the aforementioned phrase it&rsquo;s usually in the context of making an assertion or justification for the most suitable framework, language or service to be used in a given project. The problem is that this makes too many assumptions. It&rsquo;s rare that anyone truly knows the full scope and nature of a project until it&rsquo;s done. You may have read the ten page project specification, or know the domain well, but ask yourself this - how many times have you been working on a project only to have the scope or specification change underneath you? No one can predict the future, so why use language that implies unquestionable certainty?</p>

<p>Ultimately, building software is about making the right trade-offs given various constraints and conditions (risks, costs and time, to name a few). There are no &ldquo;right&rdquo; or &ldquo;wrong&rdquo; solutions, just ones that make the appropriate trade-offs; this applies to our tooling too.</p>

<p>I&rsquo;ll leave you with this great tweet from <a href="https://twitter.com/allspaw">John Allspaw</a>:</p>

<p><blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">&quot;The right tool for the job!&quot; said someone whose assumptions, past experience, motivations, and definition of &quot;job&quot; aren&rsquo;t explicit.</p>&mdash; John Allspaw (@allspaw) <a href="https://twitter.com/allspaw/status/578186271740297216?ref_src=twsrc%5Etfw">March 18, 2015</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 18 Jan 2019 01:03:05 +0000</pubDate></item><item><title>Skipping XUnit tests based on runtime conditions</title><link>https://josephwoodward.co.uk/2019/01/skipping-xunit-tests-based-on-runtime-conditions</link><description></description><content:encoded><![CDATA[<p>Have you ever needed to skip a test under a certain condition? Say, the presence or absence of an environment variable, or some conditional runtime check? No, me neither - up until recently, that is.</p>

<p>Skipping tests is usually a practice to avoid, and I use the word &ldquo;usually&rdquo; here because, as with all things in software, there are sometimes constraints or parameters that may well justify doing so. In my case I needed to skip a certain test if it was running on AppVeyor in Linux.</p>

<p>Having done a bit of digging I found two options that would work:</p>

<h2 id="option-1-using-the-if-preprocessor-directive">Option 1 - Using the <code>#if</code> preprocessor directive</h2>

<p>Depending on your scenario it might be possible to simply use an <code>#if</code> preprocessor directive to conditionally include the test. If you&rsquo;re wishing to exclude a test based on the operating system it&rsquo;s running on then this may be a suitable solution.</p>

<script src="https://gist.github.com/JosephWoodward/3403dc4a38309e2f3878fcb829210dd1.js"></script>
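<p>The idea is that the test is compiled out entirely under a given build symbol. A minimal sketch, assuming a conditional compilation symbol named <code>LINUX</code> defined in the project file or build script (the symbol name here is purely illustrative):</p>

<pre><code class="language-csharp">public class DemoTests
{
#if !LINUX
    [Fact]
    public void OnlyRunsWhenLinuxSymbolIsNotDefined()
    {
        // Only compiled (and therefore only run) when LINUX isn't defined
        Assert.True(true);
    }
#endif
}
</code></pre>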

<p>My scenario also involved checking for the presence of environment variables, which I&rsquo;d rather do through code. This led me to the next approach, which I felt was more suitable.</p>

<h2 id="preferred-option-leveraging-existing-xunit-functionality">Preferred Option - Leveraging existing XUnit functionality</h2>

<p>XUnit already has the ability to skip a test by providing a reason for skipping the test in question via the <code>Skip</code> property that exists within the <code>Fact</code> attribute:</p>

<script src="https://gist.github.com/JosephWoodward/329b56aa3fa4c2eebda28cfbd031ff96.js"></script>
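<p>For example (the test name here is illustrative):</p>

<pre><code class="language-csharp">[Fact(Skip = &quot;Doesn't run on AppVeyor under Linux&quot;)]
public void SomeEnvironmentSpecificTest()
{
    ...
}
</code></pre>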

<p>So all we have to do is extend the <code>FactAttribute</code> class and set that property:</p>

<script src="https://gist.github.com/JosephWoodward/914677ff78e9194391496ddf342eb358.js"></script>
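<p>A minimal sketch of such an attribute might look like the following (the <code>APPVEYOR</code> environment variable check and the class name reflect my scenario - adjust the condition to whatever you need to detect):</p>

<pre><code class="language-csharp">public sealed class IgnoreOnAppVeyorLinuxFact : FactAttribute
{
    public IgnoreOnAppVeyorLinuxFact()
    {
        if (IsAppVeyor() &amp;&amp; IsLinux())
            Skip = &quot;Ignored on AppVeyor Linux&quot;;
    }

    private static bool IsAppVeyor()
        =&gt; Environment.GetEnvironmentVariable(&quot;APPVEYOR&quot;) != null;

    private static bool IsLinux()
        =&gt; RuntimeInformation.IsOSPlatform(OSPlatform.Linux);
}
</code></pre>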

<p>Now instead of using the traditional <code>[Fact]</code> attribute, we can use our new <code>IgnoreOnAppVeyorLinuxFact</code> attribute:</p>

<script src="https://gist.github.com/JosephWoodward/9d7e8bdaa904b9de60dba3a184eb1d6e.js"></script>
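<p>Usage is then simply a case of swapping the attribute (a sketch):</p>

<pre><code class="language-csharp">[IgnoreOnAppVeyorLinuxFact]
public void SomeTestNotSupportedOnAppVeyorLinux()
{
    ...
}
</code></pre>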
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 02 Jan 2019 21:15:43 +0000</pubDate></item><item><title>Aaaand, we&#39;re back...</title><link>https://josephwoodward.co.uk/2018/03/and-we-are-back</link><description></description><content:encoded><![CDATA[<p>Hello! Sorry for the lack of posts recently - I haven&rsquo;t died or lost interest in blogging; I&rsquo;ve been super busy with multiple things, one of which broke my site. For those curious, here&rsquo;s what I&rsquo;ve been up to:</p>

<h2 id="building-the-new-ddd-south-west-website">Building the new DDD South West website</h2>

<p>Since getting involved with organising the <a href="http://josephwoodward.co.uk/2017/05/ddd-southwest-7">DDD South West event last year</a>, one thing that was unanimous among the team of organisers was that we needed a new website. The previous website was a free SharePoint hosted solution that had a number of problems (it was slow, the design could be improved, there were lots of manual processes, speaker submissions were made by sending Word documents, etc), so after last year&rsquo;s conference I started working on the new DDD South West website, and with the 2018 conference approaching it was a bit of a panic to get it finished in time.</p>

<p>Luckily it was all sorted by the time we opened for speaker submissions (with a bit of JIT development). Surprisingly, considering there are very few tests, everything has worked without any problems - and what&rsquo;s more, <a href="https://github.com/dddsw/dddsouthwest-web">it&rsquo;s all open source</a>!</p>

<p>Here&rsquo;s a screenshot of the before and after, hopefully you&rsquo;ll agree it&rsquo;s a large improvement!</p>

<p><strong>Before:</strong></p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ddd_southwest_website_old.png" alt="" /></p>

<p><strong>After:</strong></p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ddd_southwest_website_ne.png" alt="" /></p>

<p>Feel free to check the new website out over at <a href="https://dddsouthwest.com">https://dddsouthwest.com</a>, it&rsquo;s built with all of the usual stuff:</p>

<ul>
<li>.NET Core (MVC with MediatR)</li>
<li>PostgreSQL</li>
<li>Identity Server 4</li>
<li>Docker</li>
<li>Hosted on Linux behind nginx</li>
</ul>

<h2 id="migrating-my-blog-from-net-core-1-x-to-2-x">Migrating my blog from .NET Core 1.x to 2.x.</h2>

<p>I&rsquo;ve been meaning to do this for a while now, and migrating a blog to a new major version when you know there are lots of breaking changes isn&rsquo;t something you charge at with gusto. Suffice to say it was painful, and I ran into enough weird issues that I quickly realised it was easier to just create a new project and start copying things over than to figure out what was going wrong.</p>

<p>I managed to get the website back up and running but still had a lot to fix on the login/admin side of things, which uses OAuth and Google sign-in. So whilst the website was functional, the admin area (and thus my ability to post) was not. Coupled with finishing the DDD South West website, it all took a bit of time.</p>

<p>Anyway, it&rsquo;s all working now so that&rsquo;s a relief and I&rsquo;ve got a few posts lined up.</p>

<h2 id="net-oxford-talk-asp-net-core-2-0-razor-deep-dive">.NET Oxford Talk - ASP.NET Core 2.0 Razor Deep Dive</h2>

<p>A couple of weeks ago I had the pleasure of being invited to the .NET Oxford user group to talk about all of the new features in Razor for ASP.NET Core 2.0. It&rsquo;s a talk I&rsquo;ve given a few times before but this time around I was keen to update it with a bit more focus on the newer features in 2.0, such as the <code>ITagHelperComponent</code> interface and Razor Pages.</p>

<p>As a .NET meet up organiser myself, I always enjoy the opportunity to visit other .NET focused meet ups as it&rsquo;s always interesting to see how they operate, and to meet other .NET developers in the wider UK community. It&rsquo;s also a great chance to borrow ideas to potentially incorporate into .NET South West.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/dotnet_oxford.jpg" alt="" /></p>

<p><strong>The talk</strong></p>

<p>Overall I was really happy with the way the talk went (though I do wonder if I harped on about how great Rider is a little too much), with a good number of questions throughout and afterwards. With each talk I give I try to focus on one habit I&rsquo;ve noticed I&rsquo;ve picked up, or wish to improve on when presenting, whether that be talking slower, bringing more or less energy, and so on. On this occasion it was to spend less time looking at the screen behind me and more time focused on, and looking at, the audience. Whilst the session wasn&rsquo;t recorded, I was quite conscious of this throughout and feel I did much better than in previous sessions.</p>

<h2 id="closing">Closing</h2>

<p>All in all it&rsquo;s been a crazy couple of months and now everything is settling down I&rsquo;m looking forward to getting back into regularly blogging.</p>

<p>Oh, and on one last note - I&rsquo;ve been selected to speak at DDD Wales (their first one!) which I&rsquo;m really looking forward to.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 08 Mar 2018 05:12:30 +0000</pubDate></item><item><title>GlobalExceptionHandler.NET version 2 released</title><link>https://josephwoodward.co.uk/2017/12/global-exception-handler-version-2-released</link><description></description><content:encoded><![CDATA[<p>Anyone that regularly reads this blog will remember that I recently developed a convention based ASP.NET Core exception handling library named GlobalExceptionHandler.NET (<a href="http://josephwoodward.co.uk/2017/09/global-exception-handling-asp-net-core-webapi">if you missed the post you can read about it here</a>).</p>

<p><strong>GlobalExceptionHandler.NET in a nutshell</strong><br>
GlobalExceptionHandler.NET builds on top of ASP.NET Core&rsquo;s <code>.UseExceptionHandler()</code> middleware and enables developers to configure HTTP responses (including the status codes) per exception type.</p>

<p>For instance, the following configuration:</p>

<pre><code class="language-csharp">app.UseExceptionHandler().WithConventions(x =&gt; {
  x.ContentType = &quot;application/json&quot;;
  x.ForException&lt;RecordNotFoundException&gt;().ReturnStatusCode(HttpStatusCode.NotFound)
      .UsingMessageFormatter((ex, context) =&gt; JsonSerializer(new {
          Message = ex.Message
      }));
});

app.Map(&quot;/error&quot;, x =&gt; x.Run(y =&gt; throw new RecordNotFoundException(&quot;Record not be found&quot;)));
</code></pre>

<p>Will result in the following output if a <code>RecordNotFoundException</code> is thrown.</p>

<pre><code class="language-json">HTTP/1.1 404 Not Found
Date: Sat, 25 Nov 2017 01:47:51 GMT
Content-Type: application/json
Server: Kestrel
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Expires: -1
{
  &quot;Message&quot;: &quot;Record not be found&quot;
}
</code></pre>

<h2 id="improvements-in-version-2">Improvements in Version 2</h2>

<p>Whilst the initial version of GlobalExceptionHandler.NET was a good start, there were a few features and internal details that I was keen to flesh out and improve, so version 2 was a pretty big overhaul. Let&rsquo;s take a look at what&rsquo;s changed.</p>

<h3 id="it-now-extends-the-useexceptionhandler-api">It now extends the <code>UseExceptionHandler()</code> API</h3>

<p>The first version of GlobalExceptionHandler.NET had its own implementation of <code>UseExceptionHandler</code> which would catch any exceptions thrown further down the ASP.NET Core middleware stack. There was no real motivation for creating a separate implementation other than that I didn&rsquo;t realise how extensible the <code>UseExceptionHandler</code> API was.</p>

<p>As soon as I realised I could offload some of the work to ASP.NET Core I was keen to do so.</p>

<h3 id="now-invoked-with-withconventions">Now invoked with <code>WithConventions</code></h3>

<p>Now that I was using the <code>UseExceptionHandler()</code> endpoint, I was keen to have a more meaningful fluent approach to integrating with ASP.NET Core, so I ultimately went with a <code>WithConventions()</code> approach as the name felt a lot more natural.</p>

<h3 id="supports-polymorphic-types">Supports polymorphic types</h3>

<p>One problem the previous version of GlobalExceptionHandler.NET had was that it couldn&rsquo;t handle exceptions polymorphically. Version 2 will now walk up the inheritance tree looking for the first matching configured type. To give an example:</p>

<p>Given the following exception type:</p>

<pre><code class="language-csharp">public class ExceptionA : BaseException {}
</code></pre>

<pre><code class="language-csharp">// startup.cs

app.UseExceptionHandler().WithConventions(x =&gt; {
  x.ContentType = &quot;application/json&quot;;
  x.ForException&lt;BaseException&gt;().ReturnStatusCode(HttpStatusCode.BadRequest)
      .UsingMessageFormatter((e, c) =&gt; JsonSerializer(new {
          Message = &quot;Base Exception response&quot;
      }));
});

app.Map(&quot;/error&quot;, x =&gt; x.Run(y =&gt; throw new ExceptionA()));
</code></pre>

<pre><code class="language-json">HTTP/1.1 400 Bad Request
Date: Sat, 25 Nov 2017 01:47:51 GMT
Content-Type: application/json
Server: Kestrel
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Expires: -1
{
  &quot;Message&quot;: &quot;Base Exception response&quot;
}
</code></pre>

<p>As the above example hopefully illustrates, because there is no response configured for <code>ExceptionA</code>, GlobalExceptionHandler.NET moves to the next type up the inheritance tree to see if a formatter is specified for that type.</p>

<h2 id="content-negotiation">Content Negotiation</h2>

<p>GlobalExceptionHandler.NET version 2 now supports content negotiation via the optional <a href="https://www.nuget.org/packages/GlobalExceptionHandler.ContentNegotiation.Mvc">GlobalExceptionHandler.ContentNegotiation.Mvc</a> package.</p>

<p>Once included you no longer need to specify the content type or response serialisation type:</p>

<pre><code class="language-csharp">//Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore().AddXmlSerializerFormatters();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseExceptionHandler().WithConventions(x =&gt;
    {
        x.ForException&lt;RecordNotFoundException&gt;().ReturnStatusCode(HttpStatusCode.NotFound)
            .UsingMessageFormatter(e =&gt; new ErrorResponse
            {
                Message = e.Message
            });
    });

    app.Map(&quot;/error&quot;, x =&gt; x.Run(y =&gt; throw new RecordNotFoundException(&quot;Record could not be found&quot;)));
}
</code></pre>

<p>Note how we had to include the <code>AddMvcCore</code> services; this is because ASP.NET Core MVC is required to take care of content negotiation. That&rsquo;s a real shame, as it would have been great to enable this without requiring a dependency on MVC.</p>

<p>Now when an exception is thrown and the consumer has provided the <code>Accept</code> header:</p>

<pre><code class="language-http">GET /api/demo HTTP/1.1
Host: localhost:5000
Accept: text/xml
</code></pre>

<p>The response will be formatted according to the <code>Accept</code> header value:</p>

<pre><code class="language-http">HTTP/1.1 404 Not Found
Date: Tue, 05 Dec 2017 08:49:07 GMT
Content-Type: text/xml; charset=utf-8
Server: Kestrel
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Expires: -1

&lt;ErrorResponse 
  xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; 
  xmlns:xsd=&quot;http://www.w3.org/2001/XMLSchema&quot;&gt;
  &lt;Message&gt;Record could not be found&lt;/Message&gt;
&lt;/ErrorResponse&gt;
</code></pre>

<h2 id="wrapping-up">Wrapping up</h2>

<p>Hopefully this has given you a good idea of what you&rsquo;ll find in version 2 of the GlobalExceptionHandler.NET library. Moving forward there are a few further improvements I&rsquo;d like to make around organising configuration on a domain by domain basis. And as always, the <a href="https://github.com/JosephWoodward/GlobalExceptionHandlerDotNet">code is up on GitHub</a> so feel free to take a look.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 08 Dec 2017 05:32:47 +0000</pubDate></item><item><title>Going serverless with .NET Core, AWS Lambda and the Serverless framework</title><link>https://josephwoodward.co.uk/2017/11/going-serverless-net-core-aws-lambda-serverless-framework</link><description></description><content:encoded><![CDATA[<p>Recently I gave a talk titled &lsquo;Going serverless with AWS Lambda&rsquo; where I briefly went through what serverless is and the architectural advantages it gives you, along with the trade-offs to consider. Halfway through the talk I demonstrated the Serverless framework and was surprised by the number of people currently experimenting with AWS Lambda or Azure Functions who had never heard of it, so much so that I thought I&rsquo;d write a post demonstrating its value.</p>

<h2 id="what-is-the-serverless-framework-and-what-problem-does-it-aim-to-solve">What is the Serverless framework and what problem does it aim to solve?</h2>

<p>Serverless, which I&rsquo;ll refer to as the Serverless framework to avoid confusion, is a cloud provider agnostic toolkit designed to aid operations around building, managing and deploying serverless components, whether full-blown serverless architectures or disparate functions (or FaaS).</p>

<p>To give a more concrete example, Serverless framework aims to provide developers with an interface that abstracts away the vendor&rsquo;s cloud specific APIs and configuration whilst simultaneously providing you with additional tooling to be able to test and deploy functions with ease, perfect for rapid feedback or being able to integrate into your CI/CD pipelines.</p>

<p>Let&rsquo;s take a look.</p>

<h2 id="getting-started-with-net-core-and-the-serverless-framework">Getting started with .NET Core and the Serverless Framework</h2>

<p>First of all we&rsquo;re going to need to install the Serverless framework:</p>

<pre><code class="language-bash">$ npm install serverless -g
</code></pre>

<p>Next let&rsquo;s see what Serverless framework templates are currently available:</p>

<pre><code class="language-bash">$ serverless create --help
</code></pre>

<p>Note: In addition to the <code>serverless</code> command, <code>sls</code> is a nice shorthand equivalent, producing the same results:</p>

<pre><code class="language-bash">$ sls create --help
</code></pre>

<pre><code class="language-bash">Template for the service. Available templates:
&quot;aws-nodejs&quot;,
&quot;aws-python&quot;,
&quot;aws-python3&quot;,
&quot;aws-groovy-gradle&quot;,
&quot;aws-java-maven&quot;,
&quot;aws-java-gradle&quot;,
&quot;aws-scala-sbt&quot;,
&quot;aws-csharp&quot;,
&quot;aws-fsharp&quot;,
&quot;azure-nodejs&quot;,
&quot;openwhisk-nodejs&quot;,
&quot;openwhisk-python&quot;,
&quot;openwhisk-swift&quot;,
&quot;google-nodejs&quot;
</code></pre>

<p>To create our service from the .NET Core template we use the <code>--template</code> flag:</p>

<pre><code class="language-bash">$ serverless create --template aws-csharp --name demo
</code></pre>

<p>Let&rsquo;s take a moment to look at the files created by the Serverless framework and go through the more noteworthy ones:</p>

<pre><code class="language-bash">$ ls -la
.
..
.gitignore
Handler.cs
aws-csharp.csproj
build.cmd
build.sh
global.json
serverless.yml
</code></pre>

<p><strong>Handler.cs</strong><br>
Opening <code>Handler.cs</code> reveals it&rsquo;s the function that will be invoked in response to an event such as notifications, S3 updates and so forth.</p>

<pre><code class="language-csharp">//Handler.cs

[assembly:LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
namespace AwsDotnetCsharp
{
    public class Handler
    {
       public Response Hello(Request request)
       {
           return new Response(&quot;Go Serverless v1.0! Your function executed successfully!&quot;, request);
       }
    }
    ...
}
</code></pre>

<p><strong>serverless.yml</strong><br>
This is where the magic happens. The <code>serverless.yml</code> file is your schema which defines the configuration of your Lambda(s) (or Azure functions) and how they interact with your wider architecture. Once configured Serverless framework generates a Cloud Formation template off of the back of this which AWS uses to provision the appropriate infrastructure.</p>

<p><strong>global.json</strong><br>
Open <code>global.json</code> and you&rsquo;ll notice it&rsquo;s pinned to version <code>1.0.4</code> of the .NET Core framework, this is because as of the time of writing this .NET Core 2.0 isn&rsquo;t supported, though Amazon have promised support is on its way.</p>

<pre><code class="language-json">{
  &quot;sdk&quot;: {
    &quot;version&quot;: &quot;1.0.4&quot;
  }
}
</code></pre>

<p>Now, let&rsquo;s go ahead and create our Lambda.</p>

<h3 id="creating-our-net-core-lambda">Creating our .NET Core Lambda</h3>

<p>For the purpose of this demonstration we&rsquo;re going to create a Lambda that&rsquo;s reachable via HTTP. In order to do this we&rsquo;re going to need to stand up an API Gateway in front of it. Normally doing this would require logging into the AWS Console and manually configuring an API Gateway, so it&rsquo;s a perfect example to demonstrate how the Serverless framework can take care of a lot of the heavy lifting.</p>

<p>Let&rsquo;s head over to our <code>serverless.yml</code> file and scroll down to the following section:</p>

<pre><code class="language-yml"># serverless.yml
functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
#    events:
#      - http:
#          path: users/create
#          method: get
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill
#      - iot:
#          sql: &quot;SELECT * FROM 'some_topic'&quot;
#      - cloudwatchEvent:
#          event:
#            source:
#              - &quot;aws.ec2&quot;
#            detail-type:
#              - &quot;EC2 Instance State-change Notification&quot;
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp

#    Define function environment variables here
#    environment:
#      variable2: value2
...
</code></pre>

<p>This part of the <code>serverless.yml</code> file describes the various events that our Lambda should respond to. As we&rsquo;re going to be using API Gateway as our method of invocation we can remove a large portion of this for clarity, then uncomment the event and its properties pertaining to http:</p>

<pre><code class="language-yml">functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
   events:
     - http:
         path: users/create
         method: get

#    Define function environment variables here
#    environment:
#      variable2: value2
...
</code></pre>

<h3 id="creating-our-net-core-c-lambda">Creating our .NET Core C# Lambda</h3>

<p>Because we&rsquo;re using HTTP as our method of invocation we need to add the <a href="https://www.nuget.org/packages/Amazon.Lambda.APIGatewayEvents/">Amazon.Lambda.APIGatewayEvents NuGet package</a> to our Lambda and reference the correct request and return types. We can add the package using the following .NET Core CLI command:</p>

<pre><code class="language-bash">$ dotnet add package Amazon.Lambda.APIGatewayEvents
</code></pre>

<p>Now let&rsquo;s open our <code>Handler.cs</code> file and update our Lambda to return the correct response type:</p>

<pre><code class="language-csharp">public APIGatewayProxyResponse Hello(APIGatewayProxyRequest request, ILambdaContext context)
{
    // Log entries show up in CloudWatch
    context.Logger.LogLine(&quot;Example log entry\n&quot;);

    var response = new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.OK,
        Body = &quot;{ \&quot;Message\&quot;: \&quot;Hello World\&quot; }&quot;,
        Headers = new Dictionary&lt;string, string&gt; {{ &quot;Content-Type&quot;, &quot;application/json&quot; }}
    };

    return response;
}
</code></pre>

<p>Now we&rsquo;re set. Let&rsquo;s move on to deploying our Lambda.</p>

<h3 id="registering-an-account-on-aws-lambda">Registering an account on AWS Lambda</h3>

<p>If you&rsquo;re reading this then I assume you already have an account with AWS; if not, you&rsquo;re going to need to head over to their registration page and sign up.</p>

<h3 id="setting-aws-credentials-in-serverless-framework">Setting AWS credentials in Serverless framework</h3>

<p>In order to enable the Serverless framework to create Lambdas and the accompanying infrastructure around them, we&rsquo;re going to need to set up our AWS credentials. The Serverless framework documentation does a good job of <a href="https://serverless.com/framework/docs/providers/aws/guide/credentials/?rd=true">explaining how to do this</a>, but for those that know how to generate keys in AWS, you can set your credentials via the following command:</p>

<pre><code class="language-bash">serverless config credentials --provider aws --key &lt;Your Key&gt; --secret &lt;Your Secret&gt;
</code></pre>
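<p>Alternatively, the Serverless framework will also pick up the standard AWS environment variables, which can be handy on CI machines (substitute your own key and secret for the placeholders):</p>

<pre><code class="language-bash">$ export AWS_ACCESS_KEY_ID=&lt;Your Key&gt;
$ export AWS_SECRET_ACCESS_KEY=&lt;Your Secret&gt;
</code></pre>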

<h3 id="build-and-deploy-our-net-core-lambda">Build and deploy our .NET Core Lambda</h3>

<p>Now we&rsquo;re good to go!</p>

<p>Let&rsquo;s verify our setup by deploying our Lambda; this will give us an opportunity to see just how rapid the feedback cycle can be when using the Serverless framework.</p>

<p>At this point, if we weren&rsquo;t using the Serverless framework we&rsquo;d have to manually package our Lambda into a .zip file with a specific structure, then log into AWS to upload the zip and create the infrastructure (in this instance, API Gateway in front of our Lambda). But as we&rsquo;re using the Serverless framework, it&rsquo;ll take care of all of the heavy lifting.</p>

<p>First let&rsquo;s build our .NET Core Lambda:</p>

<pre><code class="language-bash">$ sh build.sh
</code></pre>

<p>or if you&rsquo;re on Windows:</p>

<pre><code class="language-bash">$ build.cmd
</code></pre>

<p>Next we&rsquo;ll deploy it.</p>

<p>Lambdas are deployed with the Serverless framework using the <code>deploy</code> command. In this instance we&rsquo;ll make the output verbose using the <code>-v</code> flag so we can see what the Serverless framework is doing:</p>

<pre><code class="language-bash">$ serverless deploy -v
</code></pre>

<p>Once completed you should see output similar to the following:</p>

<pre><code class="language-bash">$ serverless deploy -v

Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
CloudFormation - UPDATE_IN_PROGRESS - 
...
CloudFormation - UPDATE_COMPLETE - AWS::CloudFormation::Stack - demo-dev
Serverless: Stack update finished...
Service Information
service: demo
stage: dev
region: us-east-1
api keys:
  None
endpoints:
  GET - https://b2353kdlcc.execute-api.us-east-1.amazonaws.com/dev/users/create
functions:
  hello: demo-dev-hello

Stack Outputs
HelloLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:082958828786:function:demo-dev-hello:2
ServiceEndpoint: https://b2353kdlcc.execute-api.us-east-1.amazonaws.com/dev
ServerlessDeploymentBucketName: demo-dev-serverlessdeploymentbucket-1o4sd9lppvgfv
</code></pre>

<p>Now if we were to log into our AWS account and navigate to the CloudFormation page in the region <code>us-east-1</code> (see the console output) we&rsquo;d see that the Serverless framework has taken care of all of the heavy lifting in spinning our stack up.</p>

<p>Let&rsquo;s navigate to the endpoint address returned in the console output which is where our Lambda can be reached.</p>

<p>If all went as expected we should be greeted with a successful response, awesome!</p>

<pre><code>HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 28
Connection: keep-alive
Date: Wed, 08 Nov 2017 02:51:33 GMT
x-amzn-RequestId: bb076390-c42f-11e7-89fc-6fcb7a11f609
X-Amzn-Trace-Id: sampled=0;root=1-5a027135-bc15ce531d1ef45e3eed7a9b
X-Cache: Miss from cloudfront
Via: 1.1 3943e81340bd903a74d536bc9599c3f3.cloudfront.net (CloudFront)
X-Amz-Cf-Id: ZDHCvVSR1DAPVUfrL8bU_IuWk3aMoAotdRKBjUIor16VcBPkIiNjNw==

{
  &quot;Message&quot;: &quot;Hello World&quot;
}
</code></pre>

<p>In addition to invoking our Lambda manually via an HTTP request, we could also invoke it using the following Serverless framework command, where the <code>-l</code> flag will return any log output:</p>

<pre><code class="language-shell">$ serverless invoke -f hello -l

{
    &quot;statusCode&quot;: 200,
    &quot;headers&quot;: {
        &quot;Content-Type&quot;: &quot;application/json&quot;
    },
    &quot;body&quot;: &quot;{ \&quot;Message\&quot;: \&quot;Hello World\&quot; }&quot;,
    &quot;isBase64Encoded&quot;: false
}
--------------------------------------------------------------------
START RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3 Version: $LATEST
Example log entry

END RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3
REPORT RequestId: bbcbf6a3-c430-11e7-a2e3-132defc123e3	Duration: 17.02 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 34 MB
</code></pre>
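<p>As an aside, if your function expects input then the <code>invoke</code> command can also pass a payload; the <code>-d</code> (<code>--data</code>) flag accepts an inline JSON string, whilst <code>-p</code> (<code>--path</code>) points at a JSON file. The payload below is purely illustrative:</p>

<pre><code class="language-shell">$ serverless invoke -f hello -d '{ &quot;name&quot;: &quot;world&quot; }'
</code></pre>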

<h3 id="making-modifications-to-our-lambda">Making modifications to our Lambda</h3>

<p>At this point, if we were to make any further code changes we&rsquo;d have to re-run the build script (<code>build.sh</code> or <code>build.cmd</code> depending on your platform) followed by the Serverless framework <code>deploy function</code> command:</p>

<pre><code class="language-shell">$ serverless deploy function -f hello
</code></pre>

<p>However if we needed to modify the <code>serverless.yml</code> file then we&rsquo;d have to deploy our infrastructure changes via the <code>deploy</code> command:</p>

<pre><code class="language-shell">$ serverless deploy -v
</code></pre>

<p>The difference being that the former is far faster, as it only deploys the source code, whereas the latter will tear down your CloudFormation stack and stand it back up again, reflecting the changes made in your <code>serverless.yml</code> configuration.</p>

<h3 id="command-recap">Command recap</h3>

<p>So, let&rsquo;s recap the more important commands we&rsquo;ve used:</p>

<p>Create our Lambda:</p>

<pre><code class="language-shell">$ serverless create --template aws-csharp --name demo
</code></pre>

<p>Deploy our infrastructure and code (you must have built your Lambda beforehand using one of the build scripts):</p>

<pre><code class="language-shell">$ serverless deploy -v
</code></pre>

<p>Or just deploy the changes to our <code>hello</code> function (again, we need to have built our Lambda as above)</p>

<pre><code class="language-shell">$ serverless deploy function -f hello
</code></pre>

<p>At this point we can invoke our Lambda, where the <code>-l</code> flag includes log output in the response:</p>

<pre><code class="language-shell">$ serverless invoke -f hello -l
</code></pre>

<p>If our functions were written in Python or Node we could optionally use the <code>invoke local</code> command; however, this isn&rsquo;t available for .NET Core.</p>

<pre><code class="language-shell">$ serverless invoke local -f hello
</code></pre>

<p>Once finished with our demo function we can clean up after ourselves using the <code>remove</code> command:</p>

<pre><code class="language-shell">$ serverless remove
</code></pre>

<h3 id="adding-more-lambdas">Adding more Lambdas</h3>

<p>Before wrapping up, imagine we wanted to add more Lambdas to our project. To do this we can simply add another function to the <code>functions</code> section of the <code>serverless.yml</code> configuration file:</p>

<p>From this:</p>

<pre><code class="language-yml">functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello
    events:
     - http:
         path: users/create
         method: get
</code></pre>

<p>To this:</p>

<pre><code class="language-yml">functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello
    events:
     - http:
         path: users/create
         method: get
  world:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler2::World
    events:
     - http:
         path: users/
         method: get
</code></pre>

<p>At this point all we&rsquo;d need to do is create a new handler class (for the sake of this demonstration I called it <code>Handler2.cs</code>) and make sure we set the <code>handler</code> property in our <code>serverless.yml</code> configuration appropriately. We&rsquo;d also need to name our new <code>Handler2</code> function <code>World</code> to match the handler address in our serverless configuration.</p>

<p>As the additional function will require its own set of infrastructure, we need to run the build script and then regenerate our stack using the following command:</p>

<pre><code class="language-shell">$ serverless deploy -v
</code></pre>

<p>Once deployed we&rsquo;re able to navigate to our second function just as we did our first.</p>

<p>We can also deploy our functions independently of one another by supplying the appropriate function name to the <code>deploy function</code> command:</p>

<pre><code class="language-shell">$ serverless deploy function -f world
</code></pre>

<h2 id="conclusion">Conclusion</h2>

<p>Hopefully this post has given you an idea of how the Serverless framework can help you develop and manage your functions, whether you&rsquo;re using Azure, AWS, Google or any of the other supported providers.</p>

<p>If you&rsquo;re interested in learning more about the Serverless framework then I&rsquo;d highly recommend checking out <a href="https://serverless.com/framework/docs/providers/aws/guide/quick-start">their documentation</a> (which is plentiful and very well written).</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 08 Nov 2017 23:37:27 +0000</pubDate></item><item><title>REST Client for VS Code, an elegant alternative to Postman</title><link>https://josephwoodward.co.uk/2017/10/rest- client-for-vs-Code-an-elegant-alternative-postman</link><description></description><content:encoded><![CDATA[<p>For some time now I&rsquo;ve been a huge proponent of Postman; working in an environment with a large number of remote services meant Postman&rsquo;s ease of generating requests, its ability to manage collections, view historic requests and so forth made it my go-to tool for hand-crafted HTTP requests. However, there have always been features I felt were missing. One such feature was the ability to copy and paste a raw <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html">RFC 2616 compliant</a> HTTP request, including request method, headers and body, directly into Postman and fire it off without the need to manually tweak the request. This led me to a <a href="https://twitter.com/joe_mighty/status/917716705488654336">discussion on Twitter</a> where Darrel Miller recommended I check out the <a href="https://marketplace.visualstudio.com/items?itemName=humao.rest-client">REST Client extension for Visual Studio Code</a>.</p>

<h2 id="rest-client-for-visual-studio-code">REST Client for Visual Studio Code</h2>

<p>After installing REST Client the first thing I noticed was how elegant it is. Simply create a new tab, paste in your raw HTTP request (ensuring the tab&rsquo;s Language Mode is set to either HTTP or Plaintext, more on this later) and in no time at all you&rsquo;ll see a &ldquo;Send Request&rdquo; button appear above your HTTP request, allowing you to execute the request as is; no further modification is required to tell REST Client how to parse or format it.</p>

<h2 id="features">Features</h2>

<p>To give you a firm grasp of why you should consider adding REST Client to your tool chain, here are a few of the features that particularly stood out to me, organised in an easily consumable list format, because we all like lists:</p>

<h3 id="no-bs-request-building">No BS request building</h3>

<p>The simplest HTTP request you can send is a plain GET URL pasted in like so:</p>

<pre><code>https://example.com/comments/1
</code></pre>

<p><strong>Note</strong>: You can either paste your requests into a Plaintext window, where you&rsquo;ll need to highlight the request and press the Send Request keyboard shortcut (<code>Ctrl+Alt+R</code> on Windows, or <code>Cmd+Alt+R</code> on macOS), or set the tab&rsquo;s Language Mode to HTTP, where a &ldquo;Send Request&rdquo; button will appear above the HTTP request.</p>

<p>If you want more control over your request then a raw HTTP request will do:</p>

<pre><code class="language-http">POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    &quot;name&quot;: &quot;sample&quot;,
    &quot;time&quot;: &quot;Wed, 21 Oct 2015 18:27:50 GMT&quot;
}
</code></pre>

<p>Once loaded you&rsquo;ll see the response appear in a separate pane. A nice detail that I really liked is the ability to hover my cursor over the request timer and get a breakdown of duration details, including times surrounding <em>Socket, DNS, TCP, First Byte</em> and <em>Download</em>.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/rest-client-vs-code3.gif" alt="" /></p>

<h3 id="saving-requests-as-collections-for-later-use-is-a-simple-plain-text-http-file">Saving requests as collections for later use is a simple plain text .http file</h3>

<p>Following the theme of low-friction elegance, it&rsquo;s nice that requests saved for later use (or checked into your component&rsquo;s source control) are simple plain text documents with an <code>.http</code> file extension.</p>

<h3 id="break-down-of-requests">Break down of requests</h3>

<p>One of the gripes I had with Postman was requests being separated by tabs; any requests I was working with would quickly get lost amongst the number of tabs I tend to have open.</p>

<p>REST Client doesn&rsquo;t suffer the same fate, as requests can be grouped in your documents and separated by comments, where three or more hash characters (<code>#</code>) are treated as delimiters between requests.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/rest-client-vs-code2.png" alt="" /></p>
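<p>As a sketch, a saved <code>.http</code> file containing two requests separated by <code>###</code> delimiters might look like the following (the host and paths are illustrative):</p>

<pre><code class="language-http">GET https://example.com/comments/1 HTTP/1.1

###

POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    &quot;name&quot;: &quot;sample&quot;
}
</code></pre>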

<h3 id="environments-and-variables">Environments and Variables</h3>

<p>REST Client has a concept of Environments and Variables, meaning that if you work with different environments (e.g. QA, Staging and Production) you can easily switch between environments configured in the REST Client settings (see below), changing the set of variables without having to modify your requests.</p>

<h4 id="environments">Environments</h4>

<pre><code class="language-json">&quot;rest-client.environmentVariables&quot;: {
    &quot;local&quot;: {
        &quot;host&quot;: &quot;localhost&quot;,
        &quot;token&quot;: &quot;test token&quot;
    },
    &quot;production&quot;: {
        &quot;host&quot;: &quot;example.com&quot;,
        &quot;token&quot;: &quot;product token&quot;
    }
}
</code></pre>

<h4 id="variables">Variables</h4>

<p>Variables, on the other hand, allow you to define values directly in your document and reference them throughout.</p>

<pre><code>@host = localhost:5000
@token = Bearer e975b15aa477ee440417ea069e8ef728a22933f0

GET https://{{host}}/api/comments/1 HTTP/1.1
Authorization: {{token}}
</code></pre>

<h3 id="it-s-not-electron">It&rsquo;s not Electron</h3>

<p>I have nothing against Electron, but it&rsquo;s known to be a bit of a resource hog; so much so that I rarely leave Postman open between sessions, whereas I&rsquo;ve always got VS Code open (one Electron process is enough), meaning it&rsquo;s far easier to dip into to test a few requests.</p>

<h3 id="conclusion">Conclusion</h3>

<p>This post is just a brief overview of some of the features in REST Client. If you&rsquo;re open to trying an alternative to your current HTTP request tool then I&rsquo;d highly recommend checking it out; you can read more on the <a href="https://github.com/Huachao/vscode-restclient/blob/master/README.md">REST Client GitHub page</a>.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 18 Oct 2017 04:23:53 +0000</pubDate></item><item><title>Global Exception Handling in ASP.NET Core Web API</title><link>https://josephwoodward.co.uk/2017/09/global-exception-handling-asp-net-core-webapi</link><description></description><content:encoded><![CDATA[<p><em>Note: Whilst this post is targeted towards Web API, it&rsquo;s not unique to Web API and can be applied to any framework running on the ASP.NET Core pipeline.</em></p>

<p>For anyone that uses a command-dispatcher library such as  <a href="https://github.com/jbogard/MediatR">MediatR</a>, <a href="https://github.com/shaynevanasperen/Magneto">Magneto</a> or <a href="https://github.com/BrighterCommand/Brighter">Brighter</a> (to name a few), you&rsquo;ll know that the pattern encourages you to push your domain logic down into a domain library via a handler, encapsulating your app or API&rsquo;s behaviours such as retrieving an event, like so:</p>

<pre><code class="language-csharp">public async Task&lt;IActionResult&gt; Get(int id)
{
    var result = await _mediator.Send(new EventDetail.Query(id));
    return Ok(result);
}
</code></pre>

<p>Continuing with the event theme above, within your handler, or in the pipeline before it you&rsquo;ll take care of all of your validation, throwing an exception if the argument is invalid (in this case, an <code>ArgumentException</code>).</p>

<p>Now when it comes to handling that exception you&rsquo;re left having to explicitly catch it from each action method:</p>

<pre><code class="language-csharp">public async Task&lt;IActionResult&gt; Get(int id)
{
    try {
        var result = await _mediator.Send(new EventDetail.Query(id));
        return Ok(result);
    } catch (ArgumentException){
        return BadRequest();
    }
}
</code></pre>

<p>Whilst this is perfectly acceptable, I&rsquo;m always looking at ways to reduce boilerplate code; and what happens if an exception is thrown somewhere else in the HTTP pipeline? This is why I created <a href="https://github.com/JosephWoodward/GlobalExceptionHandlerDotNet">Global Exception Handler for ASP.NET Core</a>.</p>

<h2 id="what-is-global-exception-handler">What is Global Exception Handler?</h2>

<p>Available <a href="https://www.nuget.org/packages/GlobalExceptionHandler/">via NuGet</a> or <a href="https://github.com/JosephWoodward/GlobalExceptionHandlerDotNet">GitHub</a>, Global Exception Handler lets you configure an exception handling convention within your <code>Startup.cs</code> file, which will catch any of the exceptions specified, outputting the appropriate error response and status code.</p>

<p><strong>Not just for Web API or MVC</strong></p>

<p>Whilst it&rsquo;s possible to use Global Exception Handler with Web API or MVC, it&rsquo;s <strong>actually framework agnostic</strong> meaning as long as it runs, or can run on the ASP.NET Core pipeline (such as <a href="https://github.com/jchannon/Botwin">BotWin</a> or <a href="http://nancyfx.org/">Nancy</a>) then it should work.</p>

<p>Let&rsquo;s take a look at how we can use it alongside Web API (though the configuration will be the same regardless of framework).</p>

<h2 id="how-do-i-use-global-exception-handler-for-an-asp-net-core-web-api-project">How do I use Global Exception Handler for an ASP.NET Core Web API project?</h2>

<p>To configure Global Exception Handler, you call the <code>UseWebApiGlobalExceptionHandler</code> extension method in your <code>Configure</code> method, specifying the exception(s) you wish to handle and the resulting status code. In this instance an <code>ArgumentException</code> should translate to a 400 (Bad Request) status code:</p>

<pre><code class="language-csharp">public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseWebApiGlobalExceptionHandler(x =&gt;
        {
            x.ForException&lt;ArgumentException&gt;().ReturnStatusCode(HttpStatusCode.BadRequest);
        });

        app.UseMvc();
    }
}
</code></pre>

<p>Now when our MediatR pipeline throws an <code>ArgumentException</code> we no longer need to explicitly catch and handle it in every controller action:</p>

<pre><code class="language-csharp">public async Task&lt;IActionResult&gt; Get(int id)
{
    // This throws an ArgumentException
    var result = await _mediator.Send(new EventDetail.Query(id));
    ...
}
</code></pre>

<p>Instead our global exception handler will catch the exception and handle it according to our convention, resulting in the following JSON output:</p>

<pre><code class="language-json">{
    &quot;error&quot;: {
        &quot;status&quot;: 400,
        &quot;message&quot;: &quot;Invalid arguments supplied&quot;
    }
}
</code></pre>

<p>This saves us in the following three scenarios:</p>

<ul>
<li>You no longer have to explicitly catch exceptions per action method</li>
<li>Those times you forgot to add exception handling will still be caught</li>
<li>It enables you to catch any exceptions further up the HTTP pipeline and map them to a configured result</li>
</ul>
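<p>The convention isn&rsquo;t limited to a single exception type either; additional mappings can be chained in the same configuration block. As a sketch (where <code>RecordNotFoundException</code> is a hypothetical domain exception, not part of the library):</p>

<pre><code class="language-csharp">app.UseWebApiGlobalExceptionHandler(x =&gt;
{
    x.ForException&lt;ArgumentException&gt;().ReturnStatusCode(HttpStatusCode.BadRequest);

    // Hypothetical domain exception mapped to a 404 Not Found
    x.ForException&lt;RecordNotFoundException&gt;().ReturnStatusCode(HttpStatusCode.NotFound);
});
</code></pre>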

<h2 id="not-happy-with-the-error-format">Not happy with the error format?</h2>

<p>If you&rsquo;re not happy with the default error format then it can be changed in one of two places.</p>

<p>First you can set a global error format via the <code>MessageFormatter</code> method:</p>

<p><strong>Global formatter</strong></p>

<pre><code class="language-csharp">public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseWebApiGlobalExceptionHandler(x =&gt;
        {
            x.ForException&lt;ArgumentException&gt;().ReturnStatusCode(HttpStatusCode.BadRequest);
            x.MessageFormatter(exception =&gt; JsonConvert.SerializeObject(new {
                error = new {
                    message = &quot;Something went wrong&quot;,
                    statusCode = exception.HttpStatusCode
                }
            }));
        });

        app.UseMvc();
    }
}
</code></pre>

<p><strong>Exception specific formatter</strong></p>

<p>Alternatively you can specify a custom message per exception caught, which will override the global one demoed above:</p>

<pre><code class="language-csharp">app.UseWebApiGlobalExceptionHandler(x =&gt;
{
    x.ForException&lt;ArgumentException&gt;().ReturnStatusCode(HttpStatusCode.BadRequest).UsingMessageFormatter(
        exception =&gt; JsonConvert.SerializeObject(new
        {
            error = new
            {
                message = &quot;Oops, something went wrong&quot;
            }
        }));
    x.MessageFormatter(exception =&gt; &quot;This formatter will be overridden when an ArgumentException is thrown&quot;);
});
</code></pre>

<p>Resulting in the following 400 response:</p>

<pre><code class="language-json">{
    &quot;error&quot;: {
        &quot;message&quot;: &quot;Oops, something went wrong&quot;
    }
}
</code></pre>

<p><strong>Content type</strong></p>

<p>By default Global Exception Handler outputs the <code>application/json</code> content type, however this can be overridden for those who prefer XML or an alternative format. This can be done via the <code>ContentType</code> property:</p>

<pre><code class="language-csharp">app.UseWebApiGlobalExceptionHandler(x =&gt;
{
    x.ContentType = &quot;text/xml&quot;;
    x.ForException&lt;ArgumentException&gt;().ReturnStatusCode(HttpStatusCode.BadRequest);
    x.MessageFormatter(exception =&gt; {
        // serialise your XML in here
    });
});
</code></pre>

<h2 id="moving-forward">Moving forward</h2>

<p>Having used this for a little while now, one suggestion was to implement <a href="https://tools.ietf.org/html/rfc7807">problem+json</a> as the default content type, standardising the default output, which I&rsquo;m seriously considering. I&rsquo;m also in the process of building ASP.NET Core MVC compatibility so exceptions can result in views being rendered or requests being redirected to routes (such as a 404 page not found view).</p>

<p>For more information feel free to check out the <a href="https://github.com/JosephWoodward/GlobalExceptionHandlerDotNet">GitHub page</a> or try it out via <a href="https://www.nuget.org/packages/GlobalExceptionHandler/">Nuget</a>.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 20 Sep 2017 23:35:17 +0000</pubDate></item><item><title>Turbo charging your command line with ripgrep</title><link>https://josephwoodward.co.uk/2017/09/turbo-charging-command-line-ripgrep</link><description></description><content:encoded><![CDATA[<p>In the past few years the command line has made a resurgence in the Windows world. With the .NET Core CLI now a first class citizen and the Windows Subsystem for Linux making it easy to run Linux-based tools on a Windows machine, it&rsquo;s clear that Microsoft are keen to boost the profile of the CLI amongst developers.</p>

<p>Yet Windows has always come up short on delivering a powerful grep experience on the command line; this is where a tool like ripgrep can help.</p>

<p>I first heard about ripgrep after a conversation on Twitter, where I was made aware that most people are actively using it without knowing it, as <strong>ripgrep is what powers Visual Studio Code&rsquo;s search functionality!</strong> (<a href="https://github.com/Microsoft/vscode/blob/2534c2a07b5b98b4a81b2ba773fdcd4b655a6edb/OSSREADME.json#L127">see here</a>)</p>

<p>As someone who&rsquo;s a heavy user of grep in my day-to-day workflow, the first thing that blew me away was ripgrep&rsquo;s blazing speed. Having read the benchmarks it so proudly displays on <a href="https://github.com/BurntSushi/ripgrep">its GitHub page</a>, at first I was sceptical - but ripgrep flies on large recursive searches like no other grepping tool I&rsquo;ve used.</p>

<p>Let&rsquo;s take a look.</p>

<h2 id="a-bit-about-ripgrep">A bit about ripgrep</h2>

<p>Written in Rust, ripgrep is a tool that combines the usability of <a href="https://github.com/ggreer/the_silver_searcher">The Silver Searcher</a> (a super fast ack clone) with the raw speed of GNU grep. In addition, ripgrep has first class support for Windows, Mac and Linux (<a href="https://github.com/BurntSushi/ripgrep/releases">available on their GitHub page</a>), so for anyone who regularly works across multiple platforms and is looking to normalise their tool chain it&rsquo;s well worth a look.</p>

<p>Some of ripgrep&rsquo;s features that sold it to me are:</p>

<ul>
<li>It&rsquo;s crazily fast at searching large directories</li>
<li>It won&rsquo;t search files already ignored by your <code>.gitignore</code> file (this can easily be overridden when needed)</li>
<li>It ignores binary and hidden files by default</li>
<li>It&rsquo;s easy to search specific file types (making it great for searching for functions or references in code files)</li>
<li>It highlights matches in colour</li>
<li>It has full Unicode support</li>
<li>It has first class Windows, Mac and Linux support</li>
</ul>

<p>Let&rsquo;s take a look at how we can install ripgrep.</p>

<h2 id="installation">Installation</h2>

<p><strong>Mac</strong></p>

<p>If you&rsquo;re on a Mac using Homebrew then installation is as easy as:</p>

<pre><code>$ brew install ripgrep
</code></pre>

<p><strong>Windows</strong></p>

<ul>
<li>Download the ripgrep executable from <a href="https://github.com/BurntSushi/ripgrep/releases">their releases page on GitHub</a></li>
<li>Put the executable in a familiar location (<code>c:/tools</code> for instance)</li>
<li>Add the aforementioned tools path to your <code>PATH</code> environment</li>
</ul>

<p>Alternatively if you&rsquo;re using Chocolatey then installation is as simple as:</p>

<pre><code>$ choco install ripgrep
</code></pre>

<p><strong>Linux</strong></p>

<pre><code>$ yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/repo/epel-7/carlwgeorge-ripgrep-epel-7.repo

$ yum install ripgrep
</code></pre>

<p>See the <a href="https://github.com/BurntSushi/ripgrep">ripgrep GitHub page for more installation options</a>.</p>

<h3 id="usage">Usage</h3>

<p>Next, let&rsquo;s take a look at some of the use cases for ripgrep in our day-to-day scenarios.</p>

<p>Recursively search the contents of files in the current directory, respecting all <code>.gitignore</code> files, ignoring hidden files and directories, and skipping binary files:</p>

<pre><code>$ rg hello
</code></pre>

<p><img src="http://assets.josephwoodward.co.uk/blog/new_ripgrep_search.png" alt="ripgrep search all files" title="ripgrep1" /></p>

<p>Search the contents of <code>.html</code> and <code>.css</code> files only for the word <code>foobar</code> using the type flag (<code>-t</code>):</p>

<pre><code>$ rg -thtml -tcss foobar
</code></pre>

<p>Or search everything except JavaScript files using the type-not flag (<code>-T</code>):</p>

<pre><code>$ rg -Tjs foobar
</code></pre>

<p>Return a list of all <code>.css</code> files:</p>

<pre><code>$ rg -g *.css --files
</code></pre>

<p><img src="http://assets.josephwoodward.co.uk/blog/ripgrep_search_1.png" alt="ripgrep search all files" title="ripgrep2" /></p>
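<p>A few other flags worth knowing (all covered in more depth on the ripgrep GitHub page): <code>-i</code> for case-insensitive matching, <code>-w</code> to match whole words only and <code>-C</code> to include surrounding context lines:</p>

<pre><code>$ rg -i hello

$ rg -w foo

$ rg -C 2 foobar
</code></pre>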

<p><strong>More examples</strong></p>

<p>ripgrep has a whole host of other searching options so I&rsquo;d highly recommend <a href="https://github.com/BurntSushi/ripgrep">checking out their GitHub</a> page where they reference more examples.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Hopefully this post has given you a taste of how awesome ripgrep is and encouraged you to at least install it and give it a spin. If you spend a lot of time on the command line, having a powerful grepping tool at your disposal and getting into the habit of using it whenever you need to locate a file really does help your workflow.</p>

<p>Now, go forth and grep with insane speed!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 12 Sep 2017 21:29:37 +0000</pubDate></item><item><title>GraphiQL in ASP.NET Core</title><link>https://josephwoodward.co.uk/2017/08/graphiql-in-asp-net-core</link><description></description><content:encoded><![CDATA[<p>Having recently attended a talk on GraphQL and read about <a href="https://githubengineering.com/the-github-graphql-api/">GitHub&rsquo;s glowing post surrounding their choice to use GraphQL over REST for their API</a>, I was interested in having a play to see what all of the fuss was about. For those that aren&rsquo;t sure what GraphQL is or where it fits in the stack, let me give a brief overview.</p>

<h2 id="what-is-graphql">What is GraphQL?</h2>

<p>GraphQL is a query language (hence QL in the name) created and open-sourced by Facebook that allows you to query an HTTP (or other protocols for that matter) based API. Let me demonstrate with a simple example:</p>

<p>Say you&rsquo;re consuming a REST API on the following resource identifier:</p>

<p><code>GET service.com/user/1</code></p>

<p>The response returned for this URI is the following JSON object:</p>

<pre><code class="language-json">{
    &quot;Id&quot;: 1,
    &quot;FirstName&quot;: &quot;Richard&quot;,
    &quot;Surname&quot;: &quot;Hendricks&quot;,
    &quot;Gender&quot;: &quot;Male&quot;,
    &quot;Age&quot;: 31,
    &quot;Occupation&quot;: &quot;Pied Piper CEO&quot;,
    &quot;RoleId&quot;: 5,
    ...
}
</code></pre>

<p>Now, as a mobile app developer calling this service you&rsquo;re aware of the bandwidth constraints users face when connected to a mobile network, so returning the whole JSON blob when you&rsquo;re only interested in the <code>FirstName</code> and <code>Surname</code> properties is wasteful. This is called <strong>over-fetching</strong> data; GraphQL solves it by letting you as a consumer dictate your data needs, as opposed to having them forced upon you by the service.</p>

<p>This is a fundamental requirement that REST doesn&rsquo;t solve (in fairness to REST, it never set out to solve this problem; however, as the internet has changed it&rsquo;s a problem that does exist).</p>

<p>This is where GraphQL comes in.</p>

<p>Using GraphQL we&rsquo;re given control as consumers to dictate what our data requirements are, so instead of calling the aforementioned URI, instead we <code>POST</code> a query to a GraphQL endpoint (often <code>/graphql</code>) in the following shape:</p>

<pre><code>{
    Id,
    FirstName,
    Surname
}
</code></pre>

<p>Our Web API powered GraphQL server fulfills the request, returning the following response:</p>

<pre><code class="language-json">{
    &quot;Id&quot;: 1,
    &quot;FirstName&quot;: &quot;Richard&quot;,
    &quot;Surname&quot;: &quot;Hendricks&quot;
}
</code></pre>

<p>This also applies to <strong>under-fetching</strong> which can best be described as when you have to make multiple calls to join data (following the above example, retrieving the <code>RoleId</code> only to then call another endpoint to get the <code>Role</code> information). In GraphQL&rsquo;s case, we could represent that with the following query, which would save us an additional HTTP request:</p>

<pre><code>{
    Id,
    FirstName,
    Surname,
    Role {
        RoleName
    }
}
</code></pre>

<p>The GraphQL query language includes a whole host of other functionality, including static type checking, query functions and the like, so I&rsquo;d recommend checking it out when you can (or stay tuned for a later post I&rsquo;m in the process of writing, where I demonstrate how to set it up in .NET Core).</p>

<h2 id="so-what-is-graphiql">So what is GraphiQL?</h2>

<p>Now that you know what GraphQL is: GraphiQL (pronounced &lsquo;graphical&rsquo;) is a web-based, JavaScript-powered editor that allows you to query a GraphQL endpoint, taking full advantage of the static type checking and intellisense promised by GraphQL. You can consider it the Swagger of the GraphQL world.</p>

<p>In fact, I&rsquo;d suggest taking a moment to try a <a href="http://graphql.org/swapi-graphql/">live example of GraphiQL here</a> and see how GraphQL&rsquo;s static type system helps you discover the data available to you via the documentation and intellisense. GitHub also allows you to query your <a href="https://developer.github.com/v4/explorer/">GitHub activity via their example GraphiQL endpoint</a> too.</p>

<h2 id="introducing-graphiql-net-https-www-nuget-org-packages-graphiql">Introducing <a href="https://www.nuget.org/packages/graphiql/">GraphiQL.NET</a></h2>

<p>Traditionally, if you wanted to set this up you&rsquo;d need to configure a whole host of Node modules and JavaScript files; however, given .NET Core&rsquo;s powerful middleware/pipeline approach, creating a GraphiQL middleware seemed like the obvious way to enable a GraphiQL endpoint.</p>

<p>Now you no longer need to worry about taking a dependency on Node or NPM in your ASP.NET Core solution, and can instead add GraphiQL support via a simple middleware call using <a href="https://www.nuget.org/packages/graphiql/">GraphiQL.NET</a> (before continuing, I feel it&rsquo;s worth mentioning that all of the <a href="https://github.com/JosephWoodward/graphiql-dotnet">code is up on GitHub</a>).</p>

<p><strong>Setting up GraphiQL in ASP.NET Core</strong></p>

<p>You can install GraphiQL.NET by copying and pasting the following command into your Package Manager Console within Visual Studio (Tools &gt; NuGet Package Manager &gt; Package Manager Console).</p>

<pre><code>Install-Package graphiql
</code></pre>

<p>Alternatively you can install it using the .NET Core CLI using the following command:</p>

<pre><code>dotnet add package graphiql
</code></pre>

<p>From there, all you need to do is call the <code>UseGraphiQl()</code> extension method within the <code>Configure</code> method of <code>Startup.cs</code>, ensuring you do it before your <code>UseMvc()</code> registration.</p>

<pre><code class="language-csharp">public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseGraphiQl();

    app.UseMvc();
}
</code></pre>

<p>Now when you navigate to <code>/graphql</code> you should be greeted with the familiar GraphiQL screen, but without the hassle of having to add Node or any NPM packages to your project as a dependency - nice!</p>

<p>The library is still at version 1, so if you run into any issues then please do feel free to report them!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 10 Aug 2017 12:57:18 +0000</pubDate></item><item><title>Injecting content into your head or body tags via dependency injection using ITagHelperComponent</title><link>https://josephwoodward.co.uk/2017/07/injecting-javascript-into-views-using-itaghelpercomponent</link><description></description><content:encoded><![CDATA[<p>Having been playing around with the ASP.NET Core 2.0 preview for a little while now, one cool feature I stumbled upon was the addition of the new <code>ITagHelperComponent</code> interface and its use.</p>

<h2 id="what-problem-does-the-itaghelpercomponent-solve">What problem does the <code>ITagHelperComponent</code> solve?</h2>

<p>Pre .NET Core 2.0, if you&rsquo;re using a library that comes bundled with static assets such as JavaScript or CSS, you&rsquo;ll know that in order to use the library you have to manually add <code>script</code> and/or <code>link</code> tags to your views (including a reference to the files in your <code>wwwroot</code> folder). This is far from ideal: not only does it force users to jump through additional hoops, it also risks introducing breaking changes when a user removes the library but forgets to remove the JavaScript references, or updates the library version but forgets to change the appropriate JavaScript reference.</p>

<p>This is where <code>ITagHelperComponent</code> comes in; it allows you to inject content into the <code>head</code> or <code>body</code> tags of your application&rsquo;s web pages. Essentially, it&rsquo;s dependency injection for your JavaScript or CSS assets! All that&rsquo;s required of the user is that they register the dependency with their IoC container of choice within their <code>Startup.cs</code> file.</p>

<p>Enough talk, let&rsquo;s take a look at how it works. Hopefully a demonstration will clear things up.</p>

<h2 id="injecting-javascript-or-css-assets-into-the-head-or-body-tags">Injecting JavaScript or CSS assets into the head or body tags</h2>

<p>Imagine we have some JavaScript we&rsquo;d like to include on each page. This could be:</p>

<ul>
<li>A JavaScript and/or CSS library we&rsquo;d like to use (Bootstrap, Pure etc)</li>
<li>Some database driven JavaScript code or value that needs to be included in the head of your page</li>
<li>A JavaScript file that&rsquo;s bundled with a library that our users need to include before the closing <code>&lt;/body&gt;</code> tag.</li>
</ul>

<p>In our case we&rsquo;ll keep it simple - we need to include some database-driven JavaScript in our page, in the form of a Google Analytics snippet.</p>

<h3 id="creating-our-javascript-tag-helper-component">Creating our JavaScript tag helper component</h3>

<p>Looking at the contract of the <code>ITagHelperComponent</code> interface you&rsquo;ll see it&rsquo;s a simple one:</p>

<pre><code class="language-csharp">public interface ITagHelperComponent
{
    int Order { get; }
    void Init(TagHelperContext context);
    Task ProcessAsync(TagHelperContext context, TagHelperOutput output);
}
</code></pre>

<p>We could implement the interface ourselves, or we could lean on the existing <code>TagHelperComponent</code> base class and override only the properties and methods we require. We&rsquo;ll do the latter.</p>

<p>Let&rsquo;s start by creating our implementation which we&rsquo;ll call <code>CustomerAnalyticsTagHelper</code>:</p>

<pre><code class="language-csharp">// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent {}
</code></pre>

<p>For this example the only method we&rsquo;re concerned about is the <code>ProcessAsync</code> one, though we will touch on the <code>Order</code> property later.</p>

<p>Let&rsquo;s go ahead and implement it:</p>

<pre><code class="language-csharp">// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent
{
    private readonly ICustomerAnalytics _analytics;

    public CustomerAnalyticsTagHelper(ICustomerAnalytics analytics)
    {
        _analytics = analytics;
    }

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, &quot;body&quot;, StringComparison.Ordinal))
        {
            string analyticsSnippet = @&quot;
            &lt;script&gt;
                (function (i, s, o, g, r, a, m) {
                    i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
                        (i[r].q = i[r].q || []).push(arguments)
                    }, i[r].l = 1 * new Date(); a = s.createElement(o),
                        m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
                })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
                ga('create', '&quot; + _analytics.CustomerUaCode + @&quot;', 'auto')
                ga('send', 'pageview');
            &lt;/script&gt;&quot;;
            
            output.PostContent.AppendHtmlLine(analyticsSnippet);
        }

        return Task.CompletedTask;
    }
}
</code></pre>

<p>As you can see, the <code>TagHelperContext</code> argument gives us context around the tag we&rsquo;re inspecting; in this case we want to look for the <code>body</code> HTML element. If we wanted to drop JavaScript or CSS into the <code>&lt;head&gt;&lt;/head&gt;</code> tags then we&rsquo;d check for a tag name of &ldquo;head&rdquo; instead.</p>

<p>The <code>TagHelperOutput</code> argument gives us access to a host of properties around where we can place content, these include:</p>

<ul>
<li>PreElement</li>
<li>PreContent</li>
<li>Content</li>
<li>PostContent</li>
<li>PostElement</li>
<li>IsContentModified</li>
<li>Attributes</li>
</ul>

<p>In this instance we&rsquo;re going to append our JavaScript <em>after</em> the content located within the <code>&lt;body&gt;</code> tag, placing it just before the closing <code>&lt;/body&gt;</code> tag.</p>

<p><strong>Dependency Injection in our tag helper</strong></p>

<p>With dependency injection being baked into the ASP.NET Core framework, we&rsquo;re able to inject dependencies into our tag helper - in this case I&rsquo;m injecting our database-driven customer UA (Google Analytics tracking) code.</p>

<h2 id="registering-our-tag-helper-with-our-ioc-container">Registering our tag helper with our IoC container</h2>

<p>Now all that&rsquo;s left to do is register our tag helper with our IoC container of choice. In this instance I&rsquo;m using the built-in ASP.NET Core one from the <code>Microsoft.Extensions.DependencyInjection</code> package.</p>

<pre><code class="language-csharp">// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton&lt;ICustomerAnalytics, CustomerAnalytics&gt;(); // Data source containing UA code
    services.AddSingleton&lt;ITagHelperComponent, CustomerAnalyticsTagHelper&gt;(); // Our tag helper
    ...
}
</code></pre>

<p>Now firing up our tag helper we can see our JavaScript has now been injected in our HTML page <strong>without us needing to touch any of our .cshtml Razor files!</strong></p>

<pre><code class="language-html">...
&lt;body&gt;
    ...
    &lt;script&gt;
        (function (i, s, o, g, r, a, m) {
            i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
                (i[r].q = i[r].q || []).push(arguments)
            }, i[r].l = 1 * new Date(); a = s.createElement(o),
                m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
        })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
        ga('create', 'UA-123456789', 'auto')
        ga('send', 'pageview');
    &lt;/script&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre>

<h2 id="ordering-our-output">Ordering our output</h2>

<p>If we needed to include more than one script or script file in our output, we can lean on the <code>Order</code> property we saw earlier; overriding it allows us to specify the order of our output. Let&rsquo;s see how:</p>

<pre><code class="language-csharp">// JsLoggingTagHelper.cs

public class JsLoggingTagHelper : TagHelperComponent
{
    public override int Order =&gt; 1;

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, &quot;body&quot;, StringComparison.Ordinal))
        {
            const string script = @&quot;&lt;script src=&quot;&quot;/jslogger.js&quot;&quot;&gt;&lt;/script&gt;&quot;;
            output.PostContent.AppendHtmlLine(script);
        }

        return Task.CompletedTask;
    }
}
</code></pre>

<pre><code class="language-csharp">// CustomerAnalyticsTagHelper.cs

public class CustomerAnalyticsTagHelper : TagHelperComponent
{
    ...
    public override int Order =&gt; 2; // Set our AnalyticsTagHelper to appear after our logger
    ...
}
</code></pre>

<pre><code class="language-csharp">// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton&lt;ICustomerAnalytics, CustomerAnalytics&gt;();
    services.AddSingleton&lt;ITagHelperComponent, CustomerAnalyticsTagHelper&gt;();
    services.AddSingleton&lt;ITagHelperComponent, JsLoggingTagHelper&gt;();
    ...   
}
</code></pre>

<p>When we launch our application we should see the following HTML output:</p>

<pre><code class="language-html">&lt;script src=&quot;/jslogger.js&quot;&gt;&lt;/script&gt;
&lt;script&gt;
    (function (i, s, o, g, r, a, m) {
        i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
            (i[r].q = i[r].q || []).push(arguments)
        }, i[r].l = 1 * new Date(); a = s.createElement(o),
            m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g; m.parentNode.insertBefore(a, m)
    })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
    ga('create', 'UA-123456789', 'auto')
    ga('send', 'pageview');
&lt;/script&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre>

<h2 id="conclusion">Conclusion</h2>

<p>Hopefully this post has highlighted how powerful the recent changes to tag helpers are, and how the <code>ITagHelperComponent</code> interface allows us to inject content into our HTML without having to touch any Razor files. This means that as library authors we can ease integration for our users by simply asking them to register a type with their IoC container, and we can take care of the rest!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 17 Jul 2017 01:08:03 +0000</pubDate></item><item><title>.NET Core solution management via the command line interface</title><link>https://josephwoodward.co.uk/2017/07/dotnet-core-solution-management-via-command-line-interface</link><description></description><content:encoded><![CDATA[<p>One of the strengths boasted by .NET Core is its new command line interface (CLI for short), and by now you&rsquo;re probably aware that Visual Studio, Rider, Visual Studio Code etc shell out to the .NET Core CLI under the bonnet for most .NET Core related operations, so it makes sense that what you&rsquo;re able to do in your favourite IDE you&rsquo;re also able to do via the CLI.</p>

<p>With this in mind, only recently did I spend the time and effort to investigate how easy it was to create and manage a project solution via the CLI, including creating the solution structure, referencing projects along the way and adding them to .NET&rsquo;s .sln file.</p>

<p>It turns out it&rsquo;s incredibly easy and has instantly become my preferred way of managing solutions. Hopefully by the end of this post you&rsquo;ll arrive at the same conclusion too.</p>

<h2 id="benefits-of-using-the-cli-for-solution-management">Benefits of using the CLI for solution management</h2>

<p>So what are the benefits of using the CLI for solution management? Let&rsquo;s have a look:</p>

<ul>
<li><p>Something that has always been a fiddly endeavour of UI interactions is now much simpler via the CLI - what&rsquo;s more, you don&rsquo;t need to open your editor of choice to create references or update a NuGet package.</p></li>

<li><p>Using the CLI for creating projects and solutions is particularly helpful if (like me) you work across multiple operating systems and want to normalise your tool chain.</p></li>

<li><p>Loading an IDE just to update a NuGet package seems unnecessary.</p></li>
</ul>

<p>Let&rsquo;s begin!</p>

<h2 id="creating-our-solution">Creating our solution</h2>

<p>So let&rsquo;s take a look at how we can create the following project structure using the .NET Core CLI.</p>

<pre><code class="language-text">piedpiper
└── src
    ├── piedpiper.domain
    ├── piedpiper.sln
    ├── piedpiper.tests
    └── piedpiper.website
</code></pre>

<p>First we&rsquo;ll create our solution (.sln) file. I&rsquo;ve always preferred to create the solution file in the top-level source folder, but the choice is yours (just remember to specify the right path in the commands used throughout the rest of this post).</p>

<pre><code class="language-bash"># /src/

$ dotnet new sln -n piedpiper
</code></pre>

<p>This will create a new <code>sln</code> file called <code>piedpiper.sln</code>.</p>

<p>Next we use the output parameter on the <code>dotnet new &lt;projecttype&gt;</code> command to create a project in a particular folder:</p>

<pre><code class="language-bash"># /src/

$ dotnet new mvc -o piedpiper.website
</code></pre>

<p>This will create an ASP.NET Core MVC application in the <code>piedpiper.website</code> folder. If we look at our folder structure thus far, it looks like this:</p>

<pre><code class="language-bash"># /src/

$ ls -la

piedpiper.sln
piedpiper.website
</code></pre>

<p>Next we can do the same for our domain and test projects:</p>

<pre><code class="language-bash"># /src/

$ dotnet new classlib -o piedpiper.domain
$ dotnet new xunit -o piedpiper.tests
</code></pre>

<h2 id="adding-our-projects-to-our-solution">Adding our projects to our solution</h2>

<p>At this point we&rsquo;ve got a solution file that has no projects referenced, we can verify this by calling the <code>list</code> command like so:</p>

<pre><code class="language-bash"># /src/

$ dotnet sln list

No projects found in the solution.
</code></pre>

<p>Next we&rsquo;ll add our projects to our solution file. Once upon a time doing this involved opening Visual Studio then adding a reference to each project manually. Thankfully this can also be done via the .NET Core CLI.</p>

<p>Now we add each project to the solution with the following commands, referencing each project&rsquo;s .csproj file:</p>

<pre><code class="language-bash"># /src/

$ dotnet sln add piedpiper.website/piedpiper.website.csproj
$ dotnet sln add piedpiper.domain/piedpiper.domain.csproj
$ dotnet sln add piedpiper.tests/piedpiper.tests.csproj
</code></pre>

<p><strong>Note:</strong> If you&rsquo;re using a Linux/Unix based shell you can do this in a single command using a globbing pattern!</p>

<pre><code class="language-bash"># /src/

$ dotnet sln add **/*.csproj

Project `piedpiper.domain/piedpiper.domain.csproj` added to the solution.
Project `piedpiper.tests/piedpiper.tests.csproj` added to the solution.
Project `piedpiper.website/piedpiper.website.csproj` added to the solution.
</code></pre>
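<p>One caveat: in bash the <code>**</code> pattern only recurses into subdirectories when the <code>globstar</code> shell option is enabled (zsh supports it out of the box). A quick sketch using throwaway directories that mimic the layout above:</p>

```shell
#!/bin/bash
shopt -s globstar  # without this, bash treats ** like a single *

# Create a throwaway structure mimicking the solution layout
mkdir -p demo/piedpiper.domain demo/piedpiper.tests
touch demo/piedpiper.domain/piedpiper.domain.csproj
touch demo/piedpiper.tests/piedpiper.tests.csproj

cd demo
# With globstar enabled, **/*.csproj expands to every .csproj file at any
# depth - exactly what `dotnet sln add **/*.csproj` relies on
files=(**/*.csproj)
printf '%s\n' "${files[@]}"
cd ..
rm -rf demo
```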

<p>Now when we call <code>list</code> on our solution file we should get the following output:</p>

<pre><code class="language-bash"># /src/

$ dotnet sln list

Project reference(s)
--------------------
piedpiper.domain/piedpiper.domain.csproj
piedpiper.tests/piedpiper.tests.csproj
piedpiper.website/piedpiper.website.csproj
</code></pre>

<p>So far so good!</p>

<h2 id="adding-a-project-reference-to-a-project">Adding a project reference to a project</h2>

<p>Next up we want to start adding project references to our project, linking our domain library to our website and test library via the <code>dotnet add reference</code> command:</p>

<pre><code class="language-bash"># /src/

$ dotnet add piedpiper.tests reference piedpiper.domain/piedpiper.domain.csproj

Reference `..\piedpiper.domain\piedpiper.domain.csproj` added to the project.
</code></pre>

<p>Now if we view the contents of our test project, we&rsquo;ll see our domain library has been referenced:</p>

<pre><code class="language-bash"># /src/

$ cat piedpiper.tests/piedpiper.tests.csproj 

&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;

  ...

  &lt;ItemGroup&gt;
    &lt;ProjectReference Include=&quot;..\piedpiper.domain\piedpiper.domain.csproj&quot; /&gt;
  &lt;/ItemGroup&gt;

&lt;/Project&gt;
</code></pre>

<p>Next we&rsquo;ll do the same for our website project, running the same command from the source folder:</p>

<pre><code class="language-bash"># /src/

$ dotnet add piedpiper.website reference piedpiper.domain/piedpiper.domain.csproj
</code></pre>

<pre><code class="language-bash"># /src/

$ cat piedpiper.website/piedpiper.website.csproj 

&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;

  ...

  &lt;ItemGroup&gt;
    &lt;ProjectReference Include=&quot;..\piedpiper.domain\piedpiper.domain.csproj&quot; /&gt;
  &lt;/ItemGroup&gt;

&lt;/Project&gt;
</code></pre>

<p>At this point we&rsquo;re done!</p>

<p>If we navigate back to our root source folder and run the build command we should see everything build successfully:</p>

<pre><code class="language-bash">$ cd ../

# /src/

$ dotnet build

Microsoft (R) Build Engine version 15.3.388.41745 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

    piedpiper.domain -&gt; /Users/josephwoodward/Desktop/demo/src/piedpiper.domain/bin/Debug/netstandard2.0/piedpiper.domain.dll
    piedpiper.tests -&gt; /Users/josephwoodward/Desktop/demo/src/piedpiper.tests/bin/Debug/netcoreapp2.0/piedpiper.tests.dll
    piedpiper.website -&gt; /Users/josephwoodward/Desktop/demo/src/piedpiper.website/bin/Debug/netcoreapp2.0/piedpiper.website.dll

Build succeeded.

    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:08.08
</code></pre>

<h2 id="adding-a-nuget-package-to-a-project-or-updating-it">Adding a NuGet package to a project or updating it</h2>

<p>Before wrapping up, let&rsquo;s say we wanted to add a NuGet package to one of our projects; we can do this using the <code>add package</code> command.</p>

<p>First navigate to the project you want to add a NuGet package to:</p>

<pre><code class="language-bash"># /src/

$ cd piedpiper.tests/

$ dotnet add package shouldly

info : Adding PackageReference for package 'shouldly'
...
log  : Installing Shouldly 2.8.3.
</code></pre>

<p>Optionally we could specify a version we&rsquo;d like to install using the version argument:</p>

<pre><code>$ dotnet add package shouldly -v 2.8.2
</code></pre>

<p><strong>Updating a NuGet package</strong></p>

<p>Updating a NuGet package to the latest version is just as easy; simply use the same command without the version argument:</p>

<pre><code>dotnet add package shouldly
</code></pre>

<h2 id="conclusion">Conclusion</h2>

<p>If you&rsquo;ve managed to get this far then well done, hopefully by now you&rsquo;ve realised how easy creating and managing a solution is using the new .NET Core command line interface.</p>

<p>One of the great things about using the CLI is that you can turn the creation of a common project structure into a handy bash script, which you can alias and reuse!</p>

<pre><code class="language-bash">#!/bin/bash

echo &quot;Enter project name, followed by [ENTER]:&quot;
read -r projname

echo &quot;Creating solution for $projname&quot;

dotnet new sln -n &quot;$projname&quot;

dotnet new mvc -o &quot;$projname.website&quot;
dotnet new classlib -o &quot;$projname.domain&quot;
dotnet new xunit -o &quot;$projname.tests&quot;

echo &quot;Adding projects to solution&quot;
dotnet sln add **/*.csproj

echo &quot;Referencing projects&quot;
dotnet add &quot;$projname.website&quot; reference &quot;$projname.domain/$projname.domain.csproj&quot;
dotnet add &quot;$projname.tests&quot; reference &quot;$projname.domain/$projname.domain.csproj&quot;
</code></pre>
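<p>As for aliasing it: one option is to save the script somewhere on disk and wrap it in a small shell function in your <code>~/.bashrc</code>. The function name and script path below are purely illustrative - point them at wherever you keep the script:</p>

```shell
# Hypothetical wrapper around the scaffolding script above; add to ~/.bashrc
# (or ~/.zshrc) and reload your shell. The script path is an assumption.
newdotnet() {
    # Create (if needed) and enter the target folder, then run the script;
    # the script itself prompts for the project name.
    local dir="${1:-.}"
    mkdir -p "$dir" || return 1
    (cd "$dir" && ~/scripts/new-solution.sh)
}
```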

<p>Happy coding!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 03 Jul 2017 08:06:44 +0000</pubDate></item><item><title>Tips on starting and running a .NET user group</title><link>https://josephwoodward.co.uk/2017/06/tips-on-starting-and-running-a-dot-net-user-group</link><description></description><content:encoded><![CDATA[<p>As someone that organises and runs the <a href="https://www.meetup.com/dotnetsouthwest/">.NET South West in Bristol</a>, I&rsquo;ve had a number of people online and in person approach me expressing an interest in starting a .NET focused meet up but not sure where to start; so much so that I thought it would be good to summarise the challenges and hurdles of running a meet up in a succinct blog post.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/meetup.jpg" alt=".NET South West meet up" /></p>

<p>Running a user group isn&rsquo;t a walk in the park, but it&rsquo;s not especially hard or time-consuming either. Hopefully this post will provide some valuable information and reassurance to those looking to create and foster a local .NET focused community of people wishing to learn, share and expand their knowledge, and meet local developers with similar passions and interests.</p>

<h1 id="1-you-can-do-it">1. You can do it</h1>

<p>The very first question that plays through your mind is whether you&rsquo;re capable of starting and running such a meet up. If you&rsquo;re having any self-doubts about whether you&rsquo;re knowledgeable enough to run a user group, or whether you have the confidence to organise it - don&rsquo;t.</p>

<p>Running a user group is an incredibly rewarding experience that starts off small and grows, and as it grows you grow with it. Everyone who attends user groups is there to learn, and that applies to the organiser(s) too. So don&rsquo;t let any hesitations or self-doubts get in your way.</p>

<h1 id="2-gauging-the-interest-in-a-net-user-group">2. Gauging the interest in a .NET user group</h1>

<p>One of the first hurdles you face when starting a user group is trying to gauge the level of interest that exists in your local area.</p>

<p>I&rsquo;ve found a great way to gauge interest is to simply create a group on the popular user group organising site meetup.com, letting people know you&rsquo;re interested in seeing what the level of interest is like. You can create an event with no date, set the title to &ldquo;To be announced&rdquo; and leave it active for a few months. Meetup.com notifies people with similar interests of the new user group, and over time you&rsquo;ll start to see people joining the group, waiting for the first meet to be announced. In the meantime, your meet up page has a forum where you can start conversations with some of the new members, looking for assistance or asking if anyone knows of a suitable venue.</p>

<p>This time is a great opportunity to get to know local developers before the meet up.</p>

<h1 id="3-having-an-online-presence">3. Having an online presence</h1>

<p>When it comes to organising a meet up, websites like meetup.com make the whole process a breeze. The service isn&rsquo;t free (starting at $9.99 - see more <a href="https://www.meetup.com/pricing/">here</a>) but personally I would say it&rsquo;s worth it in terms of how much effort it saves you. Meetup provides services such as:</p>

<ul>
<li>Posting and announcing meet ups to members</li>
<li>Sending out regular emails to your user group</li>
<li>Increased meet up visibility to local developers on the platform</li>
<li>Semi-brandable pages (though this could be better)</li>
<li>The ability to add sponsors to your meet up page</li>
</ul>

<p>If you are on a budget then there are free alternatives, such as the free tier of EventBrite, which you can link to from a website you set up yourself.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/meetup_site.jpg" alt="" /></p>

<h1 id="4-many-hands-make-light-work">4. Many hands make light work</h1>

<p>Starting a meet up requires a lot of work, but once it&rsquo;s up and running the number of hours required to keep it ticking along drops dramatically. That said, there are times when personal commitments make it difficult to focus on the meet up - so why not see if anyone else is interested in helping?</p>

<p>If you don&rsquo;t have any close friends or work colleagues who are interested in helping, you can mention that you&rsquo;re looking for help on your meet up page, as discussed previously. There&rsquo;s also nothing to stop you from talking to members once the meet up is under way to see if anyone is interested in helping out. If you do find people interested in helping, why not create a Slack channel for your group to help you stay organised?</p>

<h1 id="5-the-venue">5. The Venue</h1>

<p>Next up is the venue, this is often the most challenging part as most venues will require payment for a room. All is not lost though as there are a few options available to you:</p>

<ul>
<li><p>Look for sponsorship to cover the cost of the venue - some companies (such as recruitment companies) are often open to ideas of sponsoring meet ups in one way or another for publicity. Naturally this all depends on your stance around having recruitment companies at your meet up.</p></li>

<li><p>Approach software companies to see if they are interested in hosting the event. Often you&rsquo;ll find software companies are geared up for hosting meet ups and happy to do so in exchange for interacting with the community (and potentially saving them recruitment costs).</p></li>

<li><p>Small pubs - I know of a few meet up organisers who host in a back room of a pub; this works in the venue&rsquo;s favour too, as they know a few people will stay behind for drinks afterwards.</p></li>
</ul>

<p>Ultimately, what you want to ensure is consistency, so talk to the venue and make it clear that you&rsquo;re looking for a long-term arrangement.</p>

<h1 id="6-speakers">6. Speakers</h1>

<p>Once you&rsquo;ve got your venue sorted, the next task you face (and this will be a regular one) is sourcing speakers. Luckily, finding speakers is often reasonably simple, and once your meet up becomes established you&rsquo;ll start to find speakers approaching you with interest in giving a talk. I&rsquo;d also recommend looking at nearby meet ups for past speakers and making contact with them via Twitter. Networking at conferences is a great way of finding potential speakers too.</p>

<p>In addition to the aforementioned suggestions, Microsoft also have a <a href="http://microsoftevangelists.azurewebsites.net/">handy directory of Microsoft Evangelists (UK only)</a> who are often more than happy to travel to your user group to give a talk.</p>

<p>Finally, encourage attendees of your meet up to give talks. You&rsquo;re trying to foster a community, so try to drive engagement and ownership by opening up space for short 15 minute lightning talks.</p>

<h1 id="7-sponsorship-competitions-and-prizes">7. Sponsorship / Competitions and Prizes</h1>

<p>Once your user group is off the ground I would recommend reaching out to software companies to see if they provide sponsorship for meet ups in the shape of prize licences or extended trials - for instance, JetBrains are well known for their <a href="https://www.jetbrains.com/community/support/#section=communities">awesome community support programme</a> which I&rsquo;d highly recommend taking a look at.</p>

<p>Some companies require your meet up to be a certain size; others are more flexible on what they can provide, often being happy to ship swag such as stickers and t-shirts instead, which can be given away as prizes during your meet up (though if you&rsquo;re accepting swag from abroad, do be sure to clarify import tax so you don&rsquo;t get stung).</p>

<p>Swag and prizes aren&rsquo;t essential for a meet up, but it&rsquo;s something worth considering to spice things up a bit.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/jetbrains_community.jpg" alt="" /></p>

<h1 id="go-for-it">Go for it</h1>

<p>Hopefully this post has given you some ideas if you are considering setting up a meet up. Organising a meet up and running it is an extremely satisfying responsibility and it&rsquo;s great seeing a community of developers coming together to share knowledge and learn from one another. So what are you waiting for, go for it!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 22 Jun 2017 02:53:15 +0000</pubDate></item><item><title>Retrospective of DDD 12 in Reading</title><link>https://josephwoodward.co.uk/2017/06/ddd-12-retrospective</link><description></description><content:encoded><![CDATA[<p>Yesterday was my second time both attending and speaking at the DDD conference run out of the Microsoft offices in Reading, and as a tradition I like to finish the occasion by writing a short retrospective post highlighting the day.</p>

<h2 id="arrival">Arrival</h2>

<p>Living just two hours&rsquo; drive from Reading I decided to drive up early in the morning with fellow Just Eater <a href="https://twitter.com/stuartblang">Stuart Lang</a> who was also giving a talk. After arriving and grabbing our speaker polo shirts we headed to the speaker room to say hello to the other speakers - some of whom I know from previous conferences and through organising DDD South West.</p>

<h2 id="speakers-room">Speakers&rsquo; Room</h2>

<p>I always enjoy spending time in the speakers&rsquo; room. As a relative newcomer to speaking I find it&rsquo;s a great opportunity to get some tips from more experienced speakers as well as geek out about programming. In addition, I still had some preparation to do, and the speakers&rsquo; room is a quiet place where I could refine and tweak my talk slides.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ddd-12-speakers-room.jpg" alt="" /></p>

<h2 id="talk-1-goodbye-rest-hello-graphql-by-sandeep-singh">Talk 1 - Goodbye REST; Hello GraphQL by Sandeep Singh</h2>

<p>Even though I had preparation to take care of for my post-lunch talk, I was determined to attend <a href="https://twitter.com/Initial_Spark">Sandeep Singh&rsquo;s</a> talk on GraphQL as it&rsquo;s a technology I&rsquo;ve heard lots about via podcasts and have been keen to learn more about. In addition, working at Just Eat, where we have a lot of distributed services that are extremely chatty over HTTP, I was interested to see if and where GraphQL could help.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ddd-12-graphql.jpg" alt="" /></p>

<p>Having met Sandeep for the first time at DDD South West he&rsquo;s clearly a knowledgeable guy, so I was expecting great things, and he delivered. The talk was very informative: by the end Sandeep had demonstrated the power of GraphQL (along with the well-balanced considerations that need to be made) and answered the majority of the questions forming in my notes as the talk progressed. It&rsquo;s definitely sparked my interest in GraphQL and I&rsquo;m keen to start playing with it.</p>

<h2 id="my-talk-building-a-better-web-api-architecture-using-cqrs">My talk - Building a better Web API architecture using CQRS</h2>

<p>Having submitted two talks to DDD Reading (this and my Razor Deep Dive I delivered at DDD South West), the one that received the most votes was this talk, a topic and architectural style I&rsquo;ve been extremely interested in for a number of years now (long before MediatR was a thing!).</p>

<p>Having spoken at DDD Reading before, this year my talk was held in a rather intimidating room called Chicago, which seats up to 90 and has a stage overlooking the attendees. All in all I was happy with the way the talk went, however I did burn through my slides far faster than I had during practice. Luckily the attendees had plenty of questions, so I had the opportunity to answer and expand on them with the remaining time. I&rsquo;ll chalk this down to experience and learn from it.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ddd-12-talk.jpg" alt="" /></p>

<p>I must say that one of my concerns whilst preparing the talk was the split opinions around what CQRS really is, and how it differs from Bertrand Meyer&rsquo;s formulation of CQS (coincidentally, there was a healthy debate around the definition moments before my talk in the speakers&rsquo; room between two speakers well-versed in the area!).</p>

<h2 id="talk-3-async-in-c-the-good-the-bad-and-the-ugly-by-stuart-lang">Talk 3 - Async in C# - The Good, the Bad and the Ugly by Stuart Lang</h2>

<p>Having worked with Stuart for a year now and known him for slightly longer, it&rsquo;s clear that his knowledge of async/await is spot on, and certainly far deeper than that of any other developer I&rsquo;ve met. Having seen a slightly different version of Stuart&rsquo;s talk delivered internally at Just Eat I was familiar with the narrative, however I was keen to attend this talk because C#&rsquo;s async language construct is a deep area and one I&rsquo;m interested in.</p>

<p>Overall the talk went really well, with a nice break in the middle allowing time for questions before moving on (something I may have to try in my own talks moving forward).</p>

<h2 id="conclusion">Conclusion</h2>

<p>Overall DDD 12 was an awesome day. I&rsquo;d love to have attended more talks and spent more time speaking with people, but keen to deliver my talk to the best of my ability, I had work to be done. Nonetheless, after my talk was over it was great catching up with familiar faces and meeting new people (easily one of my favourite parts of a conference as I&rsquo;m a bit of a chatterbox!).</p>

<p>I&rsquo;ll close this post with a massive thanks to the event organisers (having helped organise DDD South West this year I have full appreciation for the hard work and time that goes into organising such an event), and also the sponsors - without them the conference would not have been possible.</p>

<p>Until next year!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 12 Jun 2017 23:57:30 +0000</pubDate></item><item><title>Another day, another fantastic DDD (DDD South West 7)</title><link>https://josephwoodward.co.uk/2017/05/ddd-southwest-7</link><description></description><content:encoded><![CDATA[<p>Saturday 6th of May marked the day of another great DDD South West event. Having attended other DDD events around the UK I&rsquo;ve always felt they had a special community feel to them, a feeling I&rsquo;ve not felt at other conferences. This year&rsquo;s DDD South West event was particularly special to me not only because I was selected to speak at the conference, but because this year I was part of the organisation team.</p>

<p>This post is just a short summary of the highs and lows of organising the conference and the day itself.</p>

<h2 id="organising-the-conference">Organising the conference</h2>

<p>This year I was honoured to be part of the organisation team for DDD South West, and I loved every minute of it. The other organisers and I (there were 5 of us in total, some of whom have been organising the conference for 5 or 6 years or more!) would hold regular meetings via Skype, breaking down responsibilities such as sponsorship, catering, speaker-related tasks and finance. Initially these meetings were about a month apart, but as the conference drew closer and the pressure started to set in, we would meet weekly.</p>

<p>During the organising of DDD Southwest I&rsquo;ve gained a true appreciation for the amount of effort conference organisers (especially those that run non-profit events in their spare time, such as those I&rsquo;ve had the pleasure of working with) put in to organising an event for the community.</p>

<p>On the day everything went as expected with no hiccups, though as I was speaking on the day I was definitely a lot more stressed than I would have been otherwise. After the event we all headed over to the Just Eat offices for an after party, which I&rsquo;ll cover shortly.</p>

<p>For more information on the day, there are two fantastic write ups by <a href="http://blog.craigtp.co.uk/post/DDD-South-West-2017-In-Review">Craig Phillips</a> and <a href="http://www.danclarke.com/dddsw-bristol-2017/">Dan Clarke</a> that I&rsquo;d highly recommend reading.</p>

<h2 id="asp-net-core-razor-deep-dive-talk">ASP.NET Core Razor Deep Dive talk</h2>

<p>Whilst organising DDD South West 7, I figured why not pile the stress on and submit a few sessions. Last year I caught the speaking bug and this year I was keen to continue to pursue it, so I submitted 3 sessions on very different subjects and was quite surprised to see the ASP.NET Core Razor Deep Dive selected. It&rsquo;s not the sexiest topic to talk about, but nonetheless it was a great opportunity to share some experience and give people information they could take away and directly apply in real life (something I always try to do when putting together a talk).</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ddd_southwest_7_razor_talk.jpg" alt="" /></p>

<p>The talk itself was focused on the new Razor changes and features introduced in ASP.NET Core, why and where you&rsquo;d use them. The topics included:</p>

<ul>
<li>Razor as a view engine and its syntax</li>
<li>Tag helpers and how powerful they are</li>
<li>Creating your own tag helpers for some really interesting use cases - this part was especially great as I could see a lot of people in the audience had the &ldquo;light bulb&rdquo; moment. This brings me great joy as I know there&rsquo;s something they were able to take away from the talk.</li>
<li>Why Partial Views and Child Actions are limiting</li>
<li>View Components and how you can use them to create more modular, reusable views</li>
<li>The changes people can expect to see in Razor when ASP.NET Core 2.0 is released (ITagHelperComponents and Razor Pages)</li>
</ul>

<p>Overall I was really happy with the talk and the turnout. The feedback I received was great, with lots of mentions of the word &ldquo;engaging&rdquo; (which as a relatively new speaker still trying to find his own style, is always positive to hear).</p>

<h2 id="ddd-south-west-7-after-party">DDD South West 7 after party</h2>

<p>Once all was done and dusted and the conference drew to a close, a large majority of us took a 5 minute stroll over to the Just Eat offices for an after party where free pizza and beer were on offer for the attendees (thanks Just Eat!).</p>

<p>After a long day of ensuring the event was staying on track coupled with the stresses of talking, it was great to be able to unwind and enjoy spending time mingling with the various attendees and sponsors, all of whom made the event possible.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ddd_southwest_7_afterparty2.jpg" alt="" /></p>

<p>Bring on DDD South West 8!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 12 May 2017 05:22:19 +0000</pubDate></item><item><title>C# 7 ValueTuple types and their limitations</title><link>https://josephwoodward.co.uk/2017/04/csharp-7-valuetuple-types-and-their-limitations</link><description></description><content:encoded><![CDATA[<p>Having been experimenting and reading lots about C# 7&rsquo;s tuples recently I thought I&rsquo;d summarise some of the more interesting aspects about this new C# language feature whilst highlighting some areas and limitations that may not be apparent when first using them.</p>

<p>Hopefully by the end of this post you&rsquo;ll have a firm understanding of the new feature and how it differs in comparison to the Tuple type introduced in .NET 4.</p>

<h2 id="a-quick-look-at-tuple-types-as-a-language-feature">A quick look at tuple types as a language feature</h2>

<p>Prior to C# 7, .NET&rsquo;s tuples were an awkward, somewhat retrofitted approach to what is a powerful language feature. As a result you don&rsquo;t see them used as much as they are in other languages like Python or, to some extent, Go (which doesn&rsquo;t support tuples, but has many of the features they provide, such as multiple return values) - with this in mind it behoves me to briefly explain what tuples are and why you&rsquo;d use them, for those that may not have touched them before.</p>

<h2 id="so-what-are-tuples-and-where-you-would-use-them">So what are Tuples and where would you use them?</h2>

<p>The tuple type&rsquo;s main strength lies in allowing you to group values into a closely related data structure (much like creating a class to represent more than one value), meaning tuples are particularly useful in cases such as returning more than one value from a method, for instance:</p>

<pre><code class="language-csharp">public class ValidationResult {
    public string Message { get; set; }
    public bool IsValid { get; set; }
}

var result = ValidateX(x);
if (!result.IsValid)
{
    Logger.Log(Error, result.Message);
}
</code></pre>

<p>Whilst there&rsquo;s nothing wrong with this example, sometimes we don&rsquo;t want to have to create a type just to represent a set of data - we want types to work for us, not against us; this is where the tuple type&rsquo;s utility lies.</p>

<p>In fact, in a language such as Go (which allows multiple return values from a function) we see such a pattern used extensively throughout the standard library.</p>

<pre><code class="language-golang">bytes, err := ioutil.ReadFile(&quot;file.json&quot;)
if err != nil {
    log.Fatal(err)
}
</code></pre>

<p>Multiple return values can also save you from the convoluted <code>TryParse</code> method pattern with <code>out</code> parameters.</p>
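<p>As a quick sketch, here&rsquo;s the familiar <code>out</code>-parameter pattern next to a tuple-returning wrapper (note that <code>TryParseInt</code> is a hypothetical helper written for illustration, not part of the BCL):</p>

<pre><code class="language-csharp">using System;

class Program
{
    // Hypothetical helper: wraps the out-parameter pattern in a tuple return
    static (bool Success, int Value) TryParseInt(string input)
        =&gt; (int.TryParse(input, out var result), result);

    static void Main()
    {
        // Classic pattern: the result comes back via an out parameter
        if (int.TryParse(&quot;42&quot;, out int value))
        {
            Console.WriteLine(value); // 42
        }

        // Tuple version: both values come back together and can be deconstructed
        var (success, parsed) = TryParseInt(&quot;42&quot;);
        Console.WriteLine($&quot;{success}: {parsed}&quot;); // True: 42
    }
}
</code></pre>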

<p>Now we&rsquo;ve got that covered and we&rsquo;re all on the same page, let&rsquo;s continue.</p>

<h2 id="in-the-beginning-there-was-system-tuple-t1-t2-t7">In the beginning there was System.Tuple<T1, T2, .., T7></h2>

<p><strong>Verbosity</strong><br>
Back in .NET 4 we saw the appearance of the <code>System.Tuple&lt;T&gt;</code> class, which introduced a verbose and somewhat awkward API:</p>

<pre><code class="language-csharp">var person = new Tuple&lt;string, int&gt;(&quot;John Smith&quot;, 43);
Console.WriteLine($&quot;Name: {person.Item1}, Age: {person.Item2}&quot;);
// Output: Name: John Smith, Age: 43
</code></pre>

<p>Alternatively there&rsquo;s a static factory method that cleared things up a bit:</p>

<pre><code class="language-csharp">var person = Tuple.Create(&quot;John Smith&quot;, 43);
</code></pre>

<p>But there was still room for improvement such as:</p>

<p><strong>No named elements</strong><br>
One of the weaknesses of the <code>System.Tuple</code> type is that you have to refer to your elements as <code>Item1</code>, <code>Item2</code> etc instead of by their &lsquo;named&rsquo; version (allowing you to unpack a tuple and reference the properties directly) like you can in Python:</p>

<pre><code class="language-python">name, age = person_tuple
print name
</code></pre>

<p><strong>Garbage collection pressure</strong><br>
In addition the <code>System.Tuple</code> type is a reference type, meaning you pay the penalty of a heap allocation, thus increasing pressure on the garbage collector.</p>

<pre><code class="language-csharp">public class Tuple&lt;T1&gt; : IStructuralEquatable, IStructuralComparable, IComparable
</code></pre>

<p>Nonetheless the <code>System.Tuple</code> type scratched an itch and solved a problem, especially if you owned the API.</p>

<h2 id="c-7-tuples-to-the-rescue">C# 7 Tuples to the rescue</h2>

<p>With the introduction of the <code>System.ValueTuple</code> type in C# 7, a lot of these problems have been solved (it&rsquo;s worth mentioning that if you want to use or play with the new tuple type you&rsquo;re going to need to <a href="https://www.nuget.org/packages/System.ValueTuple/">download the following NuGet package</a>).</p>

<p>Now in C# 7 you can do such things as:</p>

<p><strong>Tuple literals</strong></p>

<pre><code class="language-csharp">// This is awesome and really clears things up; we can even directly reference the named value!

var person = (Name: &quot;John Smith&quot;, Age: 43);
Console.WriteLine(person.Name); // John Smith
</code></pre>

<p><strong>Tuple (multiple) return types</strong></p>

<pre><code class="language-csharp">(string, int) GetPerson()
{
    var name = &quot;John Smith&quot;;
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Item1); // John Smith
</code></pre>

<p><strong>Even named Tuple return types!</strong></p>

<pre><code class="language-csharp">(string Name, int Age) GetPerson()
{
    var name = &quot;John Smith&quot;;
    var age = 32;
    return (name, age);
}

var person = GetPerson();
Console.WriteLine(person.Name); // John Smith
</code></pre>

<p>If that wasn&rsquo;t enough you can also deconstruct types:</p>

<pre><code class="language-csharp">public class Person
{
    public string Name =&gt; &quot;John Smith&quot;;
    public int Age =&gt; 43;

    public void Deconstruct(out string name, out int age)
    {
        name = Name;
        age = Age;
    }
}

...

var person = new Person();
var (name, age) = person;

Console.WriteLine(name); // John Smith
</code></pre>

<p>As you can see the <code>System.ValueTuple</code> greatly improves on the older version, allowing you to write far more declarative and succinct code.</p>

<p><strong>It&rsquo;s a value type, baby!</strong><br>
In addition (if the name hadn&rsquo;t given it away!) C# 7&rsquo;s tuple type is now a value type, meaning there&rsquo;s no heap allocation and one less de-allocation to worry about when compacting the GC heap. This means the <code>ValueTuple</code> can be used in more performance-critical code.</p>

<p>Now going back to our original example where we created a type to represent the return value of our validation method, we can delete that type (because deleting code is always a great feeling) and clean things up a bit:</p>

<pre><code class="language-csharp">var (message, isValid) = ValidateX(x);
if (!isValid)
{
    Logger.Log(Log.Error, message);
}
</code></pre>

<p>Much better! We&rsquo;ve now got the same code without the need to create a separate type just to represent our return value.</p>
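<p>For completeness, here&rsquo;s a minimal sketch of what a tuple-returning <code>ValidateX</code> might look like (the validation rule itself is invented for illustration):</p>

<pre><code class="language-csharp">using System;

class Validator
{
    // Hypothetical validation: returns a message and a flag as a named tuple,
    // replacing the ValidationResult class from the earlier example
    static (string Message, bool IsValid) ValidateX(string x)
    {
        if (string.IsNullOrWhiteSpace(x))
            return (&quot;Value must not be empty&quot;, false);

        return (string.Empty, true);
    }

    static void Main()
    {
        var (message, isValid) = ValidateX(&quot;&quot;);
        if (!isValid)
        {
            Console.WriteLine(message); // Value must not be empty
        }
    }
}
</code></pre>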

<h2 id="c-7-tuple-s-limitations">C# 7 Tuple&rsquo;s limitations</h2>

<p>So far we&rsquo;ve looked at what makes the <code>ValueTuple</code> special, but in order to know the full story we should look at what limitations exist so we can make an educated decision on when and where to use them.</p>

<p>Let&rsquo;s take the same person tuple and serialise it to a JSON object. With our named elements we should expect to see an object that resembles our tuple.</p>

<pre><code class="language-csharp">var person = (Name: &quot;John&quot;, Last: &quot;Smith&quot;);
var result = JsonConvert.SerializeObject(person);

Console.WriteLine(result);
// {&quot;Item1&quot;:&quot;John&quot;,&quot;Item2&quot;:&quot;Smith&quot;}
</code></pre>

<p>Wait, what? Where have our keys gone?</p>

<p>To understand what&rsquo;s going on here we need to take a look at how ValueTuples work.</p>

<h2 id="how-the-c-7-valuetuple-type-works">How the C# 7 ValueTuple type works</h2>

<p>Let&rsquo;s take our <code>GetPerson</code> method example that returns a named tuple and check out the de-compiled source. No need to install a de-compiler for this; a really handy website called <a href="http://tryroslyn.azurewebsites.net">tryroslyn.azurewebsites.net</a> will do everything we need.</p>

<pre><code class="language-csharp">// Our code
using System;
public class C {
    public void M() {
        var person = GetPerson();
        Console.WriteLine(person.Name + &quot; is &quot; + person.Age);
    }
    
    (string Name, int Age) GetPerson()
    {
        var name = &quot;John Smith&quot;;
        var age = 32;
        return (name, age);
    }
}
</code></pre>

<p>You&rsquo;ll see that when de-compiled, the <code>GetPerson</code> method is simply syntactic sugar for the following:</p>

<pre><code class="language-csharp">// Our code de-compiled
public class C
{
    public void M()
    {
        ValueTuple&lt;string, int&gt; person = this.GetPerson();
        Console.WriteLine(person.Item1 + &quot; is &quot; + person.Item2);
    }
    [return: TupleElementNames(new string[] {
        &quot;Name&quot;,
        &quot;Age&quot;
    })]
    private ValueTuple&lt;string, int&gt; GetPerson()
    {
        string item = &quot;John Smith&quot;;
        int item2 = 32;
        return new ValueTuple&lt;string, int&gt;(item, item2);
    }
}
</code></pre>

<p>If you take a moment to look over the de-compiled source you&rsquo;ll see two areas of particular interest to us:</p>

<p>First of all, the named elements in our <code>Console.WriteLine()</code> call have gone, replaced with <code>Item1</code> and <code>Item2</code>. What&rsquo;s happened to our named elements? Looking further down the code you&rsquo;ll see they&rsquo;ve actually been pulled out and added via the <code>TupleElementNames</code> attribute.</p>

<pre><code class="language-csharp">...
[return: TupleElementNames(new string[] {
    &quot;Name&quot;,
    &quot;Age&quot;
})]
...
</code></pre>

<p>This is because the <code>ValueTuple</code> type&rsquo;s named elements are <strong>erased at runtime</strong>, meaning there&rsquo;s no runtime representation of them. In fact, if we were to view the IL (within the TryRoslyn website switch the Decompiled dropdown to IL), you&rsquo;ll see any mention of our named elements has completely vanished!</p>

<pre><code class="language-csharp">IL_0000: nop // Do nothing (No operation)
IL_0001: ldarg.0 // Load argument 0 onto the stack
IL_0002: call instance valuetype [System.ValueTuple]System.ValueTuple`2&lt;string, int32&gt; C::GetPerson() // Call method indicated on the stack with arguments
IL_0007: stloc.0 // Pop a value from stack into local variable 0
IL_0008: ldloc.0 // Load local variable 0 onto stack
IL_0009: ldfld !0 valuetype [System.ValueTuple]System.ValueTuple`2&lt;string, int32&gt;::Item1 // Push the value of field of object (or value type) obj, onto the stack
IL_000e: ldstr &quot; is &quot; // Push a string object for the literal string
IL_0013: ldloc.0 // Load local variable 0 onto stack
IL_0014: ldfld !1 valuetype [System.ValueTuple]System.ValueTuple`2&lt;string, int32&gt;::Item2 // Push the value of field of object (or value type) obj, onto the stack
IL_0019: box [mscorlib]System.Int32 // Convert a boxable value to its boxed form
IL_001e: call string [mscorlib]System.String::Concat(object, object, object) // Call method indicated on the stack with arguments
IL_0023: call void [mscorlib]System.Console::WriteLine(string) // Call method indicated on the stack with arguments
IL_0028: nop  // Do nothing (No operation)
IL_0029: ret  // Return from method, possibly with a value
</code></pre>

<p>So what does that mean to us as developers?</p>

<h2 id="no-reflection-on-named-elements">No reflection on named elements</h2>

<p>The absence of named elements in the compiled output means that it&rsquo;s not possible to retrieve those named elements via reflection, which limits the <code>ValueTuple</code>&rsquo;s utility.</p>

<p>This is because under the bonnet the compiler is erasing the named elements and reverting to the <code>Item1</code> and <code>Item2</code> properties, meaning our serialiser doesn&rsquo;t have access to the properties.</p>
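<p>One pragmatic workaround (a sketch, not the only option) is to project the tuple onto an anonymous type before serialising, since anonymous type property names <em>do</em> survive compilation:</p>

<pre><code class="language-csharp">using System;
using Newtonsoft.Json;

class Program
{
    static void Main()
    {
        var person = (Name: &quot;John&quot;, Last: &quot;Smith&quot;);

        // Anonymous types keep their property names at runtime,
        // so the serialiser can see them
        var json = JsonConvert.SerializeObject(new { person.Name, person.Last });

        Console.WriteLine(json);
        // {&quot;Name&quot;:&quot;John&quot;,&quot;Last&quot;:&quot;Smith&quot;}
    }
}
</code></pre>

<p>The trade-off is an extra allocation for the anonymous object, so this only makes sense at serialisation boundaries.</p>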

<p>I would highly recommend reading Marc Gravell&rsquo;s <a href="https://blog.marcgravell.com/2017/04/exploring-tuples-as-library-author.html">Exploring tuples as a library author</a> post where he discusses a similar hurdle when trying to use tuples within Dapper.</p>

<h2 id="no-dynamic-access-to-named-elements">No dynamic access to named elements</h2>

<p>This also means that casting your tuple to a dynamic object results in the loss of the named elements, as can be witnessed by running the following example:</p>

<pre><code class="language-csharp">var person = (Name: &quot;John&quot;, Last: &quot;Smith&quot;);
var dynamicPerson = (dynamic)person;
Console.WriteLine(dynamicPerson.Name);
</code></pre>

<p>Running it results in the following <code>RuntimeBinderException</code>:</p>

<pre><code>Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException: 'System.ValueTuple&lt;string,string&gt;' does not contain a definition for 'Name'
   at CallSite.Target(Closure , CallSite , Object )
   at CallSite.Target(Closure , CallSite , Object )
   at TupleDemo.Program.Main(String[] args) in /Users/josephwoodward/Dev/TupleDemo/Program.cs:line 16
</code></pre>

<p>Thanks to <a href="https://www.danielcrabtree.com/blog/99/c-sharp-7-dynamic-types-and-reflection-cannot-access-tuple-fields-by-name">Daniel Crabtree&rsquo;s post</a> for highlighting this!</p>

<h2 id="no-using-named-tuples-in-razor-views-either-unless-they-re-declared-in-your-view">No using named Tuples in Razor views either (unless they&rsquo;re declared in your view)</h2>

<p>Naturally, the name erasure in C# 7 tuples also means that you cannot use the names from your view models in your views. For instance:</p>

<pre><code class="language-csharp">public class ExampleViewModel {

    public (string Name, int Age) Person =&gt; (&quot;John Smith&quot;, 30);

}
</code></pre>

<pre><code class="language-csharp">public class HomeController : Controller
{
    ...
    public IActionResult About()
    {
        var model = new ExampleViewModel();

        return View(model);
    }
}
</code></pre>

<pre><code class="language-csharp">// About.cshtml
@model TupleDemo3.Models.ExampleViewModel

&lt;h1&gt;Hello @Model.Person.Name&lt;/h1&gt;
</code></pre>

<p>Results in the following error:</p>

<pre><code>'ValueTuple&lt;string, int&gt;' does not contain a definition for 'Name' and no extension method 'Name' accepting a first argument of type 'ValueTuple&lt;string, int&gt;' could be found (are you missing a using directive or an assembly reference?)
</code></pre>

<p>Though switching the print statement to <code>@Model.Person.Item1</code> outputs the result you&rsquo;d expect.</p>

<h2 id="conclusion">Conclusion</h2>

<p>That&rsquo;s enough about tuples for now. Some of the examples used in this post aren&rsquo;t approaches you&rsquo;d use in real life, but hopefully they demonstrate some of the limitations of the new type and where you can and can&rsquo;t use C# 7&rsquo;s new ValueTuple type.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 12 May 2017 05:22:19 +0000</pubDate></item><item><title>Setting up a local Selenium Grid using Docker and .NET Core</title><link>https://josephwoodward.co.uk/2017/03/setting-up-local-selenium-grid-using-docker-dot-net-core</link><description></description><content:encoded><![CDATA[<p>Since jumping on the Docker bandwagon I&rsquo;ve found its utility spans beyond the repeatable deployments and consistent runtime environment benefits that come with the use of containers. There&rsquo;s a whole host of tooling and use cases emerging which take advantage of containerisation technology; one use case that I recently discovered after a conversation with <a href="http://blog.orangelightning.co.uk">Phil Jones</a> via Twitter is the ability to quickly set up a Selenium Grid.</p>

<p>Setting up and configuring a Selenium Grid has never been a simple process, but thanks to Docker it&rsquo;s suddenly got a whole lot easier. In addition, you&rsquo;re now able to run your own Selenium Grid locally and greatly speed up your tests&rsquo; execution. If that isn&rsquo;t enough, another benefit is that because the tests execute inside of a Docker container, you&rsquo;ll no longer be blocked by your browser navigating the website you&rsquo;re testing!</p>

<p>Let&rsquo;s take a look at how this can be done.</p>

<p>Note: For the impatient, I&rsquo;ve put together a working example of the following post in a <a href="https://github.com/JosephWoodward/SeleniumGridDotNetCore">GitHub repository you can clone and run</a>.</p>

<h1 id="selenium-grid-docker-compose-file">Selenium Grid Docker Compose file</h1>

<p>For those that haven&rsquo;t touched Docker Compose (or Docker for that matter), a Docker Compose file is a YAML-based configuration document (often named <code>docker-compose.yml</code>) that allows you to configure your application&rsquo;s Docker environment.</p>

<p>Without Docker Compose you&rsquo;d need to manually run your individual <code>Dockerfile</code> files specifying their network connections and configuration parameters along the way. With Docker Compose you can configure everything in a single file and start your environment with a simple <code>docker-compose up</code> command.</p>
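<p>To make the contrast concrete, here&rsquo;s a rough sketch of the manual equivalent next to the Compose workflow (the image tags match those used below, but the exact <code>docker run</code> flags are illustrative):</p>

<pre><code class="language-bash"># Without Compose: create the network and start each container by hand
docker network create selenium_grid_internal
docker run -d --name selenium_hub --net selenium_grid_internal \
    -p 4444:4444 selenium/hub:3.0.1-aluminum
docker run -d --net selenium_grid_internal \
    -e HUB_PORT_4444_TCP_ADDR=selenium_hub -e HUB_PORT_4444_TCP_PORT=4444 \
    selenium/node-chrome-debug:3.0.1-aluminum

# With Compose: one command brings the whole grid up
docker-compose up
</code></pre>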

<p>Below is the Selenium Grid Docker Compose configuration you can copy and paste:</p>

<pre><code class="language-yml"># docker-compose.yml

version: '2'
services:
    selenium_hub:
        image: selenium/hub:3.0.1-aluminum
        container_name: selenium_hub
        privileged: true
        ports:
            - 4444:4444
        environment:
            - GRID_TIMEOUT=120000
            - GRID_BROWSER_TIMEOUT=120000
        networks:
            - selenium_grid_internal

    nodechrome1:
        image: selenium/node-chrome-debug:3.0.1-aluminum
        privileged: true
        depends_on:
            - selenium_hub
        ports:
            - 5900
        environment:
            - no_proxy=localhost
            - TZ=Europe/London
            - HUB_PORT_4444_TCP_ADDR=selenium_hub
            - HUB_PORT_4444_TCP_PORT=4444
        networks:
            - selenium_grid_internal

    nodechrome2:
        image: selenium/node-chrome-debug:3.0.1-aluminum
        privileged: true
        depends_on:
            - selenium_hub
        ports:
            - 5900
        environment:
            - no_proxy=localhost
            - TZ=Europe/London
            - HUB_PORT_4444_TCP_ADDR=selenium_hub
            - HUB_PORT_4444_TCP_PORT=4444
        networks:
            - selenium_grid_internal

networks:
    selenium_grid_internal:
</code></pre>

<p>In the above Docker Compose file we&rsquo;ve defined our Selenium Hub (<code>selenium_hub</code>) service, exposing it on port 4444 and attaching it to a custom network named <code>selenium_grid_internal</code> (which you&rsquo;ll see all of our nodes are on).</p>

<pre><code class="language-yml">selenium_hub:
    image: selenium/hub:3.0.1-aluminum
    container_name: selenium_hub
    privileged: true
    ports:
        - 4444:4444
    environment:
        - GRID_TIMEOUT=120000
        - GRID_BROWSER_TIMEOUT=120000
    networks:
        - selenium_grid_internal
</code></pre>

<p>All that&rsquo;s remaining at this point is to add our individual nodes. In this instance I&rsquo;ve added two Chrome based nodes, named <code>nodechrome1</code> and <code>nodechrome2</code>:</p>

<pre><code class="language-yml">nodechrome1:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    privileged: true
    depends_on:
        - selenium_hub
    ports:
        - 5900
    environment:
        - no_proxy=localhost
        - TZ=Europe/London
        - HUB_PORT_4444_TCP_ADDR=selenium_hub
        - HUB_PORT_4444_TCP_PORT=4444
    networks:
        - selenium_grid_internal

nodechrome2:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    ...
</code></pre>

<p><strong>Note:</strong> If you wanted to add Firefox to the mix then you can replace the <code>image:</code> value with the following Docker image:</p>

<pre><code class="language-yml">nodefirefox1:
    image: selenium/node-firefox-debug:3.0.1-aluminum
    ...
</code></pre>

<p>Now if we run <code>docker-compose up</code> you&rsquo;ll see our Selenium Grid environment will spring into action.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/core_selenium_grid_console.png" alt="" /></p>

<p>To verify everything is working correctly we can navigate to <code>http://0.0.0.0:4444</code> in our browser where we should be greeted with the following page:</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/core_selenium_grid_page.png" alt="" /></p>

<h2 id="connecting-selenium-grid-from-net-core">Connecting Selenium Grid from .NET Core</h2>

<p>At the time of writing this post the official Selenium NuGet package does not support .NET Standard, however there&rsquo;s a <a href="https://github.com/SeleniumHQ/selenium/pull/2269">pending pull request</a> which adds support (the pull request has been on hold for a while as the Selenium team wanted to wait for the tooling to stabilise). In the meantime the developer that added support has released it as a <a href="https://www.nuget.org/packages/CoreCompat.Selenium.WebDriver/3.2.0-beta003">separate NuGet package which can be downloaded here</a>.</p>

<p>Alternatively just create the following <code>.csproj</code> file and run the <code>dotnet restore</code> CLI command.</p>

<pre><code class="language-xml">&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;

  &lt;PropertyGroup&gt;
    &lt;OutputType&gt;Exe&lt;/OutputType&gt;
    &lt;TargetFramework&gt;netcoreapp1.0&lt;/TargetFramework&gt;
  &lt;/PropertyGroup&gt;

  &lt;ItemGroup&gt;
    &lt;PackageReference Include=&quot;Microsoft.NET.Test.Sdk&quot; Version=&quot;15.0.0-preview-20170123-02&quot; /&gt;
    &lt;PackageReference Include=&quot;xunit&quot; Version=&quot;2.2.0-beta5-build3474&quot; /&gt;
    &lt;PackageReference Include=&quot;xunit.runner.visualstudio&quot; Version=&quot;2.2.0-beta5-build1225&quot; /&gt;
    &lt;PackageReference Include=&quot;CoreCompat.Selenium.WebDriver&quot; Version=&quot;3.2.0-beta003&quot; /&gt;
  &lt;/ItemGroup&gt;

&lt;/Project&gt;
</code></pre>

<p>Next we&rsquo;ll create the following base class that will create a remote connection to our Selenium Grid:</p>

<pre><code class="language-csharp">public abstract class BaseTest
{
    private IWebDriver _driver;

    public IWebDriver GetDriver()
    {
        var capability = DesiredCapabilities.Chrome();
        if (_driver == null){
            _driver = new RemoteWebDriver(new Uri(&quot;http://0.0.0.0:4444/wd/hub/&quot;), capability, TimeSpan.FromSeconds(600));
        }

        return _driver;
    }
}
</code></pre>

<p>After that we&rsquo;ll create a very simple (and trivial) test that checks for the existence of an ID on google.co.uk.</p>

<pre><code class="language-csharp">public class UnitTest1 : BaseTest 
{

    [Fact]
    public void TestForId()
    {
        using (var driver = GetDriver())
        {
            driver.Navigate().GoToUrl(&quot;http://www.google.co.uk&quot;);
            var element = driver.FindElement(By.Id(&quot;lst-ib&quot;));
            Assert.True(element != null);
        }
    }

    ...

}
</code></pre>

<p>Now if we run our test (either via the <code>dotnet test</code> CLI command or from your editor of choice) we should see our Docker terminal console showing our Selenium Grid container jump into action as it starts executing the test on one of the registered Selenium Grid nodes.</p>

<p>At the moment we&rsquo;re only executing the one test so you&rsquo;ll only see one node running it, but as you start to add more tests across multiple classes the Selenium Grid hub will distribute those tests across its cluster of nodes, dramatically reducing your overall test execution time.</p>
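<p>As a side note, if you&rsquo;d rather not duplicate near-identical <code>nodechrome1</code>, <code>nodechrome2</code>, etc. blocks in your Compose file, one option (assuming you instead define a single generic <code>nodechrome</code> service) is to let Docker Compose scale the node service out for you:</p>

<p><code>docker-compose scale nodechrome=5</code></p>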

<p>If you&rsquo;d like to give this a try then I&rsquo;ve added all of the source code and Docker Compose file in a <a href="https://github.com/JosephWoodward/SeleniumGridDotNetCore">GitHub repository that you can clone and run</a>.</p>

<h2 id="the-drawbacks">The drawbacks</h2>

<p>Before closing there are a few drawbacks to this method of running tests, especially if you&rsquo;re planning on doing it locally (instead of setting a grid up on a Virtual Machine via Docker).</p>

<p><strong>Debugging is made harder</strong><br>
If you&rsquo;re planning on using Selenium Grid locally then you&rsquo;ll lose visibility of what&rsquo;s happening in the browser, as the tests run within a Docker container. This means that in order to see the state of the web page when a test fails you&rsquo;ll need to switch to local execution using the Chrome, Firefox or Internet Explorer driver.</p>
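<p>One partial mitigation worth noting: the <code>-debug</code> variants of the Selenium node images (which we&rsquo;re already using above, and whose port <code>5900</code> we&rsquo;ve exposed) run a VNC server, so you can still watch the browser if you need to. As a sketch - the container name below is illustrative, being whatever Compose generated on your machine:</p>

<p><code>docker port nodechrome1 5900</code></p>

<p>Pointing any VNC client at the host port this prints should show the browser session (the debug images ship with the default VNC password <code>secret</code>).</p>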

<p><strong>Reaching localhost from within a container</strong><br>
In this example we&rsquo;re executing the tests against an external domain (google.co.uk) that our container can resolve. However if you&rsquo;re planning on running tests against a local development environment then there will be some additional Docker configuration required to allow the container to access the Docker host&rsquo;s IP address.</p>
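<p>One common workaround is to inject your Docker host&rsquo;s IP into each node container via Compose&rsquo;s <code>extra_hosts</code> key. The alias and IP below are placeholders - substitute your own Docker bridge address (commonly <code>172.17.0.1</code> on Linux):</p>

<pre><code class="language-yml">nodechrome1:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    extra_hosts:
        # &quot;dockerhost&quot; is an arbitrary alias; point it at your Docker host's IP
        - &quot;dockerhost:172.17.0.1&quot;
</code></pre>

<p>Your tests can then navigate to <code>http://dockerhost:port</code> from within the node containers.</p>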

<h2 id="conclusion">Conclusion</h2>

<p>Hopefully this post has broadened your options around Selenium based testing and demonstrated how pervasive Docker is becoming. I&rsquo;m confident that as Docker (and other container technology for that matter) matures, we&rsquo;ll see it used for more use cases like the one we&rsquo;ve covered in this post.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 20 Mar 2017 13:28:33 +0000</pubDate></item><item><title>An in-depth look at the various ways of specifying the IP or host ASP.NET Core listens on</title><link>https://josephwoodward.co.uk/2017/02/many-different-ways-specifying-host-port-asp-net-core</link><description></description><content:encoded><![CDATA[<p>Recently I&rsquo;ve been working on an ASP.NET Core application where I&rsquo;ve needed to configure at runtime, the host address and port the webhost will listen on. For those that have built an ASP.NET Core app you&rsquo;ll know that the default approach generated by the .NET CLI is less than ideal in this case as it&rsquo;s hard-coded.</p>

<p>After a bit of digging I quickly realised there weren&rsquo;t any places that summarised all of the options available, so I thought I&rsquo;d summarise it all in a post.</p>

<p>Enough talk, let&rsquo;s begin.</p>

<h2 id="don-t-set-an-ip">Don&rsquo;t set an IP</h2>

<p>The first approach is to not specify any IP address (this means removing the .NET Core CLI template convention of using the <code>.UseUrls()</code> method). Without it the web host will listen on <code>localhost:5000</code> by default.</p>

<p>Whilst this approach is far from ideal, it is an option so deserves a place here.</p>

<h2 id="hard-coded-approach-via-useurls">Hard-coded approach via <code>.UseUrls()</code></h2>

<p>As I alluded to earlier, the default approach that .NET Core&rsquo;s CLI uses is to hard-code the IP address in your application&rsquo;s <code>program.cs</code> file via the <code>UseUrls(...)</code> extension method that&rsquo;s available on the <code>IWebHostBuilder</code> interface.</p>

<p>If you take a look at the <code>UseUrls</code> extension method&rsquo;s signature you&rsquo;ll see the argument is an unbounded string array, allowing you to specify more than one address for the web host to listen on. Depending on your development machine or network configuration this may be preferable, as listening on multiple addresses can save people running into issues between <code>localhost</code> vs <code>0.0.0.0</code> vs <code>127.0.0.1</code>.</p>

<pre><code class="language-csharp">public static IWebHostBuilder UseUrls(this IWebHostBuilder hostBuilder, params string[] urls);
</code></pre>

<p>Adding multiple addresses can be done either as separate string arguments or as a single string separated by semi-colons; both result in the same configuration.</p>

<pre><code class="language-csharp">var host = new WebHostBuilder()
    .UseConfiguration(config)
    .UseKestrel()
    .UseUrls(&quot;http://0.0.0.0:5000&quot;, &quot;http://localhost:5000&quot;)
    // .UseUrls(&quot;http://0.0.0.0:5000;http://localhost:5000&quot;) also works
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup&lt;Startup&gt;()
    .Build();
</code></pre>

<p>If you&rsquo;d rather not explicitly list every IP address to listen on you can use a wildcard instead, resulting in the web host binding to all IPv4 and IPv6 addresses on port 5000:</p>

<pre><code class="language-csharp">.UseUrls(&quot;http://*:5000&quot;)
</code></pre>

<p><strong>A bit about wildcards:</strong> The wildcard is not special in any way; anything not recognised as a valid IP address is bound to all IPv4 and IPv6 addresses, so <code>http://@£$%^&amp;*:5000</code> is considered the same as <code>&quot;http://*:5000&quot;</code> and vice versa.</p>

<p>Whilst this hard-coded approach makes it easy to get your application up and running, the very fact that it&rsquo;s hard-coded does make it difficult to configure externally via automation such as a continuous integration/deployment pipeline.</p>

<p><strong>Note:</strong> It&rsquo;s worth mentioning that the web host uses whichever option sets the binding address last, so setting one directly via <code>UseUrls(...)</code> as we are in this approach can override the other approaches listed in this post depending on where it appears in the chain - we&rsquo;ll go into this later.</p>

<h2 id="environment-variables">Environment variables</h2>

<p>You can also specify the IP your application listens on via an environment variable. To do this first you&rsquo;ll need to download the <a href="https://www.nuget.org/packages/Microsoft.Extensions.Configuration/">Microsoft.Extensions.Configuration package from NuGet</a> then call the <code>AddEnvironmentVariables()</code> extension method on your <code>ConfigurationBuilder</code> object like so:</p>

<pre><code class="language-csharp">public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .AddEnvironmentVariables()
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup&lt;Startup&gt;()
        .Build();

    host.Run();
}
</code></pre>

<p>Now if you were to set the following environment variable and run your application, it will listen on the address specified:</p>

<p><code>ASPNETCORE_URLS=https://*:5123</code></p>
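<p>For example, on a Unix-like shell you can scope the variable to a single run:</p>

<p><code>ASPNETCORE_URLS=&quot;https://*:5123&quot; dotnet run</code></p>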

<h2 id="command-line-argument">Command line argument</h2>

<p>Another option available is to supply the host name and port via a command line argument when your application is initially executed (notice how you can specify one or more addresses, just as we did above).</p>

<p><code>dotnet run --urls &quot;http://*:5000;http://*:6000&quot;</code></p>

<p>or</p>

<p><code>dotnet YourApp.dll --urls &quot;http://*:5000;http://*:6000&quot;</code></p>

<p>Before you can use command line arguments, you&rsquo;re going to need the <a href="https://www.nuget.org/packages/Microsoft.Extensions.Configuration.CommandLine/">Microsoft.Extensions.Configuration.CommandLine</a> package and update your <code>Program.cs</code> bootstrap configuration accordingly:</p>

<pre><code class="language-csharp">public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup&lt;Startup&gt;()
        .Build();

    host.Run();
}
</code></pre>

<p>Notice how I&rsquo;ve removed the <code>.UseUrls()</code> method; this prevents <code>.UseUrls()</code> from overwriting the address provided via the command line.</p>

<h2 id="hosting-json-approach">hosting.json approach</h2>

<p>Another popular approach to specifying your host and port is to read them from a <code>.json</code> file during application boot. Whilst you can name the configuration file anything, the common convention appears to be <code>hosting.json</code>, with the contents of the file containing the address you want your application to listen on:</p>

<pre><code class="language-json">{
  &quot;urls&quot;: &quot;http://*:5000&quot;
}
</code></pre>

<p>In order to use this approach you&rsquo;re first going to need to include the <a href="https://www.nuget.org/packages/Microsoft.Extensions.Configuration.Json/">Microsoft.Extensions.Configuration.Json package</a>, allowing you to load configurations via <code>.json</code> documents.</p>

<pre><code class="language-csharp">public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile(&quot;hosting.json&quot;, optional: true)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseConfiguration(config)
        .UseStartup&lt;Startup&gt;()
        .Build();

    host.Run();
}
</code></pre>

<p>Now when you run the <code>dotnet run</code> or <code>dotnet YourApp.dll</code> command you&rsquo;ll notice the output reflects the address specified within your <code>hosting.json</code> document.</p>

<p>Just a reminder that before publishing your application be sure to include your hosting file in your publishing options (in either your <code>project.json</code> or your <code>.csproj</code> file):</p>

<pre><code class="language-json">// project.json

&quot;publishOptions&quot;: {
    &quot;include&quot;: [
      &quot;wwwroot&quot;,
      &quot;Views&quot;,
      &quot;appsettings.json&quot;,
      &quot;web.config&quot;,
      &quot;hosting.json&quot;
    ]
}
</code></pre>

<pre><code class="language-xml">// YourApp.csproj

&lt;ItemGroup&gt;
  &lt;Content Update=&quot;wwwroot;Views;appsettings.json;web.config;hosting.json&quot;&gt;
    &lt;CopyToPublishDirectory&gt;PreserveNewest&lt;/CopyToPublishDirectory&gt;
  &lt;/Content&gt;
&lt;/ItemGroup&gt;
</code></pre>

<p>Out of all of the approaches available this has to be my preferred option. It&rsquo;s simple enough to overwrite or modify within your test/release pipeline, whilst removing the hurdles co-workers would otherwise need to jump through to download your source code and run the application (as opposed to the command line approach).</p>

<h2 id="order-of-preference">Order of preference</h2>

<p>When it comes down to the order in which the addresses are loaded, I would recommend you check out the <a href="ordering-importance">documentation here</a>, especially this snippet:</p>

<blockquote>
<p>You can override any of these environment variable values by specifying configuration (using UseConfiguration) or by setting the value explicitly (using UseUrls for instance). The host will use whichever option sets the value last. For this reason, UseIISIntegration must appear after UseUrls, because it replaces the URL with one dynamically provided by IIS. If you want to programmatically set the default URL to one value, but allow it to be overridden with configuration, you could configure the host as follows:</p>
</blockquote>

<pre><code class="language-csharp">var config = new ConfigurationBuilder()
   .AddCommandLine(args)
   .Build();

var host = new WebHostBuilder()
   .UseUrls(&quot;http://*:1000&quot;) // default URL
   .UseConfiguration(config) // override from command line
   .UseKestrel()
   .Build();
</code></pre>

<h2 id="conclusion">Conclusion</h2>

<p>Hopefully this post helped you gain a better understanding of the <strong>many</strong> options available to you when configuring what address you want your application to listen on; writing it has certainly helped me cement them in my mind!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 10 Feb 2017 17:57:01 +0000</pubDate></item><item><title>C# IL Viewer for Visual Studio Code using Roslyn side project </title><link>https://josephwoodward.co.uk/2017/01/c-sharp-il-viewer-vs-code-using-roslyn</link><description></description><content:encoded><![CDATA[<p>For the past couple of weeks I&rsquo;ve been working on an IL (Intermediate Language) Viewer for Visual Studio Code. As someone that develops on a Mac, I spend a lot of time doing C# in VS Code or JetBrains&rsquo; Rider editor - however neither of them have the ability to view the IL generated (I know JetBrains are working on this for Rider) so I set out to fix this problem as a side project.</p>

<p>As someone that&rsquo;s never written a Visual Studio Code extension before it was a bit of an ambitious first extension, but enjoyable nonetheless.</p>

<p>Today I released the first version of the IL Viewer (0.0.1) to the Visual Studio Code Marketplace so it&rsquo;s available to download and try via the link below:</p>

<p><strong>Download IL Viewer for Visual Studio Code</strong></p>

<p><a href="https://marketplace.visualstudio.com/items?itemName=josephwoodward.vscodeilviewer">Download C# IL Viewer for Visual Studio Code</a> or install it directly within Visual Studio Code by launching Quick Open (<code>CMD+P</code> for Mac or <code>CTRL+P</code> for Windows), pasting in the following command and pressing enter.</p>

<p><code>ext install vscodeilviewer</code></p>

<p>The <a href="https://github.com/JosephWoodward/VSCodeILViewer/">source code is all up on GitHub so feel free to take a look</a>, but be warned - it&rsquo;s a little messy right now as it was hacked together to get it working.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/il_viewer_animated.gif" alt="VS Code IL Viewer" /></p>

<p>For those interested in how it works, continue reading.</p>

<h2 id="how-does-it-work">How does it work?</h2>

<h3 id="visual-studio-code-and-http-service">Visual Studio Code and HTTP Service</h3>

<p>At its heart, Visual Studio Code is a glorified text editor. C# support is added via the hard work of the OmniSharp developers, which itself is backed by Roslyn. This means that in order to add any IL inspection capabilities I needed to either hook into OmniSharp or build my own external service that gets bundled within the extension. In the end I decided to go with the latter.</p>

<p>When Visual Studio Code loads and detects that the language is C# and a <code>project.json</code> file exists, it starts an external HTTP service (using Web API) which is <a href="https://github.com/JosephWoodward/VSCodeILViewer/tree/6bf641d58d51c177c2420e764d44ec982c07942e/src/IlViewer.WebApi">bundled within the extension</a>.</p>

<p>Moving forward I intend to switch this out for a console application communicating over <code>stdin</code> and <code>stdout</code>. This should speed up the overall responsiveness of the extension whilst reducing the resources required, but more importantly reduce the start up time of the IL viewer.</p>

<h3 id="inspecting-the-intermediate-language">Inspecting the Intermediate Language</h3>

<p>Initially I planned on making the Visual Studio Code IL Viewer extract the desired file&rsquo;s IL directly from its project&rsquo;s built DLL, however after a little experimentation this proved not to be ideal as it required the solution to be built in order to view the IL, and rebuilt for any inspection after changes, no matter how minor. It would also block the user from doing any work whilst the project was building.</p>

<p>In the end I settled on an approach that <a href="http://josephwoodward.co.uk/2016/12/in-memory-c-sharp-compilation-using-roslyn">builds just the .cs file you wish to inspect into an in memory assembly</a> then extracts the IL and displays it to the user.</p>

<p><strong>Including external dependencies</strong></p>

<p>One problem with compiling just the source code in memory is that it doesn&rsquo;t include any external dependencies. As you&rsquo;d expect, as soon as Roslyn encounters a reference for an external binary you get a compilation error. Luckily Roslyn has the ability to automatically include external dependencies via Roslyn&rsquo;s workspace API.</p>

<pre><code class="language-csharp">public static Compilation LoadWorkspace(string filePath)
{
    var projectWorkspace = new ProjectJsonWorkspace(filePath);

    var project = projectWorkspace.CurrentSolution.Projects.FirstOrDefault();
    var compilation = project.GetCompilationAsync().Result;

    return compilation;
}
</code></pre>

<p>After that the rest was relatively straightforward: I grab the syntax tree from the compilation unit of the project, load in any additional dependencies, then use Mono&rsquo;s Cecil library to extract the IL (as Cecil supports .NET Standard it does not require the Mono runtime).</p>

<p>Once I have the IL I return the contents as a HTTP response then display it to the user in Visual Studio Code&rsquo;s split pane window.</p>
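<p>To give a feel for how those two halves fit together, below is a minimal sketch (error handling elided, and the <code>compilation</code> variable assumed to come from a Roslyn workspace as shown earlier) of emitting the compilation to memory and walking the IL with Cecil:</p>

<pre><code class="language-csharp">using (var stream = new MemoryStream())
{
    // Emit the compiled assembly to memory rather than to disk
    EmitResult result = compilation.Emit(stream);
    if (!result.Success)
        throw new InvalidOperationException(&quot;Compilation failed&quot;);

    // Rewind the stream and hand it straight to Mono.Cecil
    stream.Seek(0, SeekOrigin.Begin);
    var assembly = AssemblyDefinition.ReadAssembly(stream);

    foreach (var type in assembly.MainModule.Types)
        foreach (var method in type.Methods)
            if (method.HasBody)
                foreach (var instruction in method.Body.Instructions)
                    Console.WriteLine(instruction);
}
</code></pre>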

<p>Below is a simplified diagram of how it&rsquo;s all tied together:</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ilviewer_diagram2.png" alt="IL Viewer" /></p>

<p>As I mentioned above, <a href="https://github.com/JosephWoodward/VSCodeILViewer">the source code is all available on GitHub</a> so feel free to take a look. Contributions are also very welcome!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 30 Jan 2017 14:06:44 +0000</pubDate></item><item><title>Year in review and looking at 2017</title><link>https://josephwoodward.co.uk/2017/01/year-in-review-looking-at-2017</link><description></description><content:encoded><![CDATA[<p>It&rsquo;s that time of the year again where I take a moment to reflect on the previous year and set some high level goals as to what I want to achieve or aim towards in the next.</p>

<p>I find these types of post really helpful at tracking progress and ensuring I stay on a positive projection as a software developer/engineer - a hobby that I&rsquo;m lucky enough to have as a career.</p>

<h2 id="review-of-2016">Review of 2016</h2>

<p>Last year was an amazing year for a couple of reasons, some planned and some not so. So, without further ado let&rsquo;s look at <a href="http://josephwoodward.co.uk/2016/02/personal-targets-and-goals-for-2016">last year&rsquo;s personal targets and goals</a> to see how I&rsquo;ve done.</p>

<h3 id="goal-1-speak-at-more-conferences-and-meet-ups">Goal 1: Speak at more conferences and meet-ups</h3>

<p>I&rsquo;m inclined to say I&rsquo;ve smashed this. At the time of writing last year&rsquo;s post (February 2016) my speaking activities included relatively small user groups and meet ups. Since that post I&rsquo;ve spoken at a few more meet ups but also at some rather large conferences which has been amazing.</p>

<p>The talks include:</p>

<ul>
<li><p>Why software engineering is a great career choice<br>
TechSpark Graduates Conference 2016 - 1st December 2016</p></li>

<li><p>Going cross-platform with ASP.NET Core<br>
BrisTech 2016 - 3rd November 2016</p></li>

<li><p>Going cross-platform with ASP.NET Core<br>
Tech Exeter Conference - 8th October 2016</p></li>

<li><p>Building rich client-side applications using Angular 2<br>
DDD 11 (Reading) - 3rd September 2016</p></li>

<li><p>.NET Rocks Podcast - Angular 2 CLI (Command Line Interface)<br>
24th August 2016</p></li>

<li><p>Angular 2<br>
BristolJS - May 25th, 2016</p></li>

<li><p>Angular 2<br>
Taunton Developers&rsquo; Meetup - June 9th, 2016</p></li>
</ul>

<p>It&rsquo;s a really great feeling reflecting on <a href="http://josephwoodward.co.uk/2015/03/presentation-from-recent-introduction-to-typescript-talk-i-gave/">my very first talk (a lightning talk on TypeScript)</a> and how nervous I was in comparison to now. Don&rsquo;t get me wrong, the nerves are still there, but as many will know - you&rsquo;re just able to cope with them better.</p>

<p>All in all I&rsquo;m extremely satisfied at how far I&rsquo;ve progressed in this area and how much my confidence speaking in public has grown. This is certainly something I wish to continue to do in 2017.</p>

<h3 id="goal-2-start-an-exeter-based-net-user-group">Goal 2: Start an Exeter based .NET User Group</h3>

<p>At the time of setting this goal I was working in Exeter, where there was no .NET focused user group - something that surprised me given the number of .NET specific jobs in the city.</p>

<p>To cut a long story short, I&rsquo;d started an Exeter .NET user group and was in the process of organising the first meet up when I got a job opportunity at Just Eat in Bristol. Around the same time the organiser of the <a href="meetup.com/dotnetsouthwest/">Bristol based .NET South West user group</a> was stepping down and planning to close the rather large, well-established group. Having been to a couple of its meet ups it would have been a shame to see it end, and given that I was now working in Bristol I decided to step forward and take it over along with a couple of the other members.</p>

<p>Since then we (the other organisers and I) have been really active in keeping .NET South West alive and well, organising a range of speakers on a variety of .NET related topics.</p>

<h3 id="goal-3-continue-contributing-to-the-net-oss-ecosystem">Goal 3: Continue contributing to the .NET OSS ecosystem</h3>

<p>This year I&rsquo;ve created a number of small open-source libraries and projects (the <a href="https://github.com/JosephWoodward/Angular2PianoNoteTrainingGame">Angular 2 piano sight reading game</a> being the one that&rsquo;s received the most attention). However whilst I&rsquo;ve been contributing on and off to other libraries, I don&rsquo;t feel my contributions have been at a level that I&rsquo;m happy with, so this will be a goal for 2017.</p>

<h3 id="bonus-goal-f">Bonus Goal - F#</h3>

<p>Making a start learning F# was one of my bonus goals that I was hoping to achieve during 2016, however other than making a few changes to a couple of lines of F#, this is a no-go.</p>

<p>In all honesty, I&rsquo;m still on the fence as to whether I want to learn F# - my only motivation being to learn a functional language (growing my knowledge and thinking in a different paradigm) - whereas there are other languages I&rsquo;m interested in learning that aren&rsquo;t necessarily tied to the .NET eco-system.</p>

<h2 id="other-notable-events-and-achievements-in-2016">Other notable events and achievements in 2016</h2>

<p>In addition to the aforementioned goals and achievements in 2016, there have also been others.</p>

<ul>
<li>New job as a .NET software engineer at Just Eat in Bristol - a seriously awesome company that has a lot of really talented developers and interesting problems to work on.</li>
<li>Co-organiser of DDD South West conference</li>
<li>Heavy investment of time in learning .NET Core and Docker</li>
<li>Became co-organiser of the <a href="meetup.com/dotnetsouthwest/">.NET South West user group</a></li>
</ul>

<h2 id="goals-for-2017">Goals for 2017</h2>

<p>With a review of 2016 out of the way let&rsquo;s take a quick look at plans for 2017.</p>

<h3 id="goal-1-continue-to-grow-as-a-speaker-speaking-at-larger-events">Goal 1: Continue to grow as a speaker, speaking at larger events</h3>

<p>I&rsquo;ve really loved speaking at meet ups and conferences, it&rsquo;s a truly rewarding experience both on a personal development and professional development level. There&rsquo;s very little more satisfying in life than pushing yourself outside of your comfort zone. So in 2017 I&rsquo;m really keen to continue to pursue this by talking at larger conferences and events.</p>

<h3 id="goal-2-more-focus-on-contributing-to-open-source-projects">Goal 2: More focus on contributing to open-source projects</h3>

<p>Whilst I&rsquo;m satisfied with my contributions to the open-source world, through personal projects and contributions to other projects, it&rsquo;s definitely an area I would like to continue to pursue. So in 2017 I&rsquo;m looking for larger projects I can invest in and contribute to on a long-term basis.</p>

<h3 id="goal-3-learn-another-language">Goal 3: Learn another language</h3>

<p>Whereas I previously set myself a 2016 goal of learning F#, this time around I&rsquo;m going to keep my options open. I&rsquo;ve recently been learning a little Go, but I&rsquo;m also really interested in Rust, so this year I&rsquo;m simply going to set a goal of learning a new language. As it stands it looks like it&rsquo;s between Go and Rust, with F# still being a possibility.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Overall it&rsquo;s been a great year, I&rsquo;m really keen to keep the pace up on public speaking as it&rsquo;s far too easy to rest on one&rsquo;s laurels, so here&rsquo;s to 2017 and the challenges it brings!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 10 Jan 2017 03:25:43 +0000</pubDate></item><item><title>In-memory C# compilation (and .dll generation) using Roslyn</title><link>https://josephwoodward.co.uk/2016/12/in-memory-c-sharp-compilation-using-roslyn</link><description></description><content:encoded><![CDATA[<p>Recently I&rsquo;ve been hard at work on my first Visual Studio Code extension and one of the requirements is to extract IL from a .dll binary. This introduces a question though, do I build the solution (blocking the user whilst their project is building), read the .dll from disk then extract the IL, or do I compile the project in memory behind the scenes, then stream the assembly to Roslyn? Ultimately I went with the later approach and was pleasantly surprised at how easy Roslyn makes this - surprised enough that I thought it deserved its own blog post.</p>

<p>Before we continue, let me take a moment to explain what Roslyn is for those that may not fully understand what it is.</p>

<h2 id="what-is-roslyn">What is Roslyn?</h2>

<p>Roslyn is an open-source C# and VB compiler as a service platform.</p>

<p>The key words to take away with you here are &ldquo;compiler as a service&rdquo;; let me explain.</p>

<p>Traditionally compilers have been a black box of secrets that are hard to extend or harness, especially for any tooling or code analysis purposes. Take ReSharper for instance; ReSharper has a lot of code analysis running under the bonnet that allows it to offer refactoring advice. In order for the ReSharper team to provide this they had to build their own analysis tools that would manually parse your solution&rsquo;s C# in line with the .NET runtime - the .NET platform provided no assistance with this, essentially meaning they had to duplicate a lot of the work the compiler was doing.</p>

<p>This has since changed with the introduction of Roslyn. For the past couple of years Microsoft have been rewriting the C# compiler in C# (I know, it&rsquo;s like a compiler Inception right?) and opening it up via a whole host of APIs that are easy to prod, poke and interrogate. This opening up of the C# compiler has resulted in a whole array of code analysis tooling such as better StyleCop integration and debugging tools like OzCode and the like. What&rsquo;s more, you can also harness Roslyn for other purposes such as <a href="http://josephwoodward.co.uk/2015/10/using-roslyn-to-look-for-code-smells">tests that fail as soon as common code smells are introduced into a project</a>.</p>

<h2 id="let-s-start">Let&rsquo;s start</h2>

<p>So now we all know what Roslyn is, let&rsquo;s take a look at how we can use it to compile a project in memory. In this post we will be taking some C# code written in plain text, turning it into a syntax tree that the compiler can understand then using Roslyn to compile it, resulting in a streaming in-memory assembly.</p>

<h3 id="create-our-project">Create our project</h3>

<p>In this instance I&rsquo;m using .NET Core on a Mac but this will also work on Windows, so let&rsquo;s begin by creating a new console application by using the .NET Core CLI.</p>

<p><code>dotnet new -t console</code></p>

<p>Now, add the following dependencies to your <code>project.json</code> file:</p>

<pre><code class="language-json">&quot;dependencies&quot;: {
    &quot;Microsoft.CodeAnalysis.CSharp.Workspaces&quot;: &quot;1.3.2&quot;,
    &quot;Mono.Cecil&quot;: &quot;0.10.0-beta1-v2&quot;,
    &quot;System.ValueTuple&quot;: &quot;4.3.0-preview1-24530-04&quot;
},
</code></pre>

<p>For those interested, here is a copy of the <code>project.json</code> file in its entirety:</p>

<pre><code class="language-json">{
  &quot;version&quot;: &quot;1.0.0-*&quot;,
  &quot;buildOptions&quot;: {
    &quot;debugType&quot;: &quot;portable&quot;,
    &quot;emitEntryPoint&quot;: true
  },
  &quot;dependencies&quot;: {
    &quot;Microsoft.CodeAnalysis.CSharp.Workspaces&quot;: &quot;1.3.2&quot;,
    &quot;Mono.Cecil&quot;: &quot;0.10.0-beta1-v2&quot;,
    &quot;System.ValueTuple&quot;: &quot;4.3.0-preview1-24530-04&quot;
  },
  &quot;frameworks&quot;: {
    &quot;netcoreapp1.1&quot;: {
      &quot;dependencies&quot;: {
        &quot;Microsoft.NETCore.App&quot;: {
          &quot;type&quot;: &quot;platform&quot;,
          &quot;version&quot;: &quot;1.0.1&quot;
        }
      },
      &quot;imports&quot;: &quot;portable-net45+win8+wp8+wpa81&quot;
    }
  }
}
</code></pre>

<p>Once we&rsquo;ve restored our project using the <code>dotnet restore</code> command, the next step is to define the source code we want to compile. This code could be read from a web form, a database or a file on disk; in this instance I&rsquo;m hard-coding it into the application as a string for simplicity.</p>

<pre><code class="language-csharp">public class Program {

    public static void Main(string[] args)
    {

        var code = @&quot;
        using System;
        public class ExampleClass {
            
            private readonly string _message;

            public ExampleClass()
            {
                _message = &quot;&quot;Hello World&quot;&quot;;
            }

            public string getMessage()
            {
                return _message;
            }

        }&quot;;

        CreateAssemblyDefinition(code);
    }

    public static void CreateAssemblyDefinition(string code)
    {
        var sourceLanguage = new CSharpLanguage();
        SyntaxTree syntaxTree = sourceLanguage.ParseText(code, SourceCodeKind.Regular);

        ...
    }

}
</code></pre>

<h3 id="getting-stuck-into-roslyn">Getting stuck into Roslyn</h3>

<p>Now we&rsquo;ve got the base of our project sorted, let&rsquo;s dive into some of the Roslyn API.</p>

<p>First we&rsquo;re going to want to create an interface we&rsquo;ll use to define the language we want to use. In this instance it&rsquo;ll be C#, but Roslyn also supports VB.</p>

<pre><code class="language-csharp">public interface ILanguageService
{
    SyntaxTree ParseText(string code, SourceCodeKind kind);

    Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations);
}
</code></pre>

<p>Next we&rsquo;re going to need to parse our plain text C#, so we&rsquo;ll begin by working on the implementation of the <code>ParseText</code> method.</p>

<pre><code class="language-csharp">public class CSharpLanguage : ILanguageService
{
    private static readonly LanguageVersion MaxLanguageVersion = Enum
        .GetValues(typeof(LanguageVersion))
        .Cast&lt;LanguageVersion&gt;()
        .Max();

    public SyntaxTree ParseText(string sourceCode, SourceCodeKind kind) {
        var options = new CSharpParseOptions(kind: kind, languageVersion: MaxLanguageVersion);

        // Return a syntax tree of our source code
        return CSharpSyntaxTree.ParseText(sourceCode, options);
    }

    public Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations) {
        throw new NotImplementedException();
    }
}
</code></pre>

<p>As you&rsquo;ll see, the implementation is rather straightforward and simply involves setting a few parse options, such as the language features we expect to parse (specified via the <code>languageVersion</code> parameter), along with the <code>SourceCodeKind</code> enum.</p>

<p><strong>Looking further into Roslyn&rsquo;s SyntaxTree</strong></p>

<p>At this point I feel it&rsquo;s worth mentioning that if you&rsquo;re interested in learning more about Roslyn then I would recommend spending a bit of time looking into Roslyn&rsquo;s Syntax Tree API. <a href="https://joshvarty.wordpress.com/2014/07/06/learn-roslyn-now-part-2-analyzing-syntax-trees-with-linq/">Josh Varty&rsquo;s posts on this subject</a> are a great resource I would recommend.</p>

<p>I would also recommend taking a look at <a href="https://www.linqpad.net">LINQPad</a>, which amongst other great features, has the ability to show you the syntax tree Roslyn generates for your code. For instance, here is the generated syntax tree of the <code>ExampleClass</code> code we&rsquo;re using in this post:</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/linqpad_tree2.png" alt="http://assets.josephwoodward.co.uk/blog/linqpad_tree2.png" /></p>

<p>Now that our C# has been parsed and turned into a data structure the compiler can understand, let&rsquo;s look at using Roslyn to compile it.</p>

<h2 id="compiling-our-syntax-tree">Compiling our Syntax Tree</h2>

<p>Continuing with the <code>CreateAssemblyDefinition</code> method, let&rsquo;s compile our syntax tree:</p>

<pre><code class="language-csharp">public static void CreateAssemblyDefinition(string code)
{
    var sourceLanguage = new CSharpLanguage();
    SyntaxTree syntaxTree = sourceLanguage.ParseText(code, SourceCodeKind.Regular);

    Compilation compilation = sourceLanguage
      .CreateLibraryCompilation(assemblyName: &quot;InMemoryAssembly&quot;, enableOptimisations: false)
      .AddReferences(_references)
      .AddSyntaxTrees(syntaxTree);

    ...
}
</code></pre>

<p>At this point we&rsquo;re going to want to fill in the implementation of our <code>CreateLibraryCompilation</code> method within our <code>CSharpLanguage</code> class. We&rsquo;ll start this by passing the appropriate arguments into an instance of <code>CSharpCompilationOptions</code>. This includes:</p>

<ul>
<li><code>outputKind</code> - We&rsquo;re outputting a dynamic-link library (DLL)</li>
<li><code>optimizationLevel</code> - Whether we want our C# output to be optimised</li>
<li><code>allowUnsafe</code> - Whether we want our C# code to allow the use of unsafe code or not</li>
</ul>

<pre><code class="language-csharp">public class CSharpLanguage : ILanguageService
{
    private readonly IReadOnlyCollection&lt;MetadataReference&gt; _references = new[] {
          MetadataReference.CreateFromFile(typeof(Binder).GetTypeInfo().Assembly.Location),
          MetadataReference.CreateFromFile(typeof(ValueTuple&lt;&gt;).GetTypeInfo().Assembly.Location)
      };

    ...

    public Compilation CreateLibraryCompilation(string assemblyName, bool enableOptimisations) {
      var options = new CSharpCompilationOptions(
          OutputKind.DynamicallyLinkedLibrary,
          optimizationLevel: enableOptimisations ? OptimizationLevel.Release : OptimizationLevel.Debug,
          allowUnsafe: true);

      return CSharpCompilation.Create(assemblyName, options: options, references: _references);
  }
}
</code></pre>

<p>Now we&rsquo;ve specified our compiler options, we invoke the <code>Create</code> factory method, where we also specify the name we want our in-memory assembly to have (<code>InMemoryAssembly</code> in our case, passed in when calling our <code>CreateLibraryCompilation</code> method), along with the additional references required to compile our source code. In this instance, as we&rsquo;re targeting C# 7, we need to supply the compilation with the <a href="https://www.nuget.org/packages/System.ValueTuple/">ValueTuple struct</a> implementation. If we were targeting an older version of C# then this would not be required.</p>

<p>All that&rsquo;s left to do now is to call Roslyn&rsquo;s <code>Emit(Stream stream)</code> method, which writes the compiled assembly to the <code>Stream</code> we pass in, and we&rsquo;re sorted!</p>

<pre><code class="language-csharp">public static void CreateAssemblyDefinition(string code)
{
    ...

    Compilation compilation = sourceLanguage
        .CreateLibraryCompilation(assemblyName: &quot;InMemoryAssembly&quot;, enableOptimisations: false)
        .AddReferences(_references)
        .AddSyntaxTrees(syntaxTree);

    var stream = new MemoryStream();
    var emitResult = compilation.Emit(stream);
    
    if (emitResult.Success){
        stream.Seek(0, SeekOrigin.Begin);
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(stream);
    }
}
</code></pre>

<p>From here I&rsquo;m then able to pass my <code>AssemblyDefinition</code> to a method that extracts the IL and I&rsquo;m good to go!</p>
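<p>For illustration, here&rsquo;s a minimal sketch of what that IL extraction might look like using Mono.Cecil&rsquo;s object model (the helper method name below is hypothetical; the Cecil types it walks are real):</p>

<pre><code class="language-csharp">// Hypothetical helper: walk every method body in the in-memory assembly
// and print its IL instructions using Mono.Cecil's object model.
public static void PrintInstructions(AssemblyDefinition assembly)
{
    foreach (var module in assembly.Modules)
    {
        foreach (var type in module.Types)
        {
            foreach (var method in type.Methods)
            {
                // Abstract methods and interface members have no IL body
                if (!method.HasBody) continue;

                foreach (var instruction in method.Body.Instructions)
                {
                    // e.g. &quot;ldstr Hello World&quot;
                    Console.WriteLine($&quot;{instruction.OpCode} {instruction.Operand}&quot;);
                }
            }
        }
    }
}
</code></pre>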

<h2 id="conclusion">Conclusion</h2>

<p>Whilst this post is quite narrow in its focus (I can&rsquo;t imagine everyone is looking to compile C# in memory!), hopefully it&rsquo;s served as a primer, piquing your interest in Roslyn and what it&rsquo;s capable of doing. Roslyn is a truly powerful platform that I wish more languages offered. As mentioned before, there are some great resources available that go into much more depth. I would especially recommend <a href="https://joshvarty.wordpress.com/learn-roslyn-now/">Josh Varty&rsquo;s posts on the subject</a>.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 28 Dec 2016 20:53:39 +0000</pubDate></item><item><title>In-memory testing using ASP.NET Core</title><link>https://josephwoodward.co.uk/2016/12/in-memory-testing-using-asp-net-core</link><description></description><content:encoded><![CDATA[<p>A fundamental problem with integration testing, compared to finer-grained tests such as unit tests, is that you need a running instance of your application to test against: you have to spin it up so it can be reached over HTTP, run your tests, and then spin it down afterwards.</p>

<p>Spinning up instances of your application can lead to a lot of additional work when it comes to running your tests within any type of continuous deployment or delivery pipeline. This has certainly become easier with the introduction of the cloud, but it still requires a reasonable investment of time and effort to set up, as well as slowing down your deployment/delivery pipeline.</p>

<p>An alternative approach to running your integration or end-to-end tests is to utilise in-memory testing. This is where your application is spun up in memory via an in-memory server and has the tests run against it. An additional benefit to running your tests this way is you&rsquo;re no longer testing any of your host OS&rsquo;s network stack either (which in most cases will be configured differently to your production server&rsquo;s stack anyway).</p>

<h2 id="testserver-package">TestServer package</h2>

<p>Thankfully in-memory testing can be performed easily in ASP.NET Core thanks to the <code>Microsoft.AspNetCore.TestHost</code> NuGet package.</p>

<p>Let&rsquo;s take a moment to look at the TestServer API exposed by the <code>TestHost</code> library:</p>

<pre><code class="language-csharp">public class TestServer : IServer, IDisposable
{
    public TestServer(IWebHostBuilder builder);

    public Uri BaseAddress { get; set; }
    public IWebHost Host { get; }

    public HttpClient CreateClient();
    public HttpMessageHandler CreateHandler();
    
    public RequestBuilder CreateRequest(string path);
    public WebSocketClient CreateWebSocketClient();
    public void Dispose();
}
</code></pre>

<p>As you&rsquo;ll see, the API has all the necessary endpoints we&rsquo;ll need to spin our application up in memory.</p>

<p>For those that are regular readers of my blog, you&rsquo;ll remember we <a href="http://josephwoodward.co.uk/2016/07/integration-testing-asp-net-core-middleware">used the same TestServer package to run integration tests on middleware</a> back in July. This time we&rsquo;ll be using it to run our Web API application in memory and run our tests against it. We&rsquo;ll then assert that the response received is expected.</p>

<p>Enough talk, let&rsquo;s get started.</p>

<h2 id="running-web-api-in-memory">Running Web API in memory</h2>

<h3 id="setting-up-web-api">Setting up Web API</h3>

<p>In this instance I&rsquo;m going to be using ASP.NET Core Web API. In my case I&rsquo;ve created a small Web API project using the ASP.NET Core Yeoman project template. You&rsquo;ll also note that I&rsquo;ve stripped a few things out of the application to make the post easier to follow. Here are the few files that really matter:</p>

<p><strong>Startup.cs (nothing out of the ordinary here)</strong></p>

<pre><code class="language-csharp">public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile(&quot;appsettings.json&quot;, optional: true, reloadOnChange: true)
            .AddJsonFile($&quot;appsettings.{env.EnvironmentName}.json&quot;, optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection(&quot;Logging&quot;));
        loggerFactory.AddDebug();

        app.UseMvc();
    }
}
</code></pre>

<p><strong>ValuesController.cs</strong></p>

<pre><code class="language-csharp">[Route(&quot;api/[controller]&quot;)]
public class ValuesController : Controller
{
    [HttpGet]
    public string Get()
    {
        return &quot;Hello World!&quot;;
    }
}
</code></pre>

<p>All we&rsquo;ve got here is a simple Web API application that returns a single <code>Hello World!</code> value from <code>ValuesController</code> when you fire a <code>GET</code> request at <code>/api/values/</code>.</p>

<h2 id="running-web-api-in-memory-1">Running Web API in memory</h2>

<p>At this point I&rsquo;ve created a test project alongside my Web API one and added the <code>Microsoft.AspNetCore.TestHost</code> package to my test project&rsquo;s <code>project.json</code> file.</p>

<p><strong>project.json</strong></p>

<pre><code class="language-json">...
&quot;dependencies&quot;: {
    &quot;dotnet-test-xunit&quot;: &quot;2.2.0-preview2-build1029&quot;,
    &quot;xunit&quot;: &quot;2.2.0-beta2-build3300&quot;,
    &quot;Microsoft.AspNetCore.TestHost&quot;: &quot;1.0.0&quot;,
    &quot;TestWebAPIApplication&quot;:{
        &quot;target&quot;:&quot;project&quot;
    }
},
...
</code></pre>

<p>Next, we&rsquo;ll create our first test class and bootstrap our Web API project. Pay particular attention to the web application&rsquo;s <code>Startup</code> class being passed into the <code>WebHostBuilder</code>&rsquo;s <code>UseStartup&lt;T&gt;</code> method. You&rsquo;ll notice this is exactly the same way we bootstrap our application within <code>Program.cs</code> (the entry point we use when deploying our application).</p>

<pre><code class="language-csharp">public class ExampleTestClass
{
    private IWebHostBuilder CreateWebHostBuilder(){
        var config = new ConfigurationBuilder().Build();
        
        var host = new WebHostBuilder()
            .UseConfiguration(config)
            .UseStartup&lt;Startup&gt;();

        return host;
    }

    ...
}
</code></pre>

<h2 id="writing-our-test">Writing our test</h2>

<p>At this point we&rsquo;re ready to write our test, so let&rsquo;s create a new instance of <code>TestServer</code>, which takes an instance of <code>IWebHostBuilder</code>:</p>

<pre><code class="language-csharp">public TestServer(IWebHostBuilder builder);
</code></pre>

<p>As you can see from the following trivial example, we&rsquo;re simply capturing the response from the controller invoked when calling <code>/api/values</code>, which in our case is the <code>ValuesController</code>.</p>

<pre><code class="language-csharp">[Fact]
public async Task PassingTest()
{
    var webHostBuilder = CreateWebHostBuilder();
    var server = new TestServer(webHostBuilder);

    using(var client = server.CreateClient()){
        var requestMessage = new HttpRequestMessage(new HttpMethod(&quot;GET&quot;), &quot;/api/values/&quot;);
        var responseMessage = await client.SendAsync(requestMessage);

        var content = await responseMessage.Content.ReadAsStringAsync();

        Assert.Equal(&quot;Hello World!&quot;, content);
    }
}
</code></pre>

<p>Now, when we run the test, the <code>Assert.Equal</code> assertion should pass and we should see the test succeed.</p>

<pre><code class="language-text">Running test UnitTest.Class1.PassingTest...
Test passed 
</code></pre>

<h2 id="conclusion">Conclusion</h2>

<p>Hopefully this post has given you enough insight into how you can run your application in memory for purposes such as integration or feature testing. Naturally there&rsquo;s a lot more you could do to simplify and speed up the tests by limiting the number of times <code>TestServer</code> is created.</p>
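<p>As a hypothetical sketch of that idea (the fixture and class names below are mine, not part of the framework), xUnit&rsquo;s <code>IClassFixture&lt;T&gt;</code> can be used to create the <code>TestServer</code> once and share it across every test in a class:</p>

<pre><code class="language-csharp">// Hypothetical fixture: the server is created once per test class and
// disposed when the class's tests have finished running.
public class TestServerFixture : IDisposable
{
    public TestServer Server { get; }

    public TestServerFixture()
    {
        var config = new ConfigurationBuilder().Build();

        var host = new WebHostBuilder()
            .UseConfiguration(config)
            .UseStartup&lt;Startup&gt;();

        Server = new TestServer(host);
    }

    public void Dispose() =&gt; Server.Dispose();
}

public class ValuesControllerTests : IClassFixture&lt;TestServerFixture&gt;
{
    private readonly TestServer _server;

    // xUnit injects the shared fixture into each test class instance
    public ValuesControllerTests(TestServerFixture fixture)
    {
        _server = fixture.Server;
    }

    [Fact]
    public async Task ReturnsHelloWorld()
    {
        using (var client = _server.CreateClient())
        {
            var response = await client.GetAsync(&quot;/api/values/&quot;);
            Assert.Equal(&quot;Hello World!&quot;, await response.Content.ReadAsStringAsync());
        }
    }
}
</code></pre>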
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 06 Dec 2016 20:23:12 +0000</pubDate></item><item><title>New Blog, now .NET Core, Docker and Linux powered - and soon to be open-sourced</title><link>https://josephwoodward.co.uk/2016/11/new-blog-now-dot-net-core-docker-linux-soon-open-sourced</link><description></description><content:encoded><![CDATA[<p>A little while ago I concluded that my blog was looking a bit long in the tooth and decided that since I&rsquo;m looking at .NET Core, what better opportunity to put it through its paces than by rewriting my blog using it.</p>

<p>I won&rsquo;t go into too much detail as to how the blog is built, as it&rsquo;s nothing anyone wouldn&rsquo;t have seen before, but as someone that likes to read about people&rsquo;s experiences building things, I thought I&rsquo;d break down how it&rsquo;s built and what I learned whilst doing it.</p>

<p>Here&rsquo;s an overview of some of the technology I&rsquo;ve used:</p>

<ul>
<li>.NET Core / ASP.NET Core MVC</li>
<li>MediatR</li>
<li>Azure SQL</li>
<li>Dapper</li>
<li>Redis</li>
<li>Google Authentication</li>
<li>Docker</li>
<li>Nginx</li>
<li>Ubuntu</li>
</ul>

<h2 id="net-core-asp-net-core-mvc">.NET Core / ASP.NET Core MVC</h2>

<p>Having been following the development of .NET Core, I thought I&rsquo;d wait until things stabilised before I started migrating. Unfortunately I didn&rsquo;t wait long enough and had to go through the pain many developers experienced with breaking changes.</p>

<p>I also ran into various issues, such as waiting for libraries to migrate to .NET Standard, and other problems such as RSS feed generation, for which no .NET Standard libraries exist, primarily because <code>System.ServiceModel.Syndication</code> is <a href="https://github.com/dotnet/wcf/issues/76">not .NET Core compatible just yet</a>. None of these were deal breakers, with workarounds out there, but they nonetheless tripped me up along the way. That said, whilst running into these issues I did keep reminding myself that this is what happens when you start building with frameworks and libraries still in beta - so no hard feelings.</p>

<p>In fact, I&rsquo;ve been extremely impressed with the direction and features in ASP.NET Core and look forward to building more with it moving forward.</p>

<h2 id="mediatr">MediatR</h2>

<p>I&rsquo;ve never been a fan of the typical N-tier approach to building an application, primarily because it encourages you to split your application into various horizontal slices (generally UI, business logic and data access), which often leads to a rigid design filled with lots of very large mixed concerns. Instead I prefer breaking my application up into vertical slices based on features, such as Blog, Pages, Admin etc.</p>

<p>MediatR helps me do this and at the same time allows you to model your application&rsquo;s commands and queries, turning an HTTP request into a pipeline in which you handle the request and return a response. This has the added effect of keeping your controllers nice and skinny, as the only responsibility of the controller is to pass the request into MediatR&rsquo;s pipeline.</p>

<p>Below is a simplified example of what a controller looks like, forming the request then delegating it to the appropriate handler:</p>

<pre><code class="language-csharp">// Admin Controller
public class BlogAdminController {

    private readonly IMediator _mediator;

    public BlogAdminController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [Route(&quot;/admin/blog/edit/{id:int}&quot;)]
    public IActionResult Edit(BlogPostEdit.Query query)
    {
        BlogPostEdit.Response model = _mediator.Send(query);

        return View(model);
    }
}
</code></pre>

<pre><code class="language-csharp">public class BlogPostEdit
{
    public class Query : IRequest&lt;Response&gt;
    {
        public int Id { get; set; }
    }

    public class BlogPostEditRequestHandler : IRequestHandler&lt;Query, Response&gt;
    {
        private readonly IBlogAdminStorage _storage;

        public BlogPostEditRequestHandler(IBlogAdminStorage storage)
        {
            _storage = storage;
        }

        public Response Handle(Query request)
        {
            var blogPost = _storage.GetBlogPost(request.Id);
            if (blogPost == null)
                throw new RecordNotFoundException(string.Format(&quot;Blog post Id {0} not found&quot;, request.Id.ToString()));

            return new Response
            {
                BlogPostEditModel = blogPost
            };
        }
    }

    public class Response
    {
        public BlogPostEditModel BlogPostEditModel { get; set; }
    }
}
</code></pre>

<p>A powerful feature of MediatR&rsquo;s pipelining approach is that you can use the decorator pattern to handle cross-cutting concerns like caching, logging and even validation.</p>
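<p>As an illustrative sketch of that pattern (hypothetical code - <code>ICache</code> is an assumed abstraction, and the handler interface shape follows the MediatR version used in the snippets above, so treat it as a sketch rather than a drop-in implementation), a caching decorator wrapping any query handler might look like this:</p>

<pre><code class="language-csharp">// Hypothetical decorator: wraps an inner handler and caches its responses.
public class CachingHandlerDecorator&lt;TQuery, TResponse&gt; : IRequestHandler&lt;TQuery, TResponse&gt;
    where TQuery : IRequest&lt;TResponse&gt;
{
    private readonly IRequestHandler&lt;TQuery, TResponse&gt; _inner;
    private readonly ICache _cache;

    public CachingHandlerDecorator(IRequestHandler&lt;TQuery, TResponse&gt; inner, ICache cache)
    {
        _inner = inner;
        _cache = cache;
    }

    public TResponse Handle(TQuery message)
    {
        // Simplistic key: a real one would also incorporate the query's values
        var key = typeof(TQuery).FullName;

        // Serve the cached response if we've already handled this query
        TResponse cached;
        if (_cache.TryGet(key, out cached))
            return cached;

        var response = _inner.Handle(message);
        _cache.Set(key, response);
        return response;
    }
}
</code></pre>

<p>The container is then configured to resolve <code>IRequestHandler&lt;,&gt;</code> to the decorator, which receives the real handler as its inner dependency.</p>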

<p>If you&rsquo;re interested in reading more about MediatR then I&rsquo;d highly recommend <a href="https://vimeo.com/131633177">Jimmy Bogard&rsquo;s video on favouring slices rather than layers</a>, where he covers MediatR and its architectural benefits.</p>

<h2 id="google-authentication">Google Authentication</h2>

<p>I wanted to keep login simple, and not have to worry about storing passwords. With this in mind I decided to go with Google Authentication for logging in, which I cover in my <a href="http://josephwoodward.co.uk/2016/05/setting-up-google-oauth-asp-net-core-mvc">Social authentication via Google in ASP.NET Core MVC</a> post.</p>

<h2 id="docker-ubuntu-and-redis">Docker, Ubuntu and Redis</h2>

<p>Having read loads on Docker but never having played with it, migrating my blog to .NET Core seemed like a perfect opportunity to get stuck into Docker to see what all the fuss was about.</p>

<p>Having been using Docker for a couple of months now I&rsquo;m completely sold on how it changes the deployment and development landscape.</p>

<p>This isn&rsquo;t the right post to go into too much detail about Docker, but no doubt you&rsquo;re aware of roughly what it does by now and if you&rsquo;re considering taking it for a spin to see what it can do for you then I would highly recommend it.</p>

<p>Docker&rsquo;s made configuring my application to run on Ubuntu with Redis and Nginx an absolute breeze. No longer do I have to spin up individual services and packing website up and deploy it. Now I simply have to publish an image to a repository, pull it down to my host and run <code>docker-compose up</code>.</p>

<p>Don&rsquo;t get me wrong, Docker&rsquo;s certainly not the golden bullet that some say it is, but it&rsquo;s definitely going to make your life easier in most cases.</p>

<h2 id="open-sourcing-the-blog">Open-sourcing the blog</h2>

<p>I redeveloped the blog in mind of open-sourcing it, so once I&rsquo;ve finished tidying it up I&rsquo;ll put it up on <a href="http://github.com/JosephWoodward">my GitHub account</a> so you can download it and give it a try for yourself. It&rsquo;s no Orchard CMS, but it&rsquo;ll do the job for me - and potentially you.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sat, 26 Nov 2016 01:30:36 +0000</pubDate></item><item><title>Getting started with Elastic Search in Docker</title><link>https://josephwoodward.co.uk/2016/11/getting-started-with-elastic-search-in-docker</link><description></description><content:encoded><![CDATA[<p>Having recently spent a lot of time experimenting with Docker, I&rsquo;ve found that, beyond repeatable deployment and runtime environments, one of the great benefits promised by the containerisation movement is how it can supplement your local development environment.</p>

<p>No longer do you need to simultaneously waste time and slow down your local development machine by installing various services like Redis, Postgres and other dependencies. You can simply download a Docker image and boot up your development environment. Then, once you&rsquo;re finished with it, tear it down again.</p>

<p>In fact, a lot of the Docker images for such services are maintained by the development teams and companies themselves.</p>

<p>I&rsquo;d never fully appreciated this dream until recently, when I took part in the quarterly three-day hackathon at work, where time was valuable and we couldn&rsquo;t afford to waste it downloading and installing the JDK just to get Elastic Search running.</p>

<p>In fact, I was so impressed with Docker and Elastic Search that it compelled me to write this post.</p>

<p>So without further ado, let&rsquo;s get started.</p>

<h2 id="what-is-elastic-search">What is Elastic Search?</h2>

<p>Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents.</p>

<p>Now that that&rsquo;s out of the way, let&rsquo;s get going. First things first: you&rsquo;re going to need Docker.</p>

<h2 id="installing-docker">Installing Docker</h2>

<p>If the title didn&rsquo;t give it away, we&rsquo;re going to be setting up Elastic Search up locally using Docker, so if you don&rsquo;t have Docker installed then you&rsquo;re going to need to head over to the <a href="http://www.docker.com/products/docker">Docker download page</a> and download/install it.</p>

<h2 id="setting-up-elastic-search">Setting up Elastic Search</h2>

<p>Next we&rsquo;re going to need to find the Elastic Search Docker image.</p>

<p>To do this we&rsquo;re going to head over to <a href="https://hub.docker.com/explore/">Docker Hub</a> and search for Elastic Search (here&rsquo;s a <a href="https://hub.docker.com/_/elasticsearch/">direct link</a> for the lazy or those pressed for time).</p>

<p>What is Docker Hub? For those new to Docker, Docker Hub is a repository of popular Docker images, many of which are officially owned and supported by the owners of the software.</p>

<h2 id="pulling-and-running-the-elastic-search-docker-image">Pulling and running the Elastic Search Docker image</h2>

<p>To run the Elastic Search Docker image we&rsquo;re first going to need to pull it down to our local machine. To do this open your command prompt or terminal (any directory is fine as nothing is downloaded to the current directory) and execute the following Docker command:</p>

<pre><code>docker pull elasticsearch
</code></pre>

<h2 id="running-elastic-search">Running Elastic Search</h2>

<p>Next we want to run our Elastic Search image. To do that we need to type the following command into our terminal:</p>

<pre><code>docker run -d -p 9200:9200 -p 9300:9300 elasticsearch
</code></pre>

<p>Let&rsquo;s breakdown the command:</p>

<ul>
<li><p>First we&rsquo;re telling Docker we want to run an image in a container via the &lsquo;<em>run</em>&rsquo; command.</p></li>

<li><p>The <em>-d</em> argument will run the container in detached mode. This means it will run as a separate background process, as opposed to a short-lived process that runs and immediately terminates once it has finished executing.</p></li>

<li><p>Moving on, the <em>-p</em> arguments tell the container to open and bind our local machine&rsquo;s port 9200 and 9300 to port 9200 and 9300 in the Docker container.</p></li>

<li><p>Then at the end we specify the Docker image we wish to start running - in this case, the Elastic Search image.</p></li>
</ul>

<p><strong>Note:</strong> At this point, if you&rsquo;re new to Docker then it&rsquo;s worth knowing that our container&rsquo;s storage will be deleted when we tear down the container. If you wish to persist the data then you have to use the <em>-v</em> flag to map a directory on your local disk into the container, as opposed to the default location, which is inside the container itself.</p>

<p>If we want to map the volume to our local disk then we&rsquo;d need to run the following command instead of the one mentioned above:</p>

<pre><code>docker run -d -v &quot;$HOME/Documents/elasticsearchconf/&quot;:/usr/share/elasticsearch/data -p 9200:9200 -p 9300:9300 elasticsearch
</code></pre>

<p>This will map our <em>$HOME/Documents/elasticsearchconf</em> folder to the container&rsquo;s <em>/usr/share/elasticsearch/data</em> directory.</p>

<h2 id="checking-our-elastic-search-container-is-up-and-running">Checking our Elastic Search container is up and running</h2>

<p>If the above command worked successfully then the Elastic Search container should be up and running; we can check this by executing the following command, which lists all running containers:</p>

<pre><code>docker ps
</code></pre>

<p>To verify Elastic Search is running, you should also be able to navigate to <a href="http://localhost:9200">http://localhost:9200</a> and see output similar to this:</p>

<pre><code>{
  &quot;name&quot; : &quot;f0t5zUn&quot;,
  &quot;cluster_name&quot; : &quot;elasticsearch&quot;,
  &quot;cluster_uuid&quot; : &quot;3loHHcMnR_ekDxE1Yc1hpQ&quot;,
  &quot;version&quot; : {
    &quot;number&quot; : &quot;5.0.0&quot;,
    &quot;build_hash&quot; : &quot;253032b&quot;,
    &quot;build_date&quot; : &quot;2016-10-26T05:11:34.737Z&quot;,
    &quot;build_snapshot&quot; : false,
    &quot;lucene_version&quot; : &quot;6.2.0&quot;
  },
  &quot;tagline&quot; : &quot;You Know, for Search&quot;
}
</code></pre>

<p><strong>Container isn&rsquo;t running?</strong></p>

<p>If for some reason your container isn&rsquo;t running, you can run the following command to see all containers, whether running or not.</p>

<pre><code>docker ps -a
</code></pre>

<p>Then, once you&rsquo;ve identified the container you just tried to run (hint: it should be at the top), run the following command, including the first 3 or 4 characters from the Container Id column:</p>

<pre><code>docker logs 0403
</code></pre>

<p>This will print out the container&rsquo;s logs, giving you some information as to what could have gone wrong.</p>

<h2 id="connecting-to-elastic-search">Connecting to Elastic Search</h2>

<p>Now that our Docker container is up and running, let&rsquo;s get our hands dirty with Elastic Search via their RESTful API.</p>

<p><strong>Indexing data</strong></p>

<p>Let&rsquo;s begin by indexing some data in Elastic Search. We can do this by posting the following product to our desired index (where product is our index name, and television is our type):</p>

<pre><code>// HTTP POST:
http://localhost:9200/product/television

Message Body:
{&quot;Name&quot;: &quot;Samsung Flatscreen Television&quot;, &quot;Price&quot;: &quot;£899&quot;}
</code></pre>

<p>If successful, you should get the following response from Elastic Search:</p>

<pre><code>{
    &quot;_index&quot;:&quot;product&quot;,
    &quot;_type&quot;:&quot;television&quot;,
    &quot;_id&quot;:&quot;AVhIJ4ACuKcehcHggtFP&quot;,
    &quot;_version&quot;:1,
    &quot;result&quot;:&quot;created&quot;,
    &quot;_shards&quot;:{
        &quot;total&quot;:2,
        &quot;successful&quot;:1,
        &quot;failed&quot;:0
    },
    &quot;created&quot;:true
}
</code></pre>

<p><strong>Searching for data</strong></p>

<p>Now we&rsquo;ve got some data in our index, let&rsquo;s perform a simple search for it.</p>

<p>Performing the following GET request should return the following data:</p>

<pre><code>// HTTP GET:
http://localhost:9200/_search?q=samsung
</code></pre>

<p>The response should look something like this:</p>

<pre><code>{
    &quot;took&quot;:16,
    &quot;timed_out&quot;:false,
    &quot;_shards&quot;:{
        &quot;total&quot;:15,
        &quot;successful&quot;:15,
        &quot;failed&quot;:0
    },
    &quot;hits&quot;:{
        &quot;total&quot;:1,
        &quot;max_score&quot;:0.2876821,
        &quot;hits&quot;:[{
                &quot;_index&quot;:&quot;product&quot;,
                &quot;_type&quot;:&quot;television&quot;,
                &quot;_id&quot;:&quot;AVhIJ4ACuKcehcHggtFP&quot;,
                &quot;_score&quot;:0.2876821,
                &quot;_source&quot;: {
                    &quot;Name&quot;: &quot;Samsung Flatscreen Television&quot;,&quot;Price&quot;: &quot;£899&quot;
                }
            }]
        }
}
</code></pre>
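<p>The documents themselves live under <code>hits.hits[n]._source</code>, with the relevance score alongside in <code>_score</code>. A small sketch pulling the matched products out of the sample response above:</p>

```javascript
// Walk the hits array of an Elastic Search search response.
// The JSON mirrors the sample response shown above.
const searchResponse = JSON.parse(`{
    "took": 16,
    "timed_out": false,
    "hits": {
        "total": 1,
        "max_score": 0.2876821,
        "hits": [{
            "_index": "product",
            "_type": "television",
            "_id": "AVhIJ4ACuKcehcHggtFP",
            "_score": 0.2876821,
            "_source": { "Name": "Samsung Flatscreen Television", "Price": "£899" }
        }]
    }
}`);

// Project each hit down to the original document we indexed earlier.
const products = searchResponse.hits.hits.map(hit => hit._source.Name);
console.log(searchResponse.hits.total, products);
```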

<p>One of the powerful features of Elastic Search is its full-text search capabilities, enabling you to perform some truly impressive search queries against your indexed data.</p>

<p>For more on the search options available to you I would recommend you check out this resource.</p>

<p><strong>Deleting Indexed data</strong></p>

<p>To delete indexed data you can perform a delete request, passing the object ID, like so:</p>

<pre><code>// HTTP Delete
http://localhost:9200/product/television/AVhIJ4ACuKcehcHggtFP
</code></pre>

<h2 id="moving-forward">Moving Forward</h2>

<p>So far we&rsquo;ve been using Elastic Search&rsquo;s API to query our Elastic Search index. If you&rsquo;d prefer something more visual that will aid you in your exploration and discovery of the Elastic Search structured query syntax then I&rsquo;d highly recommend you check out <a href="https://github.com/mobz/elasticsearch-head">ElasticSearch-Head</a>; a web frontend for your Elastic Search cluster.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ElasticSearch_head_browse.png" alt="" /></p>

<p>To get started with ElasticSearch-Head you simply clone the repository to your local drive, open the <em>index.html</em> file and point it at your <a href="http://localhost:9200">http://localhost:9200</a> endpoint.</p>

<p>If you experience issues connecting your web client to your Dockerised Elastic Search cluster then it could be because of CORS permissions. Instead of fiddling around with configurations I simply installed and enabled this <a href="https://chrome.google.com/webstore/detail/allow-control-allow-origi/nlfbmbojpeacfghkpbjhddihlkkiljbi">Chrome plugin</a> to get around it.</p>

<p>Now you can use the web UI&rsquo;s search tab to discover more of Elastic Search&rsquo;s complex structured query syntax.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/ElasticSearch_head_search.png" alt="" /></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 15 Nov 2016 00:23:53 +0000</pubDate></item><item><title>Going cross-platform with ASP.NET Core talk at Bristech 2016</title><link>https://josephwoodward.co.uk/2016/11/going-cross-platform-with-asp-net-core-talk-bristech-2016</link><description></description><content:encoded><![CDATA[<p>Today I had the pleasure of delivering a last minute replacement talk on going cross-platform with ASP.NET Core at BrisTech 2016. The original speaker dropped out just a couple of days before the conference so I was more than happy to step in and deliver the talk. Overall I&rsquo;m really happy with how it turned out, and considering the talk was a last minute stand-in talk I was really pleased with the number of attendees.</p>

<p>During the talk I discussed the future of the .NET platform and Microsoft&rsquo;s cross-platform commitment, the performance benefits, differences between .NET and .NET Core including ASP.NET differences, then proceeded to demonstrate the new .NET command line interface and building an ASP.NET Core application using .NET Core.</p>

<p>To prove the point, so to speak - to demonstrate that .NET Core really is cross platform - I did all of this on a Macbook Pro.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/bristech2016.jpg" alt="Going cross-platform with ASP.NET Core talk at BrisTech 2016" /></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 03 Nov 2016 01:26:12 +0000</pubDate></item><item><title>Building rich client side apps using Angular 2 talk at DDD 11 in Reading</title><link>https://josephwoodward.co.uk/2016/09/building-rich-client-side-apps-using-angular-2-talk-ddd-11-reading</link><description></description><content:encoded><![CDATA[<p>This weekend I had the pleasure of speaking in front of this friendly bunch (pictured) at DDD 11, a .NET focused developers&rsquo; conference - this year hosted out of Microsoft&rsquo;s Reading offices.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/IMAG0469.jpg" alt="" /></p>

<p>My talk topic? Building Rich client-side applications using Angular 2.</p>

<p>As a regular speaker at meet ups and the occasional podcast, this year I&rsquo;ve been keen to step it up and move into the conference space. Speaking is something that I love doing, and DDD 11 was a great opportunity that I didn&rsquo;t want to miss.</p>

<p>Having spotted a tweet that DDD were accepting talk proposals a number of weeks ago, I quickly submitted a few talks I had up my sleeve - with Building rich client-side apps using Angular 2 being the talk that received enough votes to get me a speaking slot - yay!</p>

<p>My talk was one of the last talks of the day so I had to contend with a room full of sleepy heads, tired after a long day of networking and programming related sessions (myself included). With this in mind I decided it would be best to step up the energy levels with a hope of keeping people more engaged.</p>

<p>Overall I think the talk went well (though in hindsight I could have slowed down a little bit!) and received what I felt was a good reception, and it was great to have people in the audience (some of whom are speakers themselves) approach me afterwards saying they enjoyed the talk.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sat, 03 Sep 2016 01:15:54 +0000</pubDate></item><item><title>Executing JavaScript inside of .NET Core using JavaScriptServices</title><link>https://josephwoodward.co.uk/2016/09/executing-javascript-inside-dot-net-core-using-javascript-services</link><description></description><content:encoded><![CDATA[<p>Recently, we were lucky enough to have Steve Sanderson speak at <a href="http://meetup.com/dotnetsouthwest/">.NET South West</a>, a Bristol based .NET meet up I help organise. His talk was titled <a href="http://www.meetup.com/dotnetsouthwest/events/232015682/">SPAs (Single Page Applications) on ASP.NET Core</a> and featured a whole host of impressive tools and APIs he&rsquo;s been developing at Microsoft, all aimed at aiding developers building single page applications (including Angular, Knockout and React) on the ASP.NET Core platform.</p>

<p>As Steve was demonstrating all of these amazing APIs (including server side rendering of Angular 2/React applications, and Angular 2 validation that was integrated with .NET Core MVC&rsquo;s validation) the question on the tip of everyone&rsquo;s tongue was &ldquo;<em>How&rsquo;s he doing this?!</em>&rdquo;.</p>

<p>When the opportunity finally arose, Steve demonstrated what I think is one of the coolest parts of the talk - the JavaScriptServices middleware - the topic of this blog post.</p>

<p>Before continuing, if you develop single page apps in either Angular, React or Knockout then I&rsquo;d highly recommend you check out the talk, <a href="https://vimeo.com/180155167">which can also be found here</a>.</p>

<h2 id="what-is-javascriptservices">What is JavaScriptServices?</h2>

<p>JavaScriptServices is a .NET Core middleware library that plugs into the .NET Core pipeline and uses Node to execute JavaScript (naturally this also includes Node modules) at runtime. This means that in order to use JavaScriptServices <strong>you have to have Node installed on the host machine</strong>.</p>

<p>How does it work and what application does it have? Let&rsquo;s dive in and take a look!</p>

<h2 id="setting-up-javascriptservices">Setting up JavaScriptServices</h2>

<p>Before we continue, it&rsquo;s worth mentioning that it looks like the package is currently going through a rename (from NodeServices to JavaScriptServices) - so you&rsquo;ll notice the API and NuGet package are still named NodeServices, yet I&rsquo;m referring to JavaScriptServices throughout. Now that that&rsquo;s out of the way, let&rsquo;s continue!</p>

<p>First of all, as mentioned above, JavaScriptServices relies on Node being installed on the host machine, so if you don&rsquo;t have Node installed then head over to <a href="https://nodejs.org/">NodeJs.org</a> to download and install it. If you&rsquo;ve already got Node installed then you&rsquo;re good to continue.</p>

<p>As I alluded to earlier, setting up the JavaScriptServices middleware is as easy as setting up any other piece of middleware in the new .NET Core framework. Simply include the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.NodeServices/">JavaScriptServices NuGet package</a> in your solution:</p>

<pre><code>Install-Package Microsoft.AspNetCore.NodeServices -Pre
</code></pre>

<p>Then reference it in your Startup.cs file&rsquo;s ConfigureServices method:</p>

<pre><code>public void ConfigureServices(IServiceCollection services)
{
    services.AddNodeServices();
}
</code></pre>

<p>Now we have the following interface at our disposal for calling Node modules:</p>

<pre><code>public interface INodeServices : IDisposable
{
    Task&lt;T&gt; InvokeAsync&lt;T&gt;(string moduleName, params object[] args);
    Task&lt;T&gt; InvokeAsync&lt;T&gt;(CancellationToken cancellationToken, string moduleName, params object[] args);

    Task&lt;T&gt; InvokeExportAsync&lt;T&gt;(string moduleName, string exportedFunctionName, params object[] args);
    Task&lt;T&gt; InvokeExportAsync&lt;T&gt;(CancellationToken cancellationToken, string moduleName, string exportedFunctionName, params object[] args);
}
</code></pre>

<h2 id="basic-usage">Basic Usage</h2>

<p>Now we&rsquo;ve got JavaScriptServices setup, let&rsquo;s look at getting started with a simple use case and run through how we can execute some trivial JavaScript in our application and capture the output.</p>

<p>First we&rsquo;ll begin by creating a simple JavaScript file that contains a Node module returning a greeting message:</p>

<pre><code>// greeter.js
module.exports = function (callback, firstName, surname) {

    var greet = function (firstName, surname) {
        return &quot;Hello &quot; + firstName + &quot; &quot; + surname;
    }

    callback(null, greet(firstName, surname));
}
</code></pre>
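<p>Note the module&rsquo;s shape: NodeServices invokes the exported function with an error-first callback as its first argument, followed by the arguments passed from .NET. You can exercise this convention directly in Node - the sketch below reproduces the greeter function above and simulates the invocation (the simulation itself is illustrative; NodeServices does the real plumbing over an internal transport):</p>

```javascript
// The same module shape as greeter.js above: an error-first callback comes
// first, followed by the arguments supplied from the .NET side.
const greeter = function (callback, firstName, surname) {
    const greet = function (firstName, surname) {
        return "Hello " + firstName + " " + surname;
    };
    callback(null, greet(firstName, surname));
};

// Simulate what happens when InvokeAsync is called from .NET:
let result;
greeter(function (err, value) {
    if (err) throw err; // a non-null first argument signals failure
    result = value;
}, "Joseph", "Woodward");

console.log(result); // Hello Joseph Woodward
```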

<p>Next, we inject an instance of INodeServices into our controller and invoke our Node module by calling <code>InvokeAsync&lt;T&gt;</code> where T is our module&rsquo;s return type (a string in this instance).</p>

<pre><code>public DemoController {

    private readonly INodeServices _nodeServices;

    public DemoController(INodeServices nodeServices)
    {
        _nodeServices = nodeServices;
    }

    public async Task&lt;IActionResult&gt; Index()
    {
        string greetingMessage = await _nodeServices.InvokeAsync&lt;string&gt;(&quot;./scripts/js/greeter&quot;, &quot;Joseph&quot;, &quot;Woodward&quot;);

        ...
    }

}
</code></pre>

<p>Whilst this is a simple example, hopefully it&rsquo;s demonstrated how easy the setup is and given you an idea of how powerful this can potentially be. Now let&rsquo;s go one further.</p>

<h2 id="taking-it-one-step-further-transpiling-es6-es2015-to-es5-including-source-mapping-files">Taking it one step further - transpiling ES6/ES2015 to ES5, including source mapping files</h2>

<p>Whilst front end task runners such as Grunt and Gulp have their place, what if we were writing ES6 code and didn&rsquo;t want to have to go through the hassle of setting up a task runner just to transpile our ES2015 JavaScript?</p>

<p>What if we could transpile our JavaScript at runtime in our ASP.NET Core application? Wouldn&rsquo;t that be cool? Well, we can do just this with JavaScriptServices!</p>

<p>First we need to include a few Babel packages to transpile our ES6 code down to ES5. So let&rsquo;s go ahead and create a package.json in the root of our solution and install the listed packages by executing <em>npm install</em> at the same level as our newly created package.json file.</p>

<pre><code>{
    &quot;name&quot;: &quot;nodeservicesexamples&quot;,
    &quot;version&quot;: &quot;0.0.0&quot;,
    &quot;dependencies&quot;: {
        &quot;babel-core&quot;: &quot;^6.7.4&quot;,
        &quot;babel-preset-es2015&quot;: &quot;^6.6.0&quot;
    }
}
</code></pre>

<p>Now all we need to do is register the NodeServices service in the ConfigureServices method of our Startup.cs class:</p>

<pre><code>public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddNodeServices();
    ...
}
</code></pre>

<p>After this we want to create our Node module that will invoke the Babel transpiler - this will also include the source mapping files.</p>

<pre><code>// /node/transpilation.js

var fs = require('fs');
var babelCore = require('babel-core');

module.exports = function(cb, physicalPath, requestPath) {
    var originalContents = fs.readFileSync(physicalPath);
    var result = babelCore.transform(originalContents, {
        presets: ['es2015'],
        sourceMaps: 'inline',
        sourceFileName: '/sourcemapped' + requestPath
    });

    cb(null, result.code);
}
</code></pre>

<p>Now comes the interesting part. On every request we want to check whether the HTTP request being made is for a .js file. If it is, then we want to pass its contents to our JavaScriptServices instance to transpile it down to ES5, then finish off by writing the transpiled output to the response.</p>

<p>At this point I think it&rsquo;s only fair to say that if you were doing this in production then you&rsquo;d probably want some form of caching of the output. This would prevent the same files being transpiled on every request - but hopefully the following example is enough to give you an idea as to what it would look like:</p>

<pre><code>public void Configure(IApplicationBuilder app, IHostingEnvironment env, INodeServices nodeServices)
{
    ...

    // Dynamically transpile any .js files under the '/js/' directory
    app.Use(next =&gt; async context =&gt; {
        var requestPath = context.Request.Path.Value;
        if (requestPath.StartsWith(&quot;/js/&quot;) &amp;&amp; requestPath.EndsWith(&quot;.js&quot;)) {
            var fileInfo = env.WebRootFileProvider.GetFileInfo(requestPath);
            if (fileInfo.Exists) {
                var transpiled = await nodeServices.InvokeAsync&lt;string&gt;(&quot;./node/transpilation.js&quot;, fileInfo.PhysicalPath, requestPath);
                await context.Response.WriteAsync(transpiled);
                return;
            }
        }

        await next.Invoke(context);
    });

    ...
}
</code></pre>

<p>Here, all we&rsquo;re doing is checking the ending path of every request to see if the file exists within the /js/ folder and ends with .js. Any matches are then checked to see if the file exists on disk, then is passed to the transpilation.js module we created earlier. The transpilation module will then run the contents of the file through Babel and return the output to JavaScriptServices, which then proceeds to write to our application&rsquo;s response object before invoking the next handler in our HTTP pipeline.</p>
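<p>The per-request caching mentioned earlier could be as simple as memoising the transpiler by file path. The sketch below is illustrative only - <code>transpile</code> is a stand-in for the <code>babelCore.transform</code> call in transpilation.js, and a production version would also need to invalidate entries when the file on disk changes:</p>

```javascript
// Illustrative sketch: memoise transpiled output by physical path so the
// same file isn't re-transpiled on every request. 'transpile' stands in
// for the real Babel transform in transpilation.js.
const cache = new Map();

function transpile(physicalPath) {
    // Placeholder for babelCore.transform(...) - returns a marker string here.
    return 'transpiled:' + physicalPath;
}

function transpileCached(physicalPath) {
    if (!cache.has(physicalPath)) {
        cache.set(physicalPath, transpile(physicalPath));
    }
    return cache.get(physicalPath);
}

const first = transpileCached('/js/example.js');
const second = transpileCached('/js/example.js'); // served from the cache
console.log(first === second, cache.size); // true 1
```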

<p>Now that&rsquo;s all set up, let&rsquo;s give it a whirl by creating a simple ES2015 JavaScript class in the wwwroot/js/ folder and referencing it within our view in a script tag.</p>

<pre><code>// wwwroot/js/example.js

class Greeter {
    getMessage(name){
        return &quot;Hello &quot; + name + &quot;!&quot;;
    }
}

var greeter = new Greeter();
console.log(greeter.getMessage(&quot;World&quot;));
</code></pre>

<p>Now, when we load our application and navigate to our example.js file via your browser&rsquo;s devtools you should see it&rsquo;s been transpiled down to ES5!</p>
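<p>For reference, the transpiled output is functionally equivalent to the ES5 below - a constructor function with methods attached to its prototype. This is an illustrative hand-written equivalent, not Babel&rsquo;s exact output (Babel adds extra helpers and class-call checks):</p>

```javascript
// Roughly what the Greeter class becomes once transpiled to ES5:
// a constructor function plus prototype methods.
var Greeter = function () {
    function Greeter() {}

    Greeter.prototype.getMessage = function (name) {
        return "Hello " + name + "!";
    };

    return Greeter;
}();

var greeter = new Greeter();
console.log(greeter.getMessage("World")); // Hello World!
```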

<h2 id="conclusion">Conclusion</h2>

<p>Hopefully this post has given you enough of an understanding of the JavaScriptServices package to demonstrate how powerful the library really is. With the abundance of Node modules available there&rsquo;s all sorts of functionality you can build into your application, or application&rsquo;s build process. Have fun!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 01 Sep 2016 23:33:24 +0000</pubDate></item><item><title>Angular 2 CLI interview with .NET Rocks</title><link>https://josephwoodward.co.uk/2016/08/angular-2-cli-interview-with-dot-net-rocks</link><description></description><content:encoded><![CDATA[<p>Recently I once again (for those that remember my last talk with Carl and Richard was about <a href="http://josephwoodward.co.uk/2015/11/visual-studio-2015-shortcuts-interview-with-dot-net-rocks/">shortcuts and productivity games in Visual Studio</a>) had the pleasure of talking with Carl Franklin and Richard Campbell on the .NET Rocks Show about the Angular 2 command line interface. With all that&rsquo;s been happening with Angular 2 it was great to have the opportunity to spend a bit of time talking about the tooling around the framework and some of the features the Angular 2 CLI affords us.</p>

<p>So if you&rsquo;ve been looking at Angular 2 recently then <a href="http://www.dotnetrocks.com/default.aspx?showNum=1339">be sure to check out show 1339</a> and leave any feedback you may have.</p>

<p><a href="http://www.dotnetrocks.com/default.aspx?showNum=1339"><img src="http://assets.josephwoodward.co.uk/blog/dotnetrocks_2.jpg" alt="" /></a></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 23 Aug 2016 01:11:30 +0000</pubDate></item><item><title>Integration testing your ASP.NET Core middleware using TestServer</title><link>https://josephwoodward.co.uk/2016/07/integration-testing-asp-net-core-middleware</link><description></description><content:encoded><![CDATA[<p>Lately I&rsquo;ve been working on a piece of <a href="https://github.com/JosephWoodward/RouteUrlRedirector">middleware that simplifies temporarily or permanently redirecting a URL from one path to another</a>, whilst easily expressing the permanency of the redirect by sending the appropriate 301 or 302 HTTP Status Code to the browser.</p>

<p>If you&rsquo;ve ever re-written a website and had to ensure old, expired URLs didn&rsquo;t result in 404 errors and lost traffic then you&rsquo;ll know what a pain this can be when dealing with a large number of expired URLs. Thankfully using .NET Core&rsquo;s new middleware approach, this task becomes far easier - so much so that I&rsquo;ve wrapped it into a library I intend to publish to NuGet:</p>

<pre><code>public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    app.UseRequestRedirect(r =&gt; r.ForPath(&quot;/32/old-path.html&quot;).RedirectTo(&quot;/32/new-path&quot;).Permanently());
    app.UseRequestRedirect(r =&gt; r.ForPath(&quot;/33/old-path.html&quot;).RedirectTo(&quot;/33/new-path&quot;).Temporarily());
    app.UseRequestRedirect(r =&gt; r.ForPath(&quot;/34/old-path.html&quot;).RedirectTo(DeferredQueryDbForPath).Temporarily());
    ...
}

private string DeferredQueryDbForPath(string oldPath){
    /* Query database for new path only if old path is hit */
    return newPath;
}
</code></pre>

<p>Whilst working on this middleware I was keen to add some integration tests for a more complete range of tests. After a bit of digging I noticed that doing so was actually really simple, thanks to the <em>TestServer</em> class available as part of the <em>Microsoft.AspNetCore.TestHost</em> package.</p>

<h2 id="what-is-test-server">What is Test Server?</h2>

<p>TestServer is a lightweight and configurable host server, designed solely for testing purposes. Its ability to create and serve test requests without the need for a real web host is its true value, making it perfect for testing middleware libraries (amongst other things!) that take a request, act upon it and eventually return a response. In the case of the aforementioned middleware, the response we&rsquo;ll be testing will be a status code informing the browser that the page has moved, along with the location the page has moved to.</p>

<h2 id="testserver-usage">TestServer Usage</h2>

<p>As mentioned above, you&rsquo;ll find the TestServer class within the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.TestHost/">Microsoft.AspNetCore.TestHost</a> package, so first of all you&rsquo;ll need to add it to your test project either by using the following NuGet command:</p>

<pre><code>Install-Package Microsoft.AspNetCore.TestHost
</code></pre>

<p>Or by updating your <em>project.json</em> file directly:</p>

<pre><code>&quot;dependencies&quot;: {
    ...
    &quot;Microsoft.AspNetCore.TestHost&quot;: &quot;1.0.0&quot;,
    ...
},
</code></pre>

<p>Once the NuGet package has downloaded we&rsquo;re ready to start creating our tests.</p>

<p>After creating our test class the first thing we need to do is configure an instance of WebHostBuilder, ensuring we add our configured middleware to the pipeline. Once configured we create our instance of TestServer which then bootstraps our test server based on the supplied WebHostBuilder configurations.</p>

<pre><code>[Fact]
public async void Should_Redirect_Permanently()
{
    // Arrange
    var builder = new WebHostBuilder()
        .Configure(app =&gt; {
            app.UseRequestRedirect(r =&gt; r.ForPath(&quot;/old/&quot;).RedirectTo(&quot;/new/&quot;).Permanently());
        }
    );

    var server = new TestServer(builder);

    // Act
    ...
}
</code></pre>

<p>Next we need to manually create a new HTTP Request, passing the parameters required to exercise our middleware. In this instance, using the redirect middleware, all I need to do is create a new GET request to the path outlined in my arrange code snippet above. Once created we simply pass our newly created HttpRequestMessage to our configured instance of TestServer.</p>

<pre><code>// Act
var requestMessage = new HttpRequestMessage(new HttpMethod(&quot;GET&quot;), &quot;/old/&quot;);
var responseMessage = await server.CreateClient().SendAsync(requestMessage);

// Assert
...
</code></pre>

<p>Now all that&rsquo;s left is to assert our test using the response we received from our TestServer.SendAsync() method call. In the example below I&rsquo;m using <a href="https://github.com/shouldly/shouldly">assertion library Shouldly</a> to assert that the correct status code is emitted and the correct path (/new/) is returned in the header.</p>

<pre><code>// Assert
responseMessage.StatusCode.ShouldBe(HttpStatusCode.MovedPermanently);
responseMessage.Headers.Location.ToString().ShouldBe(&quot;/new/&quot;);
</code></pre>

<p>The complete test will look like this:</p>

<pre><code>public class IntegrationTests
{
    [Fact]
    public async void Should_Redirect_Permanently()
    {
        // Arrange
        var builder = new WebHostBuilder()
            .Configure(app =&gt; {
                app.UseRequestRedirect(r =&gt; r.ForPath(&quot;/old/&quot;).RedirectTo(&quot;/new/&quot;).Permanently());
            }
        );

        var server = new TestServer(builder);

        // Act
        var requestMessage = new HttpRequestMessage(new HttpMethod(&quot;GET&quot;), &quot;/old/&quot;);
        var responseMessage = await server.CreateClient().SendAsync(requestMessage);

        // Assert
        responseMessage.StatusCode.ShouldBe(HttpStatusCode.MovedPermanently);
        responseMessage.Headers.Location.ToString().ShouldBe(&quot;/new/&quot;);
    }
}
</code></pre>

<p><strong>Conclusion</strong></p>

<p>In this post we looked at how simple it is to test our middleware using the TestServer instance. Whilst the above example is quite trivial, hopefully it provides you with enough of an understanding as to how you can start writing integration or functional tests for your middleware.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sun, 31 Jul 2016 01:07:29 +0000</pubDate></item><item><title>Proxying HTTP requests in ASP.NET Core using Kestrel</title><link>https://josephwoodward.co.uk/2016/07/proxying-http-requests-asp-net-core-using-kestrel</link><description></description><content:encoded><![CDATA[<p>A bit of a short post this week but hopefully one that will save some a bit of googling!</p>

<p>Recently I&rsquo;ve been doing a lot of ASP.NET Core MVC and in one project I&rsquo;ve been working on I have an admin login area. My first instinct was to create the login section as an Area within the main application project.</p>

<p>Creating a login area in the same application would work, but other than sharing an <em>/admin/</em> path it really was a completely separate application. It has different concerns and a completely different UI (in this instance it&rsquo;s an Angular 2 application talking to a back-end API). For these reasons, creating the admin section as an MVC Area just felt wrong - so I began to look at what Kestrel could offer in terms of proxying requests to another application. This way I could keep my user facing website as one project and the administration area as another, allowing them to grow independently of one another.</p>

<p>Whilst proxying requests is possible in IIS, I was keen to use Kestrel as I&rsquo;d like the option of hosting the application across various platforms, so I was keen to see what Kestrel had to offer.</p>

<p><strong>Enter the ASP.NET Core Proxy Middleware!</strong></p>

<p>After a little digging it came as no surprise that there was <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.Proxy/">some middleware that made proxying requests a breeze</a>. The middleware approach in ASP.NET Core lends itself to such a task, and setup was so simple that I felt it merited a blog post.</p>

<p>After installing the Microsoft.AspNetCore.Proxy NuGet package via the &ldquo;<em>Install-Package Microsoft.AspNetCore.Proxy</em>&rdquo; command, all I had to do was hook the proxy middleware up to my pipeline using the <em>MapWhen</em> method within my application&rsquo;s <em>Startup</em> class:</p>

<pre><code>// Startup.cs
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...

    app.MapWhen(IsAdminPath, builder =&gt; builder.RunProxy(new ProxyOptions
    {
        Scheme = &quot;http&quot;,
        Host = &quot;localhost&quot;,
        Port = &quot;8081&quot;
    }));

    ...

}

private static bool IsAdminPath(HttpContext httpContext)
{
    return httpContext.Request.Path.Value.StartsWith(@&quot;/admin/&quot;, StringComparison.OrdinalIgnoreCase);
}
</code></pre>

<p>As you can see, all I&rsquo;m doing is passing a method that checks the path begins with <em>/admin/</em>.</p>

<p>Once setup, all you need to do is set your second application (in this instance it&rsquo;s my admin application) to the configured port. You can do this within the <em>Program</em> class via the <em>UseUrls</em> extension method:</p>

<pre><code>// Program.cs
public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseUrls(&quot;http://localhost:8081&quot;)
            .UseIISIntegration()
            .UseStartup&lt;Startup&gt;()
            .Build();

        host.Run();
    }
}
</code></pre>

<p>Now, if you start up your application and navigate to /admin/ (or whatever path you&rsquo;ve specified) the request should be proxied to your secondary application!</p>

<p>Happy coding!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sat, 02 Jul 2016 01:03:45 +0000</pubDate></item><item><title>Fira Mono - An exceptional programming font, and now with (optional) ligatures</title><link>https://josephwoodward.co.uk/2016/06/fira-mono-an-exceptional-programming-font</link><description></description><content:encoded><![CDATA[<p>I&rsquo;ve always been a fan of customising my IDE or text editor of choice, and one such customisation (and often the first thing I do after installing an editor) is setup my font of choice which has long been Google&rsquo;s <a href="https://www.google.com/fonts/specimen/Droid+Sans">Droid Sans font</a>.</p>

<p>Recently however, I was introduced to a rather interesting but delightful looking typeface called <a href="https://mozilla.github.io/Fira/">Fira Mono</a> that&rsquo;s been designed by Mozilla specifically for their Firefox OS.</p>

<p>At first I didn&rsquo;t think much of the font, but the more I saw it being used the more it grew on me. Eventually I decided to download it and give it a try. And having been using it now for a number of days I&rsquo;ve no intention of switching back to Droid Sans.</p>

<p><strong>Fira Mono in Visual Studio Code:</strong></p>

<p><strong><img src="http://assets.josephwoodward.co.uk/blog/vs_code_fira_mono.png" alt="" /></strong></p>

<p><strong>Fira Mono in Visual Studio:</strong></p>

<p><strong><img src="http://assets.josephwoodward.co.uk/blog/visual_studio_fira_mono.png" alt="" /></strong></p>

<p><strong>Would you like ligatures with that?</strong></p>

<p>If you&rsquo;re a developer that likes your programming fonts with ligatures then there is a version available that includes ligatures <a href="https://github.com/tonsky/FiraCode">called Fira Code</a>.</p>

<p><a href="https://github.com/tonsky/FiraCode"><img src="http://assets.josephwoodward.co.uk/blog/programming_font_ligatures.png" alt="" /></a></p>

<p><strong>Downloading Fira Mono</strong></p>

<p>The font itself is open-source so if you&rsquo;re interested in giving it a try then <a href="https://www.fontsquirrel.com/fonts/fira-mono">download it via Font Squirrel here</a>, then once extracted to your fonts directory load up Visual Studio (or restart so Visual Studio can load the font) and go to <em>Tools &gt; Options &gt; Environment &gt; Fonts and Colors</em> and select it from the Font dropdown.</p>

<p>Having experimented with various font sizes (In Visual Studio only), font size 9 appears to work really well with Fira Mono.</p>

<p>As mentioned above, if you&rsquo;d like the ligatures then <a href="https://github.com/tonsky/FiraCode">head over here to download them</a>.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 17 Jun 2016 00:54:48 +0000</pubDate></item><item><title>Select by camel case - the greatest ReSharper setting you never knew</title><link>https://josephwoodward.co.uk/2016/06/select-by-camal-case-resharper</link><description></description><content:encoded><![CDATA[<p>One of ReSharper&rsquo;s most powerful features is the sheer number of additional shortcuts it adds to Visual Studio, and out of the arsenal of shortcuts available my most used have to be the ones that enable me to modify and manipulate code with as few wasted keystrokes as possible. Ultimately, this boils down to the following two shortcuts:</p>

<p><strong>1: Expand/Shrink Selection (CTRL + ALT + Right to expand, CTRL + ALT + Left to shrink)</strong><br>
This shortcut enables you to expand a selection by scope, meaning pressing CTRL + ALT + Right will start by highlighting the code your cursor is over, then expand to the line, then the function scope, class and namespace. Check out the following gif to see an example. Be sure to watch the selected area!</p>

<p><strong>Expand/Shrink Selection:</strong></p>

<p><img src="http://assets.josephwoodward.co.uk/blog/expand_shrink_before.gif" alt="" /></p>

<p><strong>2: Increase selection by word (CTRL + Shift + Right to expand, CTRL + Shift + Left to shrink)</strong></p>

<p>A few of you will probably notice that this shortcut isn&rsquo;t really a ReSharper shortcut - and you&rsquo;d be right. But none the less, once harnessed, increase/decrease selection by word is extremely powerful when it comes to renaming variables, methods, classes etc. and will serve you well if mastered.</p>

<h2 id="where-am-i-going-with-this">Where am I going with this?</h2>

<p>Whilst the aforementioned shortcuts are excellent tools to add to your shortcut toolbox, one thing I always wished they would do was expand the selection by camel case, allowing me to highlight words with more precision and save me the additional key presses when it comes to highlighting the latter part of a variable in order to rename it. For instance, instead of highlighting the whole word in one go (say, the word <em>ProductService</em>, for example), it would first highlight the word <em>Product</em>, followed by <em>Service</em> after the second key press.</p>

<p>Having wanted to do this for some time now, I was pleasantly surprised when I stumbled across a ReSharper setting that enables just this. The setting can be enabled by going to <strong>ReSharper &gt; Options &gt; Environment &gt; Editor &gt; Editor Behaviour</strong> and selecting the <strong>Use CamelHumps</strong> checkbox.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/resharper_camelhumps.png" alt="" /></p>

<p>The problem I&rsquo;ve found when enabling this is that the setting overrides the default behaviour of CTRL + ALT + Left / Right. Whilst this may be fine for some, I would rather have the ability to choose when to highlight by word and when to highlight by camel case. Luckily you can do just that via the <em>ReSharper_HumpPrev</em> and <em>ReSharper_HumpNext</em> commands that are available for binding in Visual Studio.</p>

<p>To do this do the following:</p>

<ol>
<li>Open the Visual Studio Options window from <strong>Tools</strong> &gt; <strong>Options</strong></li>
<li>Expand Environment and scroll down to <strong>Keyboard</strong></li>
<li>Map the two commands <em>ReSharper_HumpNext</em> and <em>ReSharper_HumpPrev</em> to the key mappings you wish (e.g. <strong>ALT+Right Arrow</strong> and <strong>ALT+Left Arrow</strong>) by selecting the command from the list, entering the key mapping in the <strong>Press shortcut keys</strong> text box, then clicking <strong>Assign</strong>.</li>
</ol>

<p>Now, with UseCamelHumps enabled and my shortcut keys customised, I can choose between the default selection by word, or extending the selection by camel case - giving me even more code-editing precision!</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/extend_hump_selection.gif" alt="" /></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 06 Jun 2016 18:20:27 +0000</pubDate></item><item><title>Social authentication via Google in ASP.NET Core MVC</title><link>https://josephwoodward.co.uk/2016/05/setting-up-google-oauth-asp-net-core-mvc</link><description></description><content:encoded><![CDATA[<p>Lately I&rsquo;ve been re-developing my blog and moving it to .NET Core MVC. As I&rsquo;ve been doing this I decided to change authentication methods to take advantage of Google&rsquo;s OAuth API, as I didn&rsquo;t want the hassle of managing usernames and passwords.</p>

<p>Initially, I started looking at the <a href="https://github.com/SimpleAuthentication/SimpleAuthentication">SimpleAuthentication</a> library - but quickly realised ASP.NET Core already provided support for third party authentication providers via the <code>Microsoft.AspNet.Authentication</code> library.</p>

<p>Having implemented cookie based authentication I thought I&rsquo;d take a moment to demonstrate how easy it is with ASP.NET&rsquo;s new middleware functionality.</p>

<p>Let&rsquo;s get started.</p>

<h2 id="sign-up-for-google-auth-service">Sign up for Google Auth Service</h2>

<p>Before we start, we&rsquo;re going to need to register our application with the Google Developer&rsquo;s Console and create a Client Id and Client Secret (which we&rsquo;ll use later on in this demonstration).</p>

<ol>
<li>To do this go to <a href="https://console.developers.google.com/iam-admin/projects/">Google&rsquo;s developer console</a> and click &ldquo;<strong>Create Project</strong>&rdquo;. Enter your project name (in this instance it&rsquo;s called BlogAuth) then click <strong>Create</strong>.</li>
<li>Next we need to enable authentication with Google&rsquo;s social API (Google+). Within the Overview page click the Google+ API link located under Social API and click Enable once the Google+ page has loaded.</li>
<li>At this point you should see a message informing you that we need to create our credentials. To do this click the <strong>Credentials</strong> link on the left hand side, or the <strong>Go to Credentials</strong> button.</li>
<li>Go across to the <strong>OAuth Consent Screen</strong> and enter a name of the application you&rsquo;re setting up. This name is visible to the user when authenticating. Once done, click <strong>Save</strong>.</li>
<li>At this point we need to create our ClientId and ClientSecret, so go across to the <strong>Credentials</strong> tab and click <strong>Create Credentials</strong> and select OAuth client ID from the dropdown then select <strong>Web Application</strong>.</li>
<li>Now we need to enter our app details. Enter an app name (used for recognising the app within Google Developer Console) and enter your domain into the <strong>Authorized JavaScript origins</strong>. If you&rsquo;re developing locally then enter your localhost address into this field including port number.</li>
<li>Next enter the return path into the <strong>Authorized redirect URIs</strong> field. This is a callback path that Google will use to set the authorisation cookie. In this instance we&rsquo;ll want to enter http://&lt;domain&gt;:&lt;port&gt;/signin-google (where domain and port are the values you entered in step 6).</li>
<li>Once done click <strong>Create</strong>.</li>
<li>You should now be greeted with a screen displaying your Client ID and Client Secret. Take a note of these as we&rsquo;ll need them shortly.</li>
</ol>

<p>Once you&rsquo;ve stored your Client ID and Secret somewhere you&rsquo;re safe to close the Google Developer Console window.</p>

<h2 id="authentication-middleware-setup">Authentication middleware setup</h2>

<p>With our Client ID and Client Secret in hand, our next step is to set up authentication within our application. Before we start, we first need to import the <strong>Microsoft.AspNet.Authentication</strong> package (<strong>Microsoft.AspNetCore.Authorization</strong> if you&rsquo;re using RC2 or later) into our solution via NuGet using the following command:</p>

<pre><code>// RC1
Install-Package Microsoft.AspNet.Authentication

// RC2
Install-Package Microsoft.AspNetCore.Authorization
</code></pre>

<p>Once installed it&rsquo;s time to hook it up to ASP.NET Core&rsquo;s pipeline within your solution&rsquo;s <code>Startup.cs</code> file.</p>

<p>First we need to register our authentication scheme with ASP.NET within the <code>ConfigureServices</code> method:</p>

<pre><code>public IServiceProvider ConfigureServices(IServiceCollection services)
{
    ...

    // Add authentication middleware and inform .NET Core MVC what scheme we'll be using
    services.AddAuthentication(options =&gt; options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme);

    ...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{

    ...
    // Adds a cookie-based authentication middleware to application
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        LoginPath = &quot;/account/login&quot;,
        AuthenticationScheme = &quot;Cookies&quot;,
        AutomaticAuthenticate = true,
        AutomaticChallenge = true
    });

    // Plugin Google Authentication configuration options
    app.UseGoogleAuthentication(new GoogleOptions
    {
        ClientId = &quot;your_client_id&quot;,
        ClientSecret = &quot;your_client_secret&quot;,
        Scope = { &quot;email&quot;, &quot;openid&quot; }
    });

    ...

}
</code></pre>

<p>In terms of configuring our ASP.NET Core MVC application to use Google for authentication - we&rsquo;re done! (yes, it&rsquo;s that easy, thanks to .NET Core MVC&rsquo;s middleware approach). </p>

<p>All that&rsquo;s left to do now is to plumb in our UI and controllers.</p>

<h2 id="setting-up-our-controller">Setting up our controller</h2>

<p>First, let&rsquo;s go ahead and create the controller that we&rsquo;ll use to authenticate our users. We&rsquo;ll call this controller AccountController:</p>

<pre><code>public class AccountController : Controller
{
    [HttpGet]
    public IActionResult Login()
    {
        return View();
    }

    public IActionResult External(string provider)
    {
        var authProperties = new AuthenticationProperties
        {
            // Specify where to return the user after successful authentication with Google
            RedirectUri = &quot;/account/secure&quot;
        };

        return new ChallengeResult(provider, authProperties);
    }

    [Authorize]
    public IActionResult Secure()
    {
        // Yay, we're secured! Any unauthenticated access to this action will be redirected to the login screen
        return View();
    }

    public async Task&lt;IActionResult&gt; LogOut()
    {
        await HttpContext.Authentication.SignOutAsync(&quot;Cookies&quot;);

        return RedirectToAction(&quot;Index&quot;, &quot;Homepage&quot;);
    }
}
</code></pre>

<p>Now we&rsquo;ve created our AccountController that we&rsquo;ll use to authenticate users, we also need to create our views for the Login and Secure controller actions. Please be aware that these are rather basic and are simply as a means to demonstrate the process of logging in via Google authentication.</p>

<pre><code>// Login.cshtml

&lt;h2&gt;Login&lt;/h2&gt;
&lt;a href=&quot;/account/external?provider=Google&quot;&gt;Sign in with Google&lt;/a&gt;

// Secure.cshtml

&lt;h2&gt;Secured!&lt;/h2&gt;
This page can only be accessed by authenticated users.
</code></pre>

<p>Now, if we fire up our application, head to the /login/ page and click <em>Sign in with Google</em>, we should be taken to the Google account authentication screen. Once we click Continue we should be automatically redirected back to our /secure/ page as expected!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sun, 29 May 2016 21:18:45 +0000</pubDate></item><item><title>ASP.NET Core tag helpers - with great power comes great responsibility</title><link>https://josephwoodward.co.uk/2016/05/asp-net-core-tag-helpers-with-great-power-comes-great-responsibility</link><description></description><content:encoded><![CDATA[<p>I recently watched a <a href="https://channel9.msdn.com/Events/ASPNET-Events/ASPNET-Fall-Sessions/RazorTag-Helpers">Build 2016 talk by N. Taylor Mullen</a> where Taylor demonstrated the power of ASP.NET Core MVC&rsquo;s new tag helpers. Whilst I&rsquo;ve been keeping up to date with the changes and improvements being made to Razor, there were a couple times my jaw dropped as Taylor talked about points that were completely new to me. These points really highlighted how powerful the Razor engine is becoming - but as Ben Parker said in Spiderman, &ldquo;<em>With great power comes great responsibility</em>&rdquo;.</p>

<p>This post serves as a review of how powerful the Razor tag engine is, but also a warning of potential pitfalls you may encounter as your codebase grows.</p>

<h2 id="the-power-of-asp-net-core-mvc-s-tag-engine">The power of ASP.NET Core MVC&rsquo;s tag engine</h2>

<p>For those of you that haven&rsquo;t been keeping up to date with the changes in ASP.NET Core MVC, one of the new features included within Razor is Tag Helpers. At their essence, tag helpers allow you to replace Razor&rsquo;s jarring syntax with a more natural HTML-like syntax. If we take a moment to compare a new tag helper to the equivalent Razor function you&rsquo;ll see the difference (remember, you can still use the HTML helpers; Tag Helpers do not replace them and will happily work side by side in the same view).</p>

<pre><code>// Before - HTML Helpers
@Html.ActionLink(&quot;Click me&quot;, &quot;MyController&quot;, &quot;MyAction&quot;, { @class=&quot;my-css-classname&quot;, data_my_attr=&quot;my-attribute&quot;}) 

// After - Tag Helpers
&lt;a asp-controller=&quot;MyController&quot; asp-action=&quot;MyAction&quot; class=&quot;my-css-classname&quot; my-attr=&quot;my-attribute&quot;&gt;Click me&lt;/a&gt;
</code></pre>

<p>Whilst both of these will output the same HTML, it&rsquo;s clear to see how much more natural the tag helper syntax looks and feels. In fact, with the data- prefix being optional when using data attributes in HTML, you could easily mistake the tag helper for plain HTML (more on this later).</p>

<h3 id="building-your-own-tag-helpers">Building your own tag helpers</h3>

<p>It goes without saying that we&rsquo;re able to create our own tag helpers, and this is where they get extremely powerful. Let&rsquo;s start by creating a tag helper from start to finish. The following example is trivial, but if you stick with me hopefully you&rsquo;ll see why I chose it as we near the end. So let&rsquo;s begin by creating a tag helper that automatically adds a link-juice-preserving rel=&ldquo;nofollow&rdquo; attribute to outbound links:</p>

<pre><code>public class NoFollowTagHelper : TagHelper
{
    // Public properties becomes available on our custom tag as an attribute.
    public string Href { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = &quot;a&quot;; // Specify our tag output name
        output.TagMode = TagMode.StartTagAndEndTag; // The type of tag we wish to create

        output.Attributes[&quot;href&quot;] = Href;
        if (!output.Attributes[&quot;href&quot;].Value.ToString().Contains(&quot;josephwoodward.co.uk&quot;))
        {
            output.Attributes[&quot;rel&quot;] = &quot;nofollow&quot;;
        }

        base.Process(context, output);
    }
}
</code></pre>

<p>Before continuing, it&rsquo;s worth noting that the name of our derived class (NoFollowTagHelper) is what determines our custom tag helper&rsquo;s name; Razor will insert hyphens between the uppercase characters and then lowercase the string. It will also remove the word TagHelper from the class name if it exists.</p>
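<p>To make that convention concrete, here is how two class names used in this post would map to tag names (illustrative only):</p>

<pre><code>NoFollowTagHelper     =&gt;  &lt;no-follow&gt;
ImageLoaderTagHelper  =&gt;  &lt;image-loader&gt;
</code></pre>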

<p>Before we can use our tag helper we need to tell Razor where to find it.</p>

<p><strong>Loading our tag helper</strong></p>

<p>To load our tag helper we need to add it to our <code>_ViewImports.cshtml</code> file. The _ViewImports file&rsquo;s sole purpose is to hold view-related references in one place, saving us from littering our views with references to assemblies - much like we used to register custom HTML Helpers in the Web.config in previous versions of ASP.NET MVC.</p>

<pre><code>// _ViewImports.cshtml
@using TagHelperDemo
@using TagHelperDemo.Models
@using TagHelperDemo.ViewModels.Account
@using TagHelperDemo.ViewModels.Manage
@using Microsoft.AspNet.Identity
@addTagHelper &quot;*, Microsoft.AspNet.Mvc.TagHelpers&quot;
@addTagHelper &quot;*, TagHelperDemo&quot; // Reference our assembly containing our tag helper here.
</code></pre>

<p>The asterisk will load all tag helpers within the TagHelperDemo assembly. If you wish to only load a single tag helper you can specify it like so:</p>

<pre><code>// _ViewImports.cshtml
...
@addTagHelper &quot;ImageLoaderTagHelper, TagHelperDemo&quot;
</code></pre>

<p><strong>Using our tag helper</strong></p>

<p>Now that we&rsquo;ve created our tag helper and referenced it, any &lt;no-follow&gt; elements will be transformed into nofollow anchor links when the href points to an external domain:</p>

<pre><code>// Our custom tag helper
&lt;no-follow href=&quot;http://outboundlink.com&quot;&gt;Thanks for visiting&lt;/no-follow&gt;
&lt;no-follow href=&quot;/about&quot;&gt;About&lt;/no-follow&gt;

&lt;!-- The transformed output --&gt;
&lt;a href=&quot;http://outboundlink.com&quot; rel=&quot;nofollow&quot;&gt;Thanks for visiting&lt;/a&gt;
&lt;a href=&quot;/about&quot;&gt;About&lt;/a&gt;
</code></pre>

<p><strong>But wait! There&rsquo;s more!</strong></p>

<p>Ok, so creating custom no-follow tags isn&rsquo;t ideal, and is quite silly when we can just type normal HTML, so let&rsquo;s go one step further. <strong>With the new tag helper syntax you can actually transform normal HTML tags too!</strong> Let&rsquo;s demonstrate this by modifying our nofollow tag helper:</p>

<pre><code>[HtmlTargetElement(&quot;a&quot;, Attributes = &quot;href&quot;)]
public class NoFollowTagHelper : TagHelper
{
    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        var href = output.Attributes[&quot;href&quot;];
        if (!href.Value.ToString().Contains(&quot;josephwoodward.co.uk&quot;))
        {
            output.Attributes[&quot;rel&quot;] = &quot;nofollow&quot;;
        }

        base.Process(context, output);
    }
}
</code></pre>

<p>As you can see, we&rsquo;ve removed some redundant code and added the HtmlTargetElement attribute. This attribute is what allows us to target existing HTML elements and add additional functionality. Now, if we look at our rendered output, ALL of our anchors have been processed by our NoFollowTagHelper class and only those with outbound links have been transformed:</p>

<pre><code>&lt;!-- Before --&gt;
&lt;a href=&quot;http://outboundlink.com&quot;&gt;Thanks for visiting&lt;/a&gt;
&lt;a href=&quot;/about&quot;&gt;About&lt;/a&gt;

&lt;!-- After --&gt;
&lt;a href=&quot;http://outboundlink.com&quot; rel=&quot;nofollow&quot;&gt;Thanks for visiting&lt;/a&gt;
&lt;a href=&quot;/about&quot;&gt;About&lt;/a&gt;
</code></pre>

<p>We&rsquo;ve retrospectively changed the output of our HTML without needing to go through our codebase! For those that have worked on large applications and needed to create some kind of consistency between views, you&rsquo;ll hopefully understand how powerful this can be and the potential use cases for it. In fact, this is exactly how ASP.NET uses the tilde (~/) to resolve an img src path - see for yourself <a href="https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.TagHelpers/ImageTagHelper.cs">here</a>.</p>

<p><strong>Moving on</strong></p>

<p>So far we&rsquo;ve spent the duration of this blog post talking about how powerful ASP.NET Core MVC&rsquo;s Tag Helpers are, but with great power comes great responsibility - so let&rsquo;s take a moment to look at the downsides of tag helpers and ways we can avoid potential pitfalls as we use them.</p>

<h2 id="the-responsibility">The Responsibility</h2>

<p><strong>They look just like HTML</strong></p>

<p>When the ASP.NET team first revealed tag helpers to the world there were mixed reactions over the syntax. The power of Tag Helpers was clear, but some people felt the blurring of the lines between HTML and Razor broke the separation of concerns between markup and server-side code. The following comments, taken from Scott Hanselman&rsquo;s <a href="http://www.hanselman.com/blog/ASPNET5VNextWorkInProgressExploringTagHelpers.aspx">ASP.NET 5 (vNext) Work in Progress - Exploring TagHelpers</a> post, demonstrate the feelings of some:</p>

<blockquote>
<p>What are the design goals for this feature? I see less razor syntax, but just exchanged for more non-standard html-like markup. I&rsquo;m still fond of the T4MVC style of referencing controller actions.</p>

<p>This seems very tough to get on-board with. My biggest concern is how do we easily discern between which more obscure attributes are &ldquo;TagHelper&rdquo; related vs which ones are part of the HTML spec? When my company hires new devs, I can&rsquo;t rely on the fact that they would realize that &ldquo;action&rdquo; is a &ldquo;server-side&rdquo; attribute, but then things like &ldquo;media&rdquo; and &ldquo;type&rdquo; are HTML &hellip; not to mention how hard it would be to tell the difference if I&rsquo;m trying to maintain code where folks have mixed server-side/html attributes.</p>
</blockquote>

<p>This lack of distinction between HTML and Razor quickly becomes apparent when you open a view file in a text editor that doesn&rsquo;t support the syntax highlighting of Tag Helpers that Visual Studio does. Can you spot what&rsquo;s HTML and what&rsquo;s a tag helper in the following screenshot?</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/taghelper_support.png" alt="" /></p>

<p><strong>The solution</strong></p>

<p>Luckily there is a solution to help people discern between HTML and Razor, and that&rsquo;s to force prefixes using the @tagHelperPrefix declaration.</p>

<p>By adding the @tagHelperPrefix declaration to the top of your view file you&rsquo;re able to force prefixes on all of the tags within that current view:</p>

<pre><code>// Index.cshtml
@tagHelperPrefix &quot;helper:&quot;

...

&lt;div&gt;
    &lt;helper:a asp-controller=&quot;MyController&quot; asp-action=&quot;MyAction&quot; class=&quot;my-css-classname&quot; my-attr=&quot;my-attribute&quot;&gt;Click me&lt;/helper:a&gt;
&lt;/div&gt;
</code></pre>

<p>With a tagHelperPrefix declaration specified in a page, any tag helper that isn&rsquo;t prefixed with the specified prefix will be completely ignored by the Razor engine (note: You also have to prefix the closing tag). If the tag is an enclosing tag that wraps some content, then the body of the tag helper will be emitted:</p>

<pre><code>// Index.cshtml - specify helper prefix
@tagHelperPrefix &quot;helper:&quot;

...

&lt;div&gt;
    // Haven't specified helper: prefix
    &lt;a asp-controller=&quot;MyController&quot; asp-action=&quot;MyAction&quot; class=&quot;my-css-classname&quot; my-attr=&quot;my-attribute&quot;&gt;Click me&lt;/a&gt;
&lt;/div&gt;

&lt;!-- Output without prefix: --&gt;
&lt;div&gt;
    Click me
&lt;/div&gt;
</code></pre>

<p>One problem that may arise from this solution is that you may forget to add the prefix declaration to your view file. To combat this you can add the prefix declaration to the _ViewImports.cshtml file (which we talked about earlier). As all views automatically inherit from _ViewImports, our prefix rule will cascade down through the rest of our views. As you&rsquo;d expect, this change will force all tag helpers to require your prefix - even the native ASP.NET MVC tags, including any anchor or image HTML tags that feature a tilde.</p>
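<p>As a sketch, assuming the same &ldquo;helper:&rdquo; prefix and the _ViewImports.cshtml file shown earlier, the cascading version looks like this:</p>

<pre><code>// _ViewImports.cshtml
@addTagHelper &quot;*, Microsoft.AspNet.Mvc.TagHelpers&quot;
@addTagHelper &quot;*, TagHelperDemo&quot;
@tagHelperPrefix &quot;helper:&quot;
</code></pre>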

<p><strong>Unexpected HTML behaviour</strong></p>

<p>Earlier in this article, the second version of NoFollowTagHelper demonstrated how we can harness the power of Tag Helpers to transform any HTML element. Whilst the ability to perform such transformations is an extremely powerful feature, we&rsquo;re effectively taking HTML, a simple markup language with very little native functionality, and giving it this new power.</p>

<p>Let me try and explain.</p>

<p>If you were to copy and paste a page of HTML into a .html file and look at it, you&rsquo;d be confident that there&rsquo;s no magic going on with the markup - effectively, what you see is what you get. Now rename that HTML page to .cshtml and put it in an ASP.NET Core MVC application with a number of tag helpers that don&rsquo;t require a prefix, and you&rsquo;ll no longer have that same confidence. This lack of confidence creates uncertainty in what was once static HTML. It&rsquo;s the same problem you have when performing DOM manipulations using JavaScript, which is why I prefer to prefix selectors targeted by JavaScript with &lsquo;js&rsquo;, making it clear to the reader that the element is being touched by JavaScript (as opposed to selecting DOM elements by the classes used for styling purposes).</p>

<p>To demonstrate this with an example, what does the following HTML element do?</p>

<pre><code>&lt;img src=&quot;/profile.jpg&quot; alt=&quot;Profile Picture&quot; /&gt;
</code></pre>

<p>In any other context it would be a simple profile picture loaded from the root of your application. Given the lack of class or id attributes, you&rsquo;d be fairly confident there&rsquo;s no JavaScript targeting it either.</p>

<p>With Tag Helpers added to the equation this HTML may no longer be what you expect. Imagine an image tag helper, registered without a prefix, that rewrites src attributes to point at a CDN; when the page is rendered the markup actually becomes the following:</p>

<pre><code>&lt;img src=&quot;http://cdn.com/profile.mobile.jpg&quot; alt=&quot;Profile Picture&quot; /&gt;
</code></pre>

<p>Ultimately, the best way to avoid this unpredictable behaviour is to ensure you use prefixes, or at least to ensure your team is clear about which tag helpers are and are not active.</p>

<h2 id="on-closing">On closing</h2>

<p>Hopefully this post has been valuable in highlighting the good, the bad and the potentially ugly consequences of Tag Helpers in your codebase. They&rsquo;re an extremely powerful addition to the .NET framework, but as with all things - there is potential to shoot yourself in the foot.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 09 May 2016 21:14:06 +0000</pubDate></item><item><title>Capturing IIS / ASP.NET traffic in Fiddler</title><link>https://josephwoodward.co.uk/2016/04/capturing-asp-net-traffic-in-fiddler</link><description></description><content:encoded><![CDATA[<p>Recently, whilst debugging an issue I needed to capture the traffic being sent from my local application to an external RESTful web service. In this instance, I needed to see the contents of a JWT token being passed to the service to verify some of the data. Thankfully, Fiddler by Telerik is just the tool for the job.</p>

<p><strong>What is fiddler?</strong></p>

<p>Fiddler is a super powerful, free web debugging proxy tool created by the guys and girls at Telerik. Once launched, Fiddler will capture all incoming and outgoing traffic from your machine, allowing you to analyse traffic, manipulate HTTP (and HTTPS!) requests and perform a whole host of traffic based operations. It&rsquo;s a fantastic tool for debugging and if you don&rsquo;t have it I&rsquo;d highly recommend you take a look at it. Did I say it&rsquo;s 100% free too?</p>

<p><strong>Capturing ASP.NET / IIS Traffic</strong></p>

<p>By default, Fiddler is configured to register itself as the system proxy for Microsoft Windows Internet Services (WinInet) - the HTTP layer used by IE (and other browsers), Microsoft Office, and many other products. Whilst this default configuration is suitable for the majority of your debugging, if you wish to capture traffic from IIS (which bypasses WinInet), you&rsquo;ll need to re-route your IIS traffic through Fiddler by modifying your application&rsquo;s <strong>Web.config</strong>.</p>

<h2 id="step-1-update-your-web-config">Step 1: Update your Web.config</h2>

<p>To do this, simply open your Web.config and add the following snippet of code after the &lt;configSections&gt; element.</p>

<pre><code>&lt;system.net&gt;
    &lt;defaultProxy enabled=&quot;true&quot;&gt;
        &lt;proxy proxyaddress=&quot;http://127.0.0.1:8888&quot; bypassonlocal=&quot;False&quot;/&gt;
    &lt;/defaultProxy&gt;
&lt;/system.net&gt;
</code></pre>

<h2 id="step-2-configure-fiddler-to-use-the-same-port">Step 2: Configure Fiddler to use the same port</h2>

<p>Now that we&rsquo;ve routed our IIS traffic through port 8888, we have to configure Fiddler to listen on the same port. To do this simply open Fiddler, go to <em>Tools &gt; Fiddler Options &gt; Connections</em>, then change the port listed within the &ldquo;<em>Fiddler listens on port</em>&rdquo; setting to <strong>8888</strong>.</p>

<p>Now if you fire up your application you&rsquo;ll start to see your requests stacking up in Fiddler ready for your inspection.</p>

<p>Happy debugging!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 25 Apr 2016 21:12:51 +0000</pubDate></item><item><title>Publishing your first NuGet package in 5 easy steps</title><link>https://josephwoodward.co.uk/2016/04/publishing-your-first-nuget-package-in-5-easy-steps</link><description></description><content:encoded><![CDATA[<p>So you&rsquo;ve just finished writing a small .NET library for a one-off project and you pause for a moment and think &rdquo;<em>I should stick this on NuGet - others may find this useful!</em>&rdquo;. You know what NuGet is and how it works, but having never published a package before you&rsquo;re unsure what to do and are short on time. If this is the case then hopefully this post will help you out and show you just how painless creating your first NuGet package is.</p>

<p>Let&rsquo;s begin!</p>

<h2 id="step-1-download-the-nuget-command-line-tool">Step 1: Download the NuGet command line tool</h2>

<p>First, you&rsquo;ll need to download the NuGet command line tool. You can do this by going <a href="https://dist.nuget.org/index.html">here</a> and downloading the latest version (beneath the Windows x86 Commandline heading).</p>

<h2 id="step-2-add-the-nuget-executable-to-your-path-system-variables">Step 2: Add the NuGet executable to your PATH system variables</h2>

<p>Now we&rsquo;ve downloaded the NuGet executable we want to add it to our PATH system variables. At this point, you could simply reference the executable directly - but before long you&rsquo;ll be wanting to contribute more of your libraries and adding it to your PATH system variables will save you more work in the long run.</p>

<p>If you already know how to add PATH variables then jump to Step 3, if not then read on.</p>

<p><strong>Adding the</strong> <strong>nuget</strong><strong> command to your PATH system variables</strong></p>

<p>First, move the NuGet.exe you downloaded to a suitable location (I store mine in <em>C:/NuGet/</em>). Now, right-click My Computer (This PC if you&rsquo;re on Windows 10), click &ldquo;<em>Advanced System Settings</em>&rdquo;, then click the &ldquo;<em>Environment Variables</em>&rdquo; button located within the Advanced tab. From here, double-click the PATH variable in the top panel and create a new entry by adding the path to the directory that contains your NuGet.exe file (in this instance <em>C:/NuGet/</em>).</p>

<p>Now, if all&rsquo;s done right, you should be able to open a Command Prompt window, type &ldquo;<strong>nuget</strong>&rdquo; and be greeted with a list of NuGet commands.</p>

<h2 id="step-3-create-a-package-nuspec-configuration-file">Step 3: Create a Package.nuspec configuration file</h2>

<p>In short, a .nuspec file is an XML-based configuration file that describes our NuGet package and its contents. For further reading on the role of the .nuspec file see the <a href="https://docs.nuget.org/create/nuspec-reference">nuspec reference on nuget.org</a>.</p>

<p>To create a .nuspec package manifest file, let&rsquo;s go to the root of our project we wish to publish (that&rsquo;s where I prefer to keep my .nuspec file as it can be added to source control) and open a Command Prompt window (Tip: Typing &ldquo;cmd&rdquo; in the folder path of your explorer window will automatically open a Command Prompt window pointing to the directory).</p>

<p>Now type &ldquo;<em>nuget spec</em>&rdquo; into your Command Prompt window. If all goes well you should be greeted with a message saying &ldquo;<em>Created &lsquo;Package.nuspec&rsquo; successfully</em>&rdquo;, and you should now see a Package.nuspec file in your project folder.</p>
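<p>For example (the project path below is hypothetical):</p>

<pre><code>C:\Projects\MyLibrary&gt; nuget spec
Created 'Package.nuspec' successfully.
</code></pre>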

<p>Let&rsquo;s take a moment to look inside of our newly created Package.nuspec file. It should look a little like below:</p>

<pre><code>&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;package&gt;
  &lt;metadata&gt;
    &lt;id&gt;Package&lt;/id&gt;
    &lt;version&gt;1.0.0&lt;/version&gt;
    &lt;authors&gt;Joseph&lt;/authors&gt;
    &lt;owners&gt;Joseph&lt;/owners&gt;
    &lt;licenseUrl&gt;http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE&lt;/licenseUrl&gt;
    &lt;projectUrl&gt;http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE&lt;/projectUrl&gt;
    &lt;iconUrl&gt;http://ICON_URL_HERE_OR_DELETE_THIS_LINE&lt;/iconUrl&gt;
    &lt;requireLicenseAcceptance&gt;false&lt;/requireLicenseAcceptance&gt;
    &lt;description&gt;Package description&lt;/description&gt;
    &lt;releaseNotes&gt;Summary of changes made in this release of the package.&lt;/releaseNotes&gt;
    &lt;copyright&gt;Copyright 2016&lt;/copyright&gt;
    &lt;tags&gt;Tag1 Tag2&lt;/tags&gt;
    &lt;dependencies&gt;
      &lt;dependency id=&quot;SampleDependency&quot; version=&quot;1.0&quot; /&gt;
    &lt;/dependencies&gt;
  &lt;/metadata&gt;
&lt;/package&gt;
</code></pre>

<p>As you can see, all of the parameters are pretty self-explanatory (see the docs <a href="https://docs.nuget.org/create/nuspec-reference">here</a> if you have any uncertainty about what a setting does). The only one you may have a question about is the dependencies node - this is simply a list of the dependencies, and their versions, that your NuGet package relies on (see <a href="https://docs.nuget.org/create/nuspec-reference#user-content-specifying-dependencies">here</a> for more info).</p>

<p>Now we&rsquo;ve generated our NuGet configuration file, let&rsquo;s take a moment to fill it in.</p>

<p>Once you&rsquo;re done, your manifest file should look a little like the one below. The next step is to reference the files to be packaged. In the following example, you&rsquo;ll see that I&rsquo;ve referenced the Release .dll file (take a look at the <a href="https://docs.nuget.org/create/nuspec-reference#user-content-specifying-files-to-include-in-the-package">documentation here</a> for more file options). You may also notice that I&rsquo;ve removed the <code>&lt;dependencies&gt;</code> node, as my package doesn&rsquo;t have any additional dependencies.</p>

<pre><code>&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;package&gt;
  &lt;metadata&gt;
    &lt;id&gt;Slugity&lt;/id&gt;
    &lt;version&gt;1.0.2&lt;/version&gt;
    &lt;title&gt;Slugity&lt;/title&gt;
    &lt;authors&gt;Joseph Woodward&lt;/authors&gt;
    &lt;owners&gt;Joseph Woodward&lt;/owners&gt;
    &lt;licenseUrl&gt;https://raw.githubusercontent.com/JosephWoodward/SlugityDotNet/master/LICENSE&lt;/licenseUrl&gt;
    &lt;projectUrl&gt;https://github.com/JosephWoodward/SlugityDotNet&lt;/projectUrl&gt;
    &lt;iconUrl&gt;https://raw.githubusercontent.com/JosephWoodward/SlugityDotNet/release/assets/logo_128x128.png&lt;/iconUrl&gt;
    &lt;requireLicenseAcceptance&gt;false&lt;/requireLicenseAcceptance&gt;
    &lt;description&gt;Slugity is a simple, configuration based class library that's designed to create search engine friendly URL slugs&lt;/description&gt;
    &lt;language&gt;en-US&lt;/language&gt;
    &lt;releaseNotes&gt;Initial release of Slugity.&lt;/releaseNotes&gt;
    &lt;copyright&gt;Copyright 2016&lt;/copyright&gt;
    &lt;tags&gt;Slug, slug creator, slug generator, url, url creator&lt;/tags&gt;
  &lt;/metadata&gt;
  &lt;files&gt;
    &lt;file src=&quot;bin\Release\Slugity.dll&quot; target=&quot;lib\NET40&quot; /&gt;
  &lt;/files&gt;
&lt;/package&gt;
</code></pre>

<h2 id="step-4-creating-your-nuget-package">Step 4: Creating your NuGet package</h2>

<p>Once your Package.nuspec file has been filled in, let&rsquo;s create our NuGet package! To do this simply run the following command, replacing the path to the .csproj file with your own.</p>

<pre><code>nuget pack Slugity/Slugity.csproj -Prop Configuration=Release -Verbosity detailed
</code></pre>

<p>If you have the time, take a moment to look over the various <a href="https://docs.nuget.org/consume/command-line-reference#user-content-pack-command">pack command options in the NuGet documentation</a>.</p>

<p>Once the above command has been run, if all has gone well then you should see a <em>YourPackageName.nupkg</em> file appear in your project directory. This is our NuGet package that we can now submit to nuget.org!</p>

<h2 id="step-5-submitting-your-nuget-package-to-nuget-org">Step 5: Submitting your NuGet package to nuget.org</h2>

<p>We&rsquo;re almost there! Now all we need to do is head over to <a href="https://www.nuget.org/">nuget.org</a> and submit our package. If you&rsquo;re not already registered at nuget.org then you&rsquo;ll need to do so (see the &ldquo;Register / Sign in&rdquo; link at the top right of the homepage). Next, go to your <a href="https://www.nuget.org/account">Account page</a> and click &ldquo;Upload a package&rdquo;. Now all you need to do is upload your .nupkg package and verify the package details that nuget.org will use for your package listing!</p>
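<p>Alternatively, if you prefer to stay in the command line, the NuGet CLI can also push a package for you once you&rsquo;ve generated an API key from your nuget.org account page. A minimal sketch - the package file name and API key below are placeholders, so substitute your own:</p>

```shell
# Push the freshly built package to nuget.org.
# Replace the placeholders with your own package file and API key
# (the key is generated from your nuget.org account page).
nuget push Slugity.1.0.2.nupkg -ApiKey YOUR_API_KEY -Source https://www.nuget.org/api/v2/package
```

<p>Either route ends with the same result: your package listed on nuget.org and installable via the Package Manager.</p>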

<p><strong>Congratulations on submitting your first NuGet package!</strong></p>

<p>I hope this guide has been helpful to you, and as always, if you have any questions then leave them in the comments!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 15 Apr 2016 21:11:25 +0000</pubDate></item><item><title>Tips for learning on the job</title><link>https://josephwoodward.co.uk/2016/03/tips-for-learning-on-the-job</link><description></description><content:encoded><![CDATA[<p>Today, whilst out walking the dog I was listening to an episode of <a href="https://developertea.com">Developer Tea</a> titled &ldquo;<a href="https://developertea.com/episodes/24531">Learning on the Job</a>&rdquo;. As someone that&rsquo;s always looking for smarter ways of learning, this episode was of particular interest to me. In fact, it got me thinking - what tips would I recommend for learning on the job?</p>

<p>I find learning on the job interesting for a few reasons. First of all, you spend a good portion of time at your computer programming. This very fact means you have plenty of opportunities to improve your knowledge and understanding of the language or framework you&rsquo;re using. However, you also have to balance this opportunity with the fact that you&rsquo;re working and have things to do and deadlines to meet. These external pressures mean you have to be smart about how you learn.</p>

<p>One of the strengths of a good developer is their ability to learn on the fly, and we should always be looking for opportunities to learn.</p>

<p>With this in mind, this post is a collection of what I&rsquo;ve found to be effective ways of learning on the job, without letting it get in the way of your work.</p>

<h2 id="repls-fiddlers-and-online-editors-are-your-friends">REPLs, Fiddlers and online editors are your friends</h2>

<p>REPLs are a great tool for learning. Within my browser&rsquo;s bookmark toolbar I have a whole host of online REPLs and Fiddlers for different languages, such as C#, TypeScript and JavaScript.</p>

<p>If I&rsquo;m researching a problem or happen to come across some code that I&rsquo;m struggling to understand, I&rsquo;ll often take a moment to launch the appropriate REPL or Fiddler and experiment with the output to make sure I fully understand what the code is doing. These can be an invaluable tool for testing that base library function you&rsquo;ve never used before, checking the output of an algorithm you&rsquo;ve found, or learning what your transpiled TypeScript code will look like. In fact, some online tools such as <a href="https://dotnetfiddle.net/">.NET Fiddle</a> also have the ability to decompile code all the way down to the IL/MSIL level - enabling you to gain an even greater understanding of what your code is doing.</p>

<p>Here are a few of the online tools I would recommend:</p>

<ul>
<li><a href="https://dotnetfiddle.net/">.NET Fiddle</a> (suitable for C#, F# and even VB.NET!)</li>
<li><a href="http://www.typescriptlang.org/play/">TypeScript Playground</a></li>
<li><a href="https://jsfiddle.net/">JS Fiddle</a> or <a href="https://plnkr.co/">Plunker</a></li>
</ul>

<p>This aptly brings me onto my next tip.</p>

<h2 id="don-t-just-copy-and-paste-that-code">Don&rsquo;t just copy and paste that code</h2>

<p>Above I talked about REPLs and online Fiddlers. Use them! We&rsquo;ve all copied and pasted code from Stack Overflow to get something working. Take a moment to figure out exactly WHY the code works and what it&rsquo;s doing. Copying and pasting code that you don&rsquo;t understand is both dangerous and a wasted opportunity to learn something new. If you&rsquo;re too busy then bookmark it, or better yet - some of the editors listed above, such as .NET Fiddle, allow you to save snippets of code for later.</p>

<h2 id="write-it-down-in-a-notepad">Write it down in a notepad</h2>

<p>This tip I&rsquo;ve found particularly effective, and it goes beyond learning on the job - but I&rsquo;ll expound on that in a moment.</p>

<p>If you stumble across one of those bits of information that makes you think &ldquo;<em>Wow, I didn&rsquo;t know that. I should remember that!</em>&rdquo;, take a moment and write it down in a notepad. The process of writing it down helps commit your new-found knowledge to your long-term memory. Another tip for further improving retention is to revisit some of your notes a couple of hours, or days, later - repetition is a proven way to improve the retention of such information.</p>

<p>Having a notebook on you is also useful when reading books. If you happen to read a notable sentence or paragraph, take a moment to write it down in your own words in your notebook. I guarantee you&rsquo;ll be able to recall the information later that day far more clearly than you would have had you continued reading.</p>

<h2 id="share-it-with-a-co-worker-or-summarise-it-in-a-quick-tweet">Share it with a co-worker or summarise it in a quick tweet</h2>

<p>If you&rsquo;ve just solved a problem or stumbled across a valuable piece of information then share it with your nearest co-worker. The very fact that you&rsquo;re talking about it or explaining the concept will help ingrain it into your long-term memory. You also have the added benefit of sharing that knowledge with others.</p>

<p>If no-one&rsquo;s around then you could always talk it through with your <a href="https://en.wikipedia.org/wiki/Rubber_duck_debugging">rubber duck</a>, or summarise your new knowledge in a quick tweet.</p>

<h2 id="stack-overflow">Stack Overflow</h2>

<p>Stack Overflow is another great tool for learning on the job. If you&rsquo;re a registered user then you should take full advantage of the ability to favourite answers or questions. Doing so enables you to review them in more detail at a more suitable time.</p>

<h2 id="pocket-and-other-alternatives">Pocket and other alternatives</h2>

<p>Bookmark sites like <a href="https://getpocket.com">Pocket</a> are another powerful tool when it comes to learning on the job and go a step beyond bookmarking.</p>

<p>If, whilst struggling with a problem, you happened to come across an interesting article but didn&rsquo;t have the time to read it in its entirety, why not add it to your personal Pocket list and fill in the knowledge gap at a later date whilst you&rsquo;re waiting for the bus? Many apps like Pocket can automatically sync bookmarks with your smartphone for offline viewing, making them perfect for commuting.</p>

<p>I hope people find these tips valuable. I&rsquo;m always interested in hearing other recommendations for learning on the job so if there&rsquo;s anything I&rsquo;ve not mentioned then please feel free to add it to the comments!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 29 Mar 2016 21:10:33 +0000</pubDate></item><item><title>Easy slug generation with Slugity</title><link>https://josephwoodward.co.uk/2016/03/c-sharp-slug-generation-with-slugity</link><description></description><content:encoded><![CDATA[<p>This year, as outlined in my <a href="http://josephwoodward.co.uk/2016/02/personal-targets-and-goals-for-2016/">goals for 2016 post</a>, I&rsquo;ve been keen to turn random bits of code I&rsquo;ve written across a variety of projects, into fully-fledged open-sourced libraries that can be downloaded via NuGet. My first library aims to tackle a minor problem any developer that works on public facing web-based applications faces - generating search engine friendly URL slugs.</p>

<h2 id="introducing-slugity-the-highly-configurable-c-slug-generator">Introducing Slugity - The highly configurable C# slug generator</h2>

<p>Whilst creating Slugity, my main focus was configurability. People, myself included, have opinions on how a URL should be formatted. Some like the camel-case URLs that frameworks such as ASP.NET MVC encourage, whilst others dogmatically favour lowercase slugs. Some still like to strip stop words from their slugs in order to shorten the overall URL length whilst retaining keyword density. Having jumped from pattern to pattern across a variety of projects myself, I was keen to develop Slugity to cater to these needs.</p>

<p>Having been working on Slugity for a number of days now, below are the configuration end-points included in its first release:</p>

<ul>
<li>Customisable slug text case (CamelCase, LowerCase, Ignore)</li>
<li>String Separator</li>
<li>Slug&rsquo;s maximum length</li>
<li>Optional replacement characters</li>
<li>Stop Words</li>
</ul>

<h2 id="slugity-setup-and-configuring">Slugity setup and configuring</h2>

<p>Given that slugs aren&rsquo;t the most complicated strings to get right, setting up Slugity is a breeze. It comes with a default configuration that should be suitable for the majority of its users, myself included - but it&rsquo;s simple enough to configure if the default settings don&rsquo;t meet your requirements.</p>

<p><strong>Using default configuration options:</strong></p>

<pre><code>var slugity = new Slugity();
string slug = slugity.GenerateSlug(&quot;A &lt;span style=\&quot;font-weight: bold\&quot;&gt;customisable&lt;/span&gt; slug generation library&quot;);

Console.WriteLine(slug);
//Output: a-customisable-slug-generation-library
</code></pre>

<p><strong>Configuring Slugity:</strong></p>

<p>If Slugity&rsquo;s default configuration doesn&rsquo;t meet your requirements then configuration is easy:</p>

<pre><code>var configuration = new SlugityConfig
{
    TextCase = TextCase.LowerCase,
    StringSeparator = '_',
    MaxLength = 60
};

configuration.ReplacementCharacters.Add(&quot;eat&quot;, &quot;munch on&quot;);

var slugity = new Slugity(configuration);
string slug = slugity.GenerateSlug(&quot;I like to eat lettuce&quot;);

Console.WriteLine(slug);
//Output: i_like_to_munch_on_lettuce
</code></pre>

<p>Moving forward I&rsquo;d like to add the ability to replace the formatting classes that are responsible for the formatting and cleaning of the slug.</p>

<p>The code is currently up on <a href="https://github.com/JosephWoodward/SlugityDotNet">GitHub and can be viewed here</a>. I&rsquo;m in the process of finishing off the last bits and pieces and then getting it added to NuGet, so stay tuned.</p>

<h2 id="update">Update</h2>

<p>The library has recently been added to NuGet and can be <a href="https://www.nuget.org/packages/Slugity/">downloaded here</a>, or via the following NuGet command:</p>

<blockquote>
<p>Install-Package Slugity</p>
</blockquote>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 18 Mar 2016 21:09:17 +0000</pubDate></item><item><title>OzCode Review - The magical extension that takes the pain out of debugging</title><link>https://josephwoodward.co.uk/2016/03/ozcode-review</link><description></description><content:encoded><![CDATA[<p><em>TLDR; Yes, OzCode is easily worth the money. It doesn&rsquo;t take you long to realise the amount of time it saves you in debugging quickly pays for itself over time.</em></p>

<p>Debugging code can be both hard and time-consuming. Thankfully, the guys over at OzCode have been working hard to battle the debugging time sink-hole by developing OzCode, a magically intelligent debugging extension for Visual Studio.</p>

<p>Remembering back to when I first installed OzCode (when it was in its beta stage!), it took all of a few minutes to see that the OzCode team were onto a winner. I clearly remember entering into debug mode by pressing F5 and watching my Visual Studio code window light up with a host of new debugging tools. Tools that I&rsquo;ve been crying out for in Visual Studio (or other editors for that matter!) for a long time. In fact, OzCode has changed the way I debug altogether.</p>

<p><strong>What does OzCode cost, and is it worth it?</strong></p>

<p>Before continuing, let&rsquo;s go over the price. OzCode is a paid-for Visual Studio extension, coming in at an extremely reasonable $79 (£55) for a personal licence. Is it worth the money? <strong>Definitely</strong>.</p>

<p>I&rsquo;m always happy to pay for good software and OzCode is no exception. Suffice it to say, <strong>it doesn&rsquo;t take long to realise the value it provides and how quickly it pays for itself in the time it saves you</strong>. Don&rsquo;t believe me? <a href="http://www.oz-code.com/">Go try the free 30-day trial and see for yourself</a>. I purchased it as soon as it was out of public beta as it was clear it was worth the money.</p>

<p>It&rsquo;s also worth mentioning that OzCode provides licences for open-source projects and for Microsoft MVPs.</p>

<p>So without further ado, let&rsquo;s take a deeper look at OzCode and some of the features that make it one of the first extensions I install when setting up a new development machine, and why I think you&rsquo;ll love it.</p>

<h2 id="magic-glance">Magic Glance</h2>

<p>OzCode&rsquo;s Magic Glance feature is by far the biggest change to how you&rsquo;ll debug your code, and probably the first thing you&rsquo;ll notice when entering debug mode. Normally, when you&rsquo;re trying to find out the value of a variable or property within Visual Studio, you need to hover over the variable(s) you&rsquo;d like to inspect. This is less than perfect: the more variables you have to inspect, the more time you&rsquo;ll spend hovering over them to keep track of what&rsquo;s changing. This is where OzCode&rsquo;s Magic Glance feature steps in.</p>

<p>With OzCode installed and Magic Glance at your disposal, you&rsquo;re treated to a helpful display of variable/parameter values without the requirement to hover over them. This also helps you build a better mental model and holistic view of the data you&rsquo;re debugging.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/ozcode_magicglance.png" alt="" /></p>

<h2 id="reveal">Reveal</h2>

<p>Collections (or any derivative of an array) receive an even larger dosage of OzCode love with the Reveal feature. If you&rsquo;ve ever had to view specific properties within a collection then you&rsquo;ll know it&rsquo;s not the most pleasant experience.</p>

<p>OzCode simplifies the process of reviewing data in a collection by providing you with the ability to mark properties within a collection as favourites (via a toggleable star icon), making them instantly visible above the variable.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/ozcode_reveal.png" alt="" /></p>

<h2 id="search">Search</h2>

<p>Linked closely to the aforementioned Reveal function, Search adds the additional benefit of being able to search a collection for a particular value. To do this, simply inspect the collection and enter the value you&rsquo;re searching for; OzCode will then filter down to the results that match your input. By default OzCode performs a shallow search on your collection (I&rsquo;d imagine this is for performance purposes) - however, if your object graph is deeper then you can easily drill down further.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/ozcode_search.gif" alt="" /></p>

<h2 id="investigate">Investigate</h2>

<p>If you&rsquo;ve ever had to split a complex expression up in order to see its values whilst debugging then OzCode&rsquo;s Investigate feature will be music to your ears. When reaching a complex (or simple) if statement, OzCode adds visual aids to the expressions to indicate whether they&rsquo;re true or false.<img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/ozcode_investigate.png" alt="" /></p>

<h2 id="quick-attach-to-a-process">Quick Attach to a process</h2>

<p>OzCode&rsquo;s Quick Attach to a process feature is easily one of my most used - and I&rsquo;ll explain why in a second. Depending on your project type, you either enter debugging mode by pressing F5 or by manually attaching your build to a process. OzCode greatly simplifies this process, and - as a keyboard shortcut junkie - I&rsquo;m pleased that it also provides shortcuts to make it even faster.</p>

<p>Using shortcuts to attach to a process quickly became my de facto way of entering debugging mode, and it saves me a <strong>tonne</strong> of time. Friends and colleagues are often impressed at how quickly I&rsquo;m able to enter debugging mode, to which I explain that it&rsquo;s all thanks to OzCode. Debugging is now a single keypress away and I love it! (as opposed to multiple clicks, or waiting for Visual Studio to launch my application in a new browser window)</p>

<h2 id="loads-more-features">Loads more features</h2>

<p>These are just a few of my favourite features that easily make OzCode worth the money.</p>

<p>If you want to see a full list of features then I&rsquo;d recommend taking a look over <a href="http://o.oz-code.com/features">OzCode&rsquo;s features page</a>.</p>

<p>Features include:</p>

<ul>
<li>Predict the Future</li>
<li>Live Coding</li>
<li>Conditional Breakpoints</li>
<li>When Set&hellip; Break</li>
<li>Exceptions Trails</li>
<li>Predicting Exceptions</li>
<li>Trace</li>
<li>Foresee</li>
<li>Compare</li>
<li>Custom Expressions</li>
<li>Quick Actions</li>
<li>Show All Instances</li>
</ul>

<h2 id="future-of-ozcode">Future of OzCode</h2>

<p>The future of OzCode is looking bright. With code analysis tools now able to leverage the Roslyn platform, I can&rsquo;t wait to see what wizardry the OzCode team come up with next. For a sneak peek at some of the great features on the horizon I would definitely recommend <a href="https://vimeo.com/156386590">this recent OzCode webinar</a>. If you&rsquo;ve only got a few minutes then I would highly recommend spending them on the <a href="https://vimeo.com/156386590#t=1680s">LINQ visualisation at the 28-minute mark</a>.</p>

<div style="background:#eee; border:1px solid #ccc; padding:5px 10px"><strong>Disclaimer:</strong><br>
I am in no way associated with OzCode. I'm just an extremely happy user of a great Visual Studio extension that I purchased when it was first released and still use to this day.</div>

<p>Any questions about this review or OzCode itself then feel free to ask!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 10 Mar 2016 21:03:36 +0000</pubDate></item><item><title>The Ajax response object pattern</title><link>https://josephwoodward.co.uk/2016/02/ajax-response-object-pattern</link><description></description><content:encoded><![CDATA[<p>For some time now I&rsquo;ve been using what I like to call the Ajax response object pattern to great success across a variety of projects, enough so that I thought it merited its own blog post.</p>

<p>The Ajax response object pattern is an incredibly simple pattern to implement, but goes a long way to help promote a consistent API to handling most Ajax responses, and hopefully by the end of this post you&rsquo;ll be able to see the value in implementing such an approach.</p>

<p>Before we dig into what the Ajax response object pattern is and how to implement it, let&rsquo;s take a moment to look at the problem it aims to solve.</p>

<h2 id="the-common-way-to-handle-ajax-responses">The common way to handle Ajax responses</h2>

<p>Typically you&rsquo;ll see a variation of the following approach to <em>handling</em> an asynchronous javascript response (the important part here is the handling of the response rather than how we created it), and whilst the code may vary slightly (the example used is a trivial implementation) hopefully it will give you an idea as to what we&rsquo;re trying to do.</p>

<pre><code>// The back end
public class ProfileController : Controller
{

    ...

    [HttpGet]
    public ActionResult GetProfile(int profileId)
    {
        var userProfile = this.profileService.GetUserProfile(profileId);

        return View(new ProfileViewModel(userProfile));
    }
}

// The Javascript / JQuery
$.ajax({
    type: &quot;GET&quot;,
    url: &quot;/Profile/GetProfile/&quot; + $('#userProfileId').val(),
    dataType: &quot;html&quot;,
    success: function (data, textStatus, jqXHR) {
        $('#profileContainer').html(data);
    },
    error: function (jqXHR, textStatus, errorThrown) {
        // Correctly handle error
    }
});
</code></pre>

<p>As you can see from the above code, all we&rsquo;re doing is making an asynchronous JavaScript request to our back end to retrieve a profile based on the profile id provided. The DOM is then updated with the response via the call to jQuery&rsquo;s <em>html()</em> method.</p>

<h2 id="what-s-wrong-with-this-approach">What&rsquo;s wrong with this approach?</h2>

<p>Nothing. There&rsquo;s nothing wrong with this approach; it&rsquo;s in fact a perfectly acceptable way to perform and handle Ajax requests and responses - however, there is room for improvement. To see what can be improved, ask yourself the following questions:</p>

<ol>
<li><p><strong>What does our code tell us about the state of the ajax response?</strong><br>
Judging by our code, we can tell that our HTTP request to our <em>GetProfile</em> controller action was successful, as our <em>success</em> method is being invoked - however, what does this <em>REALLY</em> tell us about the state of our response? How do we know that our payload is in fact a user profile? After all, our success method (or our status code of 200 OK) simply tells us that the server successfully responded to our HTTP request - it doesn&rsquo;t tell us that the domain validation conditions within our service were met.</p></li>

<li><p><strong>Is it easy to reason with?</strong><br>
When programming we should always aim to write code that is succinct and clear - code that leaves no room for ambiguity and removes any guesswork. Does the above solution do this?</p></li>

<li><p><strong>How could we pass additional context to our Javascript?</strong><br>
As we&rsquo;re returning an HTML response to be rendered to the user, what happens if we wanted to pass additional context to the Javascript such as a notification of an error, or some data we wish to display to the user via a modal dialog?</p></li>

<li><p><strong>Are we promoting any kind of consistency?</strong><br>
What happens if our next Ajax response returns a Json object? That will need to be handled in a completely different way. If we have multiple developers working on a project then they&rsquo;ll each probably implement ajax responses in various ways.</p></li>
</ol>

<h2 id="a-better-approach-let-s-model-our-ajax-response">A better approach - let&rsquo;s model our Ajax response</h2>

<p>Object-orientated programming is all about modelling. Wikipedia says it best: </p>

<blockquote>
<p>Object-oriented programming (OOP) is a programming language model organized around objects rather than &ldquo;actions&rdquo; and data rather than logic.</p>
</blockquote>

<p>To us mere mortals (as opposed to computers), modelling behaviour and data makes them easier to grasp and reason about - this premise is what helped make the object-orientated programming paradigm popular in the early to mid-1990s. Looking back over our previous example, are we really modelling our response?</p>

<p>When we start passing values around (such as HTML in the previous example), we&rsquo;re missing out on the benefits gained from creating a model around our expected behaviour. So, instead of simply returning a plain HTML response or a JSON object containing just our data, let&rsquo;s try and model our HTTP response and see what benefits it will bring us. This is where the Ajax response object pattern can help.</p>

<h2 id="the-ajax-response-object-pattern">The Ajax response object pattern</h2>

<p>Implementation of the Ajax response object pattern is simple. To resolve the concerns and questions raised above, we just need to model our Ajax response, allowing us to add additional context to our asynchronous HTTP response which we can then reason with within our Javascript.</p>

<p>The following example is the object I tend to favour when applying the Ajax response object pattern - but your implementation may vary depending on what you&rsquo;re doing.</p>

<pre><code>public class AjaxResponse
{
    public bool Success { get; set; }

    public string ErrorMessage { get; set; }

    public string RedirectUrl { get; set; }

    public object ResponseData { get; set; }

    public static AjaxResponse CreateSuccessfulResult(object responseData)
    {
        return new AjaxResponse
        {
            Success = true,
            ResponseData = responseData
        };
    }
}
</code></pre>

<p>We can then use the <em>AjaxResponse</em> object to model our HTTP response to something like the following:</p>

<pre><code>[HttpGet]
public JsonResult GetProfile(int profileId)
{
    var response = new AjaxResponse();

    try
    {
        var userProfile = this.profileService.GetUserProfile(profileId);

        response.Success = true;
        response.ResponseData = RenderPartialViewToString(&quot;Profile&quot;, new ProfileViewModel(userProfile));
    }
    catch (RecordNotFoundException exception)
    {
        response.Success = false;
        response.ErrorMessage = exception.Message;
    }

    return Json(response, JsonRequestBehavior.AllowGet);
}
</code></pre>

<p>Earlier we were rendering the view and returning just the HTML payload in the response; now we render the view to a string and pass it to the <em>ResponseData</em> property. This way we can make use of the additional properties, such as whether the response the user is expecting was successful - and if not, we can supply an error message. Because our ResponseData property is of the base object type, we can use it to store any type, including Json.</p>

<p>Below is an implementation of the <em>RenderPartialViewToString</em> method I often create in a base controller when writing ASP.NET MVC applications.</p>

<pre><code>public class BaseController : Controller
{
    protected string RenderPartialViewToString(string viewName, object model)
    {
        if (string.IsNullOrEmpty(viewName))
        {
            viewName = this.ControllerContext.RouteData.GetRequiredString(&quot;action&quot;);
        }

        this.ViewData.Model = model;
        using (var stringWriter = new StringWriter())
        {
            ViewEngineResult partialView = ViewEngines.Engines.FindPartialView(this.ControllerContext, viewName);
            ViewContext viewContext = new ViewContext(this.ControllerContext, partialView.View, this.ViewData, this.TempData, (TextWriter)stringWriter);
            partialView.View.Render(viewContext, (TextWriter)stringWriter);
            return stringWriter.GetStringBuilder().ToString();
        }
    }
}
</code></pre>

<p>Now that we&rsquo;ve modelled our response, we can provide the consumer with far more context, enabling us to better reason with our server response. In the cases where we need to perform some kind of front-end action based on the outcome of the response, we can easily do so.</p>

<pre><code>// The Javascript / JQuery
$.ajax({
    type: &quot;GET&quot;,
    url: &quot;/Profile/GetProfile/&quot; + $('#userProfileId').val(),
    dataType: &quot;json&quot;,
    success: function (ajaxResponse, textStatus, jqXHR) {
        if (ajaxResponse.Success === true) {
            $('#profileContainer').html(ajaxResponse.ResponseData);
        } else {
            Dialog.showDialog(ajaxResponse.ErrorMessage);
        }
    },
    error: function (jqXHR, textStatus, errorThrown) {
        ...
    }
});
</code></pre>

<p>Additionally, if this approach is used throughout a team then you&rsquo;re promoting a consistent API that you can build around, which makes the development of general error handlers far easier. What&rsquo;s more, if you&rsquo;re using TypeScript in your codebase then you can continue to leverage the benefits by casting your response to a TypeScript interface for the AjaxResponse class, gaining all the IntelliSense and tooling support goodness that comes with TypeScript.</p>

<pre><code>// TypeScript implementation
export interface IAjaxResponse {
    Success: boolean;
    ErrorMessage: string;
    RedirectUrl: string;
    ResponseData: any;
}
</code></pre>
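<p>The general error handler mentioned above can be sketched in a few lines. The helper below is hypothetical - the <code>handleAjaxResponse</code> name and the <code>onSuccess</code>/<code>onError</code>/<code>onRedirect</code> callback names are illustrative, not part of any library - and simply centralises the branching on the AjaxResponse shape described in this post:</p>

```javascript
// Hypothetical generic handler for the AjaxResponse shape described above.
// Centralises the Success/ErrorMessage/RedirectUrl branching so individual
// ajax success callbacks only need to supply the relevant callbacks.
function handleAjaxResponse(response, callbacks) {
    if (response.Success) {
        if (response.RedirectUrl) {
            // A redirect takes precedence over rendering any payload.
            callbacks.onRedirect(response.RedirectUrl);
        } else {
            callbacks.onSuccess(response.ResponseData);
        }
    } else {
        callbacks.onError(response.ErrorMessage || "An unexpected error occurred");
    }
}
```

<p>Each <em>success</em> callback then shrinks to a single call to <code>handleAjaxResponse</code>, keeping the Success/ErrorMessage conventions in one place across the codebase.</p>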

<p>That&rsquo;s all for now. Thoughts and comments welcome!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sun, 21 Feb 2016 21:01:59 +0000</pubDate></item><item><title>Adding search to your website with Azure Search</title><link>https://josephwoodward.co.uk/2016/02/adding-search-to-website-with-azure-search</link><description></description><content:encoded><![CDATA[<p>As traffic for my blog has started to grow I&rsquo;ve become increasingly keen to implement some kind of search facility to allow visitors to search for content, and as most of you will probably know search isn&rsquo;t an easy problem - in fact search is an extremely hard problem, which is why I was keen to look at some of the existing search providers out there.</p>

<p>Azure&rsquo;s search platform has been on my radar for a while now, and having recently listened to an MS Dev Show interview with one of the developers behind Azure Search I was keen to give it a go.</p>

<p>This article serves as a high-level overview of how to get up and running with the Azure Search service on your website or blog, with a relatively simple implementation that could use some fleshing out. Before we begin it&rsquo;s also worth mentioning that Azure&rsquo;s Search service has some really powerful search capabilities that are beyond the scope of this article, so I&rsquo;d highly <a href="https://azure.microsoft.com/en-gb/documentation/articles/search-create-service-portal/">recommend checking out the documentation</a>.</p>

<h2 id="azure-search-pricing-structure">Azure Search Pricing Structure</h2>

<p>Before we begin, let us take a moment to look at the Azure Search pricing:</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/search.png" alt="" /></p>

<p><strong>One of the great things about Azure Search is that it has a free tier that&rsquo;s more than suitable for relatively low to medium traffic websites.</strong> What&rsquo;s odd, though, is the sudden climb in price and specification from the free tier to the next model up - I assume this is because, whilst it&rsquo;s been available for a year, the service is still relatively new, so hopefully we&rsquo;ll see more options become available moving forward. As it stands the free tier is more than what I need, so let us continue!</p>

<h2 id="setting-up-azure-search-step-1-creating-our-search-service">Setting up Azure Search - Step 1: Creating our search service</h2>

<p>Before we begin we&rsquo;ve got to create our Azure Search service within Azure&rsquo;s administration portal.</p>

<p><strong>Note:</strong> If you&rsquo;ve got the time then the Azure Search service documentation goes through this step in greater detail, so if you get stuck then feel free to refer to it <a href="https://azure.microsoft.com/en-gb/documentation/articles/search-create-service-portal/">here</a>.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/azuresearchlocation.png" alt="" /></p>

<p>Enter your service name (in this instance I used the name &ldquo;jwblog&rdquo;), select your subscription, resource group and location, and select the free pricing tier.</p>

<h2 id="setting-up-azure-search-step-2-configuring-our-service">Setting up Azure Search - Step 2: Configuring our service</h2>

<p>Now that we&rsquo;ve created our free search service, next we have to configure it. To do this click the Add Index button within the top Azure Search menu.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/azuresearch_configuration.png" alt="" /></p>

<p><strong>Add an index (see image above)</strong></p>

<p>Once we&rsquo;ve created our service we next need to decide which content we&rsquo;re going to make searchable and reflect that within our search indexes; Azure Search will then index that content, allowing it to be searched. We can do this either programmatically via the Azure Search API, or via the Portal. Personally I&rsquo;d rather do it up front in the Portal - there&rsquo;s nothing to stop you doing it in code, but for now let&rsquo;s do it via the Azure Portal. In the case of this post we want to index our blog post title and blog post content, as this is what we want the user to be able to search.</p>

<p><strong>Setting retrievable content  (see image above)</strong></p>

<p>When configuring our Azure Search profile we can specify what content we want to mark as retrievable. In this instance I plan on only showing the post title on the search results page, so I will mark the post title as retrievable content. As the search results page will also need to link to the actual blog post, I will mark the blog post id as retrievable too.</p>

<p>Now we&rsquo;ve set up our profile and configured it, let&rsquo;s get on with some coding!</p>

<h2 id="setting-up-azure-search-step-3-implementing-our-search-the-interface">Setting up Azure Search - Step 3: Implementing our search - the interface</h2>

<p>Because it&rsquo;s always a good idea to program to an interface rather than an implementation, and because we may want to index other content moving forward, we&rsquo;ll create a simple search provider interface that avoids any Azure Search specific references.</p>

<pre><code>public interface ISearchProvider&lt;T&gt; where T : class
{
    IEnumerable&lt;TResult&gt; SearchDocuments&lt;TResult&gt;(string searchText, string filter, Func&lt;T, TResult&gt; mapper);

    void AddToIndex(T document);
}
</code></pre>

<p>If you take a moment to look at the SearchDocuments method signature:</p>

<pre><code>IEnumerable&lt;TResult&gt; SearchDocuments&lt;TResult&gt;(string searchText, string filter, Func&lt;T, TResult&gt; mapper);
</code></pre>

<p>You&rsquo;ll see that we take a Func of type T and return a TResult - this will allow us to map our search index class (which we&rsquo;ll create next) to a data transfer object, but more on this later. You&rsquo;re also able to supply <a href="https://msdn.microsoft.com/library/azure/mt589323.aspx">search filters</a> to give your website richer searching capabilities.</p>
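<p>The mapping idea itself is language-agnostic; as a rough TypeScript sketch (a simplified analogue for illustration, not the Azure SDK), the caller supplies a function that translates each raw index document into its own DTO, keeping search-specific types out of the consuming code:</p>

```typescript
// Simplified analogue of the SearchDocuments mapping pattern: the mapper
// parameter plays the role of the C# Func<T, TResult>.
interface BlogPostSearchIndexDoc {
    indexId: string;
    postId: number;
    postTitle: string;
}

interface BlogPostSearchResultDto {
    id: number;
    title: string;
}

function searchDocuments<T, TResult>(
    hits: T[],
    mapper: (doc: T) => TResult
): TResult[] {
    // Each raw index document is translated into the caller's DTO.
    return hits.map(mapper);
}

const hits: BlogPostSearchIndexDoc[] = [
    { indexId: "blogpost1", postId: 1, postTitle: "Adding search with Azure Search" },
];

const results: BlogPostSearchResultDto[] =
    searchDocuments(hits, d => ({ id: d.postId, title: d.postTitle }));
```

<p>The consuming code only ever sees the DTO shape, so swapping the search provider later wouldn&rsquo;t ripple through the rest of the application.</p>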

<p>Next we want to create our blog post search index class that will contain all of the properties required to index our content. As we&rsquo;re creating a generic interface to our search provider, this gives us the scope to easily index other content too - a BlogPostSearchIndex for posts, but equally a BlogPageSearchIndex for pages, say.</p>

<p><strong>What&rsquo;s important to note about the BlogPostSearchIndex class below is that its properties have the same names and types as the columns we configured earlier within Step 2.</strong></p>

<pre><code>// BlogPostSearchIndex.cs

[SerializePropertyNamesAsCamelCase]
public class BlogPostSearchIndex
{
    public BlogPostSearchIndex(int postId, string postTitle, string postBody)
    {
        // Document index needs to be a unique string
        IndexId = &quot;blogpost&quot; + postId.ToString();
        PostId = postId;
        PostTitle = postTitle;
        PostBody = postBody;
    }

    // Properties must remain public as they'll be used for automatic data binding
    public string IndexId { get; set; }

    public int PostId { get; set; }

    public string PostTitle { get; set; }

    public string PostBody { get; set; }

    public override string ToString()
    {
        return $&quot;IndexId: {IndexId}\tPostId: {PostId}\tPostTitle: {PostTitle}\tPostBody: {PostBody}&quot;;
    }
}
</code></pre>

<p>Now that we&rsquo;ve created the interface to our search provider we&rsquo;ll go ahead and work on the implementation.</p>

<h2 id="setting-up-azure-search-step-5-implementing-our-search-the-implementation">Setting up Azure Search - Step 5: Implementing our search - the implementation</h2>

<p>At this point we&rsquo;re now ready to start working on the implementation of our search functionality, so we&rsquo;ll create an <em>AzureSearchProvider</em> class that implements our <em>ISearchProvider</em> interface and start fleshing out our search.</p>

<p>Before we begin it&rsquo;s worth being aware that Azure&rsquo;s search service <a href="https://msdn.microsoft.com/en-us/library/azure/dn798935.aspx">does provide a RESTful API</a> that you can consume to manage and query indexes, however as you&rsquo;ll see below I&rsquo;ve opted to use their SDK.</p>

<pre><code>// AzureSearchProvider.cs

public class AzureSearchProvider&lt;T&gt; : ISearchProvider&lt;T&gt; where T : class
{
    private readonly SearchServiceClient _searchServiceClient;
    private const string Index = &quot;blogpost&quot;;

    public AzureSearchProvider(SearchServiceClient searchServiceClient)
    {
        _searchServiceClient = searchServiceClient;
    }

    public IEnumerable&lt;TResult&gt; SearchDocuments&lt;TResult&gt;(string searchText, string filter, Func&lt;T, TResult&gt; mapper)
    {
        SearchIndexClient indexClient = _searchServiceClient.Indexes.GetClient(Index);

        var sp = new SearchParameters();
        if (!string.IsNullOrEmpty(filter)) sp.Filter = filter;

        DocumentSearchResponse&lt;T&gt; response = indexClient.Documents.Search&lt;T&gt;(searchText, sp);
        return response.Results.Select(result =&gt; mapper(result.Document)).ToList();
    }

    public void AddToIndex(T document)
    {
        if (document == null)
            throw new ArgumentNullException(nameof(document));

        SearchIndexClient indexClient = _searchServiceClient.Indexes.GetClient(Index);

        try
        {
            // No need to create an UpdateIndex method as we use MergeOrUpload action type here.
            IndexBatch&lt;T&gt; batch = IndexBatch.Create(IndexAction.Create(IndexActionType.MergeOrUpload, document));
            indexClient.Documents.Index(batch);
        }
        catch (IndexBatchException e)
        {
            Console.WriteLine(&quot;Failed to Index some of the documents: {0}&quot;,
                string.Join(&quot;, &quot;, e.IndexResponse.Results.Where(r =&gt; !r.Succeeded).Select(r =&gt; r.Key)));
        }
    }
}
</code></pre>

<p>The last part of our implementation is to configure our search provider with our IoC container of choice, ensuring that our <em>AzureSearchProvider</em> class and its dependency (Azure&rsquo;s <em>SearchServiceClient</em> class) can be resolved. The <em>SearchServiceClient</em> constructor requires our credentials and search service name as arguments, so we&rsquo;ll configure them there too.</p>

<p>In this instance I&rsquo;m using StructureMap; if you&rsquo;re not, then you&rsquo;ll need to adjust your IoC configuration accordingly.</p>

<pre><code>public class DomainRegistry : Registry
{
    public DomainRegistry()
    {
        ...

        this.For&lt;SearchServiceClient&gt;().Use(() =&gt; new SearchServiceClient(&quot;jwblog&quot;, new SearchCredentials(&quot;your search administration key&quot;)));
        this.For(typeof(ISearchProvider&lt;&gt;)).Use(typeof(AzureSearchProvider&lt;&gt;));

        ...
    }
}
</code></pre>

<p>At this point all we need to do is add our administration key which we can get from the Azure Portal under the Keys setting within the search service blade we used to configure our search service.</p>

<h2 id="setting-up-azure-search-step-6-indexing-our-content">Setting up Azure Search - Step 6: Indexing our content</h2>

<p>Now that all of the hard work is out of the way and our search service is configured, we need to index our content. Currently our Azure Search service is an empty container with nothing to index, so in the context of a blog we need to ensure that when we add or edit a blog post, the search document stored within Azure Search is added or updated accordingly. To do this we go to our blog&rsquo;s controller action that&rsquo;s responsible for creating a blog post and index our content there.</p>

<p>Below is a <strong>rough example</strong> of how it would look. I&rsquo;m a huge fan of a library called <a href="https://github.com/jbogard/MediatR">MediatR</a> for delegating my requests, but the below should be enough to give you an idea of how we&rsquo;d implement indexing our content. We&rsquo;d also need to do the same thing when updating blog posts, ensuring our search indexes stay up to date with any modified content.</p>

<pre><code>public class BlogPostController : Controller
{
    private readonly ISearchProvider&lt;BlogPostSearchIndex&gt; _searchProvider;

    public BlogPostController(ISearchProvider&lt;BlogPostSearchIndex&gt; searchProvider)
    {
        this._searchProvider = searchProvider;
    }

    [HttpPost]
    public ActionResult Create(BlogPostAddModel model)
    {
        ...

        // Add your blog post to the database; its Id is used by the BlogPostSearchIndex constructor to build the index Id

        this._searchProvider.AddToIndex(new BlogPostSearchIndex(model.Id, model.Title, model.BlogPost));

        ...
    }

    [HttpPost]
    public ActionResult Update(BlogPostEditModel model)
    {
        ...
        // As we're using Azure Search's MergeOrUpload index action we can simply call AddToIndex() when updating
        this._searchProvider.AddToIndex(new BlogPostSearchIndex(model.Id, model.Title, model.BlogPost));

        ...
    }

}
</code></pre>

<p>Now that we&rsquo;re indexing our content we&rsquo;ll move on to querying it.</p>

<h2 id="setting-up-azure-search-step-7-querying-our-indexes">Setting up Azure Search - Step 7: Querying our indexes</h2>

<p>As Azure&rsquo;s Search service is built on top of Lucene&rsquo;s query parser (Lucene is a well-known open-source search library) we have a <a href="https://msdn.microsoft.com/en-us/library/azure/mt589323.aspx">variety of ways we can query our content</a>, including:</p>

<ul>
<li>Field-scoped queries</li>
<li>Fuzzy search</li>
<li>Proximity search</li>
<li>Term boosting</li>
<li>Regular expression search</li>
<li>Wildcard search</li>
<li>Syntax fundamentals</li>
<li>Boolean operators</li>
<li>Query size limitations</li>
</ul>

<p>To query our search index all we need to do is call our generic <em>SearchDocuments&lt;T&gt;</em> method and map our search index object to a view model/DTO like so:</p>

<pre><code>private IEnumerable&lt;BlogPostSearchResult&gt; QuerySearchDocuments(string keyword)
{
    IEnumerable&lt;BlogPostSearchResult&gt; result = _searchProvider.SearchDocuments(keyword, string.Empty, document =&gt; new BlogPostSearchResult
    {
        Id = document.PostId,
        Title = document.PostTitle
    });

    return result;
}
</code></pre>

<p>At this point you have two options: you can either retrieve the indexed text (providing you marked it as retrievable earlier in Step 2) and display that in your search results, or you can return the ID and query your database for the relevant post based on that blog post ID. Naturally the latter introduces an otherwise unnecessary database call, so consider your options. Personally, as my links include a filename, I prefer to treat the posts in my database as the source of truth and check that the posts exist and are published, so I&rsquo;m happy to incur that extra database call.</p>

<pre><code>public IEnumerable&lt;BlogPostItem&gt; Handle(BlogPostSearchResultRequest message)
{
    if (string.IsNullOrEmpty(message.Keyword) || string.IsNullOrWhiteSpace(message.Keyword))
        throw new ArgumentNullException(nameof(message.Keyword));

    List&lt;int&gt; postIds = QuerySearchDocuments(message.Keyword).Select(x =&gt; x.Id).ToList();

    // Get blog posts from database based on IDs
    return GetBlogPosts(postIds);
}
</code></pre>

<h2 id="setting-up-azure-search-wrapping-it-up">Setting up Azure Search - Wrapping it up</h2>

<p>Now that we&rsquo;re querying our indexes and retrieving the associated blog posts from the database, all we have left to do is output our list of blog posts to the user.</p>

<p>I&rsquo;m hoping you&rsquo;ve found this general overview useful. As mentioned at the beginning of the post this is a high level overview and implementation of what&rsquo;s a powerful search service. At this point I would highly recommend you take the next step and look at how you can start to tweak your search results via means such as <a href="https://azure.microsoft.com/en-gb/documentation/articles/search-get-started-scoring-profiles/">scoring profiles</a> and some of the <a href="https://msdn.microsoft.com/library/azure/mt589323.aspx">features provided by Lucene</a>.</p>

<p>Happy coding!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 10 Feb 2016 20:57:02 +0000</pubDate></item><item><title>Personal targets and goals for 2016</title><link>https://josephwoodward.co.uk/2016/02/personal-targets-and-goals-for-2016</link><description></description><content:encoded><![CDATA[<p>Around this time last year I set out a list of goals and targets for myself in my <a href="http://josephwoodward.co.uk/2015/01/personal-targets-and-goals-for-2015/">Personal targets and goals for 2015</a> blog post which, with 2015 coming to a close, I reflected on last month in my <a href="http://josephwoodward.co.uk/2016/01/reflecting-on-2015/">Reflecting on 2015</a> post.</p>

<p>I feel formally setting yourself yearly goals is a great way to focus on a set of specific, well thought out targets, as opposed to spending the year moving from one subject to another with no real aim. So without further ado, here are my personal targets for 2016.</p>

<h2 id="goal-1-speak-at-more-conferences-and-meet-ups">Goal 1: Speak at more conferences and meet-ups</h2>

<p>Those of you that know me will know that when I start talking about programming and software development it&rsquo;s hard to shut me up. Software and web development is a subject I&rsquo;ve been hugely passionate about from a young age, and since starting this blog I&rsquo;ve found sharing knowledge and documenting progress a great way to learn, memorise and get involved in the community. This desire to talk about software has ultimately led me to start <a href="http://josephwoodward.co.uk/speaking/">speaking at local meet-ups (including a lunchtime lightning talk at DDD South West)</a>, which I&rsquo;ve really enjoyed. This year I&rsquo;d like to continue to pursue this by speaking at larger events and meet-ups that are further afield. I already plan on submitting a few talks to this year&rsquo;s DDD South West event, so we&rsquo;ll see if they get accepted.</p>

<h2 id="goal-2-start-an-exeter-based-net-user-group">Goal 2: Start an Exeter based .NET User Group</h2>

<p>Living in a rather quiet and rural part of the United Kingdom has its pros and cons, and whilst Somerset is a great place to live it suffers from a lack of meet-ups, specifically .NET ones - this is something I&rsquo;m hoping to rectify by starting up a .NET user group.</p>

<p>Whilst I don&rsquo;t live in Exeter, it&rsquo;s the closest place where I feel there will be enough interest for setting up a .NET specific user group, so I&rsquo;m currently in the process of looking for a location on the outskirts of the city to make it easier for commuters living in some of the nearby towns and cities.</p>

<h2 id="goal-3-continue-contributing-to-the-net-oss-ecosystem">Goal 3: Continue contributing to the .NET OSS ecosystem</h2>

<p>One of last year&rsquo;s goals was to contribute to more open-source libraries, and whilst I feel I made good progress towards achieving this goal I&rsquo;m keen to continue working in this area. Not only do I want to continue to contribute to existing projects, but I&rsquo;m also keen to help others get involved in contributing to projects. It&rsquo;s a really rewarding activity that can really help develop your skill set as a software developer. With this in mind I&rsquo;ve been thinking of a few satellite sites that will help people get started.</p>

<p>I&rsquo;m also thinking about a few talks covering how someone can get started and the lessons you can learn by contributing to open-source software.</p>

<p>In addition to the above, I&rsquo;m also keen on contributing some of my own libraries to the .NET open-source ecosystem. More often than not I&rsquo;ve found myself creating a few classes to encapsulate certain behaviour or abstract a third-party API, yet I&rsquo;ve never turned them into their own library. This year I&rsquo;m keen to take that extra step and turn one into a fully fledged library that can be downloaded via NuGet.</p>

<h2 id="bonus-goal-f">Bonus Goal: F#</h2>

<p>Having been watching the F# community closely for some time now I&rsquo;m really interested in what it has to offer, so this year I&rsquo;m considering jumping in and committing myself to learning it and becoming proficient in it. I&rsquo;ve got a few side projects I plan on working on that I feel will be a great place to use F# so we&rsquo;ll see how that goes.</p>

<h2 id="conclusion">Conclusion</h2>

<p>This concludes my personal targets for 2016. Whilst it&rsquo;s not an exhaustive list of what I will be focusing on, it&rsquo;s certainly a list that I wish to have made some progress on by the end of the year. I shall see how it goes and keep people updated.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 01 Feb 2016 20:53:25 +0000</pubDate></item><item><title>Angular 2 based Piano Note Training game side project</title><link>https://josephwoodward.co.uk/2016/01/angular-2-based-piano-note-training-game-side-project</link><description></description><content:encoded><![CDATA[<p>With <a href="http://angularjs.blogspot.co.uk/2015/12/angular-2-beta.html">Angular 2 hitting beta</a> I decided to take it for a test drive. As a huge (and vocal!) fan of TypeScript I was keen to see what the Angular team had done with it, and was really interested to see how Angular 2&rsquo;s component based approach made writing Angular applications different to the first Angular.</p>

<p>Before I go into the details of the application, feel free to <a href="http://ng2piano.azurewebsites.net/">give it a try here</a>.</p>

<h2 id="the-application">The application</h2>

<p>After playing with Angular 2 for a few evenings and liking what I was seeing, I decided I wanted to build something more real-life than a throw-away to-do list. Don&rsquo;t get me wrong - there&rsquo;s nothing wrong with building to-do lists; they&rsquo;re a great way to get to grips with a framework - but nothing beats a real-world project. With this in mind I decided to cross a side project off my &ldquo;apps to build&rdquo; list and build a piano note trainer (<a href="https://github.com/JosephWoodward/PianoNoteTrainer">source code available on GitHub</a>).</p>

<p>As someone who is currently learning to play the piano and isn&rsquo;t a great sight reader, I&rsquo;ve been keen to develop a note trainer that records your progress over time to give you indicators as to just how well your sight reading is progressing.</p>

<p>So far the application is going well and I&rsquo;m extremely impressed with Angular 2&rsquo;s component based approach (heavily inspired by React if I&rsquo;m not mistaken - but more on that in a moment). At the time of writing the application will generate random notes and draw them to the HTML5 canvas whilst monitoring the user&rsquo;s input to see if they press the appropriate key on the CSS-based piano (credit to <a href="http://cssdeck.com/user/tovic">Taufik Nurrohman</a> for the piano - it looks great and has saved me a tonne of time!). If the user presses the correct key then the application confirms the correct key press and moves on to another note. If not, the application lets the user know and waits for them to try again.</p>

<p>As I continue to build the piano note trainer I&rsquo;m finding Angular 2 feels more and more intuitive, and whilst Angular 2 is structurally different to Angular 1 it still feels quite similar in many ways - bar the absence of controllers and the dreaded scope object.</p>

<h2 id="angular-2-s-component-based-approach-feels-really-nice-because-we-ve-learned-over-time-that-composition-over-inheritance-the-best-way-to-build-software">Angular 2&rsquo;s component based approach feels really nice, because we&rsquo;ve learned over time that composition over inheritance is the best way to build software</h2>

<p>One of my main gripes with Angular 1 is the $scope object that was passed throughout an application and all too easily became a dumping ground of functions and data. This often inadvertently resulted in controllers taking on dependencies and quickly becoming tightly coupled to one another. In contrast, the component based approach in Angular 2 naturally encourages encapsulated building blocks that contain the behaviour, HTML and CSS specific to the component and its role - the component can then expose a strict surface to its consuming components. This component based model follows the all-important <strong>&ldquo;composition over inheritance&rdquo;</strong> approach and allows you to build your application out of a bunch of easily testable units.</p>
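<p>Sketched framework-free in TypeScript (all class names below are hypothetical, not taken from the actual app), the composition idea looks something like this - the component delegates to small, independently testable collaborators rather than inheriting their behaviour:</p>

```typescript
// Framework-free sketch of component composition: the trainer component is
// assembled from small collaborators instead of extending a base class.
class NoteGenerator {
    private notes = ["C", "D", "E", "F", "G", "A", "B"];
    next(index: number): string {
        return this.notes[index % this.notes.length];
    }
}

class AnswerChecker {
    check(expected: string, pressed: string): boolean {
        return expected === pressed;
    }
}

class NoteTrainerComponent {
    constructor(
        private generator: NoteGenerator,
        private checker: AnswerChecker
    ) {}

    // Exposes a narrow surface; consumers never touch the internals.
    play(index: number, pressedKey: string): boolean {
        return this.checker.check(this.generator.next(index), pressedKey);
    }
}
```

<p>Each collaborator can be unit tested in isolation, and the component itself can be tested with simple fakes - exactly the property the component model encourages.</p>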

<p>For instance, if you were to take the following screenshot of the application in its current state and break it down into the various components it looks like this:</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/angular2_notetraininggame.png" alt="" /></p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/angular2_notetraininggame_components.png" alt="" /></p>

<p>Overall I&rsquo;ve been really happy with Angular 2 so far and can&rsquo;t wait to see what tooling we start to see appear for it now that it&rsquo;s using TypeScript. I can&rsquo;t help but feel 2016 is going to be a big year for Angular.</p>

<p>The <a href="https://github.com/JosephWoodward/PianoNoteTrainer">source code for the application is available on my GitHub profile</a> and once finished I plan on submitting it to <a href="http://builtwithangular2.com/">builtwithangular2.com</a>. I&rsquo;m hoping to have a live link once the application has reached a point where I&rsquo;m happy for it to be used (I&rsquo;ve still got to add sharp and flat notes, which are causing some issues). In the meantime, feel free to give it a try via my GitHub profile.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sun, 24 Jan 2016 20:47:11 +0000</pubDate></item><item><title>An often overlooked reason why you should be on Stack Overflow</title><link>https://josephwoodward.co.uk/2016/01/an-often-overlooked-reason-why-you-should-be-on-stack-overflow</link><description></description><content:encoded><![CDATA[<p>Stack Overflow is a fantastic resource for developers of all abilities. Gone are the days of having to trawl through blog posts, mailing lists or forums looking for solutions to issues.</p>

<p>These days the answer to the majority of a developer&rsquo;s day-to-day questions is just a few clicks away, and answers often rank highly on Google for the relevant search term(s), making them even easier to find. And whilst the majority of us use Stack Overflow more than a few times a day, I&rsquo;m always quietly surprised when I see a developer browsing Stack Overflow for a solution to a problem they&rsquo;re having yet they don&rsquo;t have an account (it&rsquo;s fairly easy to spot from a distance, as advertisements are visible between the answers for logged-out users).</p>

<p>When you ask someone who regularly uses Stack Overflow why they don&rsquo;t have an account, the response you usually get is &ldquo;<em>I usually find the answer to my question so have no need to ask</em>&rdquo;, or &ldquo;<em>I don&rsquo;t plan on answering questions</em>&rdquo; - which are all perfectly valid reasons. But as developers, when have we ever had an issue, fixed it, and then remembered the solution the next time we ran into the same problem - days, weeks, months or even years later? I feel fairly safe in saying we&rsquo;ve all had problems or questions we&rsquo;ve needed to look up on more than one occasion. This is where Stack Overflow can be a huge help (not to mention you&rsquo;ll be giving back to the community).</p>

<p><strong>Leaving yourself upvote breadcrumbs to your technical issues</strong></p>

<p>Next time you&rsquo;re on Stack Overflow I would encourage you to take a moment to create an account, even if you don&rsquo;t wish to fill out your profile - and the next time you find an answer to the problem you&rsquo;re facing, simply upvote it. <strong>It takes a split second to do, but that split second can save you minutes or potentially hours over your programming career</strong>. I can&rsquo;t remember the number of times I&rsquo;ve had a technical issue, gone searching for a solution on Stack Overflow and noticed I&rsquo;d upvoted an answer a few months or years earlier.</p>

<p>Ultimately, isn&rsquo;t that one of the things we enjoy doing as developers - looking for ways to make difficult things simple? Next time, make it easier to see the signal through the noise by upvoting the solutions to your problems.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sat, 16 Jan 2016 20:45:40 +0000</pubDate></item><item><title>Are you a consumer or a creator?</title><link>https://josephwoodward.co.uk/2016/01/are-you-a-con</link><description></description><content:encoded><![CDATA[<p>A few years ago I had a change of mindset after reading an interesting short book (one that has long since vanished from my memory) regarding consuming versus creating. Whilst I can no longer recall the discussion in its entirety, it raised an interesting point regarding whether you consider your own actions to be of a consumer nature, or a creator&rsquo;s.</p>

<p>The book wasn&rsquo;t referring directly to whether you&rsquo;re just a consumer of things, or a creator of them, but more subtly teased a thought into the reader&rsquo;s mind about the way they approach things: what, as a human or a developer, can I do in this instance to make something more useful to myself and others? With this slight shift of mindset you start to see new opportunities arising that you may not have seen before.</p>

<p>With this change of mindset I found myself asking why I wasn&rsquo;t a member of Stack Overflow. As I searched and browsed Stack Overflow day by day I was in pure consumer mode; I wasn&rsquo;t giving anything back or making anything better, as I took what others had spent time writing without giving anything of value in return. When you ask someone who regularly visits Stack Overflow for answers to their questions why they don&rsquo;t have an account you&rsquo;ll regularly get responses such as &ldquo;I don&rsquo;t have time&rdquo;, or &ldquo;I don&rsquo;t plan on answering questions&rdquo; - which, whether you agree with them or not, are all perfectly valid reasons.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 15 Jan 2016 20:43:42 +0000</pubDate></item><item><title>How to setup Google Analytics and event tracking in your Angular application</title><link>https://josephwoodward.co.uk/2016/01/how-setup-google-analytics-event-tracking-in-your-angular-application</link><description></description><content:encoded><![CDATA[<p>Google have made installing Google Analytics on a website extremely easy: simply copy and paste the tracking code provided into your website and you&rsquo;re done. But what happens when you want to set Google Analytics up on a single page application, say an Angular app? Having recently worked on an Angular application where I was tasked with registering page views and other events within Google Analytics, I thought I&rsquo;d take a moment to document how it was done and what I learned whilst doing it.</p>

<h2 id="the-problem-with-tracking-code-and-single-page-applications">The problem with tracking code and single page applications</h2>

<p>Traditionally, as your user navigates a normal website, each page raises the Google Analytics pageview event (highlighted below).</p>

<pre><code>&lt;script&gt;
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

  ga('create', 'UA-xxxxxxxx-x', 'auto');
  ga('send', 'pageview'); // Pageview event
&lt;/script&gt;
</code></pre>

<p>However within the single page application space things are slightly different. Once your initial page has loaded, all subsequent pages or views are rendered within the application without the need for a page refresh or page load - this means no further pageview events will be raised. This would result in very inaccurate visitor data, as Google Analytics would capture the initial page load and then ignore the rest of the user&rsquo;s session.</p>

<h2 id="the-solution">The solution</h2>

<p>Having recently had to deal with this very same issue (though I also needed to raise multiple custom events when certain functionality was used), the solution I eventually settled on was a simple one.</p>

<p>First of all we want to break our Analytics code into two, allowing our Google Analytics tracking code to be registered (<strong>notice how I&rsquo;ve removed the pageview event - we will call this later</strong>).</p>

<p>This code should be added into your main index.html page or your base Layout/template page.</p>

<pre><code>...
&lt;/head&gt;
&lt;body&gt;
&lt;script&gt;
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

  ga('create', 'UA-xxxxxxxx-x', 'auto');
 // Pageview event removed so we can call it once the view has loaded.
&lt;/script&gt;
&lt;div class=&quot;wrapper&quot;&gt;
...
</code></pre>

<p>Next we&rsquo;re going to create a Google Analytics service (we&rsquo;ll call it <em>GoogleAnalyticsService.js</em>) that will be responsible for encapsulating our Analytics behaviour and for manually raising our pageview event (and other events for that matter!) once the page has loaded. We can then inject our service into the appropriate Angular controllers and invoke the functions that will raise our Google Analytics events.</p>

<p>Take a moment to look at the <a href="https://developers.google.com/analytics/devguides/collection/analyticsjs/pages#implementation">Google Analytics page tracking documentation</a> to see that we&rsquo;re passing in a page path (excluding the domain, e.g. /checkout/) to uniquely identify the page within our Analytics reports.</p>

<pre><code>(function (angular, window) {

    &quot;use strict&quot;;

    var ngModule = angular.module(&quot;ourAppModule&quot;);

    ngModule.factory(&quot;googleAnalyticsService&quot;, [
        &quot;$window&quot;, function ($window) {

            var ga = $window.ga;

            var _pageView = function(page) {			
                ga(&quot;send&quot;, &quot;pageview&quot;, {
                    page: page
                });
            };

            return {
                pageView: _pageView,
            };
        }
    ]);

})(window.angular, window);
</code></pre>

<p>Now that we&rsquo;ve created our GoogleAnalyticsService we can simply inject it into our application&rsquo;s controllers and invoke the <em>pageView()</em> function which in turn raises the Google Analytics pageview event.</p>

<p><strong>Single page application version:</strong></p>

<p>If your Angular application is a single page application then we can go one further and automatically call the pageview function once our view has loaded by subscribing to Angular&rsquo;s $viewContentLoaded event, like so:</p>

<pre><code>(function (angular, window) {

    &quot;use strict&quot;;

    var ngModule = angular.module(&quot;ourAppModule&quot;);

    ngModule.factory(&quot;googleAnalyticsService&quot;, [
        &quot;$window&quot;, function ($window) {

            var ga = $window.ga; //Google Analytics

            var _pageView = function(page) {			
                ga(&quot;send&quot;, &quot;pageview&quot;, {
                    page: page
                });
            };

            var _viewChangePageView = function(scope, page) {
                if (scope){
                    scope.$on('$viewContentLoaded', function(event) {
                        _pageView(page)
                    });
                }
            };

            return {
                pageView: _pageView,
                viewChangePageView: _viewChangePageView // used for SPA
            };
        }
    ]);

})(window.angular, window);
</code></pre>
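<p>As a sanity check, the subscribe-then-send pattern above can be reproduced in plain JavaScript and run outside of Angular. In the sketch below <em>createGoogleAnalyticsService</em>, <em>fakeScope</em> and the recording <em>ga</em> stub are illustrative stand-ins, not part of the application code above:</p>

```javascript
// Plain JavaScript sketch of the service above; the factory takes the ga
// function as a parameter instead of reading it from $window.
function createGoogleAnalyticsService(ga) {
    function pageView(page) {
        ga('send', 'pageview', { page: page });
    }

    function viewChangePageView(scope, page) {
        if (scope) {
            // Defer the pageview until the view has finished loading.
            scope.$on('$viewContentLoaded', function () {
                pageView(page);
            });
        }
    }

    return { pageView: pageView, viewChangePageView: viewChangePageView };
}

// Minimal stand-in for an Angular scope: records listeners and lets us
// fire the event by hand.
var listeners = {};
var fakeScope = {
    $on: function (name, fn) { listeners[name] = fn; }
};

// Recording ga stub so we can inspect exactly what would be sent.
var calls = [];
var service = createGoogleAnalyticsService(function () {
    calls.push(Array.prototype.slice.call(arguments));
});

service.viewChangePageView(fakeScope, '/checkout/');
console.log(calls.length); // 0 - nothing is sent until the view has loaded

listeners['$viewContentLoaded']();
console.log(JSON.stringify(calls[0])); // ["send","pageview",{"page":"/checkout/"}]
```

<p>The important behaviour is that no pageview is sent until the <em>$viewContentLoaded</em> event fires, mirroring what happens when Angular finishes rendering a view.</p>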

<p>From here all we need to do is invoke our method from our application&rsquo;s controllers:</p>

<pre><code>(function(angular, window) {

    'use strict';

    var app = angular.module('ourAppModule');
    app.controller('CheckoutController', ['$location', 'googleAnalyticsService', function($location, googleAnalyticsService) {
		...

		googleAnalyticsService.pageView($location.path());

		...
	}]);

})(window.angular, window);
</code></pre>

<p>Now when our users visit our application we should start seeing their browsing data flow into our Google Analytics account. You can verify this by setting Google Analytics into debug mode and using your browser&rsquo;s console to see the events being raised. To do this simply change the script source from analytics.js to analytics_debug.js.</p>

<pre><code>&lt;script&gt;
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','//www.google-analytics.com/analytics_debug.js','ga'); //debug mode

  ga('create', 'UA-xxxxxxxx-x', 'auto');
&lt;/script&gt;
</code></pre>

<p>Once you&rsquo;ve verified that your events are being raised, switch the script source back to analytics.js and you&rsquo;re good to go!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sat, 09 Jan 2016 20:42:34 +0000</pubDate></item><item><title>Reflecting on 2015</title><link>https://josephwoodward.co.uk/2016/01/reflecting-on-2015</link><description></description><content:encoded><![CDATA[<p>With 2015 behind us and 2016 to look forward to it&rsquo;s that time of year again where I take a moment to reflect on the events of last year and set myself some targets for the year ahead.</p>

<p>2015 has been a fantastic year for me for many reasons. Looking back at the <a href="http://josephwoodward.co.uk/2015/01/personal-targets-and-goals-for-2015/">targets I set myself for 2015</a> I&rsquo;m happy with my progress, and whilst I&rsquo;ve only met two of the four goals I set myself, I&rsquo;m not going to be too hard on myself as I feel I&rsquo;ve made up for the two missed targets in other areas (which I&rsquo;ll go through in a moment).</p>

<h2 id="last-year-s-goals">Last year&rsquo;s goals</h2>

<p>Before setting my goals for 2016, let me take a moment to review last year&rsquo;s targets and achievements:</p>

<h3 id="goal-1-get-dangerous-with-javascript">Goal #1: Get dangerous with Javascript</h3>

<p>During 2015 I&rsquo;ve spent a great deal of time working on my Javascript knowledge. This includes:</p>

<ul>
<li><a href="http://josephwoodward.co.uk/2014/04/space-shooter-written-in-javascript-using-typescript/">A Javascript based shooting game</a> (written in TypeScript, but those familiar with TypeScript will know that it&rsquo;s essentially Javascript decorated with types)</li>
<li>I&rsquo;ve also spent a great deal of time really getting to grips with some of the fundamental design patterns commonly used in libraries such as the IIFE, Module Reveal pattern and so forth.</li>
<li>In addition to the above, I&rsquo;ve also had the opportunity to write a lot of Javascript in my previous job, and have recently started working in a web services based role that involves a lot of Javascript via Angular.</li>
<li>Whilst learning Javascript I&rsquo;ve also become a huge fan of TypeScript - so much so that <a href="http://josephwoodward.co.uk/2015/07/reflections-on-recent-typescript-hands-on-session-i-presented/">I&rsquo;ve given various workshops</a> and talks at <a href="http://josephwoodward.co.uk/2015/03/presentation-from-recent-introduction-to-typescript-talk-i-gave/">local user groups</a> on the subject (more on this later though).</li>
</ul>

<p>Whilst I wouldn&rsquo;t consider myself &lsquo;dangerous&rsquo; with Javascript, I feel I&rsquo;ve reached a level where I&rsquo;m confident in my ability and comfortable writing it. This is a goal I will continue to pursue in 2016.</p>

<h3 id="goal-2-contribute-to-more-open-source-projects">Goal #2: Contribute to more open source projects</h3>

<p>Shortly after committing to this goal I stumbled upon an awesome <a href="https://github.com/shouldly/shouldly">C# assertion framework called Shouldly</a> via a tweet by its owner <a href="http://jake.ginnivan.net/">Jake Ginnivan</a> (<a href="https://twitter.com/JakeGinnivan">@JakeGinnivan</a>). Jake was very welcoming of first time contributors and helped a great deal with any Git related issues encountered as I stumbled my way through getting to grips with both the Shouldly codebase and contributing to it.</p>

<p>Today I still enjoy contributing to Shouldly and have been honoured to be added as a core contributor to the project. Jake&rsquo;s been paving the way with some truly inspiring work (including DNX support!) on the library and I look forward to continuing to contribute to it this year.</p>

<p>As a slight aside - if anyone is interested, as I was this time last year, in getting involved in contributing to an open source library then we&rsquo;re an extremely friendly bunch always looking for new contributors. I&rsquo;d be happy to help you make your first contribution to Shouldly.</p>

<h3 id="goal-3-increase-the-frequency-of-my-blog-posts">Goal #3: Increase the frequency of my blog posts</h3>

<p>As many developers will know, blogging isn&rsquo;t easy. It takes time, effort and a lot of research to write blog posts that are of any real value to people, and whilst I&rsquo;ve remained consistent with my blogging throughout the year, the frequency of my posts isn&rsquo;t at a level that I am happy with. This is something I&rsquo;m looking to rectify in 2016.</p>

<h3 id="bonus-goal-achieve-over-8-000-points-on-stack-overflow">Bonus Goal: Achieve over 8,000 points on Stack Overflow</h3>

<p>When writing about this goal last year, I felt aiming to achieve over 8,000 points on StackOverflow was a little ambitious - especially considering the scope of the other (what I considered to be more important) goals - so, suffice it to say, I fell short of the 8,000 point mark.</p>

<h2 id="other-noticable-achievements-in-2015">Other notable achievements in 2015</h2>

<p>In addition to the above goals, 2015 was also an exciting year for me for the following reasons:</p>

<h3 id="net-rocks-interview">.NET Rocks Interview</h3>

<p>When writing my 2015 goals, if you&rsquo;d told me I&rsquo;d be appearing on .NET Rocks by the end of the year I&rsquo;d have told you to stop pulling my leg. But on December 10th my interview discussing Visual Studio shortcuts and productivity boosts was published. <a href="http://josephwoodward.co.uk/2015/11/visual-studio-2015-shortcuts-interview-with-dot-net-rocks/">You can read more about the episode here</a>.</p>

<h3 id="i-started-talking-regularly-at-local-meet-ups">I started talking regularly at local meet ups</h3>

<p>Whilst I&rsquo;ve been attending local meet ups for a little while now, I&rsquo;ve also started talking at them too. Originally it started with a TypeScript talk at a local gathering, then moved to the same talk but in a lightning talk format at DDD South West, to a TypeScript workshop and a further talk on JavaScript&rsquo;s IIFE pattern. This is certainly something I wish to continue doing in 2016.</p>

<p>Overall I&rsquo;ve been very satisfied with 2015; the highlights of the year have got to go to being made a core contributor to Shouldly, alongside being featured on .NET Rocks. I hope everyone else has had a great year and I wish everyone the best for 2016!</p>

<p><strong>Stay tuned for my goals for 2016!</strong></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sat, 02 Jan 2016 20:41:22 +0000</pubDate></item><item><title>A deeper look at C# 6&#39;s nameof expression</title><link>https://josephwoodward.co.uk/2015/12/deeper-look-at-c-sharp-6-nameof-expression</link><description></description><content:encoded><![CDATA[<p>Like many developers I&rsquo;m somewhat reluctant to use string literals or magic strings to reference variables, types or members due to the increased chances of introducing bugs into a codebase and the cost of maintainability. Wherever possible I will always try my best to avoid using them unless the cost of the alternative outweighs the benefit. Whilst string literals can most often be avoided, one area where C# never had an easy solution was referencing variables when throwing exceptions.</p>

<p>With the release of C# 6 this problem and instances like it can be easily avoided thanks to the introduction of the <em>nameof</em> expression.</p>

<h2 id="c-6-nameof-expression">C#6 nameof expression</h2>

<p>The nameof expression is a simple solution to the problem; see the following snippet from the MSDN docs for an example of how the nameof expression can be used:</p>

<pre><code>void f(string s) {
    if (s == null) throw new ArgumentNullException(nameof(s));
}
</code></pre>

<p>Using the nameof expression in your ASP.NET MVC views becomes especially helpful:</p>

<pre><code>&lt;%= Html.ActionLink(&quot;Sign up&quot;, @typeof(UserController), @nameof(UserController.SignUp)) %&gt;
</code></pre>

<h2 id="what-s-happening-under-the-hood">What&rsquo;s happening under the hood?</h2>

<p>As it&rsquo;s always a good idea to understand how things are interpreted when compiled, let&rsquo;s take a look and see exactly what happens to the following code sample:</p>

<pre><code>public class Class1
{
    public void Output(string myString)
    {
        if (myString == null)
        {
            throw new ArgumentNullException(nameof(myString));
        }
    }
}
</code></pre>

<p>When we use a tool like <a href="https://www.jetbrains.com/decompiler/">JetBrains&rsquo; free decompiler dotPeek</a> to take a look at the decompiled source, you can see that the parameter passed into the nameof expression has now been <strong>evaluated at compile time</strong> and converted into a string literal representation of the parameter name:</p>

<pre><code>public class Class1
{
	public void Output(string myString)
	{
		bool flag = myString == null;
		if (flag)
		{
			throw new ArgumentNullException(&quot;myString&quot;);
		}
	}
}
</code></pre>

<p>The result is the exact equivalent of what you&rsquo;d normally do without the nameof expression - <strong>except this time you have the added benefit of compile time safety, rather than having to rely on magic strings (something that&rsquo;s extremely helpful when using refactoring tools to rename classes, properties or methods).</strong></p>

<p>With this in mind, if we were to take the following with and without <em>nameof</em> examples and view the generated IL code, then they should be exactly the same:</p>

<pre><code>// With nameof expression:
string myString = &quot;Hello World&quot;;
throw new ArgumentNullException(nameof(myString));

// Without nameof expression:
string myString = &quot;Hello World&quot;;
throw new ArgumentNullException(&quot;myString&quot;);
</code></pre>

<p>IL code generated when using the nameof expression:</p>

<pre><code>IL_0000:  nop
IL_0001:  ldstr      &quot;Hello World&quot;
IL_0006:  stloc.0
IL_0007:  ldstr      &quot;myString&quot;
IL_000c:  newobj     instance void [mscorlib]System.ArgumentNullException::.ctor(string)
IL_0011:  throw
</code></pre>

<p>IL code generated when not using the nameof expression:</p>

<pre><code>IL_0000:  nop
IL_0001:  ldstr      &quot;Hello World&quot;
IL_0006:  stloc.0
IL_0007:  ldstr      &quot;myString&quot;
IL_000c:  newobj     instance void [mscorlib]System.ArgumentNullException::.ctor(string)
IL_0011:  throw
</code></pre>

<p>We can see that the output is identical.</p>

<p>If that isn&rsquo;t proof enough that you should favour using the nameof expression over magic strings then I don&rsquo;t know what is!</p>

<p><strong>Update:</strong><br>
If you&rsquo;re keen to go even further into the nameof expression then <a href="https://twitter.com/erikschierboom">@ErikSchierboom</a> has written an <strong>extremely</strong> in-depth look at what&rsquo;s happening under the bonnet in his <a href="http://www.erikschierboom.com/2015/12/31/csharp6-under-the-hood-nameof/">C# 6 under the hood: nameof operator</a> article that I&rsquo;d highly recommend reading.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 29 Dec 2015 20:39:34 +0000</pubDate></item><item><title>Visual Studio 2015 Shortcuts interview with .NET Rocks</title><link>https://josephwoodward.co.uk/2015/11/visual-studio-2015-shortcuts-interview-with-dot-net-rocks</link><description></description><content:encoded><![CDATA[<p><a href="https://www.dotnetrocks.com/?show=1229"><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/dotnetrocks.jpg" alt="" /></a></p>

<p>Last week I had the pleasure of speaking with both Carl Franklin and Richard Campbell as part of a .NET Rocks episode about a subject I have a great interest in - mastering your toolset. In this particular show we spent most of the time covering the most important shortcuts you can learn to increase your productivity, but we also touched on some of the finer details of what, where and why you should spend time dedicated to turning these shortcuts into habits and ultimately muscle memory, to the point where you perform them instinctively.</p>

<p>If you&rsquo;ve read my <a href="http://josephwoodward.co.uk/2014/10/ways-i-improved-my-productivity-in-visual-studio-2013/">9 ways I improved my productivity in Visual Studio 2013 blog post</a> then some of the shortcuts and features will be familiar to you, however we discuss some that aren&rsquo;t on the list that I&rsquo;d highly recommend checking out.</p>

<p>The show is published on the <strong>10th of December</strong> and can be downloaded via the <a href="https://www.dotnetrocks.com">.NET Rocks website</a> or most Podcast aggregators.</p>

<h2 id="update">Update</h2>

<p><strong>You can now <a href="https://www.dotnetrocks.com/?show=1229">download my talk with Carl and Richard via the .NET Rocks website</a>.</strong></p>

<p>I&rsquo;m overwhelmed with the amount of positive feedback I&rsquo;ve had from the show via Twitter, emails and the sheer number of comments it has received on the .NET Rocks website. Going into the show I had concerns that listeners might not find it an interesting subject from the outset, but to the contrary it seems to resonate with a lot of developers, which is fantastic to see!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 30 Nov 2015 20:35:12 +0000</pubDate></item><item><title>Using the Surface Pro 4 Type Cover on the Surface Pro 3</title><link>https://josephwoodward.co.uk/2015/11/using-surface-pro-4-type-cover-on-surface-pro-3</link><description></description><content:encoded><![CDATA[<p>Last September I made the decision to trade in my Macbook Pro and upgrade to a Surface Pro 3. This decision wasn&rsquo;t an easy one to make but I&rsquo;m a sucker for portability and loved the hybrid approach of the Surface Pro 3 and its ability to act both as a laptop and a tablet.</p>

<p>After a week of programming on my new Surface Pro 3 I decided to put together a rather lengthy post detailing my thoughts and experiences of using the device as a development machine. Whilst I was, and still am, extremely happy with the Surface Pro 3, one aspect that I felt was really lagging was the keyboard - my main criticisms being that the keys were too close together and the trackpad wasn&rsquo;t accurate enough, and was slightly on the small side for my liking. It seems these criticisms weren&rsquo;t just shared by me but by many others, and with the release of the Surface Pro 4 and the backwards compatible Surface Pro 4 Type Cover it&rsquo;s clear that Microsoft have been listening to the feedback, as the 4&rsquo;s Type Cover goes a long way towards correcting these niggles.</p>

<p>In addition to improvements over the existing Type Cover Microsoft released a version that includes a fingerprint scanner - however it doesn&rsquo;t look like it&rsquo;s going to be released in the UK any time soon so being the impatient person I am, I went ahead and ordered myself a standard Type Cover.</p>

<h2 id="the-keyboard">The keyboard</h2>

<p>The biggest improvement between the Surface Pro 3 keyboard and the 4&rsquo;s is the keyboard size. The keyboard itself now extends further to the edges of the Type Cover, with the additional room allowing for much needed gaps between the keys. As someone with rather fat fingers, this gap between the keys has gone a long way to increasing my typing accuracy, and thus speed - something extremely important for someone who does a lot of programming on the Surface Pro 3.</p>

<p>In addition to the aforementioned key spacing, the keys feel a lot more solid than on the previous keyboard - it&rsquo;s minor but it definitely adds to the typing experience of the new keyboard.</p>

<p>Other improvements that have been made to the keyboard are:</p>

<ul>
<li>The FN (function) key now features a small light so you know whether your function keys are active or not.</li>
<li>The keyboard now features a dedicated print screen button and insert key.</li>
</ul>

<h2 id="the-trackpad">The Trackpad</h2>

<p>The next major improvement the Surface Pro 4 Type Cover has undergone in comparison to its predecessor is the trackpad. Not only is the trackpad larger (Microsoft say 40% larger) but it&rsquo;s also made of glass - resulting in it being far smoother and FAR MORE responsive. Whilst the Surface Pro 3&rsquo;s trackpad was enough to get the job done, I would never have said it was pleasurable to use. It always felt like hard work - however these recent changes have resulted in a much better experience.</p>

<p>Overall, if you&rsquo;re an owner of a Surface Pro 3 and you want to give your device a brand new lease of life then I&rsquo;d highly recommend upgrading to the Surface Pro 4 Type Cover.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 11 Nov 2015 20:10:26 +0000</pubDate></item><item><title>Podcasts for .NET Developers</title><link>https://josephwoodward.co.uk/2015/11/podcasts-for-dot-net-developers</link><description></description><content:encoded><![CDATA[<p>With an hour long commute to and from work each day I find myself listening to a lot of podcasts. Not only do they make my car journey go much faster, but listening to them also helps me keep up to date with all of the latest goings on in the software industry. With this in mind I thought it would be beneficial to collate the .NET (or web development related) podcasts that I would recommend.</p>

<p>Before I continue if there are any podcasts that any readers of my blog would recommend then I&rsquo;m always keen to be introduced to new ones, so please feel free to comment!</p>

<hr>

<h2 id="net-rocks">.NET Rocks!</h2>

<p><strong>Website URL:</strong> <a href="http://www.dotnetrocks.com">http://www.dotnetrocks.com</a><br>
<strong>Release Frequency:</strong> 3 times a week (Tuesday, Wednesday and Thursday)<br>
<strong>Average Episode Duration:</strong> 60 minutes</p>

<p>.NET Rocks! is easily the staple of my podcasts. With over <a href="http://www.dotnetrocks.com/archives.aspx">1159 episodes in their show archive</a>, hosts Carl Franklin and Richard Campbell do a fantastic job of delivering a great, high-production-value podcast featuring a broad range of .NET (and on occasion non-.NET) focused episodes. Carl and Richard travel around creating their podcasts, presenting opportunities to interview some really high profile members of the .NET community. .NET Rocks especially deserves my number 1 spot for the sheer number of episodes they&rsquo;ve released over the years and the regularity of their releases.</p>

<hr>

<h2 id="ms-dev-show">MS Dev Show</h2>

<p><strong>Website URL:</strong> <a href="http://msdevshow.com/">http://msdevshow.com</a><br>
<strong>Release Frequency:</strong> Once a week<br>
<strong>Average Episode Duration:</strong> 60 minutes</p>

<p>MS Dev Show is a recent addition to my long list of favourite development related podcasts but has certainly left a good impression. Since first learning about MS Dev Show just a few weeks ago I&rsquo;ve already made a good start on burning my way through their archive.</p>

<hr>

<h2 id="hanselminutes">Hanselminutes</h2>

<p><strong>Website URL:</strong> <a href="http://hanselminutes.com/">http://hanselminutes.com</a><br>
<strong>Release Frequency:</strong> Once a week<br>
<strong>Average Episode Duration:</strong> 30 - 40 minutes</p>

<p>Along with the aforementioned podcasts, I would also highly recommend <a href="http://www.hanselman.com/">Scott Hanselman&rsquo;s</a> Hanselminutes podcast. Having been listening to Hanselminutes since long before I got into the .NET world, I&rsquo;ve always been a great admirer of Scott Hanselman&rsquo;s ability to effortlessly communicate concepts, ideas and thoughts in a clear manner - as if he has the ability to slow down time to find the perfect word or sentence that expresses what he&rsquo;s trying to communicate. Whilst not all episodes are development related, they&rsquo;re always insightful and interesting.</p>

<hr>

<h2 id="developer-on-fire">Developer On Fire</h2>

<p><strong>Website:</strong> <a href="http://developeronfire.com/">http://developeronfire.com</a><br>
<strong>Release Frequency:</strong> Twice a week<br>
<strong>Average Episode Duration:</strong> 30 - 60 minutes</p>

<p>Developer On Fire is a recent addition to my podcast list (when I say recent, I mean 6 to 8 months) but has quickly become one of my favourites due to its presentation format and the topics discussed. Whereas a lot of the podcasts listed are more technology focused (where a guest comes on to discuss a particular topic), Developer On Fire does something a little different and explores the guest&rsquo;s more personal relationship with the software industry, encouraging them to share experiences and tips with a real focus on delivering value.</p>

<p>The host, <a href="http://optimizedprogrammer.com/">Dave Rael</a> presents the show really well and does a great job of keeping the guests talking, teasing interesting stories, tips and recommendations out of them throughout - often ending with a great question where he asks for two or three book recommendations for the listeners - books that on occasion I&rsquo;ve actually gone and purchased as a direct result of the recommendation. </p>

<hr>

<h2 id="eat-sleep-code">Eat, Sleep, Code</h2>

<p><strong>Website URL:</strong> <a href="http://developer.telerik.com/community/eat-sleep-code/">http://developer.telerik.com/community/eat-sleep-code/​</a><br>
<strong>Release Frequency:</strong> Twice a month<br>
<strong>Average Episode Duration:</strong> 30 - 40 minutes</p>

<p>Eat Sleep Code is a podcast created by the guys and girls over at Telerik and is certainly worth adding to your podcast aggregator of choice. Each episode spans between 30 to 40 minutes and is a healthy mix of programming related subjects ranging from .NET to Javascript and mobile development.</p>

<hr>

<h2 id="herding-code">Herding Code</h2>

<p><strong>Website URL:</strong> <a href="http://herdingcode.com">http://herdingcode.com</a><br>
<strong>Release Frequency:</strong> Once a month, sometimes longer<br>
<strong>Average Episode Duration:</strong> 20 - 30 minutes</p>

<p>Herding Code is another great .NET focused podcast that regularly features a variety of subjects ranging from software architecture to mobile phones and JavaScript libraries. Whilst I would definitely recommend adding it to your podcast aggregator of choice, be aware that the episode release cycle is rather inconsistent.</p>

<hr>

<h2 id="software-engineering-radio">Software Engineering Radio</h2>

<p><strong>Website URL:</strong> <a href="http://www.se-radio.net/">http://www.se-radio.net</a><br>
<strong>Release Frequency:</strong> 2 or 3 times a month<br>
<strong>Average Episode Duration:</strong> 60 minutes</p>

<p>Whilst not specifically .NET focused (more Java) but often featuring more advanced subjects that apply to general application development, Software Engineering Radio deserves a mention for its sheer technical depth of discussions.</p>

<hr>

<h2 id="javascript-jabber">Javascript Jabber</h2>

<p><strong>Website URL:</strong> <a href="https://devchat.tv/js-jabber">https://devchat.tv/js-jabber</a><br>
<strong>Release Frequency:</strong> Once a week<br>
<strong>Average Episode Duration:</strong> 1 hour</p>

<p>Whilst Javascript Jabber is not a .NET focused podcast, with the way the .NET and application development landscape is changing most of us .NET (specifically ASP.NET) developers are no doubt familiar with Javascript or will become increasingly familiar with it over the next few years - especially with the tooling changes we&rsquo;re seeing in ASP.NET MVC 6. For this reason I feel Javascript Jabber deserves a place on the list. Javascript Jabber regularly features a variety of guests - most often Ruby related - however the hosts do a great job of keeping it Javascript focused. Personally I&rsquo;ve found Javascript Jabber a great way to keep up to date with the never ending emergence of Javascript libraries.</p>

<p>That&rsquo;s it for now. I&rsquo;m always on the lookout for new podcasts to ease my commute so feel free to mention any I&rsquo;ve missed in the comments below and I&rsquo;ll be sure to add them to the list!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 03 Nov 2015 20:07:56 +0000</pubDate></item><item><title>Using Roslyn to look for code smells</title><link>https://josephwoodward.co.uk/2015/10/using-roslyn-to-look-for-code-smells</link><description></description><content:encoded><![CDATA[<p>Since first hearing about Roslyn I was instantly excited about its ability to easily parse C#, and could clearly see the benefits it could bring to anyone wishing to perform analysis on an existing codebase - not to mention the great things it can do for Visual Studio plugins and extensions.</p>

<p>Having been playing around with Roslyn for a couple of days now I thought I&rsquo;d share a cool snippet of code I put together to demonstrate how easy it is to parse C# code using the power of just a few Roslyn NuGet packages.</p>

<p>In this post we&rsquo;ll use Roslyn to analyse an existing codebase and flag any methods that are guilty of the &lsquo;<em>Too many parameters</em>&rsquo; code smell.</p>

<h2 id="first-let-s-load-the-solution">First, let&rsquo;s load the solution:</h2>

<p>First of all we&rsquo;re going to need to load our solution&hellip;</p>

<pre><code>public static class WorkspaceSolution
{
    public static IEnumerable&lt;Document&gt; LoadSolution(string solutionDir)
    {
        var solutionFilePath = Path.GetFullPath(solutionDir);

        MSBuildWorkspace workspace = MSBuildWorkspace.Create();
        Solution solution = workspace.OpenSolutionAsync(solutionFilePath).Result;

        var documents = new List&lt;Document&gt;();
        foreach (var projectId in solution.ProjectIds)
        {
            var project = solution.GetProject(projectId);
            foreach (var documentId in project.DocumentIds)
            {
                Document document = solution.GetDocument(documentId);
                if (document.SupportsSyntaxTree) documents.Add(document);
            }
        }

        return documents;
    }
}
</code></pre>

<p>As you can see from the above snippet this is actually quite simple thanks to the <em>Microsoft.CodeAnalysis.MSBuild</em> package, more specifically the <em>MSBuildWorkspace</em> class. This allows us to scan our solution and create a collection of any documents within it.</p>

<h2 id="analysing-the-documents-aka-class-files">Analysing the documents (aka. class files)</h2>

<p>Now we&rsquo;ve got a list of all of the documents within our solution we can begin to iterate through them, parsing each syntax tree until we come to the method parameters (performing the same analysis on constructors is just as easy, but for the sake of this demonstration we&rsquo;ll look at methods).</p>

<p>At this point we can check to see if the number of parameters exceeds our threshold.</p>

<pre><code>List&lt;MethodDeclarationSyntax&gt; methods = documents.SelectMany(x =&gt; x.GetSyntaxRootAsync().Result.DescendantNodes().OfType&lt;MethodDeclarationSyntax&gt;()).ToList();

var smellyClasses = new Dictionary&lt;string, int&gt;();
int paramThreshold = 5;
foreach (MethodDeclarationSyntax methodDeclarationSyntax in methods)
{
    ParameterListSyntax parameterList = methodDeclarationSyntax.ParameterList;
    if (parameterList.Parameters.Count &gt;= paramThreshold)
    {
        // Use the indexer so a file containing several offending methods doesn't throw on Add
        smellyClasses[methodDeclarationSyntax.Identifier.SyntaxTree.FilePath] = parameterList.Parameters.Count;
    }
}
</code></pre>

<h2 id="let-s-put-it-to-use">Let&rsquo;s put it to use</h2>

<p>If you&rsquo;re still not sold on how powerful this can be, let&rsquo;s move this into a unit test that can be included in a base suite of tests - allowing us to easily enforce rules across our development teams.</p>

<pre><code>[TestCase(5)]
public void Methods_ShouldNotHaveTooManyParams(int totalParams)
{
    List&lt;MethodDeclarationSyntax&gt; methods = this.documents.SelectMany(x =&gt; x.GetSyntaxRootAsync().Result.DescendantNodes().OfType&lt;MethodDeclarationSyntax&gt;()).ToList();
    foreach (MethodDeclarationSyntax methodDeclarationSyntax in methods)
    {
        ParameterListSyntax parameterList = methodDeclarationSyntax.ParameterList;

        // Assert
        parameterList.Parameters.Count.ShouldBeLessThanOrEqualTo(totalParams, &quot;File Location: &quot; + methodDeclarationSyntax.Identifier.SyntaxTree.FilePath);
    }
}
</code></pre>

<p>As you&rsquo;ll see in the code snippet above, we then use the <a href="https://github.com/shouldly/shouldly">Shouldly assertion library</a> and its awesome error messages to inform us of the class that&rsquo;s currently violating our code smell rule!</p>

<p>If you&rsquo;re interested in having a play with this then you can <a href="https://github.com/JosephWoodward/RoslynPlayground">find the code up on GitHub here</a>. I&rsquo;d also recommend checking out <a href="https://joshvarty.wordpress.com/learn-roslyn-now/">this post</a> if you&rsquo;re interested in learning more about Roslyn and how it works. Another great resource that gave me the idea for using Roslyn to create tests is <a href="http://www.strathweb.com/2015/09/using-roslyn-and-unit-tests-to-enforce-coding-guidelines-and-more/">this strathweb.com blog post</a> by <a href="https://twitter.com/filip_woj">@filip_woj</a>.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 09 Oct 2015 20:05:23 +0000</pubDate></item><item><title>Detecting CSS breakpoints in Javascript</title><link>https://josephwoodward.co.uk/2015/09/detecting-css-breakpoints-javascript</link><description></description><content:encoded><![CDATA[<p>A short blog post today but one I still feel will be helpful to a few of you.</p>

<p>Recently I was writing some Javascript for a responsive HTML5 front end that I was working on and needed to find out what CSS media breakpoint the browser was currently in. Naturally my initial reaction was to use Javascript to detect the browser width and go from there; however doing it this way immediately felt dirty as it would result in a duplication of breakpoint values throughout the application - something I was keen to avoid.</p>

<p>After a bit of head-scratching the solution I ended up with was to target the <a href="http://unsemantic.com/css-documentation#4-mobile-amp-hidden-classes">device specific show/hide classes</a> that most CSS frameworks provide.</p>

<p>In the following primitive example I&rsquo;m using the <a href="http://unsemantic.com/">Unsemantic CSS grid framework</a>, but almost all CSS frameworks provide similar classes - though even if you&rsquo;re not using a framework there&rsquo;s nothing to stop you from creating your own.</p>

<p><strong>HTML:</strong></p>

<pre><code>&lt;a href=&quot;#&quot; id=&quot;clickMeButton&quot;&gt;Click Me&lt;/a&gt;

&lt;!-- These can go just before the &lt;/body&gt; tag --&gt;
&lt;div class=&quot;show-on-desktop&quot;&gt;&lt;/div&gt;
&lt;div class=&quot;show-on-mobile&quot;&gt;&lt;/div&gt;
</code></pre>

<p><strong>CSS:</strong></p>

<pre><code>.show-on-desktop { display: none;}
.show-on-mobile { display: none; }

/* Mobile */
@media (max-width: 300px) { 
    .show-on-mobile { display: block; }
}

/* Desktop */
@media (min-width: 301px) {    
    .show-on-desktop { display: block; }
}
</code></pre>

<p><strong>Javascript:</strong></p>

<pre><code>$(function(){

    $('#clickMeButton').on('click', function(){
        if ($('.show-on-desktop').is(':visible')) {
            alert('desktop');
        } else if ($('.show-on-mobile').is(':visible')) {
            alert('mobile');
        }
    });

});
</code></pre>
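<p>If you&rsquo;d rather not lean on jQuery, the same idea can be sketched with a small pure function plus <code>window.matchMedia</code>. This is a hedged sketch: the 300px cut-off mirrors the CSS above, and <code>breakpointFor</code> is a helper name I&rsquo;ve made up for illustration:</p>

```javascript
// Pure helper so the breakpoint mapping is easy to test outside a browser.
// The 300px mobile cut-off mirrors the media queries in this post.
function breakpointFor(width, mobileMax) {
    return width <= mobileMax ? 'mobile' : 'desktop';
}

// In the browser, matchMedia keeps this in sync without polling width:
// var mq = window.matchMedia('(max-width: 300px)');
// console.log(mq.matches ? 'mobile' : 'desktop');
```

<p>Note that <code>matchMedia</code> still duplicates the breakpoint value in Javascript, which is exactly what the hidden-element trick above avoids - pick whichever trade-off suits your project.</p>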

<p>You can check out <a href="http://jsfiddle.net/q2tmrme5/13/">this JSFiddle</a> to see a working example of the above code - if you resize the browser and click the button you&rsquo;ll notice the alert message will change from &ldquo;Desktop&rdquo; to &ldquo;Mobile&rdquo;.</p>

<p>Whilst looking at similar solutions I found <a href="http://stackoverflow.com/a/22885503/963542">this StackOverflow answer</a> which goes one further and turns the same approach into a <a href="https://github.com/maciej-gurban/responsive-bootstrap-toolkit">Bootstrap plugin for easier use</a> - awesome!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 21 Sep 2015 20:01:07 +0000</pubDate></item><item><title>New website, now Azure powered</title><link>https://josephwoodward.co.uk/2015/02/new-website-now-azure-powered</link><description></description><content:encoded><![CDATA[<h2 id="in-the-beginning">In the beginning</h2>

<p>When I first decided to start a development blog I wasn&rsquo;t sure as to how much time I&rsquo;d be able to dedicate to it and was concerned that, like many blogs online, my post frequency would start out strong but begin to dwindle over time until it was never touched again.</p>

<p>With this uncertainty in mind, I didn&rsquo;t want to spend too much time putting together a blog so I bought a domain and set up WordPress on my Linux VPS with a simple, only slightly customised template with the intention of seeing how I got on.</p>

<p>Fast forward just over a year and I&rsquo;m still blogging with reasonable frequency, so I feel it&rsquo;s safe to say I won&rsquo;t be stopping any time soon. The rewards of blogging about programming have become clear to me and I&rsquo;ve learned a great deal from investigating a subject enough to confidently write about it; with this in mind I decided it was time to spend some effort upgrading my blog to aid reader retention and to act as a platform I can continually develop and evolve over time.</p>

<h2 id="about-the-new-blog">About the new blog</h2>

<p>Whilst I was perfectly happy with WordPress as a blogging platform (I have reservations about the language used to create it but that&rsquo;s another story!), I was keen to develop a solution in ASP.NET, and whilst conscious that I did not want to reinvent the wheel, I built myself a simple administration area that I can use to create and maintain posts. I&rsquo;ve also been looking at other design patterns and techniques over the past few months and creating a blog seemed the perfect way to put some of them into practice and live with them a while to see their worth.</p>

<p>For those interested in what I used under the hood, below is the technology stack and why I chose to use them:</p>

<p><strong>ASP.NET MVC</strong></p>

<p>Being an ASP.NET developer, MVC was an obvious choice. It&rsquo;s a great, extensible framework that I love more and more the more I use it. I was tempted to dive into ASP.NET MVC vNext but decided to wait a while before crossing that bridge - though it&rsquo;s certainly something I intend on doing.</p>

<p><strong>Dapper ORM</strong></p>

<p>Initially I started out using Entity Framework but, given the simplicity of the blog and a desire for speed, decided to give <a href="https://github.com/StackExchange/dapper-dot-net">StackOverflow&rsquo;s Dapper ORM</a> a try. It&rsquo;s a mega lightweight ORM created and used internally by StackOverflow, who built it with speed in mind. I&rsquo;ve really enjoyed using it and couldn&rsquo;t recommend it enough when simplicity and speed are what you&rsquo;re after. As an example, among its many other benefits Dapper allows you to easily hydrate POCO objects like so:</p>

<pre><code>public class Dog
{
    public int? Age { get; set; }
    public Guid Id { get; set; }
    public string Name { get; set; }
    public float? Weight { get; set; }

    public int IgnoredProperty { get { return 1; } }
}            

var guid = Guid.NewGuid();
var dog = connection.Query&lt;Dog&gt;(&quot;select Age = @Age, Id = @Id&quot;, new { Age = (int?)null, Id = guid });

dog.Count()
    .IsEqualTo(1);

dog.First().Age
    .IsNull();

dog.First().Id
    .IsEqualTo(guid);
</code></pre>

<p>Dapper also allows you to <a href="https://github.com/StackExchange/dapper-dot-net#multi-mapping">map a single row to multiple objects</a> and allows you to <a href="https://github.com/StackExchange/dapper-dot-net#multiple-results">process multiple result grids in a single query</a>.</p>
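<p>To give a feel for multi mapping, here&rsquo;s a hedged sketch of splitting a single joined row across two objects. The <code>Post</code> and <code>User</code> types and the SQL are illustrative, not taken from this blog&rsquo;s code:</p>

```csharp
// Illustrative types only - not from this site's codebase.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
    public User Owner { get; set; }
}

// Dapper splits each row on the "Id" column by default, hydrating a Post
// and a User per row and letting us stitch them together ourselves.
var sql = "select p.Id, p.Title, u.Id, u.Name from Posts p join Users u on u.Id = p.OwnerId";
var posts = connection.Query<Post, User, Post>(sql, (post, user) =>
{
    post.Owner = user;
    return post;
});
```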

<p><strong>MediatR</strong></p>

<p>This was an interesting choice and proved a real eye-opener to me.</p>

<p>I&rsquo;ve always found software architecture an interesting subject and one that I&rsquo;ve always gravitated towards, and being a frequent reader of <a href="http://lostechies.com/jimmybogard/">Jimmy Bogard&rsquo;s blog</a> (creator of AutoMapper, and technical architect at <a href="http://www.headspring.com/">Headspring in Austin, Texas</a>), I found his <a href="http://lostechies.com/jimmybogard/2013/10/29/put-your-controllers-on-a-diet-gets-and-queries/">Putting your controllers on a diet blog posts</a> particularly interesting. They introduced me to <a href="http://en.wikipedia.org/wiki/Command%E2%80%93query_separation">Command-Query Separation (CQS)</a> and an alternative to the typical n-tier architecture that any enterprise developer will be familiar with - MediatR being a simple in-process implementation of the mediator pattern that supports this style.</p>

<p>Being a separate subject that deserves its own blog post, CQS is the process of breaking a solution down into simple, isolated commands and queries - in this instance using <a href="https://github.com/jbogard/MediatR">Jimmy Bogard&rsquo;s MediatR</a> library. When you do this you realise that the service layer becomes redundant and your code becomes far more succinct and expressive.</p>
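<p>To make this concrete, below is a minimal sketch of the query side using MediatR&rsquo;s <code>IRequest&lt;TResponse&gt;</code> and <code>IRequestHandler&lt;TRequest, TResponse&gt;</code> interfaces. The <code>GetPostQuery</code> and <code>PostViewModel</code> names are illustrative, not code from this site:</p>

```csharp
// Illustrative sketch only: the names below are made up for this example.
public class PostViewModel
{
    public string Title { get; set; }
}

// The query is a simple message describing what we want back.
public class GetPostQuery : IRequest<PostViewModel>
{
    public int PostId { get; set; }
}

// The handler replaces what would otherwise live in a service layer.
public class GetPostQueryHandler : IRequestHandler<GetPostQuery, PostViewModel>
{
    public PostViewModel Handle(GetPostQuery message)
    {
        // Data access (Dapper, EF, etc.) would go here.
        return new PostViewModel { Title = "Post " + message.PostId };
    }
}
```

<p>A controller action then collapses to little more than a call such as <code>_mediator.Send(new GetPostQuery { PostId = 1 })</code>.</p>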

<p><strong>Azure</strong></p>

<p>I&rsquo;ve been playing around with Azure for some time now and decided to use it as my hosting provider. Using a shared instance that costs just a few pounds a month, Azure has done a great job of removing many of the issues you can have when setting up a website.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 11 Feb 2015 02:44:39 +0000</pubDate></item><item><title>Personal targets and goals for 2015</title><link>https://josephwoodward.co.uk/2015/01/personal-targets-and-goals-for-2015</link><description></description><content:encoded><![CDATA[<p>With 2014 now out of the way I always feel like it&rsquo;s a good idea to start 2015 with a blog post outlining my targets and goals for next year ahead.</p>

<p>As a software developer, the main benefit of setting yourself yearly goals like this is that it forces you to look at your existing strengths and weaknesses (we all have weaknesses, we just need to be honest about them) and at the areas you feel can be improved. As you work towards improving the areas that are lacking, you stay focused on exactly where you need to spend your time, ensuring that year on year you&rsquo;re moving in the right direction - and that when you look back in three years&rsquo; time you&rsquo;ll see you&rsquo;ve improved rather than stagnated.</p>

<p>I always feel it&rsquo;s important to set yourself only a few goals, 3 or 4 at the most. Setting yourself any more can become unrealistic and often results in not spending enough time on each goal, or spreading your time too thin, resulting in no noticeable improvement over the year.</p>

<p>So here are my goals for 2015:</p>

<h2 id="goal-1-get-dangerous-with-javascript">Goal 1: Get dangerous with Javascript</h2>

<p>Whilst I&rsquo;ve always been comfortable writing jQuery, and to a lesser extent Javascript, I&rsquo;ve always felt that my knowledge in the area was patchy. <a href="http://josephwoodward.co.uk/2014/04/space-shooter-written-in-javascript-using-typescript/">I can get by with it</a>, but I feel there is room for improvement.</p>

<p>As I consider myself a polyglot programmer I feel being dangerous with Javascript will be another string to my bow as a software developer for the following reasons:</p>

<ul>
<li>Whether you love or hate Javascript, there&rsquo;s one thing for sure and that is that it&rsquo;s here to stay.</li>
<li>Javascript is a great transferable skill to have. You may move from one language to another (say C# to Java, or .NET to Play Framework or Ruby), but providing you&rsquo;re still working on a web stack your Javascript skills remain.</li>
<li>I strongly believe that having strong front end skills is a great way to separate yourself from other developers.</li>
</ul>

<p>So this year my main goal is to really develop my Javascript knowledge and skills.</p>

<h2 id="goal-2-contribute-to-more-open-source-projects">Goal 2: Contribute to more open source projects</h2>

<p>Having always been a great supporter of the open source software movement (not to mention a consumer of open source software too!) this year I&rsquo;m keen to find some projects to get involved in.</p>

<p>Whilst I have contributed to open source projects in the past, I don&rsquo;t feel I&rsquo;ve added as much value as I could. Generally a lot of my contributions have been documentation and UI related which, whilst important, don&rsquo;t give you the valuable feedback and learning you gain from committing code and having other developers review it.</p>

<p>As well as contributing to existing open source projects, I&rsquo;m also keen to start a few of my own. Whilst I do have one or two projects on <a href="https://github.com/JoeMighty/">my GitHub account</a> that have users, I&rsquo;m keen to write some more, as I find writing a plugin or a piece of software that others find useful is a rewarding experience.</p>

<p><strong><em>(Since writing this post I&rsquo;m already on my way to achieving this goal which you can <a href="http://josephwoodward.co.uk/2015/03/my-first-code-contributions-to-an-open-source-project/">read about here</a>).</em></strong></p>

<h2 id="goal-3-increase-the-frequency-of-my-blog-posts">Goal 3: Increase the frequency of my blog posts</h2>

<p>Another of my goals this year is to increase the frequency of my blog posts.</p>

<p>Having only started blogging this year I&rsquo;ve found it an incredibly rewarding and valuable experience. It&rsquo;s great getting feedback from other people and writing blog posts about programming topics really does help drill the subject matter into your brain.</p>

<p>When starting out blogging I felt no one would take much interest in some of the subjects I wanted to discuss - especially when you take into consideration the plethora of other great resources and developer blogs online. Yet I&rsquo;ve been pleasantly surprised by the attention some of the posts have received. So this year I&rsquo;m going to try to improve the frequency of my blog posts.</p>

<p>I&rsquo;d really like to achieve a blog post a week but whether I can get my thoughts out on paper that quickly is another story.</p>

<h2 id="bonus-goal-achieve-over-8-000-points-on-stack-overflow">Bonus Goal: Achieve over 8,000 points on Stack Overflow</h2>

<p>Over the past few months I&rsquo;ve become quite active on Stack Overflow. I particularly enjoy answering questions relating to StructureMap. At the time of writing I have 2,266 points to my name, with an aim of making it to 8,000 by the end of the year. This, I feel, is going to be a challenging figure to reach, but one that is achievable.</p>

<p>I&rsquo;ve found that Stack Overflow is another great resource to learn from. Especially when answering questions that you have no solid knowledge of as they force you to research the subject enough to construct a solution to the user&rsquo;s problem.</p>

<h2 id="conclusion">Conclusion</h2>

<p>So, there we have it; my 3 (4 including the bonus) goals to aim for this year. I&rsquo;ve tried not to set too many as I don&rsquo;t want to spread my time too thin, and I feel each goal requires enough work to make achieving it a challenge, yet provides enough value that by the end of the year I&rsquo;m a better developer than I was at the start.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 21 Jan 2015 02:42:56 +0000</pubDate></item><item><title>I used to use a mug for every beverage, now I use this!</title><link>https://josephwoodward.co.uk/2015/01/i-used-to-use-a-mug-for-every-beverage-now-i-use-this</link><description></description><content:encoded><![CDATA[<p><img alt="" src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/cup_of_type_t_programming_mug.jpg" /></p>

<p>I wouldn&#39;t normally create a blog post about a mug, but as a programmer I thought this programmer&#39;s &quot;Cup of type T&quot; was clever enough to merit sharing it with my readers.</p>

<p>I originally spotted it at zazzle.co.uk, but having read less than enthusiastic reviews about the store and the quality of the service offered I decided to look elsewhere. Ultimately I ended up getting it printed myself via vistaprint.co.uk for about the same price.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sun, 11 Jan 2015 02:41:10 +0000</pubDate></item><item><title>Harnessing your IoC Container to perform application event tasks in ASP.NET MVC</title><link>https://josephwoodward.co.uk/2014/12/harnessing-your-ioc-container-to-perform-application-event-tasks-in-asp-net-mvc</link><description></description><content:encoded><![CDATA[<p>I&rsquo;ve always found software architecture a fascinating subject, and one area I&rsquo;m particularly interested in at the moment is dependency management.</p>

<p>Recently I&rsquo;ve been reading a lot about IoC patterns and best practices, and being a subscriber to PluralSight.com I stumbled across a fantastic course by <a href="http://trycatchfail.com/blog/">Matt Honeycutt</a> titled &ldquo;<a href="http://www.pluralsight.com/courses/build-application-framework-aspdotnet-mvc-5">Build Your Own Application Framework with ASP.NET MVC 5</a>&rdquo;. The video course presented a whole array of work that can be done to increase productivity within ASP.NET MVC, and one part of the course that particularly stuck out was the use of start-up tasks.</p>

<p>As an aside, I would highly recommend checking out <a href="http://www.pluralsight.com/courses/build-application-framework-aspdotnet-mvc-5">Matt Honeycutt&rsquo;s course</a> if you&rsquo;ve got an active subscription. For those who do not have one, PluralSight offer a free trial so you can sign up and watch the course for free. It&rsquo;s 3 hours 25 minutes in length and packed full of tips for modifications, functionality and extensions that can make you more productive in ASP.NET MVC.</p>

<h2 id="the-problem">The problem</h2>

<p>As an ASP.NET MVC application grows so does the use of the application event methods accessible from Global.asax.cs such as Application_Start, Application_BeginRequest, Application_EndRequest and so on. After a while these methods can become unmanageable and messy as more and more functionality is tied into the application events.</p>

<p>Let me demonstrate with an example. Take this simple Application_Start method:</p>

<pre><code>protected void Application_Start()
{
     IContainer container = StructureMapCoreSetup.Initialise();
     container.Configure(c =&gt; c.IncludeRegistry&lt;DefaultRegistry&gt;());

     StructureMapResolver = new StructureMapDependencyResolver(container);
     DependencyResolver.SetResolver(StructureMapResolver);

     ViewEngines.Engines.Clear();
     ViewEngines.Engines.Add(new RazorViewEngine());

     AreaRegistration.RegisterAllAreas();

     FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
     RouteConfig.RegisterRoutes(RouteTable.Routes);
     BundleConfig.RegisterBundles(BundleTable.Bundles);
}
</code></pre>

<p>If you look closely you&rsquo;ll see that this method is responsible for the following actions:</p>

<ol>
<li>Setup our IoC container (in this case it&rsquo;s StructureMap)</li>
<li>Register it with ASP.NET MVC&rsquo;s dependency resolver</li>
<li>Clear existing view engines and explicitly add Razor</li>
<li>Register all areas of the application</li>
<li>Register any global filters</li>
<li>Register the application routes</li>
<li>Finally register the application&rsquo;s Javascript and CSS bundles</li>
</ol>

<p>Even with these reasonably tidy 7 steps, you can see how the method has the potential to become quite convoluted.</p>

<h2 id="the-solution">The solution</h2>

<p>One such way to keep these application event methods in shape and prevent your global.asax.cs class becoming a <a href="http://en.wikipedia.org/wiki/God_object">god object</a> is to harness the power of your IoC container and break the functionality down into modular tasks. These tasks can then be loaded by your IoC container and executed at the correct stage of the life-cycle of a page request. Let&rsquo;s take a look at how we can do this.</p>

<p>First we&rsquo;re going to use a simple implementation of the command pattern to define our task execution interfaces:</p>

<pre><code>public interface IExecutable
{
    void Execute();
}
</code></pre>

<p>Now we&rsquo;ve created our role interface we also need to create an interface for the various stages of the application&rsquo;s life-cycle that we wish to fire off tasks during.</p>

<pre><code>public interface IRunAtStartup : IExecutable
{
}

public interface IRunOnEachRequest : IExecutable
{
}

public interface IRunAtEndOfEachRequest : IExecutable
{
}
</code></pre>

<p>From here we can begin to categorise our methods into nice, well organised classes based on the interface they implement.</p>

<p>For instance, I often move all of the basic framework set-up tasks into their own FrameworkSetupTask class like so:</p>

<pre><code>public class FrameworkSetupTask : IRunAtStartup
{
    public void Execute()
    {
        AreaRegistration.RegisterAllAreas();

        ViewEngines.Engines.Clear();
        ViewEngines.Engines.Add(new RazorViewEngine());

        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }
}
</code></pre>

<p>In addition to this, if my project has various AutoMapper profiles then I&rsquo;ll move them into their own start-up task class that&rsquo;s fired off when the application starts:</p>

<pre><code>public class AutoMapperProfiles : IRunAtStartup
{
    public void Execute()
    {
        Mapper.AddProfile(new CoreAutoMapperProfile());
        Mapper.AddProfile(new CartAutoMapperProfile());
        Mapper.AddProfile(new ProductAutoMapperProfile());
    }
}
</code></pre>

<h2 id="setting-up-your-ioc-container-to-execute-the-tasks">Setting up your IoC container to execute the tasks</h2>

<p>Now we&rsquo;ve abstracted our various application life-cycle tasks we can use our IoC container to load and execute the tasks based on their interface type.</p>

<p>In the following example you&rsquo;ll notice that I&rsquo;m using StructureMap as my IoC container. You may also notice that I&rsquo;m favouring the <a href="http://structuremap.github.io/the-container/nested-containers/">nested container per HTTP request approach</a> - this is a personal choice of mine so it&rsquo;s possible that your setup may vary. Other than this difference there shouldn&rsquo;t be much to change in the following examples.</p>

<p><strong>The Task Runner method</strong><br>
The task runner is the method responsible for loading and executing the tasks accordingly. You&rsquo;ll notice the argument to the TaskRunner method is an optional container. This is so I can pass a (nested) container as an argument (see Application_Start as an example); otherwise the TaskRunner method will use the existing instance of the nested container that&rsquo;s set up within my StructureMapDependencyResolver class.</p>

<pre><code>public class MvcApplication : HttpApplication
{
    public static StructureMapDependencyResolver StructureMapResolver { get; set; }

    protected void Application_Start()
    {
        ...

        StructureMapResolver = new StructureMapDependencyResolver(container);

        var serviceLocatorProvider = new ServiceLocatorProvider(() =&gt; StructureMapResolver);
        container.Configure(cfg =&gt; cfg.For&lt;ServiceLocatorProvider&gt;().Use(serviceLocatorProvider));

        DependencyResolver.SetResolver(StructureMapResolver);

        TaskRunner&lt;IRunAtStartup&gt;(container.GetNestedContainer());
    }

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        StructureMapResolver.CreateNestedContainer();

        TaskRunner&lt;IRunOnEachRequest&gt;();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        TaskRunner&lt;IRunAtEndOfEachRequest&gt;();
    }

    private static void TaskRunner&lt;T&gt;(IContainer container = null)
    {
        if (container == null)
        {
            container = StructureMapResolver.CurrentNestedContainer;
        }

        foreach (var taskInstance in container.GetAllInstances&lt;T&gt;())
        {
            var task = taskInstance as IExecutable;
            if (task != null)
            {
                task.Execute();
            }
        }
    }
}
</code></pre>
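<p>For <code>container.GetAllInstances&lt;T&gt;()</code> to find anything, the task classes need to be registered with the container. One way to wire this up with StructureMap - a sketch of what a registry might look like, assuming StructureMap&rsquo;s assembly scanning API rather than code from this post - is:</p>

```csharp
// Sketch of a StructureMap registry using assembly scanning; the registry
// name is illustrative. Include it from your container configuration.
public class TaskRegistry : Registry
{
    public TaskRegistry()
    {
        Scan(scan =>
        {
            // Look through this assembly and register every concrete
            // implementation of each life-cycle interface.
            scan.TheCallingAssembly();
            scan.AddAllTypesOf<IRunAtStartup>();
            scan.AddAllTypesOf<IRunOnEachRequest>();
            scan.AddAllTypesOf<IRunAtEndOfEachRequest>();
        });
    }
}
```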

<p>As you can see our HttpApplication events are far cleaner now than they were before and will continue to remain manageable as the project grows and further functionality is added that requires hooking up to the HttpApplication events.</p>

<h2 id="conclusion">Conclusion</h2>

<p>As demonstrated, creating event tasks is straightforward and can easily be set up with any IoC container. If you&rsquo;re not using StructureMap and instead opt for a container such as Ninject or Unity, it should be easy enough to create a similar TaskRunner method.</p>

<p>You&rsquo;ll notice that for this example I&rsquo;ve set up event tasks for the main HTTP events (Application_Start , Application_BeginRequest and Application_EndRequest) but this pattern can easily be applied to some of the other HttpApplication events such as Application_Error.</p>

<p>I&rsquo;ve used this approach in various ASP.NET MVC projects with great success. The only drawback I&rsquo;ve been aware of is the fact that you lose the ability to instantly see what occurs during your application events as they&rsquo;ve now been moved into their own classes, though I feel this is a minor setback for the tidiness of the approach.</p>

<p>As mentioned earlier in my post, I would definitely recommend checking out the PluralSight course if you can.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 29 Dec 2014 02:37:25 +0000</pubDate></item><item><title>Worth a watch: John Sonmez on creating a test automation framework using Selenium</title><link>https://josephwoodward.co.uk/2014/11/worth-a-watch-john-sonmez-on-creating-an-test-automation-framework-using-selenium</link><description></description><content:encoded><![CDATA[<p>Unit tests are great at testing individual, isolated pieces of functionality and go a long way in instilling confidence when writing code, and especially when modifying or maintaining an existing code base. Another valuable form of testing is test automation; and if you&rsquo;ve ever worked with automated tests before you&rsquo;ll no doubt agree that, coupled with unit tests, test automation goes an extra mile in ensuring you&rsquo;ve got maximum test coverage of your system.</p>

<p>Test automation, much like writing unit tests, requires an initial upfront investment in writing the actual tests - but once created, these tests become increasingly valuable as your project grows and ages. However, unlike unit tests there is an even greater investment required in creating the initial framework with which to build your automation tests upon. This important step requires a good deal of thought and planning to ensure the tests you write are as efficient and maintainable as possible.</p>

<p>Is this additional investment worth it in the long run? I certainly think so, and many others that have experience with test automation would agree.</p>

<p>Luckily there&rsquo;s a great talk by <a href="http://simpleprogrammer.com/">John Sonmez from SimpleProgrammer.com</a> that takes you through how to create a test automation framework that acts as a great foundation for writing succinct yet maintainable automation tests efficiently.</p>

<p><a href="https://www.youtube.com/watch?v=DO8KVe00kcU"><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/john_sonmez_automation_testing.jpg" alt="" /></a></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 24 Nov 2014 02:32:49 +0000</pubDate></item><item><title>Improving upon ASP.NET MVC&#39;s default display annotation convention</title><link>https://josephwoodward.co.uk/2014/11/improving-upon-asp-net-mvcs-default-display-annotation-convention</link><description></description><content:encoded><![CDATA[<p>ASP.NET MVC is great at taking a lot of the grunt work out of creating forms. However, one area that I&rsquo;ve always felt was lacking is the way ASP.NET MVC handles outputting form property names.</p>

<p>As it stands, ASP.NET MVC&rsquo;s default behavior when outputting property names (using @Html.LabelFor() for example) is to output the property name as is. The snippet below shows how each property name would appear if output via the LabelFor() HTML helper.</p>

<pre><code>public class ExampleViewModel
{ 
    public string Surname { get; set; } //Output: &quot;Surname&quot;

    public string FirstName { get; set; } //Output: &quot;FirstName&quot;

    public string age { get; set; } //Output: &quot;age&quot;
}
</code></pre>

<p>Whilst the output of the Surname property is fine, given that it&rsquo;s a single word; you&rsquo;ll notice that the output of the FirstName property is less than desirable and should instead be displayed as two separate words if it is to be read correctly.</p>

<p>If you wish to update the way the FirstName property is displayed within a view then the common solution is to use ASP.NET&rsquo;s Display annotation like so:</p>

<pre><code>public class ExampleViewModel
{ 
    [Display(Name = &quot;Surname&quot;)]
    public string Surname { get; set; } //Output: &quot;Surname&quot;

    [Display(Name = &quot;First Name&quot;)]
    public string FirstName { get; set; } //Output: &quot;First Name&quot;

    [Display(Name = &quot;Age&quot;)]
    public string age { get; set; } //Output: &quot;Age&quot;
}
</code></pre>

<h2 id="why-i-m-not-a-huge-fan-of-display-annotations">Why I&rsquo;m not a huge fan of Display annotations</h2>

<p>Whilst this solution will work, I&rsquo;m not a huge fan of it as I generally try to avoid muddying my objects with annotations if I can, especially annotations that are view specific, such as the Display annotation.</p>

<p>Whilst I appreciate this is a view model, containing data that is destined for the view, I would argue that a view model&rsquo;s purpose is to represent only the data - rather than how it is displayed or formatted within the UI. That&rsquo;s what HTML is for: marking up our data. If we were to output the view model above via a RESTful API or web service then our display annotations would suddenly become irrelevant.</p>

<p>Another issue with the overuse of data annotations is they can really start to get messy. Take the following model for instance.</p>

<pre><code>public class LoginModel
{
    [Display(Name = &quot;Account User Name&quot;)]
    [Required(ErrorMessage = &quot;Please fill in your user name.&quot;)]
    [MaxLength(30)]
    public string AccountUserName { get; set; }

    [Display(Name = &quot;Account Password&quot;)]
    [Required(ErrorMessage = &quot;Please fill in your password.&quot;)]
    [MaxLength(30)]
    public string AccountPassword { get; set; }
}
</code></pre>

<p>Instead I tend to favor a conventional or configuration approach to such a problem - and thankfully, due to ASP.NET MVC&rsquo;s extensibility, we can create our own conventional approach to displaying property names without the need to pollute our view model or data transfer objects with UI specific Display annotations.</p>

<h2 id="harnessing-the-power-of-the-modelmetadataprovider">Harnessing the power of the ModelMetadataProvider</h2>

<p>In order to apply our conventional approach to displaying property names within ASP.NET MVC, we first need to look at where the display text comes from when using an extension method such as the @Html.LabelFor() helper used in our example above.</p>

<p>If we dig into the LabelFor extension (I use ReSharper which has a built in .NET de-compiler) you&rsquo;ll see the LabelFor accesses the display property via an implementation of ModelMetadataProvider:</p>

<pre><code>internal static MvcHtmlString LabelFor&lt;TModel, TValue&gt;(this HtmlHelper&lt;TModel&gt; html, Expression&lt;Func&lt;TModel, TValue&gt;&gt; expression, string labelText, IDictionary&lt;string, object&gt; htmlAttributes, ModelMetadataProvider metadataProvider)
{
     return LabelExtensions.LabelHelper((HtmlHelper) html, ModelMetadata.FromLambdaExpression&lt;TModel, TValue&gt;(expression, html.ViewData, metadataProvider), ExpressionHelper.GetExpressionText((LambdaExpression) expression), labelText, htmlAttributes);
}
</code></pre>

<p>After a little further digging you&rsquo;ll find that the ModelMetadataProvider is a base class of DataAnnotationsModelMetadataProvider. If you take a moment to <a href="https://github.com/ASP-NET-MVC/aspnetwebstack/blob/master/src/System.Web.Mvc/DataAnnotationsModelMetadataProvider.cs">take a look at the DataAnnotationsModelMetadataProvider class</a> and its implementation you&rsquo;ll clearly see how ASP.NET MVC is accessing the Display attributes value and then returning an instance of ModelMetadata, which eventually ends up within the view helper and gets output in the view.</p>

<p>With this in mind, all we have to do in order to create our own conventional approach is create our own ModelMetadataProvider instance that overrides the current behavior and then register it with ASP.NET MVC.</p>

<h2 id="creating-our-own-convention-based-modelmetadataprovider">Creating our own convention based ModelMetadataProvider</h2>

<p>In my implementation, whilst I favor conventions over Display attributes, there may be instances where I will want to use a Display attribute, so I&rsquo;m going to create a new class that extends ASP.NET MVC&rsquo;s existing DataAnnotationsModelMetadataProvider implementation.</p>

<p>After a little work this is what I ended up with:</p>

<pre><code>public class ConventionalModelMetadataProvider : DataAnnotationsModelMetadataProvider
{
    protected override ModelMetadata CreateMetadata(IEnumerable&lt;Attribute&gt; attributes, Type containerType, Func&lt;object&gt; modelAccessor, Type modelType, string propertyName)
    {
        ModelMetadata modelMetadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);

        if (modelMetadata.DisplayName == null)
        {
            modelMetadata.DisplayName = modelMetadata.PropertyName.SplitWords();
        }

        return modelMetadata;
    }
}
</code></pre>

<p>As you can see, all I&rsquo;m doing here is calling the base class&rsquo;s (DataAnnotationsModelMetadataProvider) implementation of the CreateMetadata method and updating the DisplayName property if it&rsquo;s not set. If the DataAnnotationsModelMetadataProvider class we&rsquo;re extending does not find a Display annotation on the property then the DisplayName will be null. In that case our implementation simply takes the property name from the ModelMetadata class and uses an extension method I created (see below) to split the name into words on its upper-case letters, turning a string like &ldquo;AddressLine1&rdquo; into &ldquo;Address Line 1&rdquo;.</p>

<pre><code>public static class StringExtensions
{
    public static string SplitWords(this string value)
    {
        return value != null ? Regex.Replace(value, &quot;([a-z](?=[A-Z0-9])|[A-Z](?=[A-Z][a-z]))&quot;, &quot;$1 &quot;).Trim() : null;
    }
}
</code></pre>
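<p>To make the convention concrete, the snippet below runs the same regular expression used by SplitWords over a few sample property names (a self-contained illustration; the input strings are my own examples, not from the original post):</p>

```csharp
using System;
using System.Text.RegularExpressions;

class SplitWordsDemo
{
    static void Main()
    {
        // Same pattern as the SplitWords extension above.
        const string pattern = "([a-z](?=[A-Z0-9])|[A-Z](?=[A-Z][a-z]))";

        Console.WriteLine(Regex.Replace("FirstName", pattern, "$1 ").Trim());    // First Name
        Console.WriteLine(Regex.Replace("AddressLine1", pattern, "$1 ").Trim()); // Address Line 1
        Console.WriteLine(Regex.Replace("HTMLParser", pattern, "$1 ").Trim());   // HTML Parser
    }
}
```

<p>Note that the second alternative in the pattern is what keeps acronym runs such as &ldquo;HTML&rdquo; together rather than splitting them letter by letter.</p>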

<h2 id="registering-our-custom-modelmetadataprovider-with-asp-net-mvc">Registering our custom ModelMetadataProvider with ASP.NET MVC</h2>

<p>Now all we have to do in order to use our convention based approach to displaying property names is register our custom ModelMetadataProvider with ASP.NET MVC. This can be done by setting the metadata provider within our website&rsquo;s Global.asax.cs file, like so:</p>

<pre><code>public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        ...

        ModelMetadataProviders.Current = new ConventionalModelMetadataProvider();

        ...
    }
}
</code></pre>

<p>Once this is done you&rsquo;ll be able to remove your Display annotations and have property names output split on their upper-case letters, whilst still being able to override the conventional behavior with a Display attribute if you need to.</p>

<h2 id="taking-the-modelmetadataprovider-further">Taking the ModelMetadataProvider further</h2>

<p>Custom ModelMetadataProviders are a great way to harness the power of ASP.NET MVC&rsquo;s model metadata functionality and open up an array of possibilities. For instance, you could easily extend the solution above to query a database for field names, giving us database-driven property labels - and the same approach could be used for multilingual sites!</p>

<p>I hope you&rsquo;ve found this blog post helpful and if you have any questions then please feel free to ask them in the comments section below.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 04 Nov 2014 01:24:38 +0000</pubDate></item><item><title>ReSharper not running your unit tests? This could be why...</title><link>https://josephwoodward.co.uk/2014/10/resharper-not-running-your-unit-tests-this-could-be-why</link><description></description><content:encoded><![CDATA[<p>Just last week I had an issue with ReSharper where it flat out refused to run my collection of unit tests. It had been working fine, then suddenly it stopped: the bar within ReSharper&rsquo;s unit test window stayed grey whilst the spinning icon spun endlessly within the parent test project.</p>

<p>After scratching my head trying to figure out what could have gone wrong, and trying multiple solutions following a bit of Google-fu (such as restarting Visual Studio, rebuilding my solution, running Visual Studio in administrator mode and even restarting my computer) - I was still no closer to getting my unit tests running.</p>

<p>Eventually I managed to figure out what the problem was after I remembered that ReSharper had a cache that could be cleared from within ReSharper&rsquo;s options (Resharper &gt; Options &gt; Environment &gt; General &gt; Clear Cache), and after clearing ReSharper&rsquo;s cache and restarting Visual Studio I was able to run my unit tests once again without any hiccups.</p>

<p>It turns out that something must have happened that caused a corruption within ReSharper&rsquo;s cache resulting in it ignoring my unit tests.</p>

<p>Whilst I know this blog post is trivial, I know I&rsquo;ll most likely run into this issue again and feel I best document it just in case anyone else is experiencing similar issues.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 20 Oct 2014 01:14:07 +0000</pubDate></item><item><title>9 ways I improved my productivity in Visual Studio 2013</title><link>https://josephwoodward.co.uk/2014/10/ways-i-improved-my-productivity-in-visual-studio-2013</link><description></description><content:encoded><![CDATA[<p>Visual Studio is a powerhouse of an IDE. One of the first things I came to appreciate when I first starting learning C# and ASP.NET MVC was how great a developer tool Visual Studio was. It far exceeds previous tools I&rsquo;ve used in other languages such as Java and PHP.</p>

<p>As with all IDEs I&rsquo;ve used I&rsquo;ve always been fond of trying to get the best from them in order to improve my productivity, so as soon as I started using Visual Studio I immediately started researching features and functions to aid my day-to-day development, and over time I&rsquo;ve built up a repertoire of common shortcuts that help me spend more time writing software and less time navigating the Visual Studio functions and menus.</p>

<p>I&rsquo;ve always believed that <strong>one of the best ways to improve your productivity is to optimise your workflow</strong>. To do this I like to pay attention to the tasks I regularly perform when programming and see what can change about these behaviors to speed them up. For instance, I noticed I was creating classes a lot, so I got into the habit of using Resharper&rsquo;s create class shortcut (ALT + Insert). This minor adjustment to my workflow dramatically reduced the time it took me to create a new class, whilst helping me stay in the zone by not having to break my concentration by taking my hands off of the keyboard and reaching for my mouse.</p>

<p>Optimising workflow is all about rewiring your habits and training your muscle memory until you get to a point where you&rsquo;re pressing shortcuts and performing tasks without needing to consciously stop and recall the necessary shortcut. This takes time, but usually only a few days of constant use.</p>

<p>With this in mind I&rsquo;ve decided to highlight some of the shortcuts and functions that helped me the most in improving my productivity in Visual Studio 2013.</p>

<h2 id="visual-studio-productivity-tip-1-get-resharper">Visual Studio Productivity Tip 1: Get ReSharper</h2>

<p>Without wanting to sound like a JetBrains salesman, I honestly believe the best thing I did to improve my productivity within Visual Studio was to start using Resharper. The folks over at JetBrains have done a fantastic job of implementing some real time-saving features into Visual Studio via their plugin. They offer a 30 day free trial so at the very least give it a try and you&rsquo;ll see what I mean.</p>

<p>If you don&rsquo;t believe me then watch the below video:</p>

<p><a href="https://www.youtube.com/watch?v=ld7ubGmxL7A"><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/why_resharper_is_awesome.jpg" alt="" /></a></p>

<h2 id="visual-studio-productivity-tip-2-learn-to-abuse-peek-definition">Visual Studio Productivity Tip 2: Learn to abuse Peek Definition</h2>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/peek_definition800.gif" alt="" /></p>

<p>Peek Definition is a fantastic feature that really helped speed up my development. It particularly comes in handy if you’re working on a method that calls/references another class or method that you need to browse or modify. Before Peek Definition was introduced in Visual Studio 2013 I would have to rely on Resharper’s CTRL + F12 shortcut, which would take me to the implementation of the method or class – resulting in the whole class being loaded in another pane and breaking my flow.</p>

<p>With the introduction of Peek Definition into my workflow I’m now able to quickly open up the method/class using the ALT + F12 shortcut (I’ve changed it to ALT + Q for quicker single hand access) and analyse or make modifications to it without needing to leave the class I’m in. This helps keep me in the zone, focused on what I’m doing.</p>

<p><strong>Additional tip:</strong> Use the Esc key to close the peek definition window – this way your hands don’t have to leave the keyboard.</p>

<h2 id="visual-studio-productivity-tip-3-switch-between-code-tabs-using-ctrl-tab">Visual Studio Productivity Tip 3: Switch between code tabs using CTRL + TAB</h2>

<p>Another really useful feature is the ability to quickly switch between all open tabs within Visual Studio using the CTRL + TAB shortcut. Pressing this will open a popup window with all active (open) files that you can quickly navigate to using the up and down keys on the keyboard and then release to load the selected file.</p>

<p>What’s especially powerful about this feature is the fact that it automatically selects the last file you opened, meaning you can quickly press CTRL + TAB to go back to your previous file. This is particularly useful if I’m working between two classes, such as an interface and its implementation.</p>

<h2 id="visual-studio-productivity-tip-4-enable-enhanced-scroll-bar">Visual Studio Productivity Tip 4: Enable Enhanced Scroll Bar</h2>

<p>I was using Visual Studio 2013 for about a year before I learnt of the new enhanced scroll bar feature, and after enabling it I quickly realised how much of a benefit using it over the traditional scroll bar would bring.</p>

<p>Not only does the enhanced scroll bar help you navigate your way around a class, but it also highlights any code changes you’ve made, along with errors, bookmarks and even your caret position.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/enhanded_scrollbar.png" alt="" /></p>

<p>You can enable the advanced scroll bar by right clicking the scroll bar within your code editor and selecting “Toolbar Options”. You can also access it via Tools &gt; Options &gt; Text Editor &gt; All Languages &gt; Scroll Bars.</p>

<p>Within the scroll bar’s options you have an array of choices allowing you to specify exactly what you’d like to see highlighted on the scroll bar. As I like to keep my code window as large as possible I opted for the narrower scroll bar, which still provides all of the normal feedback but doesn’t take up as much room as the medium or wide scroll bar.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/scrollbaroptions.png" alt="" /></p>

<h2 id="visual-studio-productivity-tip-5-gotofile-shortcut-using-resharper">Visual Studio Productivity Tip 5: GoToFile shortcut (using ReSharper)</h2>

<p>As mentioned in tip 1, if you’ve got ReSharper installed then you’ve already got some fantastic time-saving shortcuts at your disposal that go a long way towards improving your productivity when programming. One such shortcut that dramatically improved my productivity was Resharper’s GoToFile shortcut (CTRL + SHIFT + T).</p>

<p>The GoToFile shortcut is probably one of Resharper’s most powerful shortcuts, allowing you to quickly locate <strong>any</strong> type of file within your solution, including views, CSS files and JavaScript files. What’s more, Resharper’s GoToFile dialog can pattern match all sorts of strings. For instance, if you wish to locate a class named “LoginRepository.cs” you only need type “lr” and Resharper will display a list of all files matching your query.</p>

<h2 id="visual-studio-productivity-tip-6-gotodefinition-shortcut-f12">Visual Studio Productivity Tip 6: GoToDefinition shortcut (F12)</h2>

<p>Another time saving shortcut that really helped me speed up my code navigation and saves me from taking my hands off the keyboard is the GoToDefinition shortcut (F12).</p>

<p>When your keyboard cursor is over a class definition, simply press F12 to quickly open the highlighted class. The same also applies to interfaces. Whilst this may seem like a simple shortcut, once using it becomes a habit you’ll realise how much time you’ve wasted navigating to classes via the mouse.</p>

<h2 id="visual-studio-productivity-tip-7-gotofilemember-shortcut-using-resharper">Visual Studio Productivity Tip 7: GoToFileMember shortcut (using Resharper)</h2>

<p>Without wanting to make this post too Resharper focused, another great shortcut at the disposal of Resharper users is the GoToFileMember shortcut (ALT + \).</p>

<p>Using the GoToFileMember shortcut I’m able to quickly navigate to a property or a method in a few key strokes.</p>

<p>Simply press ALT + \ and you’re promptly greeted with a list of all of the class’s methods and properties. In addition to this you can also quickly find the property or method you’re looking for via the GoToFileMember dialog’s search field.</p>

<h2 id="visual-studio-productivity-tip-8-quickly-create-a-constructor">Visual Studio Productivity Tip 8: Quickly create a constructor</h2>

<p>I learned about this feature later on in my Visual Studio experiences, but it quickly became my staple way of creating constructors.</p>

<p>Next time you need to create a class constructor, instead of typing the entire constructor out, type “ctor” and then hit the tab key immediately afterwards. This will instantly add a constructor to your class for you.</p>
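<p>To illustrate, typing “ctor” then Tab inside a class body expands to an empty public constructor (the Customer class name here is just an example):</p>

```csharp
// Example class; the constructor below is what the built-in
// "ctor" code snippet expands to when you press Tab.
public class Customer
{
    public Customer()
    {
    }
}
```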

<h2 id="visual-studio-productivity-tip-9-quickly-create-properties">Visual Studio Productivity Tip 9: Quickly create properties</h2>

<p>In addition to creating constructors as highlighted in the above tip, Visual Studio has a similar feature for properties.</p>

<p>Next time you need to create a new property, simply type “prop” followed by the tab key to quickly create a property. Typing the type and then pressing tab again will automatically take you to the next placeholder along the declaration. You can also press Shift + Tab to go back to the previous placeholder.</p>
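<p>To illustrate, “prop” followed by Tab produces an auto-implemented property skeleton, with Tab cycling between the type and name placeholders (the Person class and Age property below are just example placeholder values):</p>

```csharp
// Example class; the property below is what the "prop" snippet
// produces once the type and name placeholders are filled in.
public class Person
{
    public int Age { get; set; }
}
```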

<h2 id="conclusion">Conclusion</h2>

<p>So there you have it: nine tips that helped me improve my productivity in Visual Studio 2013. Visual Studio is a feature-packed IDE that can save you a lot of work, and this is only the beginning of my journey to improve my productivity in it.</p>

<p>If any of these features are new to you then the worst thing to do at this point is to try out the tips mentioned and leave it at that. Improving your productivity in a tool like Visual Studio doesn’t happen overnight and takes repetition to build these shortcuts into your muscle memory. But once they are well practiced you’ll really notice a difference in your workflow.</p>

<p>Don’t stop there. It’s always important to stop and take an honest look at your workflow and ask yourself: “What tasks do I constantly repeat, and what can be done to reduce the time it takes to perform them? Is there a shortcut or Visual Studio extension that I could learn or use to speed up such actions?” Generally I find the answer is yes!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 07 Oct 2014 01:06:45 +0000</pubDate></item><item><title>My experiences programming on a Surface Pro 3</title><link>https://josephwoodward.co.uk/2014/09/programming-on-a-surface-pro-3</link><description></description><content:encoded><![CDATA[<p>Last week I decided to pick up a Surface Pro 3. I&rsquo;ve been following the Surface Pro hardware line for some time now but have been reluctant to commit to one as there were certain factors in the previous incarnations that meant it probably wasn&rsquo;t the best choice for me at the time.</p>

<p>With the Surface Pro&rsquo;s third iteration, all of the niggles I had with them were gone so I finally decided to jump in.</p>

<p>In order to fund the Surface Pro 3 I decided to part with my existing portable development machine - my Macbook Pro Retina that I was running Windows 8.1 on via Bootcamp.</p>

<p>Over the past few years my Macbook Pro has proven to be a great development laptop so making the choice to switch to a Surface Pro 3 wasn&rsquo;t an easy choice to make, and despite receiving glowing reviews I couldn&rsquo;t find any reviews or comments from a software developer&rsquo;s perspective on how it performs as a development machine. So having lived with the Surface Pro 3 for a week or so now I decided to highlight what I feel the pros and cons of the Surface Pro 3 are <strong>from a software developer&rsquo;s perspective,</strong> and how it performs as a portable development machine.</p>

<p>If you&rsquo;re reading this and are unaware of the hardware specifications/features of the Surface Pro 3 then you&rsquo;ll probably find more suitable information elsewhere. This post presumes you already know the specs of the Surface Pro 3 and are keen to find out how it performs as a development machine.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/programming_on_a_surface_pro_3.jpg" alt="" /></p>

<h2 id="programming-using-the-surface-pro-3-keyboard-and-trackpad">Programming using the Surface Pro 3 keyboard and trackpad</h2>

<p>My first concern when committing to the Surface Pro 3 was the keyboard. As any software developer will know, the keyboard is his main tool so it&rsquo;s important that it&rsquo;s responsive and comfortable to type on for long periods of time. Being a bit of a shortcut addict it was also important for me that the keyboard not be too small as to force my hands into weird positions when pressing some of the shortcuts I&rsquo;ve grown to rely on.</p>

<p>Thankfully I was pleasantly surprised. Naturally it took me a short while to feel comfortable touch typing on the keyboard, especially when programming and requiring the use of certain special characters, but before long I was programming away without any concerns, only occasionally hitting the wrong key - and that will work itself out in due time.</p>

<p>The keyboard is just the right size to feel familiar to the faster typers out there and provides a good level of feedback with a satisfying &ldquo;click&rdquo; as you press each key.</p>

<p>As for the track pad, I wouldn&rsquo;t go as far as saying it&rsquo;s terrible - but it&rsquo;s certainly not the best trackpad I&rsquo;ve used. It&rsquo;s perfectly fine for navigating around Visual Studio or IntelliJ IDEA but it can feel a little unresponsive at times.</p>

<p>UPDATE: Since writing this post <a href="http://josephwoodward.co.uk/2015/11/using-surface-pro-4-type-cover-on-surface-pro-3/">I&rsquo;ve upgraded my Surface Pro 3 with the Surface Pro 4 Type Cover</a> - something I would highly recommend as a lot of the issues mentioned regarding the keyboard and the trackpad have been resolved.</p>

<p><strong>Typing on your lap (or lapability as Microsoft call it)</strong></p>

<p>One of the criticisms of the Surface Pro 2 was the difficulty of typing on your lap. Because the keyboard clips into place with a strong magnet and the screen&rsquo;s kick stand only had three positions it could be set to, the limited angles made typing very difficult.</p>

<p>Thankfully this has been sorted in the Surface Pro 3. With the kick stand being adjustable to any level, you can lower or raise the screen to any level that suits you. What&rsquo;s more is the keyboard has an additional magnet in it that allows the keyboard to be docked at a slight angle to further aid the positioning of the keyboard to a suitable level based on how or where you&rsquo;re sitting.</p>

<p>In fact, as I write this I&rsquo;m lying in bed with the keyboard comfortably on my lap, with no movement or fear of the keyboard becoming disconnected from the display screen.</p>

<p>One of my fears before purchasing the Surface Pro 3 was the strength of the keyboard magnet. I was concerned that if, for some reason, I were to knock the screen that the magnet wouldn&rsquo;t be strong enough to keep the display from disconnecting and falling over, possibly onto the floor. Thankfully I quickly realised this should not be a concern as the magnet is incredibly strong and holds the keyboard and display firmly together. On the contrary it actually requires quite some force in order to separate the screen from the keyboard.</p>

<h2 id="is-the-surface-pro-3-screen-too-small-for-software-web-development">Is the Surface Pro 3 screen too small for software/web development?</h2>

<p>Before committing to selling my Macbook Pro Retina to fund my Surface Pro 3 purchase, I did have concerns about whether the screen would be large enough for programming on. The Surface Pro 2&rsquo;s screen was a little too small for my taste, but Microsoft have increased the size of the screen for the Surface Pro 3, providing me with more than enough screen real-estate to write code without the need for constantly scrolling across or up and down the page.</p>

<h2 id="running-visual-studio-on-the-surface-pro-3">Running Visual Studio on the Surface Pro 3</h2>

<p>Hardware wise, the Surface Pro 3 model I decided to go for was the i5, 8gb of memory and 256GB SDD drive, which was actually a downgrade from the i7 Macbook Pro Retina I was using before - but the additional portability of the Surface Pro 3 made the sacrifice one I was happy to make.</p>

<p>The majority of my development time on the Surface Pro 3 has been spent on some rather large Visual Studio 2013 projects, followed by some Android programming in IntelliJ IDEA. In that time I&rsquo;ve yet to experience any slow compile times. Visual Studio is a power-house of an IDE and loves to gobble up memory, but this doesn&rsquo;t appear to hamper the Surface Pro 3. All in all, I&rsquo;ve not noticed any performance difference between the Macbook Pro Retina and the Surface Pro 3 when programming and compiling applications.</p>

<p>Below is a screenshot of the system resources with Visual Studio 2013 and SQL Server running; as you can see there&rsquo;s more than enough memory and CPU remaining to ensure build times are fast and the system remains responsive.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/programming_on_a_surface_pro_3_performance.jpg" alt="" /></p>

<h2 id="additional-ports">Additional ports</h2>

<p>As someone who does a lot of Android development I was a bit disappointed to see that the Surface Pro 3 only features one USB port. I prefer to test my Android apps on a real phone rather than the Android emulator, so it&rsquo;s a slight annoyance that there isn&rsquo;t at least one more port available; it would save me from having to lug a USB adapter around with me.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Ultimately I&rsquo;m extremely happy with my purchase of the Surface Pro 3 and I don&rsquo;t have any regrets about selling my Macbook Pro Retina to pay for it. The Surface Pro 3 is a fantastic device that I find myself using more and more, both as a laptop and a tablet.</p>

<p>The fact that the Surface Pro 3 is more portable than my Macbook Pro means that I take it everywhere with me, allowing me to have a sneaky coding session when I have a bit of spare time.</p>

<p><em><strong>Edit: </strong>If you&rsquo;re considering purchasing the Surface Pro 3 as a development laptop then I highly recommend you listen to Scott Hanselman&rsquo;s latest Hanselminutes podcast &lsquo;<a href="http://hanselminutes.com/446/selecting-the-ultimate-developer-laptop-with-damian-edwards">Selecting the Ultimate Developer Laptop with Damian Edwards</a>&lsquo; where he and <a href="https://twitter.com/DamianEdwards">Damian Edwards</a> openly talk about how they find the Surface Pro 3 a great development laptop.</em></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 19 Sep 2014 00:56:13 +0000</pubDate></item><item><title>ASP.NET Naming Conventions - how to avoid calling everything a service</title><link>https://josephwoodward.co.uk/2014/08/asp-net-naming-conventions-how-to-avoid-calling-everything-a-service</link><description></description><content:encoded><![CDATA[<p>Naming things can be tricky. I&rsquo;ve found that when I&rsquo;m creating a new class the first class name that pops into my head is usually one that doesn&rsquo;t describe the function and purpose of the class I&rsquo;m creating adequately enough. It usually takes a few moments of stopping and thinking to myself &ldquo;<em>What can I call this class that will describe its role clearly to other developers and/or consumers of this API, class or interface - something succinct and no longer than 4 words long</em>&ldquo; (4 words or less is a general rule of thumb I&rsquo;ve picked up throughout my years programming that seems to hit the sweet spot).</p>

<p>Over time I would start to notice patterns occurring. For instance when developing web based applications following a typical <a href="http://stackoverflow.com/questions/312187/what-is-n-tier-architecture">n-tier application structure</a> I&rsquo;ve noticed it becomes very easy to label everything at the service/business logic level a &lsquo;service&rsquo;. With this in mind I started to do a little googling for naming ideas and <a href="http://stackoverflow.com/questions/1866794/naming-classes-how-to-avoid-calling-everything-a-whatevermanager">found this little gem of a question on every developer&rsquo;s favourite site</a> that highlights a whole range of different naming ideas and conventions that might help describe that class you&rsquo;re creating perfectly.</p>

<p>These are:</p>

<ul>
<li>Coordinator</li>
<li>Builder</li>
<li>Writer</li>
<li>Reader</li>
<li>Handler</li>
<li>Container</li>
<li>Protocol</li>
<li>Target</li>
<li>Converter</li>
<li>Controller</li>
<li>View</li>
<li>Factory</li>
<li>Entity</li>
<li>Bucket</li>
<li>Attribute</li>
<li>Type</li>
<li>Helper</li>
<li>Collection</li>
<li>Info</li>
<li>Provider</li>
<li>Processor</li>
<li>Element</li>
<li>Manager</li>
<li>Node</li>
<li>Option</li>
<li>Context</li>
<li>Designer</li>
<li>Editor</li>
</ul>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 12 Aug 2014 00:52:53 +0000</pubDate></item><item><title>Sessions in ASP.NET MVC using Dependency Injection</title><link>https://josephwoodward.co.uk/2014/06/sessions-asp-net-mvc-using-dependency-injection</link><description></description><content:encoded><![CDATA[<p>A common approach I see whilst browsing tutorials or StackOverflow questions relating to reading and writing to sessions in ASP.NET MVC is the following:</p>

<pre><code>[Serializable]
public class UserProfileSessionData
{
    public int UserId { get; set; }

    public string EmailAddress { get; set; }

    public string FullName { get; set; }
}

public class ExampleLoginController : Controller {

    [HttpPost]
    public ActionResult Login(LoginModel model)
    {
        if (ModelState.IsValid)
        {
            var profileData = new UserProfileSessionData {
                UserId = model.UserId,
                EmailAddress = model.EmailAddress,
                FullName = model.FullName
            };

            this.Session[&quot;UserProfile&quot;] = profileData;
            return RedirectToAction(&quot;Index&quot;); // redirect to a suitable action on success
        }

        return View(model);
    }
}
</code></pre>

<p>And then retrieving session data like this:</p>

<pre><code>public class ExampleDetailController : Controller
{   ...
    public ActionResult ViewProfile()
    {
        ...
        var sessionData = (UserProfileSessionData) this.Session[&quot;UserProfile&quot;];
        ...
    }
    ...
}
</code></pre>

<h2 id="what-s-the-problem-with-doing-it-this-way">What&rsquo;s the problem with doing it this way?</h2>

<p>Whilst this solution works, there are improvements that can be made. Take a look at the code for a moment and try to think of what problems the above approach could lead to as the project matures.</p>

<p>First of all, what happens if we want to modify what information gets stored in the session? Or even worse, what if we want to switch from session-based storage to database storage? We&rsquo;d have to spend a great deal of time going through every controller that references the session and making changes.</p>

<p>Ultimately, what we&rsquo;re doing by accessing the Session object directly from within the controller is breaking the D (<a href="http://en.wikipedia.org/wiki/Dependency_inversion_principle">dependency inversion</a>) of the SOLID principles - a set of <a href="http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)">five principles of object-orientated programming and design</a> aimed at encouraging the creation of extensible, flexible and testable software; something all developers should strive to do.</p>

<h2 id="what-is-the-dependency-inversion-principle">What is the Dependency Inversion principle?</h2>

<p>One of the most memorable definitions of dependency inversion that I&rsquo;ve heard, and one which has stuck with me, is:</p>

<blockquote>
<p>&ldquo;Program to an interface, not an implementation&rdquo;</p>
</blockquote>

<p>In the context of the code sample above, by programming directly against the Session object (an implementation) we&rsquo;re creating a dependency within our controller. We can avoid this by using an interface type to abstract and encapsulate the session&rsquo;s implementation details - allowing us to program against a simplified interface instead. Doing so leads to a loosely coupled controller that doesn&rsquo;t depend directly on the existence of the Session object, but instead depends on a type that implements our IUserInformation interface.</p>

<p>The powerful thing about this is that the controller doesn&rsquo;t need to be concerned with the implementation details, meaning we could easily switch the implementation to read the data from a database instead of a session object - without touching anything in the controller.</p>

<p>This also makes the controller far more testable as we can easily mock the session&rsquo;s abstraction.</p>

<h2 id="ok-i-m-following-so-far-so-how-do-we-invert-our-controller-s-dependencies">Ok, I&rsquo;m following so far. So how do we &lsquo;invert&rsquo; our controller&rsquo;s dependencies?</h2>

<p>There are <a href="http://martinfowler.com/articles/injection.html#FormsOfDependencyInjection">multiple types of dependency injection</a> but the most common pattern for this type of challenge is constructor injection using an <a href="http://martinfowler.com/articles/injection.html">Inversion of Control Container</a>.</p>

<p>In this example I&rsquo;m going to use <a href="http://docs.structuremap.net/">StructureMap</a> as my IoC container of choice as it&rsquo;s one I&rsquo;m most comfortable with and prefer using in my day-to-day life.</p>

<p>The first step is to install the NuGet packages for both <a href="http://www.nuget.org/packages/structuremap/">StructureMap</a> and <a href="http://www.nuget.org/packages/structuremap.web/">StructureMap.Web</a>. Now we&rsquo;ve got the StructureMap packages, we need to hook StructureMap into ASP.NET MVC&rsquo;s dependency resolver by creating a new class (in this instance I called it StructureMapDependencyResolver.cs) and implementing the IDependencyResolver interface from the System.Web.Mvc namespace, like so:</p>

<pre><code>public class StructureMapDependencyResolver : IDependencyResolver
{
    private readonly IContainer _container;

    public StructureMapDependencyResolver(IContainer container)
    {
        _container = container;
    }

    public object GetService(Type serviceType)
    {
        if (serviceType.IsAbstract || serviceType.IsInterface)
        {
            return _container.TryGetInstance(serviceType);
        }
        return _container.GetInstance(serviceType);
    }

    public IEnumerable&lt;object&gt; GetServices(Type serviceType)
    {
        return _container.GetAllInstances(serviceType).Cast&lt;object&gt;();
    }
}
</code></pre>

<p>Once this is done, simply register our StructureMapDependencyResolver within the Application_Start method in Global.asax.cs, like so:</p>

<pre><code>public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        ...
        ObjectFactory.Initialize(x =&gt; x.AddRegistry&lt;WebsiteRegistry&gt;());
    	DependencyResolver.SetResolver(new StructureMapDependencyResolver(ObjectFactory.Container));
        ...
    }
}
</code></pre>

<p>Now that StructureMap is hooked up to our ASP.NET MVC application, all that&rsquo;s left to do is define our dependencies by creating a WebsiteRegistry.cs class that extends StructureMap.Configuration.DSL.Registry, like so:</p>

<pre><code>public class WebsiteRegistry : Registry
{
    public WebsiteRegistry()
    {
        this.For&lt;IUserInformation&gt;().HybridHttpOrThreadLocalScoped().Use(() =&gt; GetUserInformationFromSession());
    }

    public static IUserInformation GetUserInformationFromSession()
    {
        HttpSessionStateBase session = new HttpSessionStateWrapper(HttpContext.Current.Session);

        // Check to see if the session already exists, if it does then cast it to IUserInformation
        if (session[&quot;UserSessionData&quot;] != null)
        {
            return session[&quot;UserSessionData&quot;] as IUserInformation;
        }

        // Otherwise create a new instance of UserInformation.cs
        session[&quot;UserSessionData&quot;] = new UserInformation();
        return session[&quot;UserSessionData&quot;] as IUserInformation;
    }
}
</code></pre>

<p>Once we&rsquo;ve added our Registry we need to define how our instance of IUserInformation is constructed. In this case we want to deserialize our session object and cast it to an instance of IUserInformation.</p>

<p>We can now refactor our ExampleDetailController.cs file to move its dependencies outside of the controller and program against the IUserInformation interface.</p>

<pre><code>public class ExampleDetailController : Controller {

    private readonly IUserInformation userInformation;

    public ExampleDetailController(IUserInformation userInformation) {
        this.userInformation = userInformation;
    }

    public ActionResult ViewProfile()
    {
        ...
        string emailAddress = this.userInformation.EmailAddress;
        ...
    }
}
</code></pre>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 13 Jun 2014 00:49:08 +0000</pubDate></item><item><title>Golden Nuget Package #1: RouteDebugger by Phil Haack</title><link>https://josephwoodward.co.uk/2014/06/golden-nuget-package-1-routedebugger-by-phil-haack</link><description></description><content:encoded><![CDATA[<p>Whilst working on a side project written in ASP.NET MVC (version 5), I ran into some routing issues that involved routes just flat out not working. To make matters worse I was using Areas, which add a little more complexity to routing (not much, but just enough to spice things up) by splitting routes up into separate areas.</p>

<p>After spending about an hour trying to figure out why my route wasn&rsquo;t working, I decided to pull out the Google-fu and look into debugging ASP.NET MVC routes, which is when I came across a golden NuGet package written by <a href="http://haacked.com/about/">Phil Haack</a> called <a href="https://www.nuget.org/packages/routedebugger/2.1.4">RouteDebugger</a>.</p>

<p>Setup was simple: after downloading the RouteDebugger package via Visual Studio&rsquo;s NuGet Package Manager and enabling it within the Web.config file by adding the following, it was up and running.</p>

<pre><code>&lt;add key=&quot;RouteDebugger:Enabled&quot; value=&quot;true&quot; /&gt;
</code></pre>

<p>Now when you fire up your application you&rsquo;re greeted with a useful debugging pane at the bottom of the page (see below) showing exactly which routes your current URL matches and which it doesn&rsquo;t. In my case this proved extremely valuable, as I was able to quickly identify that my URL was being captured by a far more general route before my intended route was ever hit.</p>

<p><img src="http://assets.josephwoodward.co.uk/blog/RouteDebugger.png" alt="" /></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 09 Jun 2014 00:45:45 +0000</pubDate></item><item><title>Creating reusable HTML components in ASP.NET MVC using Razor</title><link>https://josephwoodward.co.uk/2014/05/creating-reusable-html-components-in-asp-net-mvc-using-razor</link><description></description><content:encoded><![CDATA[<p>Whilst working on a side project I started to notice I was using a lot of the same HTML to create floating boxes. Whilst these boxes looked the same, the content of them changed quite dramatically; for instance, some of the content boxes contained forms, others contained straight text, like so:</p>

<pre><code>&lt;div class=&quot;panel&quot;&gt;
    &lt;div class=&quot;panel-inner&quot;&gt;
        &lt;h2 class=&quot;panel-title&quot;&gt;Panel Title&lt;/h2&gt;
        &lt;div class=&quot;panel-content&quot;&gt;
            /* Can I pass content to be rendered in here? */
        &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;
</code></pre>

<p>As my side project grew, I found myself making more and more modifications to these HTML components, so I started to look at how I could encapsulate the component for greater flexibility and extensibility.</p>

<p>The solution I ended up with was creating an HTML extension modelled on the way the Html.BeginForm() extension works: it writes the opening markup directly to the view&rsquo;s context and then returns an instance of IDisposable, whose Dispose call writes the closing HTML of my component to the view context - essentially wrapping the content passed into the HTML extension in my HTML component.</p>

<p>Below is an example of what the code looks like:</p>

<pre><code>namespace System.Web.Mvc
{
    public static class HtmlHelperExtensions
    {
        public static HtmlPanelComponent PanelComponent(this HtmlHelper html, string title)
        {
            html.ViewContext.Writer.Write(
            &quot;&lt;div class=\&quot;panel\&quot;&gt;&quot; +
            &quot;&lt;div class=\&quot;panel-inner\&quot;&gt;&quot; +
            &quot;&lt;h2 class=\&quot;panel-title\&quot;&gt;&quot; + title + &quot;&lt;/h2&gt;&quot; +
            &quot;&lt;div class=\&quot;panel-content\&quot;&gt;&quot;
            );

            return new HtmlPanelComponent(html.ViewContext);
        }
    }

    public class HtmlPanelComponent : IDisposable
    {
        private readonly ViewContext _viewContext;
        public HtmlPanelComponent(ViewContext viewContext)
        {
            _viewContext = viewContext;
        }
        public void Dispose()
        {
            _viewContext.Writer.Write(
            &quot;&lt;/div&gt;&quot; +
            &quot;&lt;/div&gt;&quot; +
            &quot;&lt;/div&gt;&quot;
            );
        }
    }
}
</code></pre>

<p>Using this new HTML extension I&rsquo;m now able to reuse my HTML component and fill the panel&rsquo;s content in with whatever I please, like so:</p>

<pre><code>@using (Html.PanelComponent(&quot;Panel Title&quot;))
{
    &lt;p&gt;Welcome back, please select from the following options&lt;/p&gt;
    &lt;a href=&quot;#&quot;&gt;Profile&lt;/a&gt;
    &lt;a href=&quot;#&quot;&gt;My Details&lt;/a&gt;
}
</code></pre>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 09 May 2014 00:39:12 +0000</pubDate></item><item><title>Generics and Lambda Expressions in Javascript? With TypeScript you can!</title><link>https://josephwoodward.co.uk/2014/04/generics-and-lambda-expressions-in-javascript-with-typescript-you-can</link><description></description><content:encoded><![CDATA[<p>Whilst on my lunch break I took some time out to watch the TypeScript presentation at Build 2014 presented by C# and TypeScript&rsquo;s father <a href="http://en.wikipedia.org/wiki/Anders_Hejlsberg">Anders Hejlsberg</a>. It&rsquo;s a fantastic presentation by Anders that really highlights how powerful TypeScript is and the benefits it can bring to JavaScript development.</p>

<p>If you&rsquo;re short on time then I&rsquo;d recommend fast forwarding to the 20 minute mark where Anders runs through using Generics and Lambda Expressions (<a href="http://www.nczonline.net/blog/2013/09/10/understanding-ecmascript-6-arrow-functions/">instead of waiting for ECMAScript 6</a>) in TypeScript.</p>
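<p>To give a flavour of those two features, here&rsquo;s a short sketch of my own (an illustrative example, not code from the talk) combining a generic function with arrow-function (lambda) syntax in TypeScript:</p>

```typescript
// A generic helper: the type parameter T is inferred at each call site
function firstOrDefault<T>(items: T[], fallback: T): T {
    return items.length > 0 ? items[0] : fallback;
}

// An arrow (lambda) expression with explicit parameter and return types
const double = (n: number): number => n * 2;

const numbers: number[] = [3, 5, 8];
const first = firstOrDefault(numbers, 0); // inferred as number
const doubled = numbers.map(double);      // [6, 10, 16]
```

<p>Once compiled with tsc, the generics and type annotations are erased entirely, leaving plain JavaScript functions that run anywhere.</p>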

<p><a href="http://video.ch9.ms/sessions/build/2014/3-576.mp4"><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/TypescriptBuild.jpg" alt="" /></a></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 30 Apr 2014 00:32:43 +0000</pubDate></item><item><title>Upgrading to StructureMap 3</title><link>https://josephwoodward.co.uk/2014/04/upgrading-to-structuremap-3</link><description></description><content:encoded><![CDATA[<p><em>Note: For anyone arriving to this post looking for the StructureMap 3 documentation then <a href="http://structuremap.github.io/structuremap/">its official home is here</a>. The documentation isn&rsquo;t all there, but <a href="http://jeremydmiller.com/">Jeremy Miller</a> (author of StructureMap) is making his way through writing the new documentation.</em></p>

<p>As explained in previous blog posts, I&rsquo;m a huge fan of StructureMap and use it as my IoC container of choice when writing .NET applications. With the release of StructureMap 3 I was happy to see some rather large updates. However, when I tried to update an application from version 2 to StructureMap 3 I quickly ran into configuration issues as a result of the API changes, and with StructureMap 3 being as new as it is there&rsquo;s little documentation at the moment. Eventually I managed to figure out where the missing classes had been moved to, so I thought I&rsquo;d document it here to help anyone experiencing similar head-scratching.</p>

<h2 id="q1-httpcontext-related-scoping-has-moved"><strong>Q1: HttpContext related scoping has moved</strong></h2>

<p>Updating my ASP.NET MVC application that uses HttpContext, I quickly became aware that HybridHttpOrThreadLocalScoped had vanished. It has been moved into a separate StructureMap.Web NuGet <a href="http://www.nuget.org/packages/structuremap.web/">package that can be found here</a>. It can also be found within Visual Studio&rsquo;s package manager by searching for &ldquo;StructureMap.Web&rdquo;. Adding this package to my solution quickly resolved the HttpContext-related issues I ran into.</p>

<p><a href="http://jeremydmiller.com/2014/03/31/structuremap-3-0-is-live/">Jeremy Miller did mention this on his blog</a>, however I seem to have skimmed over that little detail as I rushed to update like an excited kid at Christmas.</p>

<h2 id="q2-setallproperties-where-are-you"><strong>Q2: SetAllProperties(), where are you?</strong></h2>

<p>SetAllProperties has now been moved into a class called Policies and can be accessed like so: <code>this.Policies.SetAllProperties();</code></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sat, 19 Apr 2014 00:30:34 +0000</pubDate></item><item><title>Space shooter I wrote in Javascript using Typescript</title><link>https://josephwoodward.co.uk/2014/04/space-shooter-written-in-javascript-using-typescript</link><description></description><content:encoded><![CDATA[<p>Having heard lots about Typescript and how awesome it is, I decided to spend a few lunch breaks at work and a few evenings at home looking into what the fuss was all about. I initially started writing simple CSS based stuff that simply drew elements to the DOM, but quickly moved onto playing around with HTML Canvas. Eventually I ended up writing a <a href="http://josephwoodward.co.uk/content/simplejsgame/">Javascript game with it called Space Shooter</a>.</p>

<p><a href="http://josephwoodward.co.uk/content/simplejsgame/"><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/spaceshooter.jpg" alt="" /></a></p>

<p>All of the <a href="https://github.com/JosephWoodward/SimpleJsGame">source code is up on Github for those interested</a>.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Wed, 16 Apr 2014 00:26:45 +0000</pubDate></item><item><title>Fixing the &#34;your project file uses a different version of TypeScript compiler&#34; error in Visual Studio 2013</title><link>https://josephwoodward.co.uk/2014/04/fixing-the-your-project-file-uses-a-different-version-of-typescript-compiler-error-in-vs2013</link><description></description><content:encoded><![CDATA[<p>Last night, after downloading and installing the Visual Studio 2013 Update 2 Release Candidate update which included TypeScript 1.0, I loaded up a TypeScript project that I&rsquo;d been playing around with and started to code. Soon after starting I clicked save, expecting Visual Studio to compile the TypeScript, but quickly encountered the following build failure error:</p>

<p><strong>&ldquo;Your project file uses a different version of the TypeScript compiler and tools than is currently installed on this machine. No compiler was found at C:\Program Files (x86)\Microsoft SDKs\TypeScript\0.9\tsc.exe. You may be able to fix this problem by changing the <TypeScriptToolsVersion> element in your project file. <ProjectName>&rdquo;</strong></p>

<p>Scratching my head I wondered where I needed to go to reference the new Type Script 1.0 compiler that had installed along with Visual Studio 2013 Update 2 Release Candidate. After having a quick look around the project options such as the TypeScript Build panel within project properties I was still stumped.</p>

<p>It turns out that you specify the TypeScript version number within the project&rsquo;s .csproj file itself. Simply right-click the project that requires TypeScript compilation, click &ldquo;Unload Project&rdquo;, then right-click the unloaded project again and click &ldquo;Edit <ProjectName>.csproj&rdquo; to view the project&rsquo;s configuration. From here you should see a TypeScriptToolsVersion property that you need to update to your desired version (in my case 1.0).</p>
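<p>For reference, the TypeScriptToolsVersion element named in the error message sits inside a PropertyGroup in the .csproj; a minimal fragment (the version number here is illustrative - use whichever version you have installed) looks like this:</p>

```xml
<PropertyGroup>
  <!-- Tells Visual Studio which TypeScript compiler/tools version to use -->
  <TypeScriptToolsVersion>1.0</TypeScriptToolsVersion>
</PropertyGroup>
```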

<p>Once you&rsquo;ve updated this, simply reload the project and you&rsquo;re good to go!</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 04 Apr 2014 00:23:48 +0000</pubDate></item><item><title>Why you shouldn&#39;t suppress errors in PHP using the @ operator</title><link>https://josephwoodward.co.uk/2014/03/why-you-shouldnt-suppress-errors-in-php-using-operators</link><description></description><content:encoded><![CDATA[<h2 id="a-brief-story-about-code-igniter-a-blank-white-page-and-ms-sql-server">A brief story about Code Igniter, a blank white page and MS SQL Server</h2>

<p>Today I was setting up a project in Code Igniter that connected to a Microsoft SQL Server database (in my case it was a local instance of SQL Server 2008 Express) and I kept experiencing an incredibly unhelpful blank white screen.</p>

<p>Now I&rsquo;ve worked with PHP for about 8 years professionally but never spent a huge amount of time with Code Igniter so it was all a little new to me. However, having previously worked with a few other MVC frameworks it didn&rsquo;t take long for me to find my way around and become familiar with it. After performing a few checks and double checks to make sure I wasn&rsquo;t doing anything silly I still hadn&rsquo;t come any closer to figuring out why Code Igniter was throwing me a blank white page.</p>

<p>I started to dig down into the way Code Igniter was connecting to the SQL Server database via its MS SQL Server driver and it suddenly hit me - I&rsquo;d fallen victim to the <em>&ldquo;we don&rsquo;t like seeing errors so let&rsquo;s suppress them by using the @ operator</em>&rdquo; school of thought.</p>

<p>That&rsquo;s right, <a href="https://github.com/EllisLab/CodeIgniter/blob/develop/system/database/drivers/mssql/mssql_driver.php?source=c">Code Igniter&rsquo;s MS SQL Server driver</a> was suppressing a simple error that would have helped me solve the problem within minutes. Interestingly <a href="https://github.com/EllisLab/CodeIgniter/issues/2424">I&rsquo;m not the only one who appears to have fallen victim to this either</a>.</p>

<h2 id="why-suppressing-warnings-and-errors-is-bad">Why suppressing warnings and errors is bad</h2>

<p><a href="http://davidwalsh.name/suppress-php-errors-warnings">One common reason some might advocate suppressing errors</a> is that showing errors to users is not only messy and unprofessional, but also poses a potential security risk, as errors or warnings can contain important configuration information such as database names and sometimes even usernames or passwords. This is a perfectly valid concern; however, errors are thrown for a reason. Something somewhere is going wrong and it&rsquo;s important that errors are handled correctly. <strong>Ignoring errors is not handling them</strong>.</p>

<p>One way to correctly handle such errors is to wrap the code liable to fail in a try/catch statement and write any occurring exceptions to a log file. Doing this not only prevents users from seeing the error, but also gives us a way to track whether an error is being thrown in the first place - something that becomes increasingly important as an application or website grows in size and traffic, as <a href="http://seanmonstar.com/post/909029460/php-error-suppression-performance">the suppression of errors can start to affect performance</a>.</p>

<p>And, as my story above highlighted, you&rsquo;re no longer hiding an important error from someone who might be using your code.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Fri, 14 Mar 2014 01:46:59 +0000</pubDate></item><item><title>The architecture of StackOverflow.com</title><link>https://josephwoodward.co.uk/2014/02/the-architecture-of-stackoverflow</link><description></description><content:encoded><![CDATA[<p>Unless you&rsquo;re a developer who&rsquo;s been living under a rock for the past 5 years, you&rsquo;ll no doubt have heard of Stack Overflow, and will already be aware of what a great resource it is for developers.</p>

<p>Since its launch in August 2008, Stack Overflow has spread like wildfire throughout the development community, quickly establishing itself as the place to go to get programming and development questions answered. No longer did we need to trawl through the noise of forum posts looking for solutions to questions or fixes for bugs; all we needed to do was type our question into Google with a suffix of &ldquo; + stackoverflow&rdquo; to find exactly what we were looking for.</p>

<p><a href="http://skeptics.stackexchange.com/questions/18539/has-stack-overflow-saved-billions-of-dollars-in-programmer-productivity">Whilst skeptical of claims</a> that <a href="https://twitter.com/ID_AA_Carmack/status/380018564792455168">Stack Overflow has saved companies billions of dollars in developer productivity</a>, one thing is for sure: whether or not the total figure is in the billions, Stack Overflow has certainly saved a lot of companies a lot of money in the 5 years it&rsquo;s been alive.</p>

<p>Currently sitting above sites such as Netflix, the BBC and Flickr in the Alexa Top 100 most visited websites (position 58 at the time of writing), <a href="http://stackexchange.com/sites?view=list#traffic">Stack Overflow receives roughly 1.7 million visits a day</a>. With such high traffic numbers, and being a huge fan of system architecture, I was really interested to know a bit more about what goes on under the hood of Stack Overflow. We all know that it&rsquo;s <strong>powered by Microsoft&rsquo;s fantastic ASP.NET MVC framework</strong>, but what about the structure, the database access, the servers and all the other juicy bits that get us developers salivating?! Thankfully Stack Overflow software engineer <a href="https://twitter.com/sklivvz">Marco Cecconi</a> gave a talk at the <a href="http://www.developer-conference.eu/session_post/the-architecture-of-stack-overflow/">Developer Conference 2013</a> on this very subject.</p>

<p>At just over 46 minutes long, I highly recommend you check out the talk if, like me, you love to know about systems and software architecture.</p>

<p><a href="http://youtu.be/OGi8FT2j8hE"><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/so.jpg" alt="" /></a></p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 24 Feb 2014 00:17:36 +0000</pubDate></item><item><title>Microsoft&#39;s latest advert reminds the world why technology (and developers) rocks</title><link>https://josephwoodward.co.uk/2014/02/microsofts-advert-reminds-world-why-technology-developers-rock</link><description></description><content:encoded><![CDATA[<p>I logged into Facebook this evening to see a new Microsoft advert on my news stream. I&rsquo;d grimaced at Microsoft&rsquo;s <a href="http://techcrunch.com/2013/08/26/microsofts-new-scroogled-ad-sets-fresh-low-for-bad-writing-boring-argument/">recent adverts</a>, which do nothing but fire low, unprofessional blows at competitors - reminiscent of an Apple keynote making desperate attacks at Android rather than demonstrating why, as consumers, we should favour their products - so I wasn&rsquo;t sure I wanted to continue. But with the accompanying text enticingly reading <em>&ldquo;Without developers, a lot of this technology would just be dreams. You help make dreams come true. For those about to code, we salute you!&rdquo;</em>, how could I not?!</p>

<p><a href="http://www.youtube.com/embed/qaOvHKG0Tio"><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/ms-advert.jpg" alt="" /></a></p>

<p>Bravo Microsoft, I loved this advert. It was incredibly moving and highlighted how important software and technology is to the world - something that is so easily forgotten, helped by the fact it&rsquo;s so silently prevalent; until we experience a power cut or broadband outage and suddenly realise how much it enriches our lives.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Sun, 02 Feb 2014 23:44:52 +0000</pubDate></item><item><title>Mocking AutoMapper with the IMapperEngine</title><link>https://josephwoodward.co.uk/2014/01/mocking-automapper</link><description></description><content:encoded><![CDATA[<p>I&rsquo;ve been using <a href="http://automapper.org/">Jimmy Bogard&rsquo;s awesome AutoMapper library</a> for a while now and find it an invaluable tool for mapping entities within my ASP.NET MVC application to DTOs (Data transfer objects) before passing them through the various layers within my application. This keeps my presentation layer nicely decoupled as it&rsquo;s no longer referencing entities, and instead only knows about the data transfer objects.</p>

<p>If you&rsquo;re new to AutoMapper and have been following a lot of the tutorials online then your mappings might look a little something like this:</p>

<pre><code>public class BlogService : IBlogService
{
    ...

    public BlogPostDto GetPost(int id)
    {
        BlogPost blogPost = this.blogRepository.GetPost(id);
        if (blogPost == null)
        {
            throw new EntityNotFoundException(&quot;blogPost&quot;);
        }

        return Mapper.Map&lt;BlogPost, BlogPostDto&gt;(blogPost);
    }

    ...
}
</code></pre>

<p>Whilst doing it this way will work, if we want to stick to Uncle Bob&rsquo;s SOLID principles and promote abstraction and code testability then we really want to do it a different way, so as not to couple AutoMapper&rsquo;s static Mapper.Map class to, in this instance, our BlogService class.</p>

<p>This is where AutoMapper&rsquo;s IMappingEngine interface comes in handy.</p>

<h2 id="imappingengine-interface">IMappingEngine Interface</h2>

<p>By using the IMappingEngine interface we&rsquo;re able to &ldquo;<em>program to an interface and not an implementation</em>&rdquo; (<a href="http://stackoverflow.com/questions/383947/what-does-it-mean-to-program-to-an-interface">more on the importance of that here</a>) by using our IoC container of choice to inject the IMappingEngine interface into our object - in this case a blog service.</p>

<p>If you take a moment to <a href="https://github.com/AutoMapper/AutoMapper/blob/develop/src/AutoMapper/IMappingEngine.cs?source=cc">look at the methods the IMappingEngine interface exposes</a>, you&rsquo;ll see it gives us access to all of the mapping methods we need (specifically the Mapper.Map method used above). Great! Now all we need to do is inject the mapping engine and we&rsquo;re good to go.</p>

<h2 id="injecting-imappingengine-using-an-ioc-container">Injecting IMappingEngine using an IoC Container</h2>

<p>In this instance I&rsquo;m using StructureMap for my IoC container needs as I&rsquo;m comfortable with it, but any IoC container will do.</p>

<p>To start with we need to map AutoMapper&rsquo;s Mapper.Engine to the IMappingEngine interface within our custom StructureMap registry (I&rsquo;ve included the full namespace in the code below to make it easier for those following along).</p>

<pre><code>public class WebsiteRegistry : StructureMap.Configuration.DSL.Registry
{
    public WebsiteRegistry()
    {
        // ...other registrations...

        Mapper.AddProfile(new AutoMapperMappingProfiles());
        this.For&lt;IMappingEngine&gt;().Use(() =&gt; Mapper.Engine);
    }
}
</code></pre>

<p>Then all we have to do is register our new registry with StructureMap and call <code>StructureMapCoreSetup.Configure()</code> in our <a href="http://blog.ploeh.dk/2011/07/28/CompositionRoot/">application&rsquo;s composition root</a>. As this is an ASP.NET MVC application, the composition root is Global.asax.cs.</p>

<pre><code>public static class StructureMapCoreSetup
{
    public static void Configure()
    {
        ObjectFactory.Initialize(x =&gt;
        {
            x.AddRegistry&lt;WebsiteRegistry&gt;();
        });
    }
}
</code></pre>

<p>Now that our instance of IMappingEngine is configured with our dependency injection container, we&rsquo;re able to inject AutoMapper whenever we need it, turning our first example into the one below.</p>

<pre><code>public class BlogService : IBlogService
{

    private readonly IMappingEngine mappingEngine;
    private readonly IBlogRepository blogRepository;

    public BlogService(IMappingEngine mappingEngine, IBlogRepository blogRepository)
    {
        this.mappingEngine = mappingEngine;
        this.blogRepository = blogRepository;
    }

    ...

    public BlogPostDto GetPost(int id)
    {
        BlogPost blogPost = this.blogRepository.GetPost(id);
        if (blogPost == null)
        {
            throw new EntityNotFoundException(&quot;blogPost&quot;);
        }

        return this.mappingEngine.Map&lt;BlogPost, BlogPostDto&gt;(blogPost);
    }

    ...
}
</code></pre>

<p>Now that we&rsquo;re injecting the MappingEngine we&rsquo;re easily able to mock it when writing tests, and can continue to program happy in the knowledge that we&rsquo;ve abstracted AutoMapper&rsquo;s mapping functionality and are programming to an interface rather than an implementation.</p>

<p>See below for a simple test case using a mocked IMappingEngine instance:</p>

<pre><code>[Test]
public void Get_Post_By_Id_Test()
{
    var blogPost = new BlogPost();
    var blogPostDto = new BlogPostDto();

    /* Setup */
    var mappingEngine = new Mock&lt;IMappingEngine&gt;();
    mappingEngine.Setup(x =&gt; x.Map&lt;BlogPost, BlogPostDto&gt;(blogPost)).Returns(blogPostDto);

    this._blogRepository.Setup(x =&gt; x.GetPost(It.IsAny&lt;int&gt;())).Returns(blogPost);

    /* Test */
    this._blogService = new BlogService(mappingEngine.Object, this._blogRepository.Object);
    BlogPostDto blogPostResult = this._blogService.GetPost(5);

    /* Assert */
    Assert.NotNull(blogPostResult);
    Assert.IsInstanceOf&lt;BlogPostDto&gt;(blogPostResult);
}
</code></pre>

<h2 id="taking-automapper-further">Taking AutoMapper Further</h2>

<p>If we wanted to further abstract AutoMapper from our web application then we could use the Facade Pattern to encapsulate the IMappingEngine and inject that instead. This means that if a time came where we wanted to completely replace AutoMapper with another object mapping engine, or even write our own, then we wouldn&rsquo;t need to violate the <a href="http://en.wikipedia.org/wiki/Liskov_substitution_principle">Liskov Substitution Principle</a>. I will save this for another post though.</p>
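<p>As a rough sketch of what such a facade might look like (the IObjectMapper name and its shape are illustrative assumptions, not taken from AutoMapper or the example above), we could hide IMappingEngine behind our own interface:</p>

<pre><code>// Hypothetical facade interface - the name and shape are illustrative.
public interface IObjectMapper
{
    TDestination Map&lt;TSource, TDestination&gt;(TSource source);
}

// An AutoMapper-backed implementation that simply delegates to IMappingEngine.
public class AutoMapperAdapter : IObjectMapper
{
    private readonly IMappingEngine mappingEngine;

    public AutoMapperAdapter(IMappingEngine mappingEngine)
    {
        this.mappingEngine = mappingEngine;
    }

    public TDestination Map&lt;TSource, TDestination&gt;(TSource source)
    {
        return this.mappingEngine.Map&lt;TSource, TDestination&gt;(source);
    }
}
</code></pre>

<p>Services would then depend on IObjectMapper, so swapping AutoMapper out would only mean writing a new IObjectMapper implementation, leaving the consuming code untouched.</p>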
]]></content:encoded><author>Joseph Woodward</author><pubDate>Tue, 28 Jan 2014 23:42:17 +0000</pubDate></item><item><title>JCrop integration with CKFinder</title><link>https://josephwoodward.co.uk/2014/01/jcrop-integration-with-ckfinder</link><description></description><content:encoded><![CDATA[<p>Recently I was working on a project that required cropping capabilities in CKFinder. After looking around I couldn&rsquo;t find any suitable solution so decided to write my own. Having never built a plugin for CKFinder before, I hit the documentation to see what was involved. It&rsquo;s all pretty straightforward, and after a little bit of digging into the API I got started.</p>

<p>First of all I had to find a suitable cropping plugin, and after a bit of googling I stumbled upon a jQuery-based image cropping script called <a href="http://deepliquid.com/content/Jcrop.html">JCrop</a> that did everything I needed. It had a nice clean API, was regularly updated and worked well across multiple browsers.</p>

<p>All of the image processing is performed in PHP, but as I&rsquo;m doing more and more C# lately I&rsquo;ll probably write a C# version too.</p>

<p><img src="https://s3-eu-west-1.amazonaws.com/cdn.josephwoodward.co.uk/blog/jcrop.jpg" alt="" /></p>

<p>In total it took me an evening to write and works pretty well. You can drag across your chosen image to set the crop or you can specify the width and/or height of the crop and image quality of the cropped image.</p>

<p>I’ve also <a href="https://github.com/JosephWoodward/CKFinderJcrop">added it to Github</a> so if anyone would like to use/fork it then feel free to do so.</p>
]]></content:encoded><author>Joseph Woodward</author><pubDate>Thu, 23 Jan 2014 23:28:35 +0000</pubDate></item><item><title>How to create a virtual directory in IIS Express</title><link>https://josephwoodward.co.uk/2014/01/how-to-create-a-virtual-directory-in-iis-express</link><description></description><content:encoded><![CDATA[<p>I’m currently working on a rather large side project in ASP.NET MVC that requires an administrative area best suited to its own project. Inside this project is an upload folder containing assets such as uploaded images and files. Whilst this is all fine and good, a separate project within the same solution needs to reference some of these uploaded files and perform actions against them, such as checking whether they exist and how large they are. Naturally these files are outside the directory of the project that needs access to them, so the best way to reach them is by creating a virtual directory.</p>

<p>Creating virtual directories is pretty straightforward when developing against IIS; however, as this particular project uses IIS Express, I wasn’t too sure how easy this would be to do, or whether it was even possible. It turns out it’s actually very straightforward to set up.</p>

<h2 id="creating-a-virtual-directory-in-iis-express">Creating a virtual directory in IIS Express</h2>

<p>To create a virtual directory in IIS Express, simply navigate to IIS Express’ config directory located at _C:\Users\&lt;User&gt;\Documents\IISExpress\config_ and open the applicationhost.config file.</p>

<p>The applicationhost.config file holds all configuration data for your Visual Studio projects that use IIS Express. As you browse through the file you’ll notice a sites section that contains your application&rsquo;s IIS Express configuration in the following format:</p>

<pre><code>&lt;site name=&quot;MvcApplication&quot; id=&quot;1&quot;&gt;
    &lt;application path=&quot;/&quot; applicationPool=&quot;Clr4IntegratedAppPool&quot;&gt;
    &lt;virtualDirectory path=&quot;/&quot; physicalPath=&quot;c:\users\&lt;User&gt;\documents\visual studio 2012\Projects\MvcApplication\MvcApplication&quot; /&gt;
     &lt;/application&gt;
     &lt;bindings&gt;
         &lt;binding protocol=&quot;http&quot; bindingInformation=&quot;*:1239:localhost&quot; /&gt;
     &lt;/bindings&gt;
&lt;/site&gt;
</code></pre>

<p>Once you’ve found this, creating a virtual directory is very straightforward. Simply add an additional application entry, as below:</p>

<pre><code>&lt;site name=&quot;MvcApplication&quot; id=&quot;1&quot;&gt;
    &lt;application path=&quot;/&quot; applicationPool=&quot;Clr4IntegratedAppPool&quot;&gt;
    &lt;virtualDirectory path=&quot;/&quot; physicalPath=&quot;c:\users\&lt;User&gt;\documents\visual studio 2012\Projects\MvcApplication\MvcApplication&quot; /&gt;
     &lt;/application&gt;

     &lt;application path=&quot;/Uploads&quot; applicationPool=&quot;Clr4IntegratedAppPool&quot;&gt;
         &lt;virtualDirectory path=&quot;/&quot; physicalPath=&quot;C:\temp\YourVirtualDirectoryPath&quot; /&gt;
     &lt;/application&gt;

     &lt;bindings&gt;
         &lt;binding protocol=&quot;http&quot; bindingInformation=&quot;*:1239:localhost&quot; /&gt;
     &lt;/bindings&gt;
&lt;/site&gt;
</code></pre>

<p>Once done, load your project up and your virtual directory (in my case the /Uploads directory) will be accessible from within your project as if it were in the same directory.</p>
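<p>As an illustrative sketch of the kind of checks described earlier (the controller, action and file names here are assumptions, not from the original project), the second project can now resolve files in the virtual directory and inspect them like any local file:</p>

<pre><code>// Hypothetical example - resolves a file inside the /Uploads virtual
// directory and checks whether it exists and how large it is.
public class UploadsController : Controller
{
    public ActionResult Details(string fileName)
    {
        // Server.MapPath resolves the virtual path to the physical
        // directory configured in applicationhost.config.
        string physicalPath = Server.MapPath(&quot;/Uploads/&quot; + fileName);

        var fileInfo = new System.IO.FileInfo(physicalPath);
        if (!fileInfo.Exists)
        {
            return HttpNotFound();
        }

        ViewBag.FileSizeBytes = fileInfo.Length;
        return View();
    }
}
</code></pre>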
]]></content:encoded><author>Joseph Woodward</author><pubDate>Mon, 20 Jan 2014 23:24:14 +0000</pubDate></item></channel></rss>