software development

Branching and Deployment Flow


TL;DR: We can deploy and test each feature in our dev environment independently, and in combination, before promotion. This is achieved with a simple Git and Jenkins setup plus a little team discipline. Promotion-ready features are not blocked by ‘immature’ work-in-progress (WIP), but WIP is still independently testable. The build server tells us when Feature Branches are out of date.

I’m quite proud of the delivery flow that one of my teams is currently using. The setup and workflow discipline are simple and relatively easy to create, and they have turned out to be big enablers for increased agility, responsiveness and quality.

The basic setup

  • We use GitFlow with Feature Branching, without release or hotfix branches
  • On Git Push each Feature Branch is built, tested and deployed to its own IIS Application/Virtual Directory in the dev environment (using msdeploy)
  • Jenkins tells us when there are Merge Conflicts with develop
  • Developers can do testing on the deployed feature before marking the work as Ready for Test
  • When the tester is happy with the feature, dev merges feature back into develop
  • If bugs are found, work continues in isolation in the Feature Branch
  • The changes to develop are merged into existing Feature Branches
  • Each push to develop triggers another build-test-deploy CI job to the ‘master’ dev environment
  • Deploys to the QA, Staging and Production environments are one-click and triggered manually.
  • Deploys to the Staging and Production environments are based on master
  • When features are ready for production, develop is merged into master
  • The team also makes use of a canary production server (serving a fraction of live traffic via load balancing) with automated fan-out and roll-back.
  • The team has little automated acceptance testing at the moment, but is working to improve this.


This setup has solved some issues the team had in the past, such as not being able to deploy ready-for-test work to the QA environment due to ‘contamination’ by not-ready-for-test work. That in turn was caused by unpredictable and highly varying priorities, and by variable cycle-times within the dev and ready-for-test phases.


Using feature branches means we’re not doing true Continuous Delivery or even Continuous Integration. We have tried to mitigate this by being very disciplined around not letting Feature Branches diverge too far from develop. We have a Jenkins job which attempts a ‘reverse merge’ of the feature into develop on every feature branch push, which fails if there are any merge conflicts.
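The conflict check itself needs nothing Jenkins-specific; a sketch of the job’s shell step might look like this (the function name and branch layout are assumptions, not our exact configuration):

```shell
# Sketch of the Jenkins 'reverse merge' check (branch names are assumptions).
# Returns non-zero when the feature branch conflicts with develop.
check_reverse_merge() {
    feature="$1"

    git checkout -q "$feature"
    # Try the merge without committing; any conflict fails the check.
    if ! git merge --no-commit --no-ff develop >/dev/null 2>&1; then
        git merge --abort
        echo "Feature branch '$feature' has merge conflicts with develop" >&2
        return 1
    fi
    # Undo the trial merge so the workspace stays pristine.
    git merge --abort 2>/dev/null || git reset -q --hard
    return 0
}

# In Jenkins this would run after 'git fetch origin develop', e.g.:
# check_reverse_merge "$FEATURE_BRANCH"
```

A failing exit code is all the build server needs to mark the Feature Branch as out of date.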

There is additional house-keeping in that feature branches need to be created on commencement of work and deleted at end-of-life. Not only that, the IIS Applications created for each Feature Branch need to be deleted manually (creation is automatic). The Git housekeeping is made easier using SourceTree, which has great GitFlow support.

I’d love to hear your comments on this set-up, especially how it can be improved. If you want more details on any of the configuration/set-up/workflows/discipline, feel free to give me a shout 🙂


Testing CORS-enabled endpoints

TL;DR: We needed a simple tool to check whether an HTTP endpoint had CORS enabled. I created one. You can use it here for public endpoints, or clone it for local endpoints – it’s open source. I’d love it if you helped me improve it!

A while ago, we had need in the team to verify that an endpoint (URL) on our website had Cross-Origin Resource Sharing (CORS) enabled. The endpoint was being consumed by a 3rd party, and we’d broken their site with one of our changes. Our team-internal processes had missed the regression, mainly because we didn’t have a tool we could use to check whether CORS was enabled for the endpoint. I looked for a simple tool that we could use, and didn’t find any, so I decided to create one.

The slight subtlety with CORS is that the request must be made by the client browser, i.e. in Javascript. I created a very small and simple HTML page, with some Javascript, that can be used to check whether CORS is enabled for an HTTP endpoint. You can use it live here. Note that if you use the live tool, the endpoint you’re checking must be publicly available. If you want to use it on your own machine, or within a private network, just clone the Git repository and open index.html from your local file system.
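A sketch of the kind of check involved (an assumption of roughly how such a tool works, not its actual source): the browser only lets script read the response when CORS is enabled, and the server signals this via the Access-Control-Allow-Origin header:

```javascript
// Browser-side check: did the cross-origin request succeed?
// The browser exposes the response to script only if the server sent valid CORS headers.
function checkCors(url, onResult) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () { onResult(true); };   // readable response: CORS enabled
  xhr.onerror = function () { onResult(false); }; // blocked by the browser: not enabled
  xhr.send();
}

// The header rule the browser applies: Access-Control-Allow-Origin must be '*'
// or exactly match the requesting page's origin.
function allowsOrigin(acaoHeader, origin) {
  return acaoHeader === '*' || acaoHeader === origin;
}
```

The important point is that `checkCors` must run in a browser page served from a different origin than the endpoint; the same request from a server-side tool like curl would succeed regardless of CORS.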

I hope you find it useful 🙂

If you’d like to help me improve it, take a look at the issues on Github.


Maturing my unit of work implementation


My previous post was on testing my simple unit of work implementation (which was described in this post). At the conclusion of the post on testing, the unit of work code looked like this:

```csharp
public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
}
```

The problem with this design is that each client calling IUnitOfWork.Begin<>() passes in a Delegate with a different signature. The example I’ve been using so far is the method void IDependency.SomeMethod(string). The next method may have more parameters, and a return type. Each new method signature therefore needs a corresponding IUnitOfWork.Begin<>() overload. After a little while, IUnitOfWork became:

```csharp
public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
    void Begin<T1, T2>(Action<T1, T2> action, T1 t1, T2 t2);
    TResult Begin<T1, T2, TResult>(Func<T1, T2, TResult> func, T1 t1, T2 t2);
}
```

You can see where this is going. The implementation of each of these methods is almost identical to the implementation described in my previous post, the only difference being the invocation of the Action<> (or Func<>).

Even though the implementation of these new methods is simple (if drudge work), I didn’t like this design because IUnitOfWork and UnitOfWork would be continually changing. Although I thought this was a reasonable trade-off for testability, I wanted a better solution.

The first thing to do was to move the profusion of Begin<>()s out of UnitOfWork. Extension Methods felt like a great option here: I would be left with a simple, unpolluted IUnitOfWork, clients would keep a simple and elegant IUnitOfWork usage pattern, and all the similar Begin<>()s would live in one central place.

The clean IUnitOfWork and the Extension Methods are:

```csharp
public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();
}

public static class UnitOfWorkHelpers
{
    public static void Begin<T>(this IUnitOfWork uow, Action<T> action, T t)
    {
        uow.Begin();
        try
        {
            action.Invoke(t);
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }

    public static void Begin<T1, T2>(this IUnitOfWork uow, Action<T1, T2> action, T1 t1, T2 t2)
    {
        //Repeated
        action.Invoke(t1, t2);
        //Repeated
    }

    public static TResult Begin<T1, T2, TResult>(this IUnitOfWork uow, Func<T1, T2, TResult> func, T1 t1, T2 t2)
    {
        TResult returnVal;
        //Repeated
        returnVal = func.Invoke(t1, t2);
        //Repeated
        return returnVal;
    }
}
```

This code looks great, but we’ve introduced a problem: the unit test fails. In the test we’re setting an expectation on IUnitOfWork.Begin<>(). This is no longer possible because we’re mocking IUnitOfWork of which Begin<>() is no longer part; and we cannot mock an Extension Method. We can set expectations on the methods of IUnitOfWork only (like Begin(), Commit() and Rollback()). The test must therefore change to this:

```csharp
[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = MockRepository.GenerateMock<IUnitOfWork>();
    uow.Expect(u => u.Begin());
    uow.Expect(u => u.Commit());
    uow.Expect(u => u.Dispose());

    var uowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
    uowProvider.Expect(u => u.Invoke())
               .Return(uow);

    var sut = new MyClass(dependency, uowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uowProvider.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}
```

This new test structure is a lot simpler, more straightforward and less esoteric. I think it is easier to read. The expectation on IDependency.SomeMethod() is explicit. We can also now set clear expectations on IUnitOfWork’s methods, like Begin() and Commit() (and Rollback() if appropriate). The only drawback to this design, which I consider minor, is that the code for expectations on IUnitOfWork’s methods will be repeated in every test.

This design provides another major advantage: originally, IUnitOfWork.Begin() was called with a () => closure, which was changed for testability. However, there is no longer a need to set an expectation based on a Delegate in the unit test. So we can go back to using () => in the calling code.

And there’s more! Because we can use () =>, we no longer need all those Begin<>() Extension Methods! We need only the first one, Begin(Action). Our Extension Method class now looks like:

```csharp
public static class UnitOfWorkHelpers
{
    public static void Begin(this IUnitOfWork uow, Action action)
    {
        uow.Begin();
        try
        {
            action.Invoke();
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }
}
```
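With this single Extension Method, the calling code returns to the closure form (a sketch mirroring the DoWork() from the earlier posts):

```csharp
public void DoWork()
{
    using (var uow = startNewUnitOfWork())
    {
        // The closure captures dependency and its argument; no Begin<T> overloads needed.
        uow.Begin(() => dependency.SomeMethod("hi"));
    }
}
```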

I mentioned earlier that there was a minor drawback with the unit test: repeated code for the expectations on IUnitOfWork. This can be improved a lot by simply packaging the repeated code:

```csharp
public class UnitOfWorkTestHelper
{
    public readonly Func<IUnitOfWork> UowProvider;
    public readonly IUnitOfWork Uow;

    private UnitOfWorkTestHelper()
    {
        Uow = MockRepository.GenerateMock<IUnitOfWork>();

        UowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
        UowProvider.Expect(u => u.Invoke())
                   .Return(Uow);
    }

    public void VerifyAllExpectations()
    {
        Uow.VerifyAllExpectations();
        UowProvider.VerifyAllExpectations();
    }

    public static UnitOfWorkTestHelper GetCommitted()
    {
        var uow = new UnitOfWorkTestHelper();
        uow.Uow.Expect(u => u.Begin());
        uow.Uow.Expect(u => u.Commit());
        uow.Uow.Expect(u => u.Dispose());

        return uow;
    }

    public static UnitOfWorkTestHelper GetRolledBack()
    {
        var uow = new UnitOfWorkTestHelper();
        uow.Uow.Expect(u => u.Begin());
        uow.Uow.Expect(u => u.RollBack());
        uow.Uow.Expect(u => u.Dispose());

        return uow;
    }
}
```

Which is used simply in the test, like this:

```csharp
[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = UnitOfWorkTestHelper.GetCommitted();
    //OR: var uow = UnitOfWorkTestHelper.GetRolledBack();

    var sut = new MyClass(dependency, uow.UowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}
```

I’m quite happy with this design. It has evolved a fair amount since I started, and I’m getting more comfortable with it. I think it’s quite frictionless in use and testing, while still offering enough flexibility.

I have only one (major) criticism: the way IDependency.SomeMethod() is called in the production code differs from the test – IUnitOfWork.Begin(() => dependency.SomeMethod("hi")) in the production code, versus dependency.SomeMethod("hi") and IUnitOfWork.Begin() in the test. There is a small dissonance here, and it might cause confusion down the road for some developers.

Having said that though, I’m happy with the efficacy and leanness of the solution :).


Blog moving soon

You may have noticed my two previous posts have looked a bit strange. I am in the process of moving blog platforms (again) and the previous 2 posts have actually been posted automatically from the new platform. I’m moving for a few reasons, among them:

  • WordPress doesn’t present the best friction-free posting experience
  • WordPress doesn’t support Markdown
  • WordPress doesn’t support Github Gists.

I’ve set up a Feedburner feed to syndicate my blog. This feed still points here (this blog). When I do cut over to the new platform I’ll update the feed to point there. The upshot is you’ll continue to get updates without having to change any subscription info. I’d appreciate it if you updated your reader to subscribe to the new feed.

Many thanks!


Testing my simple unit of work implementation


My previous post describes the design of my simple unit of work implementation. I was quite happy with the design until I tried testing classes that use the implementation. That was a major obstacle, because I’m a firm believer in unit testing and TDD. This post details the evolution of the design to allow for testability while maintaining low-friction consumption.

The last post left us with this solution:


```csharp
public interface IDependency
{
    void SomeMethod(string s);
}

public class MyClass
{
    private readonly IDependency dependency;
    private readonly Func<IUnitOfWork> startNewUnitOfWork;

    public MyClass(IDependency dependency, Func<IUnitOfWork> startNewUnitOfWork)
    {
        this.dependency = dependency;
        this.startNewUnitOfWork = startNewUnitOfWork;
    }

    public void DoWork()
    {
        using (var uow = startNewUnitOfWork())
        {
            uow.Begin(() => dependency.SomeMethod("hi"));
        }
    }
}

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
}

//Implementation
public class UnitOfWork : IUnitOfWork
{
    private readonly ISessionFactory sessionFactory;
    private ISession currentSession;
    private ITransaction currentTransaction;

    public UnitOfWork(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public void Begin()
    {
        currentSession = sessionFactory.OpenSession();
        CurrentSessionContext.Bind(currentSession);
        currentTransaction = currentSession.BeginTransaction();
    }

    public void Commit()
    {
        currentTransaction.Commit();
        CurrentSessionContext.Unbind(sessionFactory);
    }

    public void RollBack()
    {
        currentTransaction.Rollback();
        CurrentSessionContext.Unbind(sessionFactory);
    }

    public void Dispose()
    {
        currentTransaction.Dispose();
        currentSession.Dispose();
    }

    public void Begin(Action action)
    {
        try
        {
            Begin();
            action.Invoke();
            Commit();
        }
        catch (Exception ex)
        {
            RollBack();
            throw ex;
        }
    }
}
```


When testing DoWork(), the primary goal is to test that dependency.SomeMethod is called with the correct arguments. I’d also like to test whether uow.Begin(), uow.Commit() (or uow.RollBack() if appropriate) and uow.Dispose() are called. (Ideally I’d also like to test the order in which these methods are called, but support for ordering was removed from Rhino Mocks 3.5.)

My first pass at a test for MyClass is shown as MyClassTest below:


```csharp
[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = MockRepository.GenerateMock<IUnitOfWork>();

    var uowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
    uowProvider.Expect(u => u.Invoke());

    var sut = new MyClass(dependency, uowProvider);
    sut.DoWork();

    uowProvider.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}
```


There are however several issues with this approach:

  • There is no expectation that uow.Begin() is called (or any other method of uow)
  • The expectation on IDependency.SomeMethod() will fail. IUnitOfWork is being mocked, so even when uow.Begin() is called, it is the method of the mock proxy that is called, not the real Begin(). Therefore the line action.Invoke() in UnitOfWork will never execute.

In actual fact, with this design of UnitOfWork, we cannot set any expectations or assertions on IDependency. We can only set expectations on the calls made by our system under test. In this case, it is the call made to IUnitOfWork.Begin(Action action).

The test should therefore change to the one shown here:


```csharp
[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    Action expectedAction = () => dependency.SomeMethod("hi");

    var uow = MockRepository.GenerateMock<IUnitOfWork>();
    uow.Expect(u => u.Begin(Arg<Action>.Matches(actual => DelegatesEqual(expectedAction, actual))));

    var uowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
    uowProvider.Expect(u => u.Invoke())
               .Return(uow);

    var sut = new MyClass(dependency, uowProvider);
    sut.DoWork();

    uowProvider.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}

private bool DelegatesEqual(Action expected, Action actual)
{
    if (expected.Target != actual.Target) return false;

    var firstMethodBody = expected.Method.GetMethodBody().GetILAsByteArray();
    var secondMethodBody = actual.Method.GetMethodBody().GetILAsByteArray();

    if (firstMethodBody.Length != secondMethodBody.Length) return false;

    for (var i = 0; i < firstMethodBody.Length; i++)
    {
        if (firstMethodBody[i] != secondMethodBody[i]) return false;
    }
    return true;
}
```


I.e., we’re setting an expectation on the Action passed into IUnitOfWork.Begin() (as well as expecting that method to be called).

This looks a lot better and makes a lot more sense. However, the test fails. It took me a little while to understand why: it fails because the Action is represented by a lambda/closure.

Something strange happens to the delegate when the closure is closed over. If you debug the test code above and inspect the expected and actual delegates, you’ll find that their Method and Target properties are different. (I’ve used the method from this StackOverflow answer to compare delegates for equality.) The expected delegate is associated with the test class. The actual delegate (created in UnitOfWork) refers to the UnitOfWork class. Neither of them refers to IDependency.SomeMethod, even though both are instantiated by () => IDependency.SomeMethod. Therefore this way of testing is not feasible.

What’s required is a way to compare delegates without worrying about the closures being closed over in strange ways. However it would be nice to retain the elegant way of using IUnitOfWork.Begin() with a delegate.

I managed to achieve this by changing IUnitOfWork.Begin() so that instead of taking an Action it takes an Action<T>, along with the argument to pass to it. The changed unit of work and calling code are:

```csharp
//Test:
[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    Action<string> expectedAction = dependency.SomeMethod;

    var uow = MockRepository.GenerateMock<IUnitOfWork>();
    uow.Expect(u => u.Begin(
        Arg<Action<string>>.Matches(actual => DelegatesEqual(actual, expectedAction)),
        Arg<string>.Matches(actual => actual == "hi")));
    uow.Expect(u => u.Dispose());
    //As before
}

//MyClass.DoWork():
public void DoWork()
{
    using (var uow = startNewUnitOfWork())
    {
        uow.Begin(dependency.SomeMethod, "hi");
    }
}

public interface IUnitOfWork : IDisposable
{
    //As before
    void Begin<T>(Action<T> action, T t);
}

//New method on UnitOfWork:
public void Begin<T>(Action<T> action, T t)
{
    try
    {
        Begin();
        action.Invoke(t);
        Commit();
    }
    catch (Exception ex)
    {
        RollBack();
        throw ex;
    }
}
```

We can now specify the delegate without using lambdas, and without incurring the funny closure behaviour. The test now works as expected, and we can test everything we wanted to. The only concession is that instead of using IUnitOfWork.Begin(() => dependency.SomeMethod("hi")); we now use IUnitOfWork.Begin(dependency.SomeMethod, "hi"); which is maybe a little less intuitive to read. In my opinion this is a reasonable trade-off for the increased testability of the solution.


Simple Unit of Work implementation


The Unit of Work pattern is an important part of many architecture designs. I refer you to Martin Fowler’s description for more details on the pattern.

Typically, a unit of work is ‘opened’ at the beginning of an operation in an application, which I’ll call an entry point, and closed when that operation ends, the exit point. A simple example is a web application: the user requesting some resource from the web application is the entry point.

In ASP.Net, this entry point is explicit and accessible: the HttpApplication’s Begin_Request method. Symmetrically, the End_Request method provides an exit point. These methods provide convenient locations in which to open and close the unit of work.

What’s also very nice about these methods is that by the time End_Request is called, we have finished making changes to the Response sent by the server. We are protected from things like lazy-loading exceptions being thrown due to closed sessions.

In my current project, the application is a Windows service. The Windows service hosts three different ‘operation stacks’, each of which has a different entry point:

– A request to a webservice (powered by ServiceStack)

– Receiving a message from a bus or queue (powered by Mass Transit, or in-memory with BlockingCollection)

– A timer ticking (database records are processed every *X* minutes)

Each of these stacks runs on a different thread.

This project uses NHibernate. My main concern with the unit of work is the lifetime of database connections and sessions, and associated issues like lazy loading of object graphs. NHibernate’s ISession is already an implementation of the Unit of Work pattern. However, I find it is not enough on its own.

I’ll use some code to illustrate why.
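The code that originally appeared here is gone; a reconstruction (sketched from the housekeeping that later moves into UnitOfWork, shown in full in the testing post above) looks roughly like this:

```csharp
public class MyClass
{
    private readonly IDependency dependency;
    private readonly ISessionFactory sessionFactory;

    public MyClass(IDependency dependency, ISessionFactory sessionFactory)
    {
        this.dependency = dependency;
        this.sessionFactory = sessionFactory;
    }

    public void DoWork()
    {
        // Unit of work housekeeping, inline in the entry point:
        using (var session = sessionFactory.OpenSession())
        {
            CurrentSessionContext.Bind(session);
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    dependency.SomeMethod("hi"); // the only line doing actual work
                    transaction.Commit();
                }
                catch (Exception)
                {
                    transaction.Rollback();
                    throw;
                }
                finally
                {
                    CurrentSessionContext.Unbind(sessionFactory);
                }
            }
        }
    }
}
```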

The majority of the DoWork() method is taken up with unit of work housekeeping: opening the session, binding the contextual session, etc. There is in fact only one line in the whole method (dependency.SomeMethod()) which actually does any work. This is problematic for several reasons:

– Unit of work housekeeping is definitely not the concern of the DoWork() method or the MyClass class (violation of principle of Separation of Concerns).

– The same housekeeping code would have to be repeated in every entry point (at least five times currently).

– It couples MyClass (and all other entry points) to NHibernate.

Fortunately, it isn’t too difficult to refactor this into a more palatable solution.

The first thing is to move all the actual unit of work code into its own class:
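The listing is missing here, but it is essentially the UnitOfWork class from the full listing in the testing post above (minus the Begin(Action) overload, which comes later):

```csharp
public class UnitOfWork : IUnitOfWork
{
    private readonly ISessionFactory sessionFactory;
    private ISession currentSession;
    private ITransaction currentTransaction;

    public UnitOfWork(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public void Begin()
    {
        currentSession = sessionFactory.OpenSession();
        CurrentSessionContext.Bind(currentSession);
        currentTransaction = currentSession.BeginTransaction();
    }

    public void Commit()
    {
        currentTransaction.Commit();
        CurrentSessionContext.Unbind(sessionFactory);
    }

    public void RollBack()
    {
        currentTransaction.Rollback();
        CurrentSessionContext.Unbind(sessionFactory);
    }

    public void Dispose()
    {
        currentTransaction.Dispose();
        currentSession.Dispose();
    }
}
```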

This would be used like this (note that the Func<IUnitOfWork> dependency is injected by a Dependency Injection container):
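A reconstruction of the missing usage example: the housekeeping is still in the entry point, but the NHibernate details are gone:

```csharp
public void DoWork()
{
    using (var uow = startNewUnitOfWork()) // startNewUnitOfWork is the injected Func<IUnitOfWork>
    {
        uow.Begin();
        try
        {
            dependency.SomeMethod("hi");
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }
}
```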

We’ve removed the NHibernate dependency, but there’s still a lot of housekeeping code. A further refactoring, with the help of an Action and lambdas, makes it a lot cleaner. First, the unit of work usage:
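The usage collapses to the DoWork() preserved in the full listing in the testing post above:

```csharp
public void DoWork()
{
    using (var uow = startNewUnitOfWork())
    {
        // All try/commit/rollback housekeeping now lives inside Begin(Action).
        uow.Begin(() => dependency.SomeMethod("hi"));
    }
}
```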

Much less housekeeping, and the Separation of Concerns violation isn’t as bad. The unit of work class is refactored to:
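A reconstruction of the missing listing: the refactored class simply gains the Begin(Action) overload, as it appears in the full listing in the testing post above:

```csharp
public void Begin(Action action)
{
    try
    {
        Begin();
        action.Invoke();
        Commit();
    }
    catch (Exception ex)
    {
        RollBack();
        throw ex;
    }
}
```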

The addition of the Begin(Action action) method makes consumption of the unit of work much cleaner: all the housekeeping takes place within the UnitOfWork class, and the actual method representing the work to be done is called as a Delegate.

This pattern of passing in the work method as an Action is easily repeated in the other entry point methods in the system. There is little violation of Separation of Concerns, the calling code has no inkling that NHibernate is involved, and the amount of repeated code becomes tiny.

An elegant solution in my opinion!


Creating a local Nuget cache/repository

Update: It appears Nuget automatically creates its own local cache, located at C:\Users\<username>\AppData\Local\NuGet\Cache. You can use this as a package source as well. You can read more in Scott Hanselman’s post.

Update: If you want to share a local, organisation-wide repository, you can use Nuget.Server. Check here and here for more information. Nuget.Server can be used with the local repository I describe in the rest of this post.

Nuget took a long time coming. Maven had already been around for years, and the .Net space needed a package management solution. Now that it is here, I use it extensively. I find the VS integration quite polished and enjoyable to use. There are, however, several friction points I dislike. The biggest of these is that no local cache or repository of packages is created for you: when you download a package, it gets added to the particular solution only, and when you remove the package from a project, it gets deleted from your hard disk. The current workflow (with the VS plugin) requires you to re-download a package every time you add it to a new solution. This is a bit silly considering you may use the same package in many different projects and solutions at the same time.

Update: in the comments below, Riaan reminded me of other reasons I wanted a local repository. Firstly, the package manager’s connection model is terrible: for slow or occasional connections (like mobile connections), it just isn’t feasible to download the same package multiple times for different projects/solutions. Secondly, the package manager dialog is modal, so you can’t use Visual Studio at all while you’re downloading packages (a real pain on a slow connection). [end update]

What I require (and what Maven does by default) is to download packages to an isolated repository on my own machine, and then install packages into projects/solutions from that local repository. It turns out this is pretty easy to achieve. All you need is one or two Powershell commands, and some minor fiddling with the VS package manager.

First off, create a new folder on your drive for your repository. I put mine in D:\Work\Nuget, but it doesn’t matter where it is. You’ll need Nuget.exe, either from Nuget 1.6 or from the Nuget Command Line package, and to make things easy the path to the executable should be on your PATH. When you want to download a Nuget package, fire up a new Powershell shell, navigate to your repository folder and issue the command:

nuget install <package name>

for example:

nuget install MassTransit

Nuget will then download the package and its dependencies. There will be a folder per package.

The next step is to set up the VS package manager so it knows where to look. In VS, go to Tools -> Library Package Manager -> Package Manager Settings.

Navigate to Package Manager -> Package Sources. Fill in the two boxes with the details of your repository. I’ve named mine Local, and it can be found at D:\Work\Nuget.
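Equivalently, the package source can be added by hand to NuGet’s config file (the path and key below are assumptions matching the example above):

```xml
<!-- %AppData%\NuGet\NuGet.config (sketch; merge with your existing entries) -->
<configuration>
  <packageSources>
    <!-- key is the display name in VS; value is the local repository folder -->
    <add key="Local" value="D:\Work\Nuget" />
  </packageSources>
</configuration>
```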

Then click Add. You should now be able to install packages into projects from your local repository. To check, open the Library Package Manager in VS. The Online section on the left-hand side should now contain both ‘Nuget official package source’ and ‘Local’. Click on Local and you’ll see the list of locally installed packages.

You now need to download each package only once, and can install it into any project.