Uncategorized

Finding my Tribes and Leveling Up

TL;DR: There are many channels available for leveling up and for finding your tribe, some of them less ‘traditional’. These are some that I use:
– In-person gatherings (meetups, conferences etc)
– Twitter
– Instant messaging (Slack/HipChat/Jabbr/Gitter/IRC etc)
– Websites (MSDN/Stack Overflow etc)
– Podcasts


This post was prompted by several things: my interview for Source; a conversation I had with two developers I am coaching, who had been thrown into the deep end on a project with new, unfamiliar technology and very little support from their company; and a conversation with an acquaintance about resources available for leveling up.

A number of years ago, I felt very lonely and misunderstood as a professional developer. I care deeply about my craft and about self-improvement, but I struggled to find people with a similar outlook and experience. Part of my frustration was not having anyone with whom to discuss things and bounce ideas around.

I’m glad to say that today my life is totally different. I belong to several tribes, both meatspace and virtual, and have access to many more people and resources, with lots of different experiences and points of view. In fact, I could probably spend my days in nothing but debate and discussion now, without doing any other work. Besides the communities and resources discussed below, I’m extremely fortunate to be working at nReality, where I have amazing colleagues, as well as access to a broad range of people through my training and coaching work.

The resources I use the most these days to level up are:
– Meatspace Events
– Twitter
– Slack

Meatspace events are great for many reasons: you learn a lot during the talks, and you get to meet awesome, like-minded people and have stimulating conversations. There are a number of great in-person events. The best place to find them is meetup.com.

Of particular significance is DeveloperUG (Twitter) which has monthly events in Jo’burg and Pretoria/Centurion. I owe a massive debt to the founder and past organisers of DeveloperUG, Mark Pearl, Robert MacLean and Terence Kruger for creating such an amazing community.

I am involved in running or hosting these meatspace communities:
Jo’burg Lean Coffee
Jo’burg Domain Driven Design

These meatspace communities are also valuable:
Scrum User Group Jo’burg
Jozi.rb
Jozi JUG
Code & Coffee
Code Retreat SA
Jo’burg Software Testers

Conferences that I’ve attended and spoken at include:
Agile Africa
Scrum Gathering
DevConf
Let’s Test SA

I haven’t yet had the opportunity to attend others, like JSinSA, RubyFuza and ScaleConf, but I know the same applies to them as well. Pro-tip: getting accepted to speak or present a poster at a conference usually gets you in free; sometimes the conference also pays your travel and accommodation costs.

As important as meatspace events and communities are, virtual communities provide access to people in other locations. My current way of connecting is finding people and conversations on Twitter, then using Slack for deeper, ‘better’ conversations. I do have good conversations via Twitter, but it’s a bit clumsy for a few reasons, and Slack often works better for real conversations.

Twitter and Slack are great for connecting with people for a number of reasons:
– public & discoverable
– low ceremony
– no strings attached

This means that it’s very easy to start a conversation with anyone, and they’re more likely to respond, since it’s easy to do (low ceremony) and they’re not making any kind of commitment (no strings attached).

I’ve been lucky enough to have conversations with some of my idols – Kent Beck, Uncle Bob, Woody Zuill, Tim Ottinger and others – some on Twitter, some on Slack, some on both.

I belong to these open-to-the-public Slack communities:
– ZADevelopers – South African Developers (invitation on request)
– Legacy Code Rocks – All things Legacy Code (technical debt, TDD, refactoring etc)
– ddd-cqrs-es – Domain-Driven Design, CQRS, Event Sourcing
– Software Craftsmanship – Software Craftsmanship
– Coaching Circles – Coaching, especially Agile, Lean etc (invitation on request)
– WeDoTDD – Test-Driven Development
– Testing Community – Testing (I joined very recently)

What resources do you use to level up and connect to communities of interest? Let me know in the comments!

software development

How I do TDD

TL;DR: The specifications I write are domain-level, Ubiquitous-Language-based specifications of system behaviour. These spec tests describe the functional behaviour of the system as it would be specified by a Product Owner (PO), and as it would be experienced by the user; i.e. user-level functional specifications.


I get asked frequently how I do Test-Driven Development (TDD), especially with example code. In my previous post I discussed reasons why I do TDD. In this post I’ll discuss how I do TDD.

First off, in my opinion there is no difference between TDD, Behaviour-Driven Development (BDD) and Acceptance-Test-Driven Development (ATDD). The fundamental concept is identical: write a failing test, then write the implementation that makes that test pass. The only difference is the ‘amount’ of the system being specified: typically BDD specifies at a user level, ATDD at a user-acceptance level (much like BDD), and TDD at the level of much finer-grained components or parts of the system. I see very little value in these distinctions: they’re all just TDD to me. (I do find it valuable to discuss the ROI of TDD specifications at different levels, as well as their cost and feedback speed. I’m currently working on a post discussing these ideas.) Like many practices and techniques, I see TDD as fractal. The biggest ROI comes from a spec test that covers as much as possible.

The test that covers the most of the system is a user-level functional test. That is, a test written at the level of a user’s interaction with the system – i.e. outside of the system – which describes the functional behaviour of the system. The ‘user-level’ part is important, since this level is by definition outside of the system, and covers the entirety of the system being specified.

It’s time for an example. The first example I’ll use is specifying Conway’s Game of Life (GoL).

These images represent an (incomplete) specification of GoL, and can be used to specify and validate any implementation of Conway’s Game of Life, from the user’s (i.e. a functional) point of view. These specifications make complete sense to a PO specifying GoL – in fact, this is often how GoL is presented.

These images translate to the following code:

[Test]
public void Test_Barge()
{
    // A barge is a still life: one tick must leave the grid unchanged.
    var initialGrid = new char[,]
    {
        {'.', '.', '.', '.', '.', '.'},
        {'.', '.', '*', '.', '.', '.'},
        {'.', '*', '.', '*', '.', '.'},
        {'.', '.', '*', '.', '*', '.'},
        {'.', '.', '.', '*', '.', '.'},
        {'.', '.', '.', '.', '.', '.'},
    };

    Game.PrintGrid(initialGrid);

    var game = CreateGame(initialGrid);
    game.Tick();
    char[,] generation = game.Grid;

    Assert.That(generation, Is.EqualTo(initialGrid));
}

[Test]
public void Test_Blinker()
{
    // A blinker oscillates with period 2: horizontal, vertical, horizontal again.
    var initialGrid = new char[,]
    {
        {'.', '.', '.'},
        {'*', '*', '*'},
        {'.', '.', '.'},
    };

    var expectedGeneration1 = new char[,]
    {
        {'.', '*', '.'},
        {'.', '*', '.'},
        {'.', '*', '.'},
    };

    var expectedGeneration2 = new char[,]
    {
        {'.', '.', '.'},
        {'*', '*', '*'},
        {'.', '.', '.'},
    };

    var game = CreateGame(initialGrid);
    Game.PrintGrid(initialGrid);
    game.Tick();

    char[,] actualGeneration = game.Grid;
    Assert.That(actualGeneration, Is.EqualTo(expectedGeneration1));

    game.Tick();
    actualGeneration = game.Grid;
    Assert.That(actualGeneration, Is.EqualTo(expectedGeneration2));
}

[Test]
public void Test_Glider()
{
    // A glider moves diagonally; note the expected grids gain a row as it moves.
    var initialGrid = new char[,]
    {
        {'.', '*', '.'},
        {'.', '.', '*'},
        {'*', '*', '*'},
    };

    var expectedGeneration1 = new char[,]
    {
        {'.', '.', '.'},
        {'*', '.', '*'},
        {'.', '*', '*'},
        {'.', '*', '.'},
    };

    var expectedGeneration2 = new char[,]
    {
        {'.', '.', '.'},
        {'.', '.', '*'},
        {'*', '.', '*'},
        {'.', '*', '*'},
    };

    var game = CreateGame(initialGrid);
    Game.PrintGrid(initialGrid);

    game.Tick();
    char[,] actualGeneration = game.Grid;

    Assert.That(actualGeneration, Is.EqualTo(expectedGeneration1));

    game.Tick();
    actualGeneration = game.Grid;
    Assert.That(actualGeneration, Is.EqualTo(expectedGeneration2));
}

All I’ve done to make the visual specification above executable is transcode it into a programming language – C# in this case. Note that the tests do not influence or mandate anything of the implementation, besides the data structures used for input and output.
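
For reference, the surface area these tests pin down is tiny: construct a game from a grid (the tests’ CreateGame helper presumably just wraps a constructor), call Tick(), read Grid. Below is a minimal, hedged sketch of an implementation, assuming the standard Life rules; it uses a fixed-size grid only, whereas the glider spec above implies the real implementation must also grow the grid, which this sketch omits:

using System;

// A minimal sketch of the surface area the spec tests exercise.
// Fixed-size grid only; growing the grid (as the glider spec implies) is omitted.
public class Game
{
    public char[,] Grid { get; private set; }

    public Game(char[,] initialGrid)
    {
        Grid = initialGrid;
    }

    public void Tick()
    {
        int rows = Grid.GetLength(0);
        int cols = Grid.GetLength(1);
        var next = new char[rows, cols];

        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < cols; c++)
            {
                int n = LiveNeighbours(r, c);
                bool alive = Grid[r, c] == '*';
                // Standard Life rules: survive on 2 or 3 neighbours, birth on exactly 3.
                next[r, c] = (alive && (n == 2 || n == 3)) || (!alive && n == 3) ? '*' : '.';
            }
        }

        Grid = next;
    }

    private int LiveNeighbours(int row, int col)
    {
        int count = 0;
        for (int dr = -1; dr <= 1; dr++)
        {
            for (int dc = -1; dc <= 1; dc++)
            {
                if (dr == 0 && dc == 0) continue;
                int r = row + dr, c = col + dc;
                if (r >= 0 && r < Grid.GetLength(0) &&
                    c >= 0 && c < Grid.GetLength(1) &&
                    Grid[r, c] == '*')
                {
                    count++;
                }
            }
        }
        return count;
    }

    public static void PrintGrid(char[,] grid)
    {
        for (int r = 0; r < grid.GetLength(0); r++)
        {
            for (int c = 0; c < grid.GetLength(1); c++)
                Console.Write(grid[r, c]);
            Console.WriteLine();
        }
    }
}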

I’d like to introduce another example, this one a little more complicated and real-world. I’m currently developing a book discovery and lending app, called Lend2Me, the source code for which is available on GitHub. The app is built using Event Sourcing (and thus Domain-Driven Design and Command-Query Responsibility Segregation). The only tests in this codebase are user-level functional tests. Because I’m using DDD, the spec tests are written in the Ubiquitous Language of the domain, and actually describe the domain, not a system implementation. Since user interaction with the system is in the form of Command-Result and Query-Result pairs, these are what are used to specify the system. Spec tests for a Command generally take the following form:

GIVEN:  a sequence of previously handled Commands
WHEN:   a particular Command is issued
THEN:   a particular result is returned

In addition to this basic functionality, I also want to capture the domain events I expect to be persisted (to the event store), which is still a domain-level concern, and of interest to the PO. Including event persistence, the tests, in general, look like:

GIVEN:  a sequence of previously handled Commands
WHEN:   a particular Command is issued
THEN:   a particular result is returned
AND:    a particular sequence of events is persisted

A concrete example from Lend2Me:

User Story: AddBookToLibrary – As a User I want to Add Books to my Library so that my Connections can see what Books I own.
Scenario: AddingPreviouslyRemovedBookToLibraryShouldSucceed

GIVEN:  Joshua is a Registered User AND Joshua has Added and Removed Oliver Twist from his Library
WHEN:   Joshua Adds Oliver Twist to his Library
THEN:   Oliver Twist is Added to Joshua's Library

This spec test is written at the domain level, in the Ubiquitous Language. The code for this test is:

[Test]
public void AddingPreviouslyRemovedBookToLibraryShouldSucceed()
{
    RegisterUser joshuaRegisters = new RegisterUser(processId, user1Id, 1, "Joshua Lewis", "Email address");
    AddBookToLibrary joshuaAddsOliverTwistToLibrary = new AddBookToLibrary(processId, user1Id, user1Id, title, author, isbnnumber);
    RemoveBookFromLibrary joshuaRemovesOliverTwistFromLibrary = new RemoveBookFromLibrary(processId, user1Id, user1Id, title, author, isbnnumber);

    UserRegistered joshuaRegistered = new UserRegistered(processId, user1Id, 1, joshuaRegisters.UserName, joshuaRegisters.PrimaryEmail);
    BookAddedToLibrary oliverTwistAddedToJoshuasLibrary = new BookAddedToLibrary(processId, user1Id, title, author, isbnnumber);
    BookRemovedFromLibrary oliverTwistRemovedFromJoshuaLibrary = new BookRemovedFromLibrary(processId, user1Id, title, author, isbnnumber);

    Given(joshuaRegisters, joshuaAddsOliverTwistToLibrary, joshuaRemovesOliverTwistFromLibrary);
    When(joshuaAddsOliverTwistToLibrary);
    Then(succeed);
    AndEventsSavedForAggregate(user1Id, joshuaRegistered, oliverTwistAddedToJoshuasLibrary, oliverTwistRemovedFromJoshuaLibrary, oliverTwistAddedToJoshuasLibrary);
}
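
The Given/When/Then/AndEventsSavedForAggregate helpers come from the test base fixture. As a rough, hedged sketch of the mechanics (the member names and types below are illustrative assumptions, not the actual Lend2Me fixture):

using System;
using NUnit.Framework;

// Hedged sketch of a spec-test base fixture. Handle() and EventsFor() stand in
// for the real command dispatch and event-store access.
public abstract class SpecTestBase
{
    protected abstract object Handle(object command);        // dispatch a command to its handler
    protected abstract object[] EventsFor(Guid aggregateId); // read persisted events for an aggregate

    private object actualResult;

    protected void Given(params object[] commands)
    {
        // Replay history by handling each previously-issued command in order.
        foreach (var command in commands)
            Handle(command);
    }

    protected void When(object command)
    {
        // The action under test.
        actualResult = Handle(command);
    }

    protected void Then(object expectedResult)
    {
        Assert.That(actualResult, Is.EqualTo(expectedResult));
    }

    protected void AndEventsSavedForAggregate(Guid aggregateId, params object[] expectedEvents)
    {
        // Assert on the aggregate's entire persisted event history.
        Assert.That(EventsFor(aggregateId), Is.EqualTo(expectedEvents));
    }
}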

The definitions of the Commands in this code (e.g. RegisterUser) would be specified as part of the domain:

public class RegisterUser : AuthenticatedCommand
{
    public long AuthUserId { get; set; }
    public string UserName { get; set; }
    public string PrimaryEmail { get; set; }

    public RegisterUser(Guid processId, Guid newUserId, long authUserId, string userName, string primaryEmail)
    : base(processId, newUserId, newUserId)
    {
        AuthUserId = authUserId;
        UserName = userName;
        PrimaryEmail = primaryEmail;
    }

}
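
The corresponding events mirror the Commands. As an illustration, UserRegistered might look something like this (property names are inferred from the test above; the Event base class and its constructor are assumptions):

public class UserRegistered : Event // base class is an assumption
{
    public long AuthUserId { get; set; }
    public string UserName { get; set; }
    public string PrimaryEmail { get; set; }

    public UserRegistered(Guid processId, Guid userId, long authUserId, string userName, string primaryEmail)
    : base(processId, userId)
    {
        AuthUserId = authUserId;
        UserName = userName;
        PrimaryEmail = primaryEmail;
    }
}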

Again, the executable test is just a transcoding of the Ubiquitous Language spec test into C#. Note how closely the code follows the Ubiquitous Language used in the natural language specification. Again, the test does not make any assumptions about the implementation of the system. It would be very easy, for example, to change the test so that instead of calling CommandHandlers and QueryHandlers directly, HTTP requests are issued.
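
For instance, a hedged sketch of what that switch might look like – the route and JSON handling below are illustrative assumptions, not the Lend2Me API:

using System;
using System.Net.Http;
using System.Text;

// Sketch: the same command dispatched either in-process or over HTTP.
// The spec test itself would not change; only the fixture's plumbing would.
public static class CommandDispatch
{
    // In-process: hand the command straight to its handler.
    public static object InProcess(Func<object, object> handler, object command)
    {
        return handler(command);
    }

    // Out-of-process: POST the serialised command to an assumed endpoint.
    public static bool OverHttp(string baseUrl, string commandName, string commandJson)
    {
        using (var client = new HttpClient())
        {
            var content = new StringContent(commandJson, Encoding.UTF8, "application/json");
            var response = client.PostAsync(baseUrl + "/commands/" + commandName, content).Result;
            return response.IsSuccessStatusCode;
        }
    }
}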

It could be argued that the design of this system makes it very easy to write user-level functional spec tests. However, these patterns can be used to specify any system, since it is possible to interact automatically with any system through the same interface a user uses. Similarly, it is possible to assert automatically on any state in any boundary-interfacing system, e.g. databases, third-party web services, message buses etc. It may not be easy or cheap or robust, but it is possible. For example, a spec test could be:

GIVEN:  Pre-conditions
WHEN:   This HTTP request is issued to the system
THEN:   Return this response
AND:    This HTTP request was made to this endpoint
AND:    This data is in the database
AND:    This message was put onto this message bus

This is how I do TDD: the spec tests are user-level functional tests. If it’s difficult, expensive or brittle to use the same interface the user uses, e.g. GUIs, then test as close to the system boundary, i.e. as close to the user, as you can.

For interest’s sake, all the tests in the Lend2Me codebase hit a live PostgreSQL database and an embedded EventStore client. There is no mocking anywhere. All interactions and assertions are outside the system.

software development

Why I do TDD

People give many reasons for why they do Test-Driven Development (TDD). The benefits I get from TDD, and thus my reasons for practicing it, are given below, ordered from most to least important. The ordering is based on whether the benefit is available without test-first at all, and on how easily it is gained without a test-first approach.

1. Executable specification

The first step in building the correct thing is knowing what the correct thing is (or what it does). I find it very difficult to be confident that I’m building the correct thing without TDD. If we can specify the expectations of our system so unambiguously that a computer can understand and execute that specification, there’s no room for ambiguity or misinterpretation of what we expect from the system. TDD tests are executable specifications. This is pretty much impossible without test-first.

I also believe Specification by Example, which is necessary for TDD, is a great way to get ‘Product Owners’ to think about what they want (without having to first build software).

2. Living documentation

The problem with most other forms of documentation is that there is no guarantee they are current. It’s easy to change the code without changing the natural-language documentation, including comments; there’s no way to force the documentation to stay current. With TDD tests, however, as long as the tests are passing, and assuming the tests are good and valid, the tests act as perfectly up-to-date documentation for and of the code (from an intent and expectation point of view, not necessarily from an implementation point of view). The documentation is thus considered ‘living’. This is possible with a test-after approach, but is harder than with test-first.

3. Focus

TDD done properly, i.e. true Red-Green-Refactor, forces me to focus on one area of work at a time, and not to worry about the future. The future may be as soon as the next test scenario or case, but I don’t worry about that. The system gets built one failing test at a time. I don’t care about the other tests, I’ll get there in good time. This is possible without test-first, but harder and nigh-impossible for me.

4. As a design tool

This is the usual primary benefit touted for TDD. My understanding of this thinking is thus: in general, well-designed systems are decoupled. Decoupling means it’s possible to introduce a seam for testing components in isolation. Therefore, in general, well-designed systems are testable. Since TDD forces me to design components that are testable, I’m inherently forced to design decoupled components, which is in general considered a ‘better design’. This is low down on my list because it is relatively easy to achieve without a test-first approach, and even without tests at all. I’m pretty confident I can build what I (and I think others) consider to be well-designed systems without going test-first.

5. Suite of automated regression tests

Another popular selling point for TDD is that the tests provide a suite of automated regression tests. This is absolutely true. However I consider this a very pleasant by-product of doing TDD. A full suite of automated regression tests is very possible with a test-after approach. Another concern I have with this selling point is that TDD is not about testing, and touting regression tests as a benefit muddies this point.

Why do you do TDD? Do you disagree with my reasons or my order? Let me know in the comments!

software development

Branching and Deployment Flow

TL;DR: We can deploy and test each feature in our dev environment independently and in combination before promotion. This is done through some simple Git and Jenkins setup and simple team discipline. Promotion-ready features are not blocked by ‘immature’ work-in-progress (WIP), but WIP is still independently testable. The build server tells us when Feature Branches are out of date.


I’m quite proud of the delivery flow that one of my teams is currently using. The setup and workflow discipline are quite simple, were relatively easy to create, and have turned out to be big enablers for increased agility, responsiveness and quality.

The basic setup

  • We use GitFlow with Feature Branching, without release or hotfix branches
  • On git push, each Feature Branch is built, tested and deployed to its own IIS Application/Virtual Directory in the dev environment (using msdeploy)
  • Jenkins tells us when there are Merge Conflicts with develop
  • Developers can do testing on the deployed feature before marking the work as Ready for Test
  • When the tester is happy with the feature, dev merges the feature back into develop
  • If bugs are found, work continues in isolation in the Feature Branch
  • The changes to develop are merged into existing Feature Branches
  • Each push to develop triggers another build-test-deploy CI job to the ‘master’ dev environment
  • Deploys to the QA, Staging and Production environments are one-click and triggered manually
  • Deploys to the Staging and Production environments are based on master
  • When features are ready for production, develop is merged into master
  • The team also makes use of a canary production server (serving a fraction of live traffic through load balancing) with automated fan-out and roll-back
  • The team has little automated acceptance testing at the moment, but is working to improve this

Results

This setup has solved some issues the team had in the past, such as not being able to deploy ready-for-test work to the QA environment due to ‘contamination’ by not-ready-for-test work. This in turn was caused by unpredictable and highly varying priorities, and by variable cycle times within the dev and ready-for-test phases.

Analysis

Using feature branches means we’re not doing true Continuous Delivery or even Continuous Integration. We have tried to mitigate this by being very disciplined around not letting Feature Branches diverge too far from develop. We have a Jenkins job which attempts a ‘reverse merge’ of the feature into develop on every feature branch push, which fails if there are any merge conflicts.

There is additional housekeeping in that feature branches need to be created on commencement of work and deleted at end-of-life. Not only that, the IIS Applications created for each Feature Branch need to be deleted manually (creation is automatic). The Git housekeeping is made easier using SourceTree, which has great GitFlow support.


I’d love to hear your comments on this set-up, especially how it can be improved. If you want more details on any of the configuration/set-up/workflows/discipline, feel free to give me a shout🙂

software development, Uncategorized

Testing CORS-enabled endpoints

TL;DR: We needed a simple tool to check whether an HTTP endpoint had CORS enabled. I created one. You can use it here for public endpoints or clone it for local endpoints – it’s open source. I’d love it if you helped me improve it!

A while ago, our team needed to verify that an endpoint (URL) on our website had Cross-Origin Resource Sharing (CORS) enabled. The endpoint was being consumed by a third party, and we’d broken their site with one of our changes. Our team-internal processes had missed the regression, mainly because we didn’t have a tool we could use to check whether CORS was enabled for the endpoint. I looked for a simple tool that we could use, didn’t find any, and so decided to create one.

The slight subtlety with CORS is that the request must be made by the client browser, i.e. in JavaScript. I created a very small and simple HTML page, with some JavaScript, that can be used to check whether CORS is enabled for an HTTP endpoint. You can use it live here: http://joshilewis.github.io/CORStest/. Note that if you use the live tool, the endpoint you’re checking must be publicly available. If you want to use it on your own machine, or within a private network, just clone the Git repository and open index.html from your local file system.
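
Browser enforcement aside, you can get a rough signal from outside a browser by checking whether the server sends the relevant CORS header. Below is a minimal, hedged C# sketch (the URL and Origin values are placeholders); it only inspects the Access-Control-Allow-Origin response header, which approximates but does not replicate everything a browser enforces:

using System;
using System.Collections.Generic;
using System.Net.Http;

// Rough, non-browser CORS probe: send a request with an Origin header and
// check for Access-Control-Allow-Origin in the response. A browser-based
// check (like the CORStest page) remains the authoritative test.
class CorsProbe
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Placeholder endpoint and origin - substitute your own.
            var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/api/resource");
            request.Headers.Add("Origin", "https://some-other-site.example");

            var response = client.SendAsync(request).Result;

            IEnumerable<string> values;
            if (response.Headers.TryGetValues("Access-Control-Allow-Origin", out values))
                Console.WriteLine("CORS header present: " + string.Join(", ", values));
            else
                Console.WriteLine("No Access-Control-Allow-Origin header in the response.");
        }
    }
}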

I hope you find it useful🙂

If you’d like to help me improve it, take a look at the issues on GitHub.

Uncategorized

Maturing my unit of work implementation

My previous post was on testing my simple unit of work implementation (which was itself described in this post). At the conclusion of the post on testing, the unit of work code looked like this:

https://gist.github.com/1745709/81f15d8a0e473a757f74dd1c2fcc698e8fffea6a

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
}

The problem with this design is that each client call to IUnitOfWork.Begin<>() passes a delegate with a different signature. The example I’ve been using so far is the method void IDependency.SomeMethod(String). The next method may have more parameters, and a return type. Each new method signature therefore needs a corresponding IUnitOfWork.Begin<>() overload. After a little while, IUnitOfWork became:

https://gist.github.com/1745709/fc2742dbeed948172071b2125d0ee44a7ea94994

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
    void Begin<T1, T2>(Action<T1, T2> action, T1 t1, T2 t2);
    TResult Begin<T1, T2, TResult>(Func<T1, T2, TResult> func, T1 t1, T2 t2);
}

You can see where this is going. The implementation of each of these methods is almost identical to the implementation described in my previous post, the only difference being the invocation of the Action<> (or Func<> ).

Even though the implementation of each new method is simple (if drudge work), I didn’t like this design, because IUnitOfWork and UnitOfWork would be changing continually. I thought this was a reasonable trade-off for testability, but I wanted a better solution.

The first thing to do was to move the profusion of Begin<>()s out of UnitOfWork. I felt Extension Methods were a great option in this case. I would be left with a simple, unpolluted IUnitOfWork, clients that get a simple and elegant IUnitOfWork usage pattern, and all the similar Begin<>()s in a central place.

The clean IUnitOfWork and the Extension Methods are:

https://gist.github.com/1745709/5ece82f7c0a1884e4099bb448479cbec2f7e4e39

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();
}

public static class UnitOfWorkHelpers
{
    public static void Begin<T>(this IUnitOfWork uow, Action<T> action, T t)
    {
        uow.Begin();
        try
        {
            action.Invoke(t);
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }

    public static void Begin<T1, T2>(this IUnitOfWork uow, Action<T1, T2> action, T1 t1, T2 t2)
    {
        //Repeated
        action.Invoke(t1, t2);
        //Repeated
    }

    public static TResult Begin<T1, T2, TResult>(this IUnitOfWork uow, Func<T1, T2, TResult> func, T1 t1, T2 t2)
    {
        TResult returnVal;
        //Repeated
        returnVal = func.Invoke(t1, t2);
        //Repeated
        return returnVal;
    }
}

This code looks great, but we’ve introduced a problem: the unit test fails. In the test we’re setting an expectation on IUnitOfWork.Begin<>(). This is no longer possible, because we’re mocking IUnitOfWork, of which Begin<>() is no longer a part, and we cannot mock an Extension Method. We can set expectations on the methods of IUnitOfWork only (like Begin(), Commit() and RollBack()). The test must therefore change to this:

[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = MockRepository.GenerateMock<IUnitOfWork>();
    uow.Expect(u => u.Begin());
    uow.Expect(u => u.Commit());
    uow.Expect(u => u.Dispose());

    var uowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
    uowProvider.Expect(u => u.Invoke())
               .Return(uow);

    var sut = new MyClass(dependency, uowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uowProvider.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}

This new test structure is a lot simpler, more straightforward and less esoteric. I think it is easier to read. The expectation on IDependency.SomeMethod() is explicit. We can also now set clear expectations on IUnitOfWork’s methods, like Begin() and Commit() (and Rollback() if appropriate). The only drawback to this design, which I consider minor, is that the code for expectations on IUnitOfWork’s methods will be repeated in every test.

This design provides another major advantage: originally, IUnitOfWork.Begin() was called with a () => closure, which was changed for testability. However, there is no longer a need to set an expectation based on a Delegate in the unit test. So we can go back to using () => in the calling code.

And there’s more! Because we can use () =>, we no longer need all those Begin<>() Extension Methods! We need only the first one, Begin(Action). Our Extension Method class now looks like:

https://gist.github.com/1745709/314a3c31718a939ba6a89a1251cacb4d5e4f1804

public static class UnitOfWorkHelpers
{
    public static void Begin(this IUnitOfWork uow, Action action)
    {
        uow.Begin();
        try
        {
            action.Invoke();
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }
}
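
In the calling code, usage then collapses back to the closure style. As a sketch of the shape (MyClass and IDependency are the running example from the earlier posts; the exact bodies here are assumptions):

public class MyClass
{
    private readonly IDependency dependency;
    private readonly Func<IUnitOfWork> uowProvider;

    public MyClass(IDependency dependency, Func<IUnitOfWork> uowProvider)
    {
        this.dependency = dependency;
        this.uowProvider = uowProvider;
    }

    public void DoWork()
    {
        // The provider hands out a fresh unit of work; the extension method
        // wraps the closure in Begin/Commit/RollBack, and using() disposes it.
        using (var uow = uowProvider.Invoke())
        {
            uow.Begin(() => dependency.SomeMethod("hi"));
        }
    }
}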

I mentioned earlier that there was a minor drawback with the unit test: repeated code for the expectations on IUnitOfWork. This can be improved a lot by simply packaging the repeated code:

https://gist.github.com/1745709/98ed07964f1cae70bd0386b9e6b35fb7745220ac

public class UnitOfWorkTestHelper
{
    public readonly Func<IUnitOfWork> UowProvider;
    public readonly IUnitOfWork Uow;

    private UnitOfWorkTestHelper()
    {
        Uow = MockRepository.GenerateMock<IUnitOfWork>();

        UowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
        UowProvider.Expect(u => u.Invoke())
                   .Return(Uow);
    }

    public void VerifyAllExpectations()
    {
        Uow.VerifyAllExpectations();
        UowProvider.VerifyAllExpectations();
    }

    public static UnitOfWorkTestHelper GetCommitted()
    {
        var uow = new UnitOfWorkTestHelper();
        uow.Uow.Expect(u => u.Begin());
        uow.Uow.Expect(u => u.Commit());
        uow.Uow.Expect(u => u.Dispose());

        return uow;
    }

    public static UnitOfWorkTestHelper GetRolledBack()
    {
        var uow = new UnitOfWorkTestHelper();
        uow.Uow.Expect(u => u.Begin());
        uow.Uow.Expect(u => u.RollBack());
        uow.Uow.Expect(u => u.Dispose());

        return uow;
    }
}

Which is used simply in the test, like this:

[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = UnitOfWorkTestHelper.GetCommitted();
    // OR: var uow = UnitOfWorkTestHelper.GetRolledBack();

    var sut = new MyClass(dependency, uow.UowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}

I’m quite happy with this design. It has evolved a fair amount since I started, and I’m getting more comfortable with it. I think it’s quite frictionless in use and testing, but offers enough flexibility at the same time.

I have only one (major) criticism, and that is that the way IDependency.SomeMethod() is called in the production code differs from the test: IUnitOfWork.Begin(() => dependency.SomeMethod(“hi”)) in the production code, versus dependency.SomeMethod(“hi”) and IUnitOfWork.Begin() in the test. There is a small dissonance, and this might cause confusion down the road for some developers.

Having said that though, I’m happy with the efficacy and leanness of the solution🙂.

Uncategorized

Blog moving soon

You may have noticed that my two previous posts have looked a bit strange. I am in the process of moving blog platforms (again), and the previous two posts have actually been posted automatically from the new platform. I’m moving for a few reasons, among them:

  • WordPress doesn’t present the best friction-free posting experience
  • WordPress doesn’t support Markdown
  • WordPress doesn’t support Github Gists.

I’ve set up a FeedBurner feed to syndicate my blog. This feed still points here (this WordPress.com blog). When I do cut over to the new platform I’ll update the feed to point there; the upshot is that you’ll continue to get updates without having to change any subscription info. I’d appreciate it if you updated your reader to subscribe to the new feed.

Many thanks!