
What is ‘Agile’?

TL;DR: Agile is a way of operating in which the decisions we make are guided primarily by the four trade-offs expressed as value statements in the Agile Manifesto. Following this guidance is believed to lead to better ways of developing software.

Recently I have again started to question what is meant by ‘Agile’ (I make no distinction between ‘Agile’ and ‘agile’). I have asked this question at a few conferences, Lean Coffees etc. This is my current interpretation of ‘Agile’, as informed by the Manifesto for Agile Software Development, specifically the first statement and the four value statements:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

These statements provide guidelines for trade-off decisions we should be making. If you’re in a situation where you can choose between working on either working software or comprehensive documentation, rather choose working software, especially if you don’t yet have working software.

If you have a plan but have discovered that the plan no longer makes sense, you can either follow the plan (even though you now know it no longer makes sense to do so) or respond to the change. The guidance from the Manifesto is to rather respond to the change. (I personally prefer ‘adapt to learning’ over ‘responding to change’, but that’s another story.)

The same thinking applies to the other two value statements: should you have to choose between contract negotiation and customer collaboration, rather choose the latter. If you need to choose between processes and tools and individuals and interactions, again rather choose the latter.

The original authors and signatories of the Manifesto believed, based on their collective experience, that if you follow these decision-making guidelines, you will become better at developing software. I certainly don’t disagree with them.


Finding my Tribes and Leveling Up

TL;DR: There are many channels available for leveling up and for finding your tribe, some of them less ‘traditional’. These are some that I use:
– In-person gatherings (meetups, conferences etc)
– Twitter
– Instant messaging (Slack/HipChat/Jabbr/Gitter/IRC etc)
– Websites (MSDN/Stack Overflow etc)
– Podcasts

This post was prompted by several things: my interview for Source; a conversation with two developers I am coaching, who had been thrown into the deep end on a project with new, unfamiliar technology and very little support from their company; and a conversation with an acquaintance about resources available for leveling up.

A number of years ago, I felt very lonely and misunderstood as a professional developer. I care deeply about my craft, self-development and self-improvement, but I struggled to find people with a similar outlook and experience. Part of my frustration was not having anyone with whom to discuss and sound-board ideas.

I’m glad to say that today my life is totally different. I belong to several tribes, both meatspace and virtual, and have access to many more people and resources, with lots of different experiences and points of view. In fact, I could probably spend my days in nothing but debate and discussion now, without doing any other work. Besides the communities and resources discussed below, I’m extremely fortunate to be working at nReality, where I have amazing colleagues, as well as access to a broad range of people through my training and coaching work.

The resources I use the most these days to level up are:
– Meatspace Events
– Twitter
– Slack

Meatspace events are great for many reasons, including: you learn a lot during the talks, and you get to meet awesome, like-minded people and have stimulating conversations. There are a number of great in-person events. The best place to find them is

Of particular significance is DeveloperUG (Twitter) which has monthly events in Jo’burg and Pretoria/Centurion. I owe a massive debt to the founder and past organisers of DeveloperUG, Mark Pearl, Robert MacLean and Terence Kruger for creating such an amazing community.

I am involved in running or hosting these meatspace communities:
Jo’burg Lean Coffee
Jo’burg Domain Driven Design

These meatspace communities are also valuable:
Scrum User Group Jo’burg
Jozi JUG
Code & Coffee
Code Retreat SA
Jo’burg Software Testers

Conferences that I’ve attended and spoken at include:
Agile Africa
Scrum Gathering
DevConf, and
Let’s Test SA

I haven’t yet had the opportunity to attend others, like JSinSA, RubyFuza and ScaleConf, but I know the same applies to them as well. Pro-tip: getting accepted to speak or present a poster at a conference usually gets you in free; sometimes the conference also pays for your travel and accommodation costs.

As important as meatspace events and communities are, virtual communities provide access to people in other locations. My current way of connecting is finding people and conversations on Twitter, then using Slack to have deeper, ‘better’ conversations. I do have good conversations via Twitter, but it’s a bit clumsy for a few reasons, and Slack often works better for real conversations.

Twitter and Slack are great for connecting with people for a number of reasons:
– public & discoverable
– low ceremony
– no strings attached

This means that it’s very easy to start a conversation with anyone, and they’re more likely to respond since it’s easy to do so (low ceremony) and they’re not making any kind of commitment (no strings attached).

I’ve been lucky enough to have conversations with some of my idols, like Kent Beck, Uncle Bob, Woody Zuill and Tim Ottinger, some on Twitter, some on Slack, some on both.

I belong to these open-to-the-public Slack communities:
– ZADevelopers – South African Developers (invitation on request)
– Legacy Code Rocks – All things Legacy Code (technical debt, TDD, refactoring etc)
– ddd-cqrs-es – Domain-Driven Design, CQRS, Event Sourcing
– Software Craftsmanship – Software Craftsmanship
– Coaching Circles – Coaching, especially Agile, Lean etc (invitation on request)
– WeDoTDD – Test-Driven Development
– Testing Community – Testing (I joined very recently)

What resources do you use to level up and connect to communities of interest? Let me know in the comments!


Testing CORS-enabled endpoints

TL;DR: We needed a simple tool to check whether an HTTP endpoint had CORS enabled. I created one. You can use it here for public endpoints or clone it for local endpoints – it’s open source. I’d love it if you helped me improve it!

A while ago, our team needed to verify that an endpoint (URL) on our website had Cross-Origin Resource Sharing (CORS) enabled. The endpoint was being consumed by a third party, and we’d broken their site with one of our changes. Our team-internal processes had missed the regression, mainly because we didn’t have a tool we could use to check whether CORS was enabled for the endpoint. I looked for a simple tool that we could use, and didn’t find one, so I decided to create my own.

The slight subtlety with CORS is that the request must be made by the client browser, i.e. in JavaScript. I created a very small and simple HTML page, with some JavaScript, that can be used to check whether CORS is enabled for an HTTP endpoint. You can use it live here. Note that if you use the live tool, the endpoint you’re checking must be publicly available. If you want to use it on your own machine, or within a private network, just clone the Git repository and open index.html from your local file system.
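To give a rough idea of what such a check involves (this is a sketch of my own, not the tool’s actual code, and the function name corsAllows is mine): the browser only exposes a cross-origin response when the Access-Control-Allow-Origin response header permits the requesting page’s origin, a decision that can be modelled as a small pure function:

```javascript
// Model of the browser's basic CORS decision: does the
// Access-Control-Allow-Origin response header permit a given origin?
function corsAllows(allowOriginHeader, requestOrigin) {
  if (!allowOriginHeader) return false;        // header absent: response blocked
  if (allowOriginHeader === "*") return true;  // wildcard: any origin allowed
  return allowOriginHeader === requestOrigin;  // otherwise an exact match is required
}

// In a page, the probe itself is just a cross-origin request; if CORS is
// not enabled for the endpoint, the browser rejects the promise:
//   fetch("https://some-endpoint.example/api")
//     .then(() => console.log("CORS enabled"))
//     .catch(() => console.log("CORS blocked (or network error)"));
```

(There are further wrinkles, like credentialed requests and preflights, that a simple check can ignore.)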

I hope you find it useful 🙂

If you’d like to help me improve it, take a look at the issues on GitHub.


Maturing my unit of work implementation


My previous post was on testing my simple unit of work implementation (which was described in this post). At the conclusion of the post on testing, the unit of work code looked like this:

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
}

The problem with this design is that when IUnitOfWork.Begin<>() is called by clients, the signature of the Delegate passed in differs from call to call. The example I’ve been using so far is the method void IDependency.SomeMethod(String). The next method may have more parameters, and a return type. Each new method signature therefore needs a corresponding IUnitOfWork.Begin<>() overload. After a little while, IUnitOfWork became:

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
    void Begin<T1, T2>(Action<T1, T2> action, T1 t1, T2 t2);
    TResult Begin<T1, T2, TResult>(Func<T1, T2, TResult> func, T1 t1, T2 t2);
}

You can see where this is going. The implementation of each of these methods is almost identical to the implementation described in my previous post, the only difference being the invocation of the Action<> (or Func<>).

Even though the implementation of these new methods is simple (if drudge work), I didn’t like this design, because IUnitOfWork and UnitOfWork would be continually changing. I thought this was a reasonable trade-off for testability, but I wanted a better solution.

The first thing to do was to move the profusion of Begin<>()s out of UnitOfWork. Extension Methods felt like a great option in this case: I would be left with a simple, unpolluted IUnitOfWork, clients would keep a simple and elegant IUnitOfWork usage pattern, and all the similar Begin<>()s would live in one central place.

The clean IUnitOfWork and the Extension Methods are:

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();
}

public static class UnitOfWorkHelpers
{
    public static void Begin<T>(this IUnitOfWork uow, Action<T> action, T t)
    {
        uow.Begin();
        try
        {
            action.Invoke(t);
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }

    public static void Begin<T1, T2>(this IUnitOfWork uow, Action<T1, T2> action, T1 t1, T2 t2)
    {
        //Repeated
        action.Invoke(t1, t2);
        //Repeated
    }

    public static TResult Begin<T1, T2, TResult>(this IUnitOfWork uow, Func<T1, T2, TResult> func, T1 t1, T2 t2)
    {
        TResult returnVal;
        //Repeated
        returnVal = func.Invoke(t1, t2);
        //Repeated
        return returnVal;
    }
}

This code looks great, but we’ve introduced a problem: the unit test fails. In the test we’re setting an expectation on IUnitOfWork.Begin<>(). This is no longer possible, because we’re mocking IUnitOfWork, of which Begin<>() is no longer a part, and we cannot mock an Extension Method. We can set expectations only on the methods of IUnitOfWork itself (like Begin(), Commit() and RollBack()). The test must therefore change to this:

[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = MockRepository.GenerateMock<IUnitOfWork>();
    uow.Expect(u => u.Begin());
    uow.Expect(u => u.Commit());
    uow.Expect(u => u.Dispose());

    var uowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
    uowProvider.Expect(u => u.Invoke())
        .Return(uow);

    var sut = new MyClass(dependency, uowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uowProvider.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}

This new test structure is a lot simpler, more straightforward and less esoteric. I think it is easier to read. The expectation on IDependency.SomeMethod() is explicit. We can also now set clear expectations on IUnitOfWork’s methods, like Begin() and Commit() (and RollBack() if appropriate). The only drawback to this design, which I consider minor, is that the code for expectations on IUnitOfWork’s methods will be repeated in every test.

This design provides another major advantage: originally, IUnitOfWork.Begin() was called with a () => closure, which was changed for testability. However, there is no longer a need to set an expectation based on a Delegate in the unit test, so we can go back to using () => in the calling code.

And there’s more! Because we can use () =>, we no longer need all those Begin<>() Extension Methods! We need only the first one, Begin(Action). Our Extension Method class now looks like:

public static class UnitOfWorkHelpers
{
    public static void Begin(this IUnitOfWork uow, Action action)
    {
        uow.Begin();
        try
        {
            action.Invoke();
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }
}
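The calling code can then return to the closure style. As a minimal sketch of what a caller might look like (the MyClass and IDependency shapes here are inferred from the test code, not copied verbatim from the earlier posts):

```csharp
public class MyClass
{
    private readonly IDependency dependency;
    private readonly Func<IUnitOfWork> uowProvider;

    public MyClass(IDependency dependency, Func<IUnitOfWork> uowProvider)
    {
        this.dependency = dependency;
        this.uowProvider = uowProvider;
    }

    public void DoWork()
    {
        // The provider gives us a fresh unit of work; the Begin(Action)
        // extension wraps the call in Begin()/Commit(), rolling back on
        // any exception.
        using (var uow = uowProvider.Invoke())
        {
            uow.Begin(() => dependency.SomeMethod("hi"));
        }
    }
}
```

Note that the using block disposes the unit of work, which matches the u.Dispose() expectation in the test.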

I mentioned earlier that there was a minor drawback with the unit test: repeated code for the expectations on IUnitOfWork. This can be improved a lot by simply packaging the repeated code:

public class UnitOfWorkTestHelper
{
    public readonly Func<IUnitOfWork> UowProvider;
    public readonly IUnitOfWork Uow;

    private UnitOfWorkTestHelper()
    {
        Uow = MockRepository.GenerateMock<IUnitOfWork>();

        UowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
        UowProvider.Expect(u => u.Invoke())
            .Return(Uow);
    }

    public void VerifyAllExpectations()
    {
        Uow.VerifyAllExpectations();
        UowProvider.VerifyAllExpectations();
    }

    public static UnitOfWorkTestHelper GetCommitted()
    {
        var uow = new UnitOfWorkTestHelper();
        uow.Uow.Expect(u => u.Begin());
        uow.Uow.Expect(u => u.Commit());
        uow.Uow.Expect(u => u.Dispose());

        return uow;
    }

    public static UnitOfWorkTestHelper GetRolledBack()
    {
        var uow = new UnitOfWorkTestHelper();
        uow.Uow.Expect(u => u.Begin());
        uow.Uow.Expect(u => u.RollBack());
        uow.Uow.Expect(u => u.Dispose());

        return uow;
    }
}

Which is used simply in the test, like this:

[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = UnitOfWorkTestHelper.GetCommitted();
    //OR var uow = UnitOfWorkTestHelper.GetRolledBack();

    var sut = new MyClass(dependency, uow.UowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}

I’m quite happy with this design. It has evolved a fair amount since I started, and I’m getting more comfortable with it. I think it’s quite frictionless in use and testing, but offers enough flexibility at the same time.

I have only one (major) criticism, and that is that the way IDependency.SomeMethod() is called in the production code differs from the test – IUnitOfWork.Begin(() => dependency.SomeMethod("hi")) in the production code, versus dependency.SomeMethod("hi") and IUnitOfWork.Begin() in the test. There is a small dissonance, and this might cause confusion down the road for some developers.

Having said that though, I’m happy with the efficacy and leanness of the solution :).


Blog moving soon

You may have noticed that my two previous posts have looked a bit strange. I am in the process of moving blog platforms (again), and those two posts were actually posted automatically from the new platform. I’m moving for a few reasons, among them:

  • WordPress doesn’t present the best friction-free posting experience
  • WordPress doesn’t support Markdown
  • WordPress doesn’t support Github Gists.

I’ve set up a FeedBurner feed (sorry) to syndicate my blog. This feed still points here (at this blog). When I do cut over to the new platform, I’ll update the feed to point there. The upshot is that you’ll continue to get updates without having to change any subscription info, but I’d appreciate it if you updated your reader to subscribe to the new feed.

Many thanks!


Branch-per-version Workflow in TFS

This is cross-posted from here.

Hi, I am having trouble with the workflows involved in a branch-per-version (release) scenario. I have a Subversion background, so my understanding of how TFS works may be ‘biased’ towards Subversion.

Our policy involves branching-per-version and branching-per-feature. Typically, changes will be developed in a feature branch (whether new features, bug fixes etc). Once complete, these feature branches will be merged into trunk (mainline) and the branch deleted. For a deployment, the changes made in each branch will be merged onto the last deployed version. This ‘patched’ version will then be branched as the new version. This requires that the new version branch be created from the workspace, and not from the latest version in the repository. When we try to perform a branch from the workspace version, the opposite of what we expect occurs.

Let’s assume that I have a release branch version-1.0.0. A production bug is reported, and the bug is fixed in mainline as changeset 25. I now want to apply changeset 25 to version 1.0.0 to create version 1.0.1. So I open my workspace copy of the version-1.0.0 branch and perform a merge of mainline changeset 25 onto the branch. Now, the change is correctly applied to my workspace copy, and the relevant files have been changed and are checked out. Then, in the source control explorer, I branch the version-1.0.0 branch to a new branch called version-1.0.1, and I choose to branch it from the workspace. When I check this change in, what happens is that changeset 25 is applied to the version-1.0.0 branch, and NOT to the version-1.0.1 branch as expected. That is, version-1.0.1 looks like version-1.0.0 before the change, and version-1.0.0 contains the change.

The same happens if I create a label from the merged workspace copy.

Effectively what we’re trying to achieve is simulate a ‘tag’ in Subversion (which are just branches anyway with some extra semantics).

I’d like some guidance on how to apply ‘hotfixes’ to a version branch to create a new version branch, and not change the ‘original’ or baseline version.


TFS and SVN Documentation

Full disclosure: I have been working with Subversion for a number of years, and I’d consider myself a Subversion evangelist. I often have an aversion to how Microsoft approaches things.

I’ve recently been looking into Team Foundation Version Control, since we’re adopting it in my organisation. Some other teams in the organisation are already using it, but the team I am in is not. In the past, at previous positions, I have been responsible for synthesizing usage policies and best practices for lots of tools, including Subversion. I have also trained other team members (and sometimes superiors) in tool usage, covering everything from enforcing commit messages to understanding branching and merging. My current role includes these responsibilities (along with a lot more process-oriented guidance).

I am astounded at how little usage documentation for TFVC is available on the Internet (i.e. not in books, since I haven’t had the chance to read any). The Subversion book must rank as one of the best pieces of documentation for any open source product. It really does contain a wealth of well-written information, not only on how to administer Subversion (I’ve seen plenty of guides on installing TFS), but on how to use it – especially the ‘Daily usage guide’: things like how to set up working copies, committing, updating, resolving conflicts, branching and merging. The corresponding MSDN documentation is very, very thin in this regard.

Has anyone else had this issue?

Can you recommend some resources for me?