
Finding my Tribes and Leveling Up

TL;DR: There are many channels available for leveling up and for finding your tribe, some of them less ‘traditional’. These are some that I use:
– In-person gatherings (meetups, conferences etc)
– Twitter
– Instant messaging (Slack/HipChat/Jabbr/Gitter/IRC etc)
– Websites (MSDN/Stack Overflow etc)
– Podcasts


This post was prompted by several things: my interview for Source; a conversation with two developers I am coaching, who had been thrown into the deep end on a project with new, unfamiliar technology and very little support from their company; and a conversation with an acquaintance about resources available for leveling up.

A number of years ago, I felt very lonely and misunderstood as a professional developer. I care deeply about my craft, self-development and self-improvement, but struggled to find people with a similar outlook and experience. Part of my frustration was not having anyone with whom to discuss things and bounce ideas around.

I’m glad to say that today my life is totally different. I belong to several tribes, both in meatspace and virtual, and have access to many more people and resources, with lots of different experiences and points of view. In fact, I could probably spend my days in debate and discussion alone, without doing any other work. Besides the communities and resources discussed below, I’m extremely fortunate to be working at nReality, where I have amazing colleagues, as well as access to a broad range of people through my training and coaching work.

The resources I use the most these days to level up are:
– Meatspace Events
– Twitter
– Slack

Meatspace events are great for many reasons: you learn a lot during the talks, and you get to meet awesome, like-minded people and have stimulating conversations. There are a number of great in-person events, and the best place to find them is meetup.com.

Of particular significance is DeveloperUG (Twitter) which has monthly events in Jo’burg and Pretoria/Centurion. I owe a massive debt to the founder and past organisers of DeveloperUG, Mark Pearl, Robert MacLean and Terence Kruger for creating such an amazing community.

I am involved in running or hosting these meatspace communities:
Jo’burg Lean Coffee
Jo’burg Domain Driven Design

These meatspace communities are also valuable:
Scrum User Group Jo’burg
Jozi.rb
Jozi JUG
Code & Coffee
Code Retreat SA
Jo’burg Software Testers

Conferences that I’ve attended and spoken at include:
Agile Africa
Scrum Gathering
DevConf
Let’s Test SA

I haven’t yet had the opportunity to attend others, like JSinSA, RubyFuza and ScaleConf, but I know the same applies to them as well. Pro-tip: getting accepted to speak or present a poster at a conference usually gets you in free; sometimes the conference also pays for your travel and accommodation costs.

As important as meatspace events and communities are, virtual communities provide access to people in other locations. My current way of connecting is finding people and conversations on Twitter, then using Slack to have deeper, ‘better’ conversations. I do have good conversations via Twitter, but it’s a bit clumsy for a few reasons, and Slack often works better for real conversations.

Twitter and Slack are great for connecting with people for a number of reasons:
– public & discoverable
– low ceremony
– no strings attached

This means that it’s very easy to start a conversation with anyone, and they’re more likely to respond since it’s easy to do (low ceremony) and they’re not making any kind of commitment (no strings attached).

I’ve been lucky enough to have conversations with some of my idols, like Kent Beck, Uncle Bob, Woody Zuill and Tim Ottinger; some on Twitter, some on Slack, some on both.

I belong to these open-to-the-public Slack communities:
– ZADevelopers – South African Developers (invitation on request)
– Legacy Code Rocks – All things Legacy Code (technical debt, TDD, refactoring etc)
– ddd-cqrs-es – Domain-Driven Design, CQRS, Event Sourcing
– Software Craftsmanship – Software Craftsmanship
– Coaching Circles – Coaching, especially Agile, Lean etc (invitation on request)
– WeDoTDD – Test-Driven Development
– Testing Community – Testing (I joined very recently)

What resources do you use to level up and connect to communities of interest? Let me know in the comments!


Testing CORS-enabled endpoints

TL;DR: We needed a simple tool to check whether an HTTP endpoint had CORS enabled. I created one. You can use it here for public endpoints or clone it for local endpoints – it’s open source. I’d love it if you helped me improve it!

A while ago, our team needed to verify that an endpoint (URL) on our website had Cross-Origin Resource Sharing (CORS) enabled. The endpoint was being consumed by a third party, and we’d broken their site with one of our changes. Our team-internal processes had missed the regression, mainly because we didn’t have a tool we could use to check whether CORS was enabled for the endpoint. I looked for a simple tool that we could use and didn’t find any, so I decided to create one.

The slight subtlety with CORS is that the request must be made by the client browser, i.e. in JavaScript. I created a very small and simple HTML page, with some JavaScript, that can be used to check whether CORS is enabled for an HTTP endpoint. You can use it live here: http://joshilewis.github.io/CORStest/. Note that if you use the live tool, the endpoint you’re checking must be publicly available. If you want to use it on your own machine, or within a private network, just clone the Git repository and open index.html from your local file system.
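To give a flavour of what such a check involves, here is a minimal sketch of the idea (this is not the actual CORStest code, and the function names are my own invention): a response permits cross-origin use if its Access-Control-Allow-Origin header is a wildcard or matches the requesting origin. In a browser the Origin request header is set automatically and enforcement is done for you; in Node 18+ nothing is enforced, so we inspect the header ourselves.

```javascript
// Does an Access-Control-Allow-Origin header value permit requests
// from `origin`? (Sketch only; function names are invented.)
function corsAllows(allowOriginHeader, origin) {
  if (!allowOriginHeader) return false;        // header absent: CORS not enabled
  if (allowOriginHeader === '*') return true;  // wildcard: any origin may read the response
  return allowOriginHeader === origin;         // otherwise an exact match is required
}

// Node 18+ sketch: fetch the endpoint and inspect the response header.
// (In a browser, a cross-origin fetch would simply fail if CORS is off.)
async function checkCors(url, origin) {
  const response = await fetch(url, { headers: { Origin: origin } });
  return corsAllows(response.headers.get('access-control-allow-origin'), origin);
}
```

This sketch ignores details the full spec cares about, such as preflight (OPTIONS) requests, allowed methods and headers, and credentials, which is exactly why a browser-based tool is the more faithful check.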

I hope you find it useful 🙂

If you’d like to help me improve it, take a look at the issues on GitHub.


Maturing my unit of work implementation


My previous post was on testing my simple unit of work implementation (which was described in this post). At the conclusion of the post on testing, the unit of work code looked like this:

https://gist.github.com/1745709/81f15d8a0e473a757f74dd1c2fcc698e8fffea6a

```csharp
public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
}
```

The problem with this design is that clients calling IUnitOfWork.Begin<>() pass delegates with different signatures. The example I’ve been using so far is the method void IDependency.SomeMethod(String). The next method may have more parameters, or a return type. Each new method signature therefore needs a corresponding IUnitOfWork.Begin<>() overload. After a little while, IUnitOfWork became:

https://gist.github.com/1745709/fc2742dbeed948172071b2125d0ee44a7ea94994

```csharp
public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
    void Begin<T1, T2>(Action<T1, T2> action, T1 t1, T2 t2);
    TResult Begin<T1, T2, TResult>(Func<T1, T2, TResult> func, T1 t1, T2 t2);
}
```

You can see where this is going. The implementation of each of these methods is almost identical to the implementation described in my previous post, the only difference being the invocation of the Action<> (or Func<>).

Even though the implementation of these new methods is simple (if drudge work), I didn’t like this design because IUnitOfWork and UnitOfWork would be continually changing. Although I initially thought this a reasonable trade-off for testability, I wanted a better solution.

The first thing to do was to move the profusion of Begin<>()s out of UnitOfWork. Extension Methods felt like a great option in this case: I would be left with a simple, unpolluted IUnitOfWork, clients would keep a simple and elegant IUnitOfWork usage pattern, and all the similar Begin<>()s would live in a central place.

The clean IUnitOfWork and the Extension Methods are:

https://gist.github.com/1745709/5ece82f7c0a1884e4099bb448479cbec2f7e4e39

```csharp
public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();
}

public static class UnitOfWorkHelpers
{
    public static void Begin<T>(this IUnitOfWork uow, Action<T> action, T t)
    {
        uow.Begin();
        try
        {
            action.Invoke(t);
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }

    public static void Begin<T1, T2>(this IUnitOfWork uow, Action<T1, T2> action, T1 t1, T2 t2)
    {
        //Repeated
        action.Invoke(t1, t2);
        //Repeated
    }

    public static TResult Begin<T1, T2, TResult>(this IUnitOfWork uow, Func<T1, T2, TResult> func, T1 t1, T2 t2)
    {
        TResult returnVal;
        //Repeated
        returnVal = func.Invoke(t1, t2);
        //Repeated
        return returnVal;
    }
}
```

This code looks great, but we’ve introduced a problem: the unit test fails. In the test we’re setting an expectation on IUnitOfWork.Begin<>(). This is no longer possible because we’re mocking IUnitOfWork, of which Begin<>() is no longer part, and we cannot mock an Extension Method. We can set expectations only on the methods of IUnitOfWork itself (like Begin(), Commit() and RollBack()). The test must therefore change to this:

```csharp
[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = MockRepository.GenerateMock<IUnitOfWork>();
    uow.Expect(u => u.Begin());
    uow.Expect(u => u.Commit());
    uow.Expect(u => u.Dispose());

    var uowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
    uowProvider.Expect(u => u.Invoke())
               .Return(uow);

    var sut = new MyClass(dependency, uowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uowProvider.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}
```

This new test structure is a lot simpler, more straightforward and less esoteric. I think it is easier to read. The expectation on IDependency.SomeMethod() is explicit. We can also now set clear expectations on IUnitOfWork’s methods, like Begin() and Commit() (and RollBack() if appropriate). The only drawback to this design, which I consider minor, is that the code for expectations on IUnitOfWork’s methods will be repeated in every test.

This design provides another major advantage: originally, IUnitOfWork.Begin() was called with a () => closure, which was changed for testability. However, there is no longer any need to set an expectation based on a Delegate in the unit test, so we can go back to using () => in the calling code.

And there’s more! Because we can use () =>, we no longer need all those Begin<>() Extension Methods! We need only the first one, Begin(Action). Our Extension Method class now looks like:

https://gist.github.com/1745709/314a3c31718a939ba6a89a1251cacb4d5e4f1804

```csharp
public static class UnitOfWorkHelpers
{
    public static void Begin(this IUnitOfWork uow, Action action)
    {
        uow.Begin();
        try
        {
            action.Invoke();
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }
}
```

I mentioned earlier that there was a minor drawback with the unit test: repeated code for the expectations on IUnitOfWork. This can be improved a lot by simply packaging the repeated code:

https://gist.github.com/1745709/98ed07964f1cae70bd0386b9e6b35fb7745220ac

```csharp
public class UnitOfWorkTestHelper
{
    public readonly Func<IUnitOfWork> UowProvider;
    public readonly IUnitOfWork Uow;

    private UnitOfWorkTestHelper()
    {
        Uow = MockRepository.GenerateMock<IUnitOfWork>();

        UowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
        UowProvider.Expect(u => u.Invoke())
                   .Return(Uow);
    }

    public void VerifyAllExpectations()
    {
        Uow.VerifyAllExpectations();
        UowProvider.VerifyAllExpectations();
    }

    public static UnitOfWorkTestHelper GetCommitted()
    {
        var uow = new UnitOfWorkTestHelper();
        uow.Uow.Expect(u => u.Begin());
        uow.Uow.Expect(u => u.Commit());
        uow.Uow.Expect(u => u.Dispose());

        return uow;
    }

    public static UnitOfWorkTestHelper GetRolledBack()
    {
        var uow = new UnitOfWorkTestHelper();
        uow.Uow.Expect(u => u.Begin());
        uow.Uow.Expect(u => u.RollBack());
        uow.Uow.Expect(u => u.Dispose());

        return uow;
    }
}
```

Which is used simply in the test, like this:

```csharp
[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = UnitOfWorkTestHelper.GetCommitted();
    //OR var uow = UnitOfWorkTestHelper.GetRolledBack();

    var sut = new MyClass(dependency, uow.UowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}
```

I’m quite happy with this design. It has evolved a fair amount since I started, and I’m getting more comfortable with it. I think it’s quite frictionless in use and in testing, but offers enough flexibility at the same time.

I have only one (major) criticism, and that is that the way IDependency.SomeMethod() is called in the production code differs from the test – IUnitOfWork.Begin(() => dependency.SomeMethod(“hi”)) in the production code, versus dependency.SomeMethod(“hi”) and IUnitOfWork.Begin() in the test. There is a small dissonance, which might cause confusion down the road for some developers.

Having said that though, I’m happy with the efficacy and leanness of the solution :).


Blog moving soon

You may have noticed that my two previous posts looked a bit strange. I am in the process of moving blog platforms (again), and those two posts were actually posted automatically from the new platform. I’m moving for a few reasons, among them:

  • WordPress doesn’t present the best friction-free posting experience
  • WordPress doesn’t support Markdown
  • WordPress doesn’t support Github Gists.

I’ve set up a Feedburner feed to syndicate my blog. This feed still points here (this WordPress.com blog). When I do cut over to the new platform I’ll update the feed to point there. The upshot is that you’ll continue to get updates without having to change any subscription info. I’d appreciate it if you updated your reader to subscribe to the new feed.

Many thanks!


Branch-per-version Workflow in TFS

This is cross-posted from here.

Hi, I am having trouble with the workflows involved in a branch-per-version (release) scenario. I have a Subversion background, so my understanding of how TFS works may be ‘biased’ towards Subversion.

Our policy involves branching per version and branching per feature. Typically, changes are developed in a feature branch (whether new features, bug fixes etc). Once complete, these feature branches are merged into trunk (mainline) and the branch deleted. For a deployment, the changes made in each branch are merged onto the last deployed version. This ‘patched’ version is then branched as the new version. This requires that the new version branch be created from the workspace, and not from the latest version in the repository. When we try to perform a branch from the workspace version, the opposite of what we expect occurs.

Let’s assume that I have a release branch version-1.0.0. A production bug is reported, and the bug is fixed in mainline as changeset 25. I now want to apply changeset 25 to version 1.0.0 to create version 1.0.1. So I open my workspace copy of the version-1.0.0 branch and perform a merge of mainline changeset 25 onto the branch. Now, the change is correctly applied to my workspace copy, and the relevant files have been changed and are checked out. Then, in the source control explorer, I branch the version-1.0.0 branch to a new branch called version-1.0.1, and I choose to branch it from the workspace. When I check this change in, what happens is that changeset 25 is applied to the version-1.0.0 branch, and NOT to the version-1.0.1 branch as expected. That is, version-1.0.1 looks like version-1.0.0 before the change, and version-1.0.0 contains the change.

The same happens if I create a label from the merged workspace copy.

Effectively what we’re trying to achieve is to simulate a ‘tag’ in Subversion (which is just a branch with some extra semantics).

I’d like some guidance on how to apply ‘hotfixes’ to a version branch to create a new version branch, and not change the ‘original’ or baseline version.


TFS and SVN Documentation

Full disclosure: I have been working with Subversion for a number of years, and I’d consider myself a Subversion evangelist. I often have an aversion to how Microsoft approaches things.

I’ve recently been looking into Team Foundation Version Control, since we’re adopting it in my organisation. Some other teams in the organisation are already using it, but the team I am in is not. In previous positions, I was responsible for synthesizing usage policies and best practices for many tools, including Subversion. I also trained other team members (and sometimes superiors) in tool usage – everything from enforcing commit messages to understanding branching and merging. My current role includes these responsibilities (along with a lot more process-oriented guidance).

I am astounded at how little usage documentation for TFVC is available on the Internet (i.e. not in books, since I haven’t had the chance to read any). The Subversion book must rank as one of the best pieces of documentation for an open source product around. It contains a wealth of well-written information on not only how to administer Subversion (I’ve seen plenty of guides on installing TFS), but how to use it – especially the ‘Daily usage guide’: things like how to set up working copies, committing, updating, resolving conflicts, branching and merging. The corresponding MSDN documentation is very thin in this regard.

Has anyone else had this issue?

Can you recommend some resources for me?


Applying TDD in principle if not in practice

My previous post described the essence of TDD as a specification and direction tool, using non-technical examples. A quick recap: the point of TDD is to know where you’re going, and how to know when you’ve achieved your goal.

Several people have told me that TDD is all well and good, but it doesn’t apply to them, for various reasons. One chap told me he can’t do TDD because he’s a ‘SQL developer’ (his words). A friend told me he works on a vendor system which is configured with XSLT files, and there is no tooling to support TDD. It’s all very well to use whatever flavour of xUnit you want, but what about those who don’t have frameworks? What can the SQL, ASP and XSLT programmers and configurers out there do?

This is why it is important to understand the principle of TDD, and not only its practice (and why I think most demonstrations of TDD that I’ve seen focus too much on the practice).

As long as you have some way to set up a desired outcome, and can compare your actual outcome to it, you can perform TDD. You may not get some side benefits like automated regression testing and test coverage, but as mentioned in my previous post, those aren’t the primary benefits. You also might not get the benefit of a decoupled design, since this may not make sense in your application.

You can still convey your intent. You will still have direction and focus. You will still be able to answer the question ‘am I done?’
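The bare-bones loop can be sketched without any test framework at all. The example below is mine, not from the post’s SQL framework, and the function and expected value are invented for illustration – the point is only that an expectation is written down first, and the actual outcome is compared to it afterwards.

```javascript
// "TDD in principle" with no framework: set up the desired outcome,
// do the work, compare actual to desired. (Example invented for illustration.)

// 1. Express the intent first: what should the outcome be?
const expected = 'Lewis, Joshi';

// 2. The "production" code under test, written after the expectation.
function formatName(first, last) {
  return `${last}, ${first}`;
}

// 3. Compare actual to expected - this answers "am I done?"
const actual = formatName('Joshi', 'Lewis');
console.log(actual === expected ? 'DONE' : `NOT DONE: got "${actual}"`);
```

Nothing here depends on xUnit, or even on the language having a test runner; the same three steps can be followed with a stored procedure and a comparison query, or an XSLT transform and a diff of its output.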

A nice example of forgoing the benefit of a decoupled design is TDD in the SQL context. The team in which I’m currently working is still strongly focused on stored procedure and dataset-based programming, so I’ve done a fair amount of thinking and playing in the TDD-for-SQL space. I’ve come up with a framework based on NUnit and TSQLUnit, but that’s a topic for another post.

I’ve used this framework to do TDD for stored procedures. One major shortcoming of T-SQL as a language (I feel dirty now) is that there’s no concept of abstraction; there is no way to break dependencies. If stored procedure 1 (SP1) depends on user-defined function A (UDFa), there is no way to test SP1 in isolation from UDFa. I therefore don’t get the benefit of a decoupled design, but I can still express the intent of my stored procedure, and I can prove when I’ve satisfied the requirement.

This is a clear case of the tooling failing the technique, but the technique can still be applied in principle. The same should be applicable in many other instances where there isn’t tooling for the product (e.g. a vendor’s product that is configured rather than coded).

Again, as long as you can create an expectation and compare your actual outcome to that expectation, you can apply TDD in principle. So if you’ve previously looked at TDD and wanted to apply it, but were put off by a lack of tooling for your particular case, think about how you can apply it in principle.