software development, Uncategorized

What is ‘Agile’?

TL;DR Agile is a way of operating in which the decisions we make are guided primarily by the four trade-offs expressed as value statements in the Agile Manifesto. Following this guidance is believed to lead to improved ways of developing software.


Recently I have again started to question what is meant by ‘Agile’ (I make no distinction between ‘Agile’ and ‘agile’). I have asked this question at a few conferences, Lean Coffees etc. This is my current interpretation of ‘Agile’, as informed by the Manifesto for Agile Software Development, specifically the first statement and the four value statements:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.

These statements provide guidelines for trade-off decisions we should be making. If you’re in a situation where you can choose between working on either working software or comprehensive documentation, rather choose working software, especially if you don’t yet have working software.

If you’re in a situation where you have a plan but have discovered that the plan no longer makes sense, you can choose between following the plan (even though you now know it no longer makes sense to do so) or responding to the change. The guidance from the Manifesto is to rather respond to the change. (I personally prefer adapt to learning over responding to change but that’s another story.)

The same thinking applies to the other two value statements: should you have to choose either contract negotiation or customer collaboration, rather choose the latter. If you need to choose between processes and tools and individuals and interactions, again rather choose the latter.

The original authors and signatories of the Manifesto believed, based on their collective experience, that if you follow these decision making guidelines, you will become better at developing software. I certainly don’t disagree with them.

Uncategorized

Finding my Tribes and Leveling Up

TL;DR: There are many channels available for leveling-up and for finding your tribe, some of them less ‘traditional’. These are some that I use:
– In-person gatherings (meetups, conferences etc)
– Twitter
– Instant messaging (Slack/HipChat/Jabbr/Gitter/IRC etc)
– Websites (MSDN/Stack Overflow etc)
– Podcasts


This post was prompted by several things: My interview for Source; a conversation I had with two developers I am coaching, who had been thrown into the deep-end on a project with new, unfamiliar technology, and very little support from their company; a conversation with an acquaintance about resources available for leveling up.

A number of years ago, I felt very lonely and misunderstood as a professional developer. I care deeply about my craft and self-development and self-improvement, but struggled to find people with a similar outlook and experience. Part of my frustration was not having anyone with whom to discuss things and soundboard ideas.

I’m glad to say that today my life is totally different. I belong to several tribes, both meatspace and virtual, and have access to a lot more people and resources, with lots of different experiences and points of view. In fact I could probably spend my days only in debate and discussion now, without doing any other work. Besides the communities and resources discussed below, I’m extremely fortunate to be working at nReality where I have amazing colleagues, as well as access to a broad range of people through my training and coaching work.

The resources I use the most these days to level up are
– Meatspace Events
– Twitter
– Slack

Meatspace events are great for many reasons, including: you learn a lot during the talks; you get to meet awesome, like-minded people and have stimulating conversations. There are a number of great in-person events. The best place to find them is meetup.com.

Of particular significance is DeveloperUG (Twitter) which has monthly events in Jo’burg and Pretoria/Centurion. I owe a massive debt to the founder and past organisers of DeveloperUG, Mark Pearl, Robert MacLean and Terence Kruger for creating such an amazing community.

I am involved in running or hosting these meatspace communities:
Jo’burg Lean Coffee
Jo’burg Domain Driven Design

These meatspace communities are also valuable:
Scrum User Group Jo’burg
Jozi.rb
Jozi JUG
Code & Coffee
Code Retreat SA
Jo’burg Software Testers

Conferences that I’ve attended and spoken at include
Agile Africa
Scrum Gathering
DevConf, and
Let’s Test SA

I haven’t yet had the opportunity to attend others, like JSinSA, RubyFuza and ScaleConf, but I know the same applies to them as well. Pro-tip: Getting accepted to speak or present a poster at a conference usually gets you in free; sometimes the conference also pays for your travel and accommodation costs.

As important as meatspace events and communities are, virtual communities provide access to people in other locations. My current way of connecting is finding people and conversations on Twitter, then using Slack to have deeper, ‘better’ conversations. I do have good conversations via Twitter, but it’s a bit clumsy for a few reasons, and Slack often works better for real conversations.

Twitter and Slack are great for connecting with people for a number of reasons:
– public & discoverable
– low ceremony
– no strings attached

This means that it’s very easy to start a conversation with anyone, and they’re more likely to respond since it’s easy to do (low ceremony) and they’re not making any kind of commitment (no strings attached).

I’ve been lucky enough to have conversations with some of my idols, like Kent Beck, Uncle Bob, Woody Zuill, Tim Ottinger etc, some on Twitter, some on Slack, some on both.

I belong to these open-to-the-public Slack communities:
ZATech – South Africans in Tech
Legacy Code Rocks – All things Legacy Code (technical debt, TDD, refactoring etc)
ddd-cqrs-es – Domain-Driven Design, CQRS, Event Sourcing
Software Craftsmanship – Software Craftsmanship
Coaching Circles – Coaching, especially Agile, Lean etc (invitation on request)
WeDoTDD – Test-Driven Development
Testing Community – Testing (I joined very recently)

What resources do you use to level up and connect to communities of interest? Let me know in the comments!

software development, Uncategorized

Testing CORS-enabled endpoints

TL;DR: We needed a simple tool to check whether an HTTP endpoint had CORS enabled. I created one. You can use it here for public endpoints or clone it for local endpoints – it’s open source. I’d love it if you helped me improve it!

A while ago, our team needed to verify that an endpoint (URL) on our website had Cross-Origin Resource Sharing (CORS) enabled. The endpoint was being consumed by a 3rd party, and we’d broken their site with one of our changes. Our team-internal processes had missed the regression, mainly because we didn’t have a tool we could use to check whether CORS was enabled for the endpoint. I looked for a simple tool that we could use, and didn’t find any, so I decided to create one.

The slight subtlety with CORS is that the request must be made by the client browser, i.e. in Javascript. I created a very small and simple HTML page, with some Javascript, that can be used to check whether CORS is enabled for an HTTP endpoint. You can use it live here: http://joshilewis.github.io/CORStest/. Note that if you use the live tool, the endpoint you’re checking must be publicly available. If you want to use it on your own machine, or within a private network, just clone the Git repository, and open index.html from your local file system.
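
As an aside on what the tool is looking for: CORS is ultimately just a set of response headers that the browser enforces, so a quick (if incomplete) way to see whether a server is emitting them at all is to inspect the response from any HTTP client. The C# sketch below does only that – the endpoint and origin are made up, and it doesn’t replace the in-browser check, since the browser is what actually enforces CORS:

using System;
using System.Net.Http;

class CorsHeaderCheck
{
    static void Main()
    {
        // Hypothetical endpoint and origin - substitute your own.
        const string endpoint = "https://example.com/api/resource";
        const string origin = "https://some-other-site.example";

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get, endpoint);
            request.Headers.Add("Origin", origin);

            var response = client.SendAsync(request).Result;

            // If this header is absent, a browser would block a cross-origin caller.
            var corsEnabled = response.Headers.Contains("Access-Control-Allow-Origin");
            Console.WriteLine(corsEnabled
                ? "Access-Control-Allow-Origin header present - CORS appears to be enabled"
                : "No Access-Control-Allow-Origin header - CORS appears not to be enabled");
        }
    }
}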

I hope you find it useful 🙂

If you’d like to help me improve it, take a look at the issues on Github.

Uncategorized

Maturing my unit of work implementation

My previous post was on testing my simple unit of work implementation (which was described in this post). At the conclusion of the post on testing, the unit of work code looked like this:

https://gist.github.com/1745709/81f15d8a0e473a757f74dd1c2fcc698e8fffea6a

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
}

The problem with this design is that when IUnitOfWork.Begin<>() is called by clients, the signature of the Delegate passed in differs from call to call. The example I’ve been using so far is the method void IDependency.SomeMethod(String). The next method may have more parameters, and a return type. Each new method signature therefore needs a corresponding IUnitOfWork.Begin<>(). After a little while, IUnitOfWork became:

https://gist.github.com/1745709/fc2742dbeed948172071b2125d0ee44a7ea94994

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();

    void Begin(Action action);
    void Begin<T>(Action<T> action, T t);
    void Begin<T1, T2>(Action<T1, T2> action, T1 t1, T2 t2);
    TResult Begin<T1, T2, TResult>(Func<T1, T2, TResult> func, T1 t1, T2 t2);
}

You can see where this is going. The implementation of each of these methods is almost identical to the implementation described in my previous post, the only difference being the invocation of the Action<> (or Func<> ).

Even though the implementation of these new methods is simple (but drudge work), I didn’t like this design because IUnitOfWork and UnitOfWork would continually be changing. Even though I thought this was a reasonable trade-off for testability, I wanted a better solution.

The first thing to do was to move the profusion of Begin<>()s out of UnitOfWork. I felt Extension Methods are a great option in this case. I would be left with a simple, unpolluted IUnitOfWork, clients that get a simple and elegant IUnitOfWork usage pattern, and all the similar Begin<>()s in a central place.

The clean IUnitOfWork and the Extension Methods are:

https://gist.github.com/1745709/5ece82f7c0a1884e4099bb448479cbec2f7e4e39

public interface IUnitOfWork : IDisposable
{
    void Begin();
    void Commit();
    void RollBack();
}

public static class UnitOfWorkHelpers
{
    public static void Begin<T>(this IUnitOfWork uow, Action<T> action, T t)
    {
        uow.Begin();
        try
        {
            action.Invoke(t);
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }

    public static void Begin<T1, T2>(this IUnitOfWork uow, Action<T1, T2> action, T1 t1, T2 t2)
    {
        //Repeated
        action.Invoke(t1, t2);
        //Repeated
    }

    public static TResult Begin<T1, T2, TResult>(this IUnitOfWork uow, Func<T1, T2, TResult> func, T1 t1, T2 t2)
    {
        TResult returnVal;
        //Repeated
        returnVal = func.Invoke(t1, t2);
        //Repeated
        return returnVal;
    }
}

This code looks great, but we’ve introduced a problem: the unit test fails. In the test we’re setting an expectation on IUnitOfWork.Begin<>(). This is no longer possible because we’re mocking IUnitOfWork, of which Begin<>() is no longer a part, and we cannot mock an Extension Method. We can set expectations on the methods of IUnitOfWork only (like Begin(), Commit() and Rollback()). The test must therefore change to this:

[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = MockRepository.GenerateMock<IUnitOfWork>();
    uow.Expect(u => u.Begin());
    uow.Expect(u => u.Commit());
    uow.Expect(u => u.Dispose());

    var uowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>();
    uowProvider.Expect(u => u.Invoke())
               .Return(uow);

    var sut = new MyClass(dependency, uowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uowProvider.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}

This new test structure is a lot simpler, more straightforward and less esoteric. I think it is easier to read. The expectation on IDependency.SomeMethod() is explicit. We can also now set clear expectations on IUnitOfWork’s methods, like Begin() and Commit() (and Rollback() if appropriate). The only drawback to this design, which I consider minor, is that the code for expectations on IUnitOfWork’s methods will be repeated in every test.

This design provides another major advantage: originally, IUnitOfWork.Begin() was called with a () => closure, which was changed for testability. However, there is no longer a need to set an expectation based on a Delegate in the unit test. So we can go back to using () => in the calling code.

And there’s more! Because we can use () =>, we no longer need all those Begin<>() Extension Methods! We need only the first one, Begin(Action). Our Extension Method class now looks like:

https://gist.github.com/1745709/314a3c31718a939ba6a89a1251cacb4d5e4f1804

public static class UnitOfWorkHelpers
{
    public static void Begin(this IUnitOfWork uow, Action action)
    {
        uow.Begin();
        try
        {
            action.Invoke();
            uow.Commit();
        }
        catch (Exception)
        {
            uow.RollBack();
            throw;
        }
    }
}
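
For completeness, the calling code then looks something like this (reconstructed from the earlier posts rather than copied verbatim, so treat it as a sketch):

public class MyClass
{
    private readonly IDependency dependency;
    private readonly Func<IUnitOfWork> uowProvider;

    public MyClass(IDependency dependency, Func<IUnitOfWork> uowProvider)
    {
        this.dependency = dependency;
        this.uowProvider = uowProvider;
    }

    public void DoWork()
    {
        using (var uow = uowProvider())
        {
            // Back to a plain closure - only the single Begin(Action) Extension Method is needed.
            uow.Begin(() => dependency.SomeMethod("hi"));
        }
    }
}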

I mentioned earlier that there was a minor drawback with the unit test: repeated code for the expectations on IUnitOfWork. This can be improved a lot by simply packaging the repeated code:

https://gist.github.com/1745709/98ed07964f1cae70bd0386b9e6b35fb7745220ac

public class UnitOfWorkTestHelper { public readonly Func<IUnitOfWork> UowProvider; public readonly IUnitOfWork Uow;  private UnitOfWorkTestHelper() { Uow = MockRepository.GenerateMock<IUnitOfWork>();  UowProvider = MockRepository.GenerateMock<Func<IUnitOfWork>>(); UowProvider.Expect(u => u.Invoke()) .Return(Uow); }  public void VerifyAllExpectations() { Uow.VerifyAllExpectations(); UowProvider.VerifyAllExpectations(); }  public static UnitOfWorkTestHelper GetCommitted() { var uow = new UnitOfWorkTestHelper(); uow.Uow.Expect(u => u.Begin()); uow.Uow.Expect(u => u.Commit()); uow.Uow.Expect(u => u.Dispose());  return uow; }  public static UnitOfWorkTestHelper GetRolledBack() { var uow = new UnitOfWorkTestHelper(); uow.Uow.Expect(u => u.Begin()); uow.Uow.Expect(u => u.RollBack()); uow.Uow.Expect(u => u.Dispose());  return uow; }  }

It is then used very simply in the test, like this:

[Test]
public void TestDoWork()
{
    var dependency = MockRepository.GenerateMock<IDependency>();
    dependency.Expect(d => d.SomeMethod("hi"));

    var uow = UnitOfWorkTestHelper.GetCommitted();
    //OR var uow = UnitOfWorkTestHelper.GetRolledBack();

    var sut = new MyClass(dependency, uow.UowProvider);
    sut.DoWork();

    dependency.VerifyAllExpectations();
    uow.VerifyAllExpectations();
}

I’m quite happy with this design. It has evolved a fair amount since I started, and I’m getting more comfortable with it. I think it’s quite frictionless in use and testing but offers enough flexibility at the same time.

I have only one (major) criticism, and that is that IDependency.SomeMethod() appears differently in the production code and in the test – IUnitOfWork.Begin(() => dependency.SomeMethod(“hi”)) in the production code, versus separate expectations on dependency.SomeMethod(“hi”) and IUnitOfWork’s methods in the test. There is a small dissonance, and this might cause confusion down the road with some developers.

Having said that though, I’m happy with the efficacy and leanness of the solution :).

Uncategorized

Blog moving soon

You may have noticed my two previous posts have looked a bit strange. I am in the process of moving blog platforms (again) and the previous 2 posts have actually been posted automatically from the new platform. I’m moving for a few reasons, among them:

  • WordPress doesn’t present the best friction-free posting experience
  • WordPress doesn’t support Markdown
  • WordPress doesn’t support Github Gists.

I’ve set up a FeedBurner feed to syndicate my blog. This feed still points here (this WordPress.com blog). When I do cut over to the new platform I’ll update the feed to point there. The upshot is you’ll continue to get updates without having to change any subscription info, but I’d appreciate it if you updated your reader to subscribe to the new feed.

Many thanks!

Uncategorized

Branch-per-version Workflow in TFS

This is cross-posted from here.

Hi, I am having trouble with the workflows involved in a branch-per-version (release) scenario. I have a Subversion background, so my understanding of how TFS works may be ‘biased’ towards Subversion.

Our policy involves branching-per-version and branching-per-feature. Typically, changes will be developed in a feature branch (whether new features, bugs etc). Once complete, these feature branches will be merged into trunk (mainline) and the branch deleted. For a deployment, the changes made in each branch will be merged onto the last deployed version. This ‘patched’ version will then be branched as the new version. This requires that the new version branch be created from the workspace, and not from the latest version in the repository. When we try to perform a branch from the workspace version, the opposite of what we expect occurs.

Let’s assume that I have a release branch version-1.0.0. A production bug is reported, and the bug is fixed in mainline as changeset 25. I now want to apply changeset 25 to version 1.0.0 to create version 1.0.1. So I open my workspace copy of the version-1.0.0 branch and perform a merge of mainline changeset 25 onto the branch. Now, the change is correctly applied to my workspace copy, and the relevant files have been changed and are checked out. Then, in the source control explorer, I branch the version-1.0.0 branch to a new branch called version-1.0.1, and I choose to branch it from the workspace. When I check this change in, what happens is that changeset 25 is applied to the version-1.0.0 branch, and NOT to the version-1.0.1 branch as expected. That is, version-1.0.1 looks like version-1.0.0 before the change, and version-1.0.0 contains the change.
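
For clarity, the steps above expressed as tf.exe commands, to the best of my understanding of the syntax (the Source Control Explorer steps are what I actually performed; the server paths are made up):

rem Merge changeset 25 from mainline onto the version-1.0.0 branch (workspace only at this point)
tf merge /version:C25 /recursive $/MyProject/Main $/MyProject/version-1.0.0

rem Branch version-1.0.1 from the *workspace* version of version-1.0.0
tf branch $/MyProject/version-1.0.0 $/MyProject/version-1.0.1 /version:W

rem Check in both the merge result and the new branch
tf checkin /recursive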

The same happens if I create a label from the merged workspace copy.

Effectively what we’re trying to achieve is to simulate a ‘tag’ in Subversion (tags are just branches anyway, with some extra semantics).

I’d like some guidance on how to apply ‘hotfixes’ to a version branch to create a new version branch, and not change the ‘original’ or baseline version.

Uncategorized

TFS and SVN Documentation

Full disclosure: I have been working with Subversion for a number of years. I’d consider myself a Subversion evangelist. I often have an aversion to how Microsoft approach things.

I’ve recently been looking into Team Foundation Version Control, since we’re adopting it in my organisation. Some other teams in the organisation are already using it, but the team I am in is not. In the past, at previous positions, I have been responsible for synthesizing usage policies and best practices for lots of tools, including Subversion. I have also trained other team members (and sometimes superiors) in tool usage. This covered everything from enforcing commit messages to understanding branching and merging. My current role includes these responsibilities (including a lot more process-oriented guidance).

I am astounded at how little usage documentation for TFVC is available on the Internet (i.e. not in books, since I haven’t had the chance to read any). The Subversion book must rank as one of the best pieces of documentation for an open source product around. It really does contain a wealth of well-written information on not only how to administer Subversion (I’ve seen plenty of guides on installing TFS), but how to use it – especially the ‘Daily usage guide’: things like how to set up working copies, committing, updating, resolving conflicts, branching and merging. The corresponding MSDN documentation is very thin in this regard.

Has anyone else had this issue?

Can you recommend some resources for me?

Uncategorized

Applying TDD in principle if not in practice

My previous post was about describing the essence of TDD as a specification and direction tool, using non-technical examples. A quick recap: the point of TDD is to know where you’re going, and how to know when you’ve achieved your goal.

Several people have told me that TDD is all well and good, but it doesn’t apply to them, for various reasons. One chap told me he can’t do TDD because he’s a ‘SQL developer’ (his words). A friend told me he works on a vendor system, which is configured with XSLT files, and there is no tooling to support TDD. It’s all very well to use whatever flavour of xUnit you want, but what about those who don’t have frameworks? What can the SQL, ASP and XSLT programmers and configurers out there do?

This is why it is important to understand the principle of TDD, and not only its practice (and why I think most demonstrations of TDD that I’ve seen focus too much on the practice).

As long as you have some way to set up a desired outcome, and can compare your actual outcome to your desired one, you can perform TDD. You may not get some side benefits like automated regression testing and test coverage, but as mentioned in my previous post, those aren’t primary benefits. You also might not get the benefits of a decoupled design, since this may not make sense in your application.

You can still convey your intent. You will still have direction and focus. You will still be able to answer the question ‘am I done?’

A nice example of not getting the primary benefit of a decoupled design is the case of TDD in the SQL context. The team in which I’m currently working is still strongly focused on stored procedure and dataset-based programming. Therefore I’ve done a fair amount of thinking and playing in the TDD for SQL space. I’ve come up with a framework based on nUnit and TSQLUnit, but that’s another point.

I’ve used this framework to do TDD for stored procedures. One major shortcoming of T-SQL as a language (I feel dirty now) is that there’s no concept of abstraction. There is no way to break dependencies. If stored procedure 1 (SP1) depends on user-defined function a (UDFa), there is no way I can test SP1 in isolation from UDFa. I therefore don’t get the benefit of a decoupled design, but I can still express the intent of my stored procedure, and I can prove when I’ve satisfied the requirement.
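
As a rough illustration of the shape this takes, here is a sketch of a test for a hypothetical stored procedure, using NUnit and plain ADO.NET rather than my actual framework. The procedure name, column and connection string are all invented for the example, and setting up known test data (normally done inside a transaction that is rolled back afterwards) is elided:

using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class GetActiveCustomersTests
{
    private const string ConnectionString = "Server=.;Database=MyDb;Trusted_Connection=True;";

    [Test]
    public void ReturnsOnlyActiveCustomers()
    {
        // Arrange: assumes known test data is already in place (elided here).
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();

            // Act: execute the stored procedure under test.
            using (var command = new SqlCommand("dbo.GetActiveCustomers", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                using (var reader = command.ExecuteReader())
                {
                    // Assert: compare the actual outcome to the expected outcome.
                    while (reader.Read())
                    {
                        Assert.That((bool)reader["IsActive"], Is.True);
                    }
                }
            }
        }
    }
}

The intent of the procedure is stated up front and the test answers ‘am I done?’, even though the procedure and the functions it depends on can’t be isolated from each other.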

This is a clear case of the tooling failing the technique, but the technique can still be applied in principle. The same should be applicable in many other instances where there isn’t tooling for the product (e.g. a vendor’s product which is configured rather than coded).

Again, as long as you can create an expectation, and compare your actual outcome to that expectation, you can apply TDD in principle. So if you’ve previously had a look at TDD and wanted to apply it, but felt a lack of tooling for your particular case, think about how you can apply it in principle.

Uncategorized

Test-Driven Development – Part 1

I’ve found Test-Driven Development (TDD) to be pretty much revolutionary in my own software development efforts. Yet I find it is often very poorly explained, and explanations focus too much on the technical aspects of the technique, instead of the ‘philosophy’ or principle behind it. I place a lot more value on principles and philosophies than on practices, since often practices can’t be implemented exactly as described, whereas there can still be a takeaway from the philosophy.

In my opinion, TDD is unfortunately named, since it confuses people. The intent of TDD really has nothing to do with testing. Testing is merely the mechanism through which one gains the primary benefits.

My primary TDD mantra is ‘TDD’s not about testing, it’s about specification’. In my interpretation, TDD has 2 primary benefits:

  1. It forces you to specify the problem you’re trying to solve very narrowly, and very explicitly.
  2. It promotes a loosely-coupled, modular design, since testable designs are generally loosely-coupled by nature.

Most people will state that the aim of TDD is a large proportion of test coverage and automated regression tests. This is not true. Both of these goals are readily achievable without doing test-first TDD; i.e. they can be easily achieved by writing tests after writing code. Test coverage and automated regression tests are very nice by-products of TDD (when it’s done correctly), but to my mind they are certainly not the primary benefit nor aim of TDD.

I’d like to explain the first of these primary benefits of TDD with a non-software related, everyday (somewhat contrived) example. (I’ll need to think of another contrived example to explain the second.) The reason for a non-software related example is I want you to be able to explain the concept to a non-technical person, like your mother or your CEO (or maybe even your team lead 😛 ).

Imagine that I get a crazy idea in my head that I want to become more ‘eco friendly’ in my day-to-day life, and contribute less to global warming. My statement of intent, or vision, may be something like ‘I want to make a personal contribution to the lessening of global warming.’ This statement makes one warm and fuzzy, but on its own, it’s quite difficult to implement. I call this my strategy.

Good management textbooks and courses will tell you one has to implement strategy through strategic objectives. So in order to implement my strategy of being more eco friendly, I come up with the ‘strategic objective’ of reducing my daily carbon emissions.

After some thought about this objective being a little ‘hand-wavy’ and hard to measure, I can refine it by phrasing it as ‘I will decrease my daily commute petrol usage by 10% by the end of this month’. The objective now is measurable, and has a concrete time frame. I have a concrete goal towards which I can work, and measure daily progress (my car has a trip computer which indicates current and average fuel consumption). I can easily see whether I have met (or exceeded) this goal next time I fill up my car.

I have progressed from the warm-and-fuzzy but somewhat hand-wavy statement of ‘I want to do my bit to decrease global warming’ to something tangible towards which I can work. I know when I have achieved my goal, without a shadow of a doubt, and I can prove it. I can now brag to my friends about how wonderful I am, and maybe I can even get a tax break or something.

Now that the example is concluded, I’d like to point out some things about strategic objectives that I learned at university. Strategic objectives (SOs) must be phrased so that they are measurable, and have a time frame. An SO says only what must be achieved, but not how. The fact that it’s measurable means that you know when it’s done. You don’t know how to do it (that comes later, at the tactical and operational levels), but you know when it’s achieved.

If an SO is not measurable, it is considered bad and must be rephrased and rethought. This is akin to designing the SO and, implicitly, the fulfilment of the SO.

So, an SO is a statement of intent; it’s a description of a requirement. In x months’ time, we must have sold y units. The fact that it is measurable means you know when you’re done, but you can also manage it, analyse it, report on it etc. If it’s not measurable, rephrase it.

If an SO is not a strong enough statement of intent, will those members at the tactical and operational levels – who don’t operate at all at the strategic level – be able to interpret it and implement it?

If an SO is not measurable, how can you prove to your board that the strategy has been fulfilled?

The testing aspect of this example is simply asking oneself: ‘it’s now x months down the line; have I sold y units?’ It’s a yes or no answer. It’s comparing the actual outcome to the desired outcome.

This example is not test oriented at all. We don’t create strategic objectives in order to run a somewhat trivial test at the end of the prescribed period. The SO exists as a direction, as a statement of intent, as a specification of where the business needs to be at some point. Sure, the measurement aspect is a big part of the SO, but we don’t write SOs for the sake of measuring them.

TDD is no different. The point of TDD is not to write tests. TDD uses the test as a mechanism for feedback. Are we done yet? You can know if you’re done only if you know where you’re supposed to be. I don’t know if I’ve made a meaningful contribution to decreasing my emissions until I’m at the petrol pump. A company doesn’t know if it has fulfilled its strategy until it can prove it sold y units by a certain date.

TDD forces you to make that end-goal explicit, and to start with that end-goal. Businesses don’t write down strategic sales targets after they’ve made the sales. They start with the target, then put measures in place to achieve that target. TDD in software development is the same.

Start with your end-goal, stated in the form of tests, usually unit tests. You then know where you’re going. You can then start to decide how you’re going to get there.
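
To tie this back to software, here is a sketch of what ‘starting with the end-goal’ looks like as a unit test. The FuelLog class and its members are invented purely to mirror the petrol example above – they don’t exist yet when the test is written, which is exactly the point:

using NUnit.Framework;

[TestFixture]
public class FuelLogTests
{
    [Test]
    public void ReportsAtLeastTenPercentReductionInConsumption()
    {
        // The end-goal, stated up front: consumption must drop by at least 10%.
        var log = new FuelLog();
        log.RecordBaselineConsumption(litresPer100Km: 10.0m);
        log.RecordCurrentConsumption(litresPer100Km: 8.9m);

        Assert.That(log.ReductionPercentage, Is.GreaterThanOrEqualTo(10.0m));
    }
}

The test fails until FuelLog exists and behaves as specified; making it pass – the ‘how’ – comes afterwards.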

To use another example, developing without TDD is like driving around with a sat-nav device, ending up somewhere, and post-haste setting that point as your destination.

Unfortunately, the examples I’ve discussed here don’t really explain the second primary goal of TDD, a loosely-coupled design. I will try to think of an example for this benefit and blog it.

Some more notes:

1: As mentioned, businesses start with strategy and strategic objectives, and implement through tactical and operational implementations. They don’t run for a couple of months then retrofit their strategy to where they land up. One can state that ‘it’s more important to know where you’re going, than how to get there’.

One might also be able to state that ‘it’s more important to know where you’re going, than actually getting there’. This statement arises from the following: according to agile principles, one delivers the highest value tasks first. Since TDD tests are written before the code to satisfy them, one might argue that the test is more important than the code written to pass the test.

This can be reinforced by considering software product maintenance. The majority of this effort is spent in understanding the code written by other developers. TDD tests can be a shortcut to understanding the intent of the code.

Also, as businesses change, an application’s code is likely to change too (as should the TDD tests). Essentially, the intent of the code (conveyed by the test) is likely to outlive the code itself.

2: Behaviour-Driven Development (BDD) is an evolution of TDD, which basically has the notion of ‘executable specifications’ as its holy grail. Instead of developers having to interpret the end-goal presented to them in some type of specification and translating that into code, the customer of the software (business) should be able to codify their specifications in a language as close as possible to their own natural language, which can then be executed as code.

3: Good TDD tests should be statements of intent. This is often what is meant as ‘tests documenting the code’.

Uncategorized

Hello World

Hello all. This is the second blog I’ve started. My first blog is more of a personal blog, and can be found at http://joshi.blog.com. I’ve started this blog as a technical blog (and also because the other one is pretty much defunct at this stage).

I plan to use this space to air my random thoughts on software engineering in the hope that I can assist other people, especially those searching for answers to some of the problems I’ve dealt with and will deal with. Hopefully I’ll find the experience as rewarding as the technical bloggers I read, but I guess that’s primarily up to me. I hope you enjoy it.