TDD Painkillers slides from DDD Dublin 2010

Attached are my slides on TDD Painkillers from DDD Dublin. Thank you to everyone who came to the session, I hope you gained some value and tips you can apply on Monday.

One thing I did mention during the presentation was with regard to pairing. I know some people want to learn, but are sadly in environments which make that difficult. As such, if you want to pair with me (over Skype or in a bar) on a coding kata then please give me a shout! Alternatively, if you fancy a test review or feedback on your current approach then just let me know. Twitter is always the best way to contact me.

If you want to know more about my approach to testing, you could always buy our book, Testing Web Applications.

Installing Cucumber 0.8.5 fails due to gherkin not installed

This morning I was attempting to install the latest version of Cucumber; however, I received an error saying gherkin (the language parser for the tests) was not installed.

C:\>gem install cucumber --no-ri --no-rdoc
ERROR:  Error installing cucumber:
        cucumber requires gherkin (~> 2.1.4, runtime)

Generally, gems install all of their dependencies, so I found this a little strange. Naturally, I installed it manually.

C:\>gem install gherkin --no-ri --no-rdoc
Successfully installed gherkin-2.2.0-x86-mswin32
1 gem installed

Sadly, this still didn’t work. The reason is that Cucumber requires gherkin 2.1.4, not the 2.2.0 version which had just been installed.

Executing the following allowed me to install Cucumber as normal:

gem install gherkin --version 2.1.4 --no-ri --no-rdoc
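The error makes more sense once you see what the pessimistic version constraint `~> 2.1.4` actually accepts. As a rough illustration using RubyGems’ own `Gem::Dependency` class, it matches 2.1.4 and any later 2.1.x patch release, but not 2.2.0:

```ruby
require 'rubygems'

# "~> 2.1.4" means ">= 2.1.4 and < 2.2" - any 2.1.x patch from 2.1.4 up.
dep = Gem::Dependency.new('gherkin', '~> 2.1.4')

puts dep.match?('gherkin', '2.1.4')  # matches
puts dep.match?('gherkin', '2.1.9')  # matches
puts dep.match?('gherkin', '2.2.0')  # does not match - hence the install failure
```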

Videos of my presentations at #NDC2010 and #QConLondon2010

From NDC2010 – IronRuby – A brave new world for .Net
From NDC2010 – Testing C# and ASP.NET Applications with Ruby
From QCon London 2010 – Testing C# and ASP.NET Applications with Ruby

While the QCon and NDC sessions have the same title, QCon was more focused on the why, with NDC 2010 being more focused on the how. Don’t forget, if you want to know more about testing there is a (great) book available.

Improving testability with the Castle Dictionary Adapter

Frequently when reviewing code I see one of my pet hates appear, and that’s a direct dependency on the ConfigurationManager. The ConfigurationManager provides a way to access values in the web.config (or app.config). Yet, like any dependency, it generally bites you at some point – usually when you attempt to write the test.

Let’s imagine that our web.config has a value like below:

EnableNewsletterSignup = "false"

This value defines if we should hit the live web service. During development/systest we don’t want this to happen; however, we do in UAT and live. As a result, our code will generally look like this:

public bool Signup(string email)
{
    if (!Boolean.Parse(ConfigurationManager.AppSettings["EnableNewsletterSignup"]))
        return false;

    return true; // technically this would go off to an external web service
}

Simple, yet with multiple problems.

Firstly, we have a magic string which relates to the key in the config. If we wanted to change this key we would have to perform an error-prone search and replace. Secondly, we have to manually parse the string value into a boolean – again, this is error-prone as we’ll need to protect against bad data. This additional logic hides the true intent of what the method is meant to be doing, which increases complexity. To make matters worse, we have a major problem when it comes to testability.

The ConfigurationManager automatically detects the config file based on the executing assembly, which means that your test assembly’s App.config needs to match your implementation’s web.config (or app.config) with all the values pre-configured for testing purposes. Pre-configured values offer very limited flexibility; in the example above we would be unable to test both paths without our tests changing the value directly. If we had multiple possible paths, this would cause us a very real problem.

This week I came across an issue where I required an AppSettings value. Not wanting to face the problems above, I looked for help.

Thankfully, help’s available

The Castle Dictionary Adapter removes these problems for us. Given an interface and a dictionary of values, the adapter will create an object with all of the properties populated for us. Our interface simply matches the settings in our config file.

public interface IApplicationConfiguration
{
    bool EnableNewsletterSignup { get; set; }
}

The implementation shown before becomes this, with a dependency on the above interface instead of the concrete ConfigurationManager. Notice our ‘if’ statement now uses a strongly typed property without all the associated noise.

class NewsletterSignupService
{
    private readonly IApplicationConfiguration _configuration;

    public NewsletterSignupService(IApplicationConfiguration configuration)
    {
        _configuration = configuration;
    }

    public bool Signup(string email)
    {
        if (!_configuration.EnableNewsletterSignup)
            return false;

        return true; // technically this would go off to an external web service
    }
}

The real advantage arrives when you look at the problem from the testing point of view. Because it’s an interface, we can use Rhino.Mocks to produce a stub, allowing us to test using any possible value.

var stubConfig = MockRepository.GenerateStub<IApplicationConfiguration>();
stubConfig.EnableNewsletterSignup = true;

We also no-longer need to maintain the App.Config as everything is driven by stub config objects, making life easier all round.
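The underlying idea translates to any language: hide the raw string dictionary behind an object with typed readers. Here is a rough Ruby sketch of the pattern (the class and names are illustrative, this is not the Castle API):

```ruby
# A hand-rolled 'dictionary adapter': raw string settings go in, strongly
# typed readers come out, so callers never parse or use magic strings.
class SettingsAdapter
  def initialize(settings)
    @settings = settings
  end

  # Declare a boolean-typed reader backed by a raw settings key.
  def self.bool_setting(name, key)
    define_method(name) { @settings[key] == 'true' }
  end

  bool_setting :enable_newsletter_signup, 'EnableNewsletterSignup'
end

config = SettingsAdapter.new('EnableNewsletterSignup' => 'false')
puts config.enable_newsletter_signup  # parsed once, in one place
```

The parsing and the key name live in a single declaration, so the calling code only ever sees a typed property.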

The next level comes when you use it with an IoC framework such as Castle Windsor. When an object declares a dependency on IApplicationConfiguration, it will be provided with an object created via the DictionaryAdapterFactory, with the values coming from our actual AppSettings.

WindsorContainer container = new WindsorContainer();

container.Register(
    Component.For<IApplicationConfiguration>()
        .UsingFactoryMethod(
            () => new DictionaryAdapterFactory().GetAdapter<IApplicationConfiguration>(ConfigurationManager.AppSettings)));

As a result of implementing the adapter, together with its use in Windsor, we have more control, less complexity and a more maintainable solution going forward.

But it’s not only for AppSettings: the Castle Dictionary Adapter works on a number of different dictionaries and collections, meaning you no longer need to index into them using strings. If you want to know more, CastleCasts has a great screencast on this.

In order to use this in your own codebase, the Castle Dictionary Adapter is currently available as a separate single assembly with no external dependencies.

Going forward, it will be part of Castle Windsor 2.5 with some interesting improvements.

The code for the above example is available to download.

QCon London 2010 – Testing C# and ASP.NET Applications with Ruby

Last week I had the amazing honour of presenting at QCon London 2010 on testing using Ruby. QCon was a fantastic conference with some truly impressive speakers, and I did feel a little out of place 🙂

Nonetheless, it was a great experience. A huge thank you to Josh Graham from HashRocket who organised the track and invited me to speak!

For those who are interested, my slides are below. Within the next six months the video should be online too.

Time for a new stubbing and mocking syntax?

When using IronRuby to test C# applications we still face similar issues as with C# – the difference is how we can handle them. For example, to stub the HttpRequestBase in C#, we could use Rhino Mocks as follows.

var stubbedHttpRequest = MockRepository.GenerateStub<HttpRequestBase>();
stubbedHttpRequest.Stub(x => x.ApplicationPath).Return("~/test");

I do really like this syntax and think it works well for C#. However, if we are using Ruby and a dynamic language, we have the potential to be more inventive.

IronRuby has an excellent framework called Caricature which allows you to fake CLR objects. For example, here we are stubbing the HttpRequestBase from MVC.

require 'caricature'
include Caricature
stubHttpRequest = isolate System::Web::HttpRequestBase     

However, this got me thinking. With Ruby being dynamic, how could we take advantage when defining fakes? For example, what about the following syntax:

stubHttpRequest = stub 'System::Web::HttpRequestBase'
                          .ApplicationPath.returns("~/test") &&
                          .FilePath.returns("")

This would stub two properties, ApplicationPath and FilePath, to return “~/test” and an empty string respectively. If we wanted to handle method calls and arguments, we could have the following:

stubHttpRequest = stub 'System::Web::HttpRequestBase'
                          .SomeMethodCall("WithArgument").returns("matched") &&
                          .AnotherMethodCall(anything).returns("any")

Here we stub two methods: one is stubbed with a particular argument (it must be the string “WithArgument”) while the other matches on any argument.

My aim is to reduce the ceremony associated with the act of stubbing and instead focus on the true intent of the defined behaviour.

Note: Imagine the ‘refactoring’ problem has been solved, and changing the method names would also update the tests.

If we look at other languages, for example JavaScript’s jqMock and Ruby’s NotAMock, they use a similar syntax to C#.

var alertMock = new jqMock.Mock(window, "alert");
alertMock.modify().args("hello world new!").returnValue();
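For what it’s worth, the kind of fluent syntax proposed above is cheap to prototype in plain Ruby: method_missing can record each call and hand back canned values. A minimal, illustrative sketch (not Caricature’s real API):

```ruby
# Record stubbed members via method_missing; .returns attaches the canned
# value to whichever member was named last.
class FluentStub
  def initialize
    @canned = {}
  end

  def returns(value)
    @canned[@last_call] = value
    self
  end

  def method_missing(name, *args)
    @last_call = name
    @canned.fetch(name, self)
  end
end

request = FluentStub.new
request.ApplicationPath.returns("~/test")
request.FilePath.returns("")

puts request.ApplicationPath  # "~/test"
```

Twenty lines gets you the shape of the DSL; the hard parts (CLR interop, argument matching, refactoring support) are of course where the real work lives.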

I think it is time to start looking beyond the existing syntax and reveal our true intent. What do you think?

MEFUnit – Prototype of a MEF Unit Testing Framework

Recently I have been paying close attention to MEF, the Managed Extensibility Framework. MEF is an extremely powerful framework aimed at making parts (internal or external extensibility points) more discoverable.

While I was looking at this, it dawned on me: a unit testing framework’s main role is identifying and executing methods. If that is the main function, how could you use MEF to identify and execute those methods? The result of this pondering is MEFUnit. As with xUnit.GWT, this is located on GitHub and has been mentioned on Twitter once or twice.

The Tests

The tests look very similar to those of any other unit testing framework you might have used. The attribute [Mef] identifies which methods should be executed, and as with any other framework they can pass (no exception), fail (assertion exception) or be skipped (skip exception).

public class Example
{
    [Mef]
    public void Test1()
    {
        throw new AssertException("Test Failed");
    }

    [Mef]
    public void Test2()
    {
        throw new SkipException();
    }

    [Mef]
    public void Test3() { }
}
Fundamentally, this is the main concept of most unit testing frameworks. Yes, some have parameterized tests and other such features which are great, but many people got by with just this. When you run the tests, you get the following output.

Executing Tests
Executing Test1… Test Failed
Executing Test2… Skipped
Executing Test3… Passed

Key implementation points of MEFUnit

But how are these methods actually turned into unit tests? The main point in the above example is the [Mef] attribute. This is simply a custom ExportAttribute. I could have written:

[Export("Mef", typeof(Action))]
public void Test() {}

However, I feel creating a custom attribute improves the readability and usability of the framework for the user. It also means that if I need to change the contract I can do so without affecting my dependent exports. The second important fact is that the exports are of type Action. By storing the method references as Action I can execute them anywhere in my code. This is the trick which makes this all possible. It means I can execute each test as shown and report the result to the console.

public void RunMefs()
{
    foreach (var mef in Mefs)
    {
        Console.Write("Executing " + mef.Method.Name + "... ");

        try { mef(); Console.WriteLine("Passed"); }
        catch (AssertException ex) { Console.WriteLine(ex.Message); }
        catch (SkipException) { Console.WriteLine("Skipped"); }
    }
}

The Mefs collection is populated via the MEF framework and the use of an import attribute. Within the class, the property looks like this:

[MefTests]
public IEnumerable<Action> Mefs { get; set; }

As with the Export attribute, I wanted to hide the actual contract I used to import the Action methods. This also allowed me to hide the fact I was using AllowRecomposition, enabling tests to be dynamically identified when new assemblies are loaded.

public MefTests() : base("Mef", typeof(Action))
{
    AllowRecomposition = true;
}

When you combine this with the Compose method, which hooks together the core parts of MEF, you have the core concept of a unit testing framework.
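The discover-and-execute loop at the heart of this is small enough to sketch in a few lines of Ruby: find methods by some marker (here a naming convention stands in for the [Mef] export), invoke each one, and map exceptions to results. Illustrative only:

```ruby
class AssertException < StandardError; end
class SkipException  < StandardError; end

class Example
  def test1; raise AssertException, 'Test Failed'; end
  def test2; raise SkipException; end
  def test3; end
end

# Discover marked methods, execute each, and translate exceptions into
# pass/fail/skip results - the core loop of a unit testing framework.
def run_tests(suite)
  suite.class.instance_methods(false).map(&:to_s).grep(/^test/).sort.map do |name|
    begin
      suite.send(name)
      [name, 'Passed']
    rescue AssertException => e
      [name, e.message]
    rescue SkipException
      [name, 'Skipped']
    end
  end
end

run_tests(Example.new).each { |name, result| puts "Executing #{name}... #{result}" }
```

Everything else a framework adds (attributes, runners, recomposition) is a refinement of this one loop.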

I have to admit, I’m not expecting anyone to actually use this. However, I think it shows some of the interesting capabilities which MEF brings to the table.

.Net Fault Injection – It’s not just about exceptions

I had an interesting comment on my previous post about .NET fault injection. ‘Losing Side’ asked if this would work for simulating other faults, such as timeouts. It’s a good point, and one I didn’t think about yesterday, but there are other faults which are interesting when testing an application. Performance is one of those areas: creating performance problems, such as slow disk IO or a slow server, is difficult if you don’t have the setup in place, and even then it’s not always possible. How can you effectively and repeatably test for a slow hard drive (and using a virtual machine doesn’t count)? Tools such as ANTS Profiler will help tell you where the problems are, but only if you can reproduce the problem.

For the first demo of the day, I ask the question: how can you simulate a slow write process when using StreamWriter? Based on my previous post, I’ve changed the method to this:

private static void MethodFails()
{
    Console.WriteLine("Writing to a file @ " + DateTime.Now);

    string path = Path.GetTempFileName();
    StreamWriter sw = new StreamWriter(path);
    sw.WriteLine("This is a test @ " + DateTime.Now);
    sw.WriteLine("This is a test @ " + DateTime.Now);
    sw.WriteLine("This is a test @ " + DateTime.Now);
    sw.WriteLine("This is a test @ " + DateTime.Now);
    sw.Close();

    foreach (var s in File.ReadAllLines(path))
        Console.WriteLine(s);
}
Running my console application normally, I get the following output – notice there are no delays between each write:

Writing to a file @ 15/11/2008 13:23:48
This is a test @ 15/11/2008 13:23:48
This is a test @ 15/11/2008 13:23:48
This is a test @ 15/11/2008 13:23:48
This is a test @ 15/11/2008 13:23:48

Running this via my injector, I get a five-second delay (a value I set – it could easily have been a random number) between each of my writes to the file.

Writing to a file @ 15/11/2008 13:19:45
This is a test @ 15/11/2008 13:19:51
This is a test @ 15/11/2008 13:19:56
This is a test @ 15/11/2008 13:20:01
This is a test @ 15/11/2008 13:20:06

I now get the same five-second delay each time this runs, making the scenario repeatable.
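The injected-delay trick is easy to approximate in Ruby, where Module#prepend lets you wrap an existing method with a sleep before delegating. A small sketch (the writer class and the delay value are made up for illustration):

```ruby
# Inject a delay in front of every write without touching the original
# method body - the behaviour stays the same, only the timing changes.
module SlowWrites
  def write(text)
    sleep 0.05  # injected fault; the demo above used five seconds
    super
  end
end

class Writer
  attr_reader :lines
  def initialize; @lines = []; end
  def write(text); @lines << text; end
end

Writer.prepend(SlowWrites)

writer = Writer.new
started = Time.now
3.times { |i| writer.write("This is a test #{i}") }
puts "#{writer.lines.size} writes took #{Time.now - started} seconds"
```

Because the delay is configured in one place, the slow-disk scenario becomes deterministic and repeatable rather than dependent on real hardware.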

Disappointingly, I tried to use the SqlConnection object, but I couldn’t get this to work; I don’t know if it’s a limitation or a bug. There is still a lot more work to do until it’s even remotely usable, but I’m finding the concepts interesting.


.Net Fault Injection – Very early proof of concept

I’ve just completed my first proof of concept, which I’m very excited about, and I just had to write a quick blog post about it – plus it breaks my blogging silence.

I find fault injection an interesting concept – I’ll explore it in more detail in later posts – but the aim is to insert exceptions at various points in order to test certain behaviours and how the application copes when an error occurs. However, raising exceptions is not an easy task: you have to create the scenario and environment in order for the exception to occur. With my fault injection application, you don’t need to create the scenario; you simply say when you want the exception to occur. For example, I would be able to throw an IOException when System.IO.File.ReadAllLines() is called without actually having to create the scenario for the exception to be raised – saving me time and effort, but also allowing me to test more scenarios and error handling.

Tonight, I created a very simple concept to throw exceptions on method calls. I’ve got a very simple console application which calls two methods – nothing special about this. When MethodFails() is called, my injector will step in and throw a MethodAccessException (it could be any exception), which will be caught by the console application with the details of the exception being output.

using System;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Should output 1 message then fail");

            try
            {
                MethodA();
                MethodFails();
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception caught");
                Console.WriteLine("Type: " + e.GetType());
                Console.WriteLine("Message: " + e.Message);
            }
        }

        private static void MethodA() { Console.WriteLine("This should work"); }

        private static void MethodFails() { Console.WriteLine("this should have failed"); }
    }
}

The output of the execution is the following information:

Should output 1 message then fail
This should work
Exception caught
Type: System.MethodAccessException
Message: You attempted to call ConsoleApplication2.Program.MethodFails, this has been blocked. Goodbye


Technical details will come at a later point – it’s getting late in the UK. However, I would love to hear your comments on this idea.
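The same interception idea can be sketched in Ruby, where replacing a method’s body with a raise simulates a fault at the call site without creating the real failure conditions. The class and method names below are invented for the example:

```ruby
# Swap a method's body for a raise, simulating a fault at the call site.
module FaultInjector
  def self.fail_on(klass, method_name, error_class)
    klass.prepend(Module.new do
      define_method(method_name) do |*args|
        raise error_class, "You attempted to call #{klass}##{method_name}, this has been blocked"
      end
    end)
  end
end

class NewsletterService
  def works; 'This should work'; end
  def fails; 'this should have failed'; end
end

FaultInjector.fail_on(NewsletterService, :fails, RuntimeError)

service = NewsletterService.new
puts service.works
begin
  service.fails
rescue RuntimeError => e
  puts "Exception caught: #{e.message}"
end
```

The calling code is untouched; only the injector decides which call fails and with what error, mirroring the .NET proof of concept above.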


Testing Certification – My experience of the ISEB Foundation Certification

Recently, Red Gate sent me on the ISEB Foundation testing certification course. I had heard a lot of reports about the certification from fellow testers, but the course was only two days, in-house, and I had some free time, so I thought: why not – I might learn something.

Now I have my mark, I feel I should ‘share’ my experience and my view on the certification.

Theory and Terminology

I understand that this is a foundation course; however, it still includes a lot of terminology, which accounts for a large percentage of the course. The aim is to have a consistent terminology across the industry and make sure everyone is reading from the same page. All of the terminology used is listed in the glossary. Personally, I find that knowing the correct terminology isn’t actually that useful in the real world; for example, knowing what the term ‘failure’ means is useless if you cannot identify one.

In general, the course focused more on the theory side of testing, mainly based on a perfect world where you have a fully testable spec. The advice was that if the spec isn’t testable, alert the business that you can’t do any work until it is – I’m not sure how many companies would welcome this demanding stance. As a result of focusing on specification-driven testing, it didn’t go into much depth about exploratory testing, apart from defining the term, which I feel is much more important and generally more applicable.

Ignorant of new development processes

Most of the testing process discussion was based around the V-Model. The V-Model is great: it’s a visual presentation of a sequence of steps, with tasks on the left and verification steps on the right. For example, when you receive your detailed, testable specification, you create the acceptance tests used to verify the system. However, it starts to get a bit murky when you haven’t got the stages down the left, or when you’re doing Test Driven Development (according to the model, you design, code, then test).

While the V-Model doesn’t define which development methodology it applies to, it sits perfectly in the waterfall development lifecycle. I would have liked to see more of a focus on agile development; mentioning iterative development and the Rational Unified Process (RUP) doesn’t count.

We should be encouraging agile processes, explaining how testers fit into the process and how they can work more closely with BAs and devs to ensure testable requirements and testable stories.

Agile is not just a fad or a new-age way which will never catch on! It should be given a much more positive stance in our teaching.

“Just learn it, don’t argue with it”

For me, this is the single biggest problem with certification. Even if you feel, or know, that something is wrong – don’t argue, just learn it so you can pass and have your name on a piece of paper. This encourages MCSE 2000-style certifications, where all you need to pass is a brain dump of all the terminology and sample questions.

This is not a very effective way to teach, and it is definitely not an effective way to learn. Generally with software development, I have learnt by having in-depth discussions about the topic in question – the rights and wrongs, best practices and alternative approaches – in order to gain a good understanding. Someone saying that this is the only way you can do something is not very constructive.

However, you do just need to ‘learn’ the material in order to pass the exam. During my scrum master training, we had really good discussions about how scrum works in the real world. By using the material as a guide, rather than the be-all and end-all, and not having to worry about passing an exam, we were able to dig deep into certain areas and have a discussion, taking much more away from the two days as a result.

“Principle 6: Standardized tasks and processes are the foundation for continuous improvement and employee empowerment” (The Toyota Way)

Maybe we (I) have this all wrong? Maybe the aim of testing certification isn’t to teach you the latest and greatest techniques, but to provide you with a set of standardised tasks and processes to use as a foundation – it is, after all, a foundation certification. I’m currently reading The Toyota Way, and this is similar to principle 6: have standardised tasks and processes to allow for improvement instead of reinventing the wheel each time. It would make more sense.

If this is the case, then where is the continuous improvement and updating of the course material to take into account new processes, tools and best practices? By standardising these new ideas, we could refine them into new best practices, improving the industry in general. While the content is updated, it appears to be very static in terms of ideas.

The future for testing certifications?

What is the future for testing certification? From the numbers taking the examination, it looks like it is here to stay. I think there are two initial ways to improve the certification. The first is that the foundation course drops the exam and instead follows a similar approach to certified scrum master training, allowing for discussion and the sharing of ideas. With no exam there is less red tape: there is no need for writing and marking papers, allowing the content to be updated with more flexibility. The course could then be changed to include new ideas, sharing best practices and improving the industry in general.

In terms of training, it would be great to see a course aimed at testers similar to what developers have with “Nothing but .Net” from Jean-Paul Boodhoo: a serious deep dive into different testing techniques, tools and approaches. Alongside this, conferences have their role to play. This year, DeveloperDay in the UK has a number of testing-based sessions, all of which are real-world, ‘take back to your office and use’ subjects. A number of testing conferences I have seen are more focused on the academic side and papers on testing, which, while interesting, do not help improve your work today.

I wonder if the practitioner exam is any better?
