21 February, 2018


When you were young, failure was something obvious, quantifiable: failing an exam, failing to run laps round the school. As you get older, this turns into being a financial, social, sexual or moral failure. With the shift into more serious territory, one thing remains the same: the rigorous, finite and binary distinction of pass or fail; good or bad.

I’ve endured many failures in my life; none were ever epic, but they would always find a way to burrow into my thoughts late at night. It wasn’t until I became a Tester that I could take that definition and redefine what it meant to fail. A failure isn’t bad or negative. It’s just something that hasn’t performed in the way you intended or expected.

I’ve found in my life as a Tester that I tend to come across higher-ups who don’t understand the meaning or purpose of testing requirements. At their heart, requirements are a set of true/false questions that determine whether the system is operating correctly, not just a list of new web pages or features. I couldn’t count the number of times I’ve been asked to make sure something is just ‘working correctly’ or that a text field has ‘unlimited capacity’. Requirements like these just cannot be tested in isolation, and one should never succumb to the temptation to cut corners. The full list of testing requirements for a fully developed system should tell the story of exactly what the system should and shouldn’t do. It is the results that demonstrate what it can and cannot do.
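
To make that concrete, here is a minimal sketch in Python of the difference between the two. A requirement like ‘the comments field should work correctly’ gives a Tester nothing to assert; ‘the comments field accepts up to 500 characters and rejects anything longer’ becomes a pair of true/false checks. The field, the 500-character limit and the validate_comment function are all hypothetical, made up purely for illustration and not taken from any real system.

# A minimal sketch, assuming a hypothetical comments field with a stated
# 500-character limit; validate_comment() and MAX_COMMENT_LENGTH are
# invented for illustration.
MAX_COMMENT_LENGTH = 500

def validate_comment(text):
    # The requirement becomes a true/false question: is the text within the limit?
    return len(text) <= MAX_COMMENT_LENGTH

def test_comment_at_limit_is_accepted():
    assert validate_comment("a" * MAX_COMMENT_LENGTH) is True

def test_comment_over_limit_is_rejected():
    assert validate_comment("a" * (MAX_COMMENT_LENGTH + 1)) is False

Run under pytest, each check either passes or fails; there is no ‘sort of works’.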


We’ve all heard the requirement mantra:

“Requirements should be clear and specific, with no uncertainty. Requirements should be measurable in terms of specific values. Requirements should be testable, with evaluation criteria for each requirement. And requirements should be complete, without any contradictions.”

But have you asked why? Have you asked what the cost of failure actually is?

Projects can be delayed by simple failings at this fundamental level. The tighter and more specific your testing requirements are, the fewer defects can slip through the cracks. You not only have to consider the time wasted through misunderstandings of the documentation, but also the cost of having to go back and fix something after a release. Testing is by no means perfect, and no single Tester can ever prove that a piece of software is without bugs. However, what they can do, what they should always be able to do, is provide a list of the system’s functionality. With a simple test report, you should be able to say with confidence what works and what does not.
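
As a sketch of what I mean by a simple test report, the snippet below (again Python, with requirement IDs and results that are made up for illustration only) boils everything down to a pass/fail verdict per requirement, so anyone reading it can see at a glance what works and what does not.

# A minimal sketch of a pass/fail summary; the requirement IDs and results
# below are invented for illustration only.
results = {
    "REQ-001 Login accepts valid credentials": True,
    "REQ-002 Login rejects invalid credentials": True,
    "REQ-003 Comments field enforces its 500-character limit": False,
}

passed = [req for req, ok in results.items() if ok]
failed = [req for req, ok in results.items() if not ok]

print(f"{len(passed)} passed, {len(failed)} failed")
for req in failed:
    print("FAIL:", req)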

The other problem you run into when trying to convey a system’s failure is dealing with the ‘irreproachable’ Developer. This by no means applies to all, but to a subset: a subset of developers who feel that a failed test is a failure of their work. As mentioned before, at times requirements can be sketchy and lacking in information. This doesn’t only confuse the Tester; it can confuse the Developer too. What to code exactly, size limits on data sets, limits for logic calculations, even how descriptive an error message should be. This comes back to the good and bad confusion. A fail is never bad; a fail just means you’ve saved money, money you would otherwise waste having someone track down that defect after release. Getting down to brass tacks, it’s better for everyone to find and fix a defect as early as possible.

So, I set out to write a blog for the purpose of writing a blog. I wanted to talk about Failure, both in general and in my work as a Tester. If you want to take anything away from this, then take away this: a failing test isn’t negative, or a bad thing for the project, but failing to test correctly is a bad thing for you as a Tester. I suppose, finally, the question is: Is this blog another fai…


By Mike Tindal, Test Analyst at Edge Testing
