
Testers Who Code Get More Respect

The debate-style “Should Testers Code?” presentation at CAST 2015 was one of the best-attended talks of the conference. And with good reason: This question is everywhere. But after more than an hour of Henrik Andersson defending testers who don’t code and Jeff Morgan defending testers who do, I was convinced: everyone should stop asking this question.

Yes. Testers should know how to code.

At CAST, Jeff Morgan boiled it down to this: testers who can code are more flexible. There is a larger, more diverse market for their skills; Elisabeth Hendrickson found as much by counting job descriptions. Testers who can code serve their teams in more ways, jumping in to solve problems and ask questions that only a developer can. Rob Lambert argues that you need coding or some other niche skill to get a testing job. Paul Gerrard notes that adding more skills to your repertoire can only add to your knowledge, not subtract from it. A skilled tester can put on many hats – user, business analyst, client – and adding developer to that mix can help.

Testers who can code are treated with respect by their developers. Marlena Compton worries that individuals with less power have been pushed into testing rather than development. Michael Bolton notes that a tester’s empathy grows when they gain greater insight into the software environment and the problems developers face.

Testers are constantly asking themselves whether the task they’re performing provides more value than some other task they could be doing. A tester who can code has one more tool in their tool belt for eliminating bottlenecks along the way. When a tester has the engineering skills to craft an automated suite of checks, they can often provide more value to their team than a tester checking the same boxes manually.
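
As a concrete illustration – this sketch is mine, not from the talk, and the URL and response shape are hypothetical – a tester with a little Python could turn a manual “is the service up?” check into one that runs on every build:

    # smoke_check.py – a minimal automated check (run with: pytest smoke_check.py)
    import requests

    BASE_URL = "https://staging.example.com"  # hypothetical test environment

    def test_service_reports_healthy():
        # The same check a tester might do by hand in a browser,
        # now repeatable and cheap to run on every build.
        response = requests.get(BASE_URL + "/health", timeout=5)
        assert response.status_code == 200
        assert response.json().get("status") == "ok"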

For all these reasons, they make more money.

More interesting work, more respect, and more money: What more do you want?

CAST 2015: How I’m Using Reason And Argument in My Testing

Scott Allman and Thomas Vaniotis condensed an introductory logic course into an hour-long presentation at CAST this year. Their focus on deductive reasoning was a great template for how to write a solid bug report or how to find the crux of an issue when talking to a colleague. Scott and Thomas’s statements are in bold, and my takeaways for how I’m applying them to my work follow below.
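
For context (this framing is mine, not the presenters’): a deductive argument is valid when the conclusion must be true if the premises are. A bug report often takes the shape of modus tollens:

    Premise 1: If the feature were working, clicking Save would persist the record.
    Premise 2: Clicking Save does not persist the record (log attached).
    Conclusion: The feature is not working.

When a developer disputes the conclusion, the disagreement is really about one of the premises – which is exactly where the evidence belongs.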


Assume your opponent is attempting to construct a valid argument.

Assume the developer read the ticket, implemented the feature in a way that made sense to them, and pushed the code to the testing environment. What could you be missing? Have you downloaded the most recent build or cleared your cache? Do you need to be logged in? Are you on the right page?

When you’re trying to prove a premise is invalid, provide evidence. 

If a developer tells you a feature works on their machine, attach a screenshot or a log file from an instance where the feature did not work on your machine. Include relevant environment information and steps to reproduce to determine which premises you don’t share.

What kind of argument would someone construct to disagree with you?

If you’re writing a bug that says something’s taking too long, say how long it should take and why. If you’re writing a bug that says something is the wrong color, cite the style guide, or use the WebAIM contrast checker to prove the item is not accessible to color-blind people.

Use as few premises as possible so your argument and conclusion shine through.

Look at the steps to reproduce you’ve included in your bug report. Is there anything you can remove? Are there any crucial steps your developer may not have taken that you did?


I’ve never seen presenters read off pieces of paper and still give such an engaging presentation. The mindmap below includes more of what I enjoyed about the presentation and about testing software.

[Mindmap: Reason and Argument for Testers]

In the open season after the session, Scott and Thomas went into other types of reasoning (inductive, for example) that testers use when investigating software. Most follow-up questions were about soft skills; Scott and Thomas suggested that concrete examples would be better received than lingo-heavy accusations.