Category Archives: Software Testing

Testers Who Code Get More Respect

The debate-style “Should Testers Code?” presentation at CAST 2015 was one of the best-attended talks of the conference. And with good reason: This question is everywhere. But after more than an hour of Henrik Andersson defending testers who don’t code and Jeff Morgan defending testers who do code, I was convinced. Everyone should stop asking this question.

Yes. Testers should know how to code.

At CAST, Jeff Morgan boiled it down to this: testers who can code are more flexible. There is a larger, more diverse market for their skills. Elizabeth Hendrickson found this by counting job descriptions. Testers who can code serve their teams in more ways and jump in to solve problems and ask questions that only a developer can. Rob Lambert argues that you need coding or some other niche to get a testing job. Paul Gerrard notes that adding more skills to your repertoire can only add to your knowledge, not subtract. A skilled tester can put on many hats – user, business analyst, client – and adding developer to that mix can help.

Testers who can code are treated with respect by their developers. Marlena Compton worries that individuals with less power have been pushed into testing rather than development. Michael Bolton notes that a tester’s empathy grows when they are able to gain a greater insight into the software environment and the problems developers face.

Testers are constantly asking themselves if the task they’re performing is providing a higher value than some other task. A tester who can code has one more tool in their tool belt to help eliminate bottlenecks along the way. When a tester has the engineering skills to craft an automated suite of checks, they’re often able to provide more value to their team than a tester checking the same boxes manually.
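What such a suite of automated checks looks like varies by team; as a minimal sketch (the page structure and check names here are hypothetical, not from any real project), the structure is just a set of small yes/no functions run against the product:

```python
# A minimal "suite of checks" runner. In practice these checks would
# drive a browser or hit an API; here a dict stands in for the page
# so the structure is clear. All names are illustrative.

def check_title(page):
    """The home page should carry its expected title."""
    return page.get("title") == "Home"

def check_login_link(page):
    """The page should expose a login link."""
    return "login" in page.get("links", [])

def run_checks(page, checks):
    """Run each check against the page and return {check name: passed}."""
    return {check.__name__: check(page) for check in checks}

page = {"title": "Home", "links": ["login", "about"]}
results = run_checks(page, [check_title, check_login_link])
```

Once checks like these run on every build, the tester is freed to spend manual effort where judgment is actually needed.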

For all these reasons, they make more money.

More interesting work, more respect, and more money: What more do you want?


CAST 2015: How I’m Using Reason And Argument in My Testing

Scott Allman and Thomas Vaniotis condensed an introductory logic course into an hour-long presentation at CAST this year. Their focus on deductive reasoning was a great template for how to write a solid bug report or how to find the crux of an issue when talking to a colleague. Scott and Thomas’s statements are in bold and my takeaways for how I’m applying them to my work follow below.


Assume your opponent is attempting to construct a valid argument.

Assume the developer read the ticket, implemented the feature in a way that made sense to them, and pushed the code to the testing environment. What could you be missing? Have you downloaded the most recent build or cleared your cache? Do you need to be logged in? Are you on the right page?

When you’re trying to prove a premise is invalid, provide evidence. 

If a developer tells you a feature works on their machine, attach a screen shot or a log file of an instance when a feature did not work on your machine. Include relevant environment information and steps to reproduce to determine which premises you don’t share.

What kind of argument would someone construct to disagree with you?

If you’re writing a bug that says something’s taking too long, say how long it should take and why. If you’re writing a bug that says something is the wrong color, cite the style guide or use the WebAIM contrast checker to prove the item is not accessible to color blind people.
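The contrast check that the WebAIM tool performs can also be computed directly, which makes the “wrong color” argument concrete. A sketch in Python using the published WCAG 2.x relative-luminance and contrast-ratio formulas:

```python
# Compute the WCAG 2.x contrast ratio between two sRGB colors.
# 4.5:1 is the minimum for normal text at conformance level AA.

def relative_luminance(rgb):
    """Relative luminance of an (r, g, b) color, each channel 0-255."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white yields the maximum possible ratio, 21:1.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
```

Citing the computed ratio alongside the 4.5:1 threshold turns “this looks hard to read” into an argument that is difficult to disagree with.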

Use as few premises as possible so your argument and conclusion shine through.

Look at the steps to reproduce you’ve included in your bug report. Is there anything you can remove? Are there any crucial steps your developer may not have taken that you did?


I’ve never seen such an engaging presentation where the presenters were reading off pieces of paper. The mindmap below includes more of what I enjoyed about the presentation and about testing software.

Reason and Argument for Testers Mindmap

In the open season after the session, Scott and Thomas went into other types of reasoning (inductive for example) testers use when investigating software. Most follow-up questions were about soft skills. Scott and Thomas suggested that examples would be better received than lingo-heavy accusations.

STAREast 2014: Days 3 & 4

The second half of STAREast was in classic conference style: keynotes, lectures with brief Q&As, and scurvy-inducing food. Thomas Cagley gave me great tools for personality analysis. Jason Arbon gave me his book to read and the most actionable ideas for breaking our app and prioritizing our bugs. Florin Ursu gave me the most advice about how to document my testing, integrate myself into my team’s process, and have my team take ownership of the quality of our products. Playing dice games and other puzzles with James Bach, Michael Bolton, and Griffin Jones got me thinking about my strengths and weaknesses as a tester.

Below are my key takeaways from all the sessions I attended.

Randy Rice’s keynote about key testing concepts

It doesn’t matter how good your tests are if you’re testing the wrong version. Take the time to clear your cache, install the new build, or go to the developer’s branch.

Zeger Van Hese’s keynote about testing focus and distraction

Making lots of decisions drains mental energy and causes the next decision to be more difficult. Rather than list all the things you are going to do this month, list the things you’re not going to do so you can forget them.

Bob Galen’s session about Agile testing

Calling a project “Agile” to compensate for a lack of requirements will not make it go faster. Make sure you’re building the right thing by frequently communicating with the stakeholders during the development process. Leave enough time in the process for refactoring and bug fixing.

Erik van Veenendaal’s session about risk-based testing

Features that users use all the time are more important than rarely-used features. Figuring out where bugs are more likely to arise can help focus your testing, whether that’s work by inexperienced team members, distributed team members, or new technology. Not all bugs are equally risky; you can start to order them by their relative importance. Items identified as high impact and high likelihood of occurring must be tested and should include a definition of “done.”
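That relative ordering can be made concrete by scoring each area on simple scales and sorting. A sketch (the 1–5 scales and the example entries are illustrative, not from the session):

```python
# Order candidate test areas by impact x likelihood, highest first.
# Each entry is (area, impact 1-5, likelihood 1-5); all values here
# are made up to illustrate the sort.

def prioritize(risks):
    """Sort (name, impact, likelihood) tuples by descending risk score."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

risks = [
    ("checkout flow", 5, 4),   # heavily used, new payment provider
    ("admin export", 2, 2),    # rarely used, stable code
    ("search filters", 3, 4),  # new technology, moderate impact
]
ordered = prioritize(risks)
```

The areas at the top of the list must be tested; the ones at the bottom can wait or be skipped when time runs out.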

Thomas Cagley’s session about cognitive biases

The zero-risk bias causes us to reduce small risks down to zero rather than mitigating the highest risks first. The illusion of control causes us to overestimate our influence in external events. The illusion of transparency causes us to overestimate how well others understand us, especially on long-standing teams. Noting the biases you and your team exhibit can help you avoid being manipulated by them.

Theresa Lanowitz’s keynote about extreme automation

Do public relations for your testing within your organization. Take advantage of the HealthCare.gov debacle to get parity between development and testing. Provide a high-quality experience on the customer’s desired platform so they don’t leave for a competitor.

Jason Arbon’s session about the secrets of mobile app testing

The best way to crash an app is to open and close it a bunch of times, change its orientation, and remove permissions from the phone’s settings instead of the app’s settings. Change your description in the app store to tell your users when you’re fixing bugs. Look at app store reviews to decide what you should be testing today. Make sure your crash SDKs are installed from the first build. There is no correlation between the number of shipments and the quality of the app. Teams that actually fix their bugs have higher quality apps.

Florin Ursu’s session about lightweight testing documentation

It’s hard to find where you’ve missed something when there’s a lot of documentation. Presenting a mind map to stakeholders helps them take ownership of the process and keeps you from being the gatekeeper to production. Mind maps help focus on features overall so you don’t get bogged down in individual test cases. Add attachments, progress clocks, happy/sad faces, links to JIRA tickets, or notes for a more complete document of your testing.

Lloyd Roden’s session about challenges in gathering requirements

Bad requirements can be non-existent, in flux, too few, too numerous, too vague, or too detailed. Error states and performance expectations don’t need specific requirements because you know it when you see it. Too much detail in requirements squashes creativity and leads to useless testing. Find out what features are used the most often and test there the most.

Kamini Dandapani’s session about testing on production at eBay

Load balancers, caching, and monitoring tools will be more robust on production. Manufacturing data in test environments doesn’t account for legacy requirements. Track your progress by measuring the time from a bug report to when a fix is available on production.

STAREast 2014: Days 1 & 2

Michael Bolton and James Bach are friends with the same people: Cem Kaner, Doug Hoffman, Paul Holland, and James’s brother Jon. The material in their full day tutorials at STAREast 2014 covered some of the same material I was exposed to at CAST 2013 and the BBST Foundations course I just finished. Luckily they were good enough speakers that the techniques still felt fresh and motivating. I would have tweeted the following if this conference provided WiFi beyond the registration desk.

On prioritizing and finding bugs:

  • Your clients are your boss, the development team, and the customer, in that order.
  • Usability problems are also testability problems because they allow bugs to hide.
  • Create a test plan before looking at the specification to generate more ideas.
  • Describing your entire test plan allows others to participate in your thinking.
  • The slowest test you can do is the test you don’t need to do.
  • Testers should look for value in the software, not bugs.
  • Memorizing types of heuristics and techniques will help you internalize them.
  • Automated tests are like a check engine light; they only tell you where more investigation is needed.
  • No amount of testing proves that the product always works.

On reporting bugs:

  • Report bugs concisely.
  • Treat boundary requirements as rumors.
  • Separate observation from inference with safety language (i.e. “It appears to be broken” instead of “It’s broken”).
  • Some things are so important or so embedded in the culture (tacit knowledge) that we don’t need to write them down (explicit knowledge).
  • Testers should report bugs and issues. Bugs threaten the value of a product to someone who matters. Issues threaten the value of the testing or business.
  • Testing reports should include the status of the product, how you tested it, and how good that testing was.

Important questions to ask without accusing:

  • Can I ask you lots of questions?
  • Is there any more information?
  • Do you have any particular concerns?
  • Is there a problem here?
  • Can I help you with tasks you don’t like so you have more time to answer my questions?

On the testing profession:

  • Testing is learning about a product through experimentation. And creating the conditions to make that happen. And building credibility with developers.
  • Testing stops when our client has enough information to make a shipping decision.
  • You will never have a complete specification.
  • In Agile, everyone should be willing to help each other answer questions, not abandon their roles or expertise.
  • It is emotionally draining to constantly be fighting with someone who can make your life difficult.
  • Testing is the opposite of Hollywood: The older you are, the better it gets.
  • Testers can’t fear doubt or complexity.
  • A tester’s main job is not to be fooled.
  • Bureaucracy is what people do when everyone has forgotten why they’re doing it.
  • Our job is to tell developers their babies are ugly.
  • Music is something that happens. Sheet music : music :: documents : testing.
  • Apple makes you forget its products don’t work. Microsoft keeps reminding you.

Writing Clear and Effective Bug Reports

Clear Bug Reports

You’re looking at an existing product and you think you’ve found a bug. Your bug report needs to include (1) what the feature is, (2) how you’re expecting the feature to work, and (3) what the feature is actually doing at the moment.

1. What the feature is: For the front end of the website, the admin, or an API endpoint, a URL can be sufficient. If it’s a certain part of a page, name the title of the section and include a screenshot. If it’s a visual thing rather than a data thing, try it on more than one browser or mobile device, or at least specify the environment where you saw the error.

2. How you’re expecting the feature to work: This turns out to be where miscommunication most often occurs. It’s also the easiest section to leave out of a ticket. It will feel like you’re writing exposition that everyone already knows. Unfortunately, you can’t read the minds of designers, developers, or users, nor can they read yours. Even if you mentioned what you had in mind at a scrum, the developers probably won’t remember it. Write it down so everyone can refer back to it and angrily point at it later.

3. What the feature is actually doing at the moment: Screenshots are great. If a hard error is returned, include the stack trace or a link to the stack trace. If it’s working in one place and not in another, take a screenshot of the two URLs side-by-side. If your screenshot includes features you’re not addressing, take a smaller view or draw an arrow to the point you’re talking about. For transitions and scrolling issues, take a video or call a second team member over to make sure the way you’re describing the experience makes sense. If it takes more than visiting the URL or mobile app screen to see what you’re talking about, include how you’re able to reproduce it and how often if it doesn’t happen every time.
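The three sections are regular enough to treat as a template. A hypothetical sketch (the field names and example content are my own, not a real tracker’s schema):

```python
# Render a bug report with the three required sections. Field names
# and the example ticket are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class BugReport:
    feature: str    # (1) what the feature is, with its URL or location
    expected: str   # (2) how you expect it to work, and why
    actual: str     # (3) what it is actually doing at the moment

    def render(self):
        """Format the three sections for a ticket description."""
        return (
            f"Feature: {self.feature}\n"
            f"Expected: {self.expected}\n"
            f"Actual: {self.actual}"
        )

report = BugReport(
    feature="Search box on /products",
    expected="Pressing Enter submits the query (per the design mockup)",
    actual="Pressing Enter clears the input on Firefox",
)
```

A template like this won’t write the report for you, but an empty “Expected” field is a loud reminder that the easiest section to skip is still missing.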

These guidelines apply to any user, internal or external, trying to communicate with the development team. If you’re a QA person, all of the above is contained in the description and attachments of the ticket. Filling out the rest of the fields makes the difference between a ticket getting the appropriate attention and languishing in the backlog.

Effective Bug Reports

Before you decide to create a new ticket in JIRA, see what other tickets already exist. Find other tickets about the same feature. Find the ticket where the same thing happened but in a different environment. Find the ticket that was closed because no one could reproduce it. Find the ticket that’s already open that a developer wrote and you didn’t understand until you saw the bug yourself. If possible, reopen or add to those existing tickets. If that’s going to create more confusion, keep those tickets open in separate tabs so you can link your new ticket to them to prevent developers from closing your ticket as a duplicate.

In JIRA at my job, we use a limited number of Projects (Web, Mobile Apps, and a few particular internal departments) and Issue Types (Epics to group tickets within projects, Bugs for most other things). Choosing the Issue Type as New Feature or Task can give a project manager a better idea of how long a ticket will take, but since Bug is the default we end up using that most often. We completely ignore the Due Date and Component fields and we only use Affects Version and Fix Version for the mobile apps, where there are clearly delineated versions.

Adding a JIRA ticket

The Summary is best when it addresses (1), (2), and (3) as described above. Given the implicit character limit of the width of an Agile board (70-80 characters), it makes more sense to address (1) and either (2), (3), or the specific environment where you were able to reproduce the bug. If possible, include a salient keyword so it’s easy to refer to the bug at meetings and search for it in JIRA.

Set the Priority as the lowest possible option unless a project manager or a pillar of the business suggests otherwise. The QA team can help everyone understand the pros and cons of prioritizing bugs, but we’re not the ones to do it. Bug the project managers, not the developers, if your bug isn’t getting the attention it deserves.

If there’s a project currently in development and it’s clear a particular developer’s work caused the bug, make that developer the Assignee. If it’s not clear which developer may have caused the bug, make the lead developer the Assignee and add suspect developers as Watchers. If it’s not clear how the feature was intended to work or how the bug should be fixed, assign the ticket to the lead UX designer. If it’s not clear if the project is being worked on, assign the ticket to the project manager. Add yourself as a Watcher to all tickets so JIRA will email you when someone goes rogue from the workflow. See also: My Biggest JIRA Pet Peeve.
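The routing rules above amount to a small decision procedure. A sketch under my reading of that workflow (the role names and the precedence of the checks are my own simplification):

```python
# Pick the first JIRA assignee following the routing described above.
# Inputs are booleans describing what is known about the ticket;
# the returned role names are placeholders, not real usernames.

def choose_assignee(project_active, behavior_known, cause_known):
    """Return the role that should receive the ticket first."""
    if not project_active:
        return "project manager"      # unclear if anyone is working on it
    if not behavior_known:
        return "lead UX designer"     # intended behavior needs defining
    if cause_known:
        return "responsible developer"
    return "lead developer"           # suspects get added as Watchers
```

Encoding the workflow this way also exposes its precedence: an inactive project short-circuits everything else, because there is no one to fix the bug yet.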

While Environment seems like it should be used for the browser version, OS version, or model of phone, that information gets lost unless it’s included in both the Summary and the Description. I use Environment for an important but too often forgotten part of the software development lifecycle: listing the people who were affected by the bug so you know who to email when you fix it. It’s in a location on the ticket that’s both easy for project managers to find and easy for developers to ignore.

See the Clear Bug Reports section at the top for what you need to include in the Description and Attachments. If there’s more than one attachment, change the filename to what differentiates them from each other (before and after, browser version, date, production vs. test server) before you upload them.

Add the ticket to the current Sprint if the priority has been set higher than the default and the project manager requests it. If there’s an upcoming themed sprint that the bug falls into, put it there. Otherwise leave it in the backlog for the project and product managers to prioritize. Include the ticket in an Epic if the Project designation is too broad or the ticket will get lost in the backlog without it.

JIRA automatically assigns a unique ticket number to each new ticket using the Project slug and the next available integer that corresponds to the permalink domain/browse/[Project slug]-[integer]. When tickets are moved between projects, old URLs redirect to new ones.
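That permalink scheme is easy to parse back into its parts, which is handy for scripts that link or audit tickets. A sketch (the regex assumes the common project-slug pattern of uppercase letters and digits; the example domain is a placeholder):

```python
# Split a JIRA permalink of the form domain/browse/SLUG-123 into
# its project slug and ticket number.
import re

def parse_ticket(url):
    """Return (project_slug, number) from a /browse/ permalink, or None."""
    match = re.search(r"/browse/([A-Z][A-Z0-9]*)-(\d+)$", url)
    if not match:
        return None
    return match.group(1), int(match.group(2))

parsed = parse_ticket("https://example.atlassian.net/browse/WEB-417")
```

Note that because old URLs redirect when tickets move between projects, the slug you parse from a saved link may no longer be the ticket’s current project.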

EuroStar Webinar: Delivering Unwelcome Messages

A big part of my job as Quality Assurance Manager is delivering bad news. The Eurostar webinar “I think we have an issue – Delivering Unwelcome Messages” hosted by Fiona Charles on Tuesday was great at reinforcing the communication conventions we should practice when delivering bad news but don’t consciously consider all the time. We have to (1) deliver the bad news (2) at the right time (3) to the right person/people (4) with facts rather than emotion.

1. Deliver the bad news.

Fiona began with a counterexample of what happens when the first step – actually delivering the bad news – is ignored.

Swedish ship slide

My favorite slide.

The king of Sweden was building a fancy ship, and nobody dared to tell him that the basic safety test had failed (the top-heavy design wouldn’t let people run back and forth across the deck without tipping it). The ship was launched anyway and sank almost immediately.

2. Deliver the bad news at the right time.

The right time to deliver bad news is at a meeting with your bad news on the agenda. If it’s really bad news, make it the only thing on the agenda and schedule the meeting sooner rather than later. Make sure the recipients of the bad news can hear you. Get a conference room, or at least cut the multi-tasking.

We are paid to tell the truth as we see it.

People don’t like hearing bad news.

3. Deliver the bad news to the right person.

Next, deliver the bad news to the decision maker. As the QA person, this is not you. If you don’t interact with the decision maker, speak to someone who can. Confirm the workflow before problems arise so you know where to go when the time comes. If the bad news is big, bring an ally to break it with. Consider whether the recipient of the bad news will believe you. A person you don’t interact with every day, or haven’t met in person, would rather ignore your bad news than deal with it.

4. Deliver facts, not emotions.

Present the recipient of the bad news with facts so they believe you. Stick to those facts rather than giving your opinion. If you do give your opinion, make it clear that it’s separate from the facts. If you need to gather more facts, give them a “let me get back to you on that.” Explain the problem without assigning blame to anyone. Other people have feelings. Do not embarrass them or they won’t want to listen when you have more bad news.

In summary, be sure to (1) deliver the bad news (2) at the right time (3) to the right person/people (4) with facts rather than emotion. If the recipient believes your information is valuable, you’ve done your duty.

Many thanks to Fiona Charles for hosting the webinar. Check out the EuroStarConferences website for the complete slides and video archive of the webinar.

My Biggest JIRA Pet Peeve

The never-ending to do list in my department is managed by the web app JIRA, built by Atlassian. Each JIRA ticket is one feature we’re building, one question we need to look into, or one problem we need to solve. As the one QA manager supporting a thirteen-person development, project management, and design team along with a six-person data news team and over one hundred internal producers, I write a lot of tickets. Every ticket is assigned to a person. When a ticket is created or edited, the Reporter (most often me) and the Assignee get an email. Once you add Watchers to the ticket, they also get an email.

I want to add Watchers when I create a ticket in JIRA. I want my project managers to get an email when I find a bug so I don’t have to chat them the URL. I want my UX designers to get an email so I make sure the behavior I’m expecting is the behavior they’re expecting. I want to copy more than one developer when I don’t know who created the bug. I want to copy the lead developer so he knows how his developers are spending their time. I don’t want to create a ticket, click through to the ticket before the green notice disappears from the top of my window, add the Watchers, and then change the Priority or the Summary so all relevant parties know the ticket exists.

Given the 232 Watchers and 417 Votes on this Atlassian ticket about the feature, others want it too. Every six months, Atlassian reevaluates the tickets in the backlog and lets us know that this ticket won’t be addressed in the next twelve months. As a Watcher, I get the email about it.