STAREast 2014: Days 3 & 4

The second half of STAREast was in classic conference style: keynotes, lectures with brief Q&As, and scurvy-inducing food. Thomas Cagley gave me great tools for personality analysis. Jason Arbon gave me his book to read and the most actionable ideas for breaking our app and prioritizing our bugs. Florin Ursu gave me the most advice about how to document my testing, integrate myself into my team’s process, and have my team take ownership of the quality of our products. Playing dice games and other puzzles with James Bach, Michael Bolton, and Griffin Jones got me thinking about my strengths and weaknesses as a tester.

Below are my key takeaways from all the sessions I attended.

Randy Rice’s keynote about key testing concepts

It doesn’t matter how good your tests are if you’re testing the wrong version. Take the time to clear your cache, install the new build, or go to the developer’s branch.

Zeger Van Hese’s keynote about testing focus and distraction

Making lots of decisions drains mental energy and causes the next decision to be more difficult. Rather than list all the things you are going to do this month, list the things you’re not going to do so you can forget them.

Bob Galen’s session about Agile testing

Calling a project “Agile” to compensate for a lack of requirements will not make it go faster. Make sure you’re building the right thing by frequently communicating with the stakeholders during the development process. Leave enough time in the process for refactoring and bug fixing.

Erik van Veenendaal’s session about risk-based testing

Features that users use all the time are more important than rarely-used features. Figuring out where bugs are most likely to arise, whether from inexperienced team members, distributed team members, or new technology, can help focus your testing. Not all bugs are equally risky; you can order them by their relative importance. Items identified as high impact and high likelihood of occurring must be tested and should include a definition of “done.”
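That impact-times-likelihood ordering can be sketched as a simple risk score. This is a minimal illustration, not anything presented in the session; the feature names and 1–5 scores are invented examples.

```python
# Risk-based test prioritization: rank items by impact x likelihood.
# Feature names and 1-5 scores below are hypothetical examples.

def prioritize(items):
    """Sort (name, impact, likelihood) tuples by risk score, highest first."""
    return sorted(items, key=lambda item: item[1] * item[2], reverse=True)

features = [
    ("rarely-used export", 2, 1),
    ("login flow",         5, 4),
    ("new payment API",    5, 5),  # new technology: failure is more likely
    ("settings page",      3, 2),
]

for name, impact, likelihood in prioritize(features):
    print(f"{name}: risk={impact * likelihood}")
```

The highest-scoring items (here, the hypothetical payment API) are the ones that must be tested and given an explicit definition of “done.”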

Thomas Cagley’s session about cognitive biases

The zero-risk bias causes us to reduce small risks down to zero rather than mitigating the highest risks first. The illusion of control causes us to overestimate our influence in external events. The illusion of transparency causes us to overestimate how well others understand us, especially on long-standing teams. Noting the biases you and your team exhibit can help you avoid being manipulated by them.

Theresa Lanowitz’s keynote about extreme automation

Do public relations for your testing within your organization. Take advantage of the HealthCare.gov debacle to get parity between development and testing. Provide a high-quality experience on the customer’s desired platform so they don’t leave for a competitor.

Jason Arbon’s session about the secrets of mobile app testing

The best way to crash an app is to open and close it a bunch of times, change its orientation, and remove permissions from the phone’s settings instead of the app’s settings. Change your description in the app store to tell your users when you’re fixing bugs. Look at app store reviews to decide what you should be testing today. Make sure your crash SDKs are installed from the first build. There is no correlation between the number of shipments and the quality of the app. Teams that actually fix their bugs have higher quality apps.

Florin Ursu’s session about lightweight testing documentation

It’s hard to find where you’ve missed something when there’s a lot of documentation. Presenting a mind map to stakeholders helps them take ownership of the process and keeps you from being the gatekeeper to production. Mind maps help focus on features overall so you don’t get bogged down in individual test cases. Add attachments, progress clocks, happy/sad faces, links to JIRA tickets, or notes for a more complete document of your testing.

Lloyd Roden’s session about challenges in gathering requirements

Bad requirements can be non-existent, in flux, too few, too numerous, too vague, or too detailed. Error states and performance expectations don’t need exhaustive requirements; you know acceptable behavior when you see it. Too much detail in requirements squashes creativity and leads to useless testing. Find out which features are used most often and focus your testing there.

Kamini Dandapani’s session about testing on production at eBay

Load balancers, caching, and monitoring tools will be more robust on production. Manufacturing data in test environments doesn’t account for legacy requirements. Track your progress by measuring the time from a bug report to when a fix is available on production.
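The report-to-fix metric above is easy to compute once you have the two timestamps for each bug. A minimal sketch follows; the bug data and timestamp format are invented for illustration.

```python
# Track progress by measuring time from bug report to production fix.
from datetime import datetime

# (reported, fixed_on_production) -- hypothetical timestamps
bugs = [
    ("2014-05-01 09:00", "2014-05-02 17:00"),
    ("2014-05-03 10:30", "2014-05-03 16:30"),
]

FMT = "%Y-%m-%d %H:%M"
hours_to_fix = [
    (datetime.strptime(fixed, FMT) - datetime.strptime(reported, FMT)).total_seconds() / 3600
    for reported, fixed in bugs
]
mean_hours = sum(hours_to_fix) / len(hours_to_fix)
print(f"mean hours from report to production fix: {mean_hours:.1f}")
```

Watching this number over time shows whether the team is getting faster at turning reports into fixes that are actually live on production, not just merged.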


STAREast 2014: Days 1 & 2

Michael Bolton and James Bach are friends with the same people: Cem Kaner, Doug Hoffman, Paul Holland, and James’s brother Jon. Their full-day tutorials at STAREast 2014 covered some of the same material I was exposed to at CAST 2013 and in the BBST Foundations course I just finished. Luckily they were good enough speakers that the techniques still felt fresh and motivating. I would have tweeted the following if the conference had provided WiFi beyond the registration desk.

On prioritizing and finding bugs:

  • Your clients are your boss, the development team, and the customer, in that order.
  • Usability problems are also testability problems because they allow bugs to hide.
  • Create a test plan before looking at the specification to generate more ideas.
  • Describing your entire test plan allows others to participate in your thinking.
  • The slowest test you can do is the test you don’t need to do.
  • Testers should look for value in the software, not bugs.
  • Memorizing types of heuristics and techniques will help you internalize them.
  • Automated tests are like a check engine light; they only tell you where more investigation is needed.
  • No amount of testing proves that the product always works.

On reporting bugs:

  • Report bugs concisely.
  • Treat boundary requirements as rumors.
  • Separate observation from inference with safety language (i.e. “It appears to be broken” instead of “It’s broken”).
  • Some things are so important or so embedded in the culture (tacit knowledge) that we don’t need to write them down (explicit knowledge).
  • Testers should report bugs and issues. Bugs threaten the value of a product to someone who matters. Issues threaten the value of the testing or business.
  • Testing reports should include the status of the product, how you tested it, and how good that testing was.

Important questions to ask without accusing:

  • Can I ask you lots of questions?
  • Is there any more information?
  • Do you have any particular concerns?
  • Is there a problem here?
  • Can I help you with tasks you don’t like so you have more time to answer my questions?

On the testing profession:

  • Testing is learning about a product through experimentation. And creating the conditions to make that happen. And building credibility with developers.
  • Testing stops when our client has enough information to make a shipping decision.
  • You will never have a complete specification.
  • In Agile, everyone should be willing to help each other answer questions, not abandon their roles or expertise.
  • It is emotionally draining to constantly be fighting with someone who can make your life difficult.
  • Testing is the opposite of Hollywood: The older you are, the better it gets.
  • Testers can’t fear doubt or complexity.
  • A tester’s main job is not to be fooled.
  • Bureaucracy is what people do when everyone has forgotten why they’re doing it.
  • Our job is to tell developers their babies are ugly.
  • Music is something that happens. Sheet music : music :: documents : testing.
  • Apple makes you forget its products don’t work. Microsoft keeps reminding you.