STAREast 2014: Days 3 & 4

The second half of STAREast was in classic conference style: keynotes, lectures with brief Q&As, and scurvy-inducing food. Thomas Cagley gave me great tools for personality analysis. Jason Arbon gave me his book to read and the most actionable ideas for breaking our app and prioritizing our bugs. Florin Ursu gave me the most advice about how to document my testing, integrate myself into my team’s process, and have my team take ownership of the quality of our products. Playing dice games and other puzzles with James Bach, Michael Bolton, and Griffin Jones got me thinking about my strengths and weaknesses as a tester.

Below are my key takeaways from all the sessions I attended.

Randy Rice’s keynote about key testing concepts

It doesn’t matter how good your tests are if you’re testing the wrong version. Take the time to clear your cache, install the new build, or go to the developer’s branch.

Zeger Van Hese’s keynote about testing focus and distraction

Making lots of decisions drains mental energy and causes the next decision to be more difficult. Rather than list all the things you are going to do this month, list the things you’re not going to do so you can forget them.

Bob Galen’s session about Agile testing

Calling a project “Agile” to compensate for a lack of requirements will not make it go faster. Make sure you’re building the right thing by frequently communicating with the stakeholders during the development process. Leave enough time in the process for refactoring and bug fixing.

Erik van Veenendaal’s session about risk-based testing

Features that users use all the time are more important than rarely-used features. Figuring out where bugs are more likely to arise, whether from inexperienced team members, distributed teams, or new technology, can help focus your testing. Not all bugs are equally risky; you can start to order them by their relative importance. Items identified as high impact and high likelihood of occurring must be tested and should include a definition of “done.”
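That impact-times-likelihood ordering is simple enough to sketch. Here is a minimal, hypothetical example of scoring and sorting test items by risk; the feature names and the 1–5 scales are invented for illustration, not from the session:

```python
# Risk-based prioritization sketch: score each item by impact x likelihood
# and test the highest-scoring items first. All data here is made up.

def risk_score(impact, likelihood):
    """Both on a 1-5 scale; a higher score means test it sooner."""
    return impact * likelihood

items = [
    {"feature": "new-search-backend", "impact": 4, "likelihood": 5},
    {"feature": "checkout", "impact": 5, "likelihood": 3},
    {"feature": "profile-avatar", "impact": 2, "likelihood": 2},
]

prioritized = sorted(
    items,
    key=lambda i: risk_score(i["impact"], i["likelihood"]),
    reverse=True,
)

for item in prioritized:
    print(item["feature"], risk_score(item["impact"], item["likelihood"]))
```

The top of the sorted list is where “must be tested” applies; items at the bottom are candidates for lighter coverage.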

Thomas Cagley’s session about cognitive biases

The zero-risk bias causes us to reduce small risks down to zero rather than mitigating the highest risks first. The illusion of control causes us to overestimate our influence in external events. The illusion of transparency causes us to overestimate how well others understand us, especially on long-standing teams. Noting the biases you and your team exhibit can help you avoid being manipulated by them.

Theresa Lanowitz’s keynote about extreme automation

Do public relations for your testing within your organization. Take advantage of high-profile quality debacles to push for parity between development and testing. Provide a high-quality experience on the customer’s desired platform so they don’t leave for a competitor.

Jason Arbon’s session about the secrets of mobile app testing

The best way to crash an app is to open and close it a bunch of times, change its orientation, and remove permissions from the phone’s settings instead of the app’s settings. Change your description in the app store to tell your users when you’re fixing bugs. Look at app store reviews to decide what you should be testing today. Make sure your crash SDKs are installed from the first build. There is no correlation between the number of shipments and the quality of the app. Teams that actually fix their bugs have higher quality apps.
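The open/close/rotate routine above is easy to automate. Below is a hedged sketch that assembles (but does not execute) a sequence of adb commands for that stress loop; the package and activity names are placeholders, and you would pipe the output to a shell or `subprocess` to actually run it against a connected device:

```python
# Dry-run sketch of the crash-hunting routine: repeatedly launch the app,
# flip the screen rotation, and force-stop it via adb. Package/activity
# names below are placeholders, not a real app.

PACKAGE = "com.example.myapp"
ACTIVITY = f"{PACKAGE}/.MainActivity"

def stress_commands(cycles=10):
    """Return the adb commands for `cycles` open/rotate/close iterations."""
    cmds = []
    for i in range(cycles):
        cmds.append(f"adb shell am start -n {ACTIVITY}")                     # open the app
        cmds.append(f"adb shell settings put system user_rotation {i % 2}")  # flip orientation
        cmds.append(f"adb shell am force-stop {PACKAGE}")                    # close the app
    return cmds

for cmd in stress_commands(3):
    print(cmd)
```

Revoking permissions from the phone’s settings, the other trick Jason mentioned, is a manual step on most devices and isn’t covered by this sketch.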

Florin Ursu’s session about lightweight testing documentation

It’s hard to find where you’ve missed something when there’s a lot of documentation. Presenting a mind map to stakeholders helps them take ownership of the process and keeps you from being the gatekeeper to production. Mind maps help focus on features overall so you don’t get bogged down in individual test cases. Add attachments, progress clocks, happy/sad faces, links to JIRA tickets, or notes for a more complete document of your testing.

Lloyd Roden’s session about challenges in gathering requirements

Bad requirements can be non-existent, in flux, too few, too numerous, too vague, or too detailed. Error states and performance expectations don’t need specific requirements because you know it when you see it. Too much detail in requirements squashes creativity and leads to useless testing. Find out what features are used the most often and test there the most.

Kamini Dandapani’s session about testing on production at eBay

Load balancers, caching, and monitoring tools will be more robust on production. Manufacturing data in test environments doesn’t account for legacy requirements. Track your progress by measuring the time from a bug report to when a fix is available on production.
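The report-to-fix metric is just a difference of timestamps aggregated over your bug tracker. Here is a minimal sketch, with invented bug records standing in for whatever your tracker exports:

```python
# Sketch of measuring time from bug report to a fix being live on
# production. The bug records below are fabricated for illustration.

from datetime import datetime
from statistics import median

bugs = [
    {"reported": datetime(2014, 5, 1, 9, 0),  "fixed_on_prod": datetime(2014, 5, 2, 9, 0)},
    {"reported": datetime(2014, 5, 3, 12, 0), "fixed_on_prod": datetime(2014, 5, 3, 18, 0)},
    {"reported": datetime(2014, 5, 5, 8, 0),  "fixed_on_prod": datetime(2014, 5, 8, 8, 0)},
]

hours_to_fix = [
    (b["fixed_on_prod"] - b["reported"]).total_seconds() / 3600 for b in bugs
]
print(f"median hours from report to production fix: {median(hours_to_fix):.1f}")
```

Tracking the median (rather than the mean) keeps one slow-to-fix outlier from hiding an otherwise fast pipeline.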
