Date: Sunday 22nd March 2015
Time: 3.30pm – 5.30pm GMT
Facilitator: Dan Billing
Participants: Trisha Agarwal, Kai Bischoff, Christian Legget, Siim Sutrop, Namita Jain, Mark Winteringham, Marine Serre-Debray, Richard Bradshaw
In this Weekend Testing Europe session, we discussed experiences, issues and thinking around the use of tools. We began by examining what we felt a tool was within the context of testing. Opinions varied: tools are a way of making testing activities easier or more efficient, or they support the activity of testing. Mark explained his view that “tools are instruments to assist us in measuring and answering our questions”.
One amusing but poignant demonstration of how tools are important to testers came from Marine, who has a rubber duck on her desk, which she uses as a proxy for asking and answering questions about problems she has whilst testing. The tool here isn’t even interacting with the application under test, but interacting with the tester! A fascinating concept.
Equally, as Trisha reminded us during the session, a tool need not be a software or hardware artifact: it could be any process we use that solves a problem during testing. We could use patterns we have observed to inform future testing, or models we develop to recognise when there are problems with the software we are testing. Any oracles or heuristics we use to aid our testing could also be considered tools.
Some of the testers had created their own means of recording the data they collect, using things like wikis or note-taking tools. Some are personal wikis, whereas others use corporate wikis to create and share information with the people they work with.
We then moved on to discuss the sorts of issues we encounter when selecting and using tools. We found that tools are often forced on testers by managers or organisations who have decided that certain tools fit their business processes, without considering what value those tools contribute to the activity of testing. Many of the participants agreed that they preferred to have some input into the tools they use, rather than having tools imposed on them. Mark, for example, explained that he sometimes creates his own tools, especially to solve a specific problem that might not be solved by a commercial or open source tool.
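A tester-built tool of the kind Mark described is often just a few lines of code. As a minimal, hypothetical sketch (the log format and function name here are invented for illustration, not taken from the session), a tester might write a short script to summarise error messages in a log file when no off-the-shelf tool fits the problem exactly:

```python
# Hypothetical example: a small, purpose-built tool that scans log lines
# for ERROR entries and counts each distinct message -- the kind of
# one-off script a tester might write for a specific problem.
import re
from collections import Counter

def summarise_errors(log_lines):
    """Count occurrences of each ERROR message in an iterable of log lines."""
    pattern = re.compile(r"ERROR\s+(.*)")
    counts = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            counts[match.group(1).strip()] += 1
    return counts

# Sample log lines (invented data for demonstration)
sample = [
    "2015-03-22 15:30:01 INFO  request handled",
    "2015-03-22 15:30:02 ERROR connection refused",
    "2015-03-22 15:30:05 ERROR connection refused",
    "2015-03-22 15:31:10 ERROR timeout waiting for reply",
]
print(summarise_errors(sample))
```

The value of such a throwaway tool is precisely that it answers one tester's question about one system, something a generic commercial tool rarely does as directly.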
We also found that some tools may even make it harder to perform certain testing tasks. Kai raised the issue of false positives, where a tool indicates that there may be a problem with a system under test when in fact there is not. It is also possible that the tool itself is not working as it should, or as we expect, which in turn creates problems for testers. All of this informs the decision-making process around tool selection.
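A common source of the false positives Kai mentioned is the way a tool's own check is written rather than any defect in the system. As a hedged illustration (the function names here are invented, not from the session), an exact floating-point comparison can report a "failure" where no real problem exists:

```python
# Hypothetical illustration of a false positive: the naive check flags a
# problem that is an artifact of how the check is written, not a defect
# in the system under test.
def naive_check(actual, expected):
    # Exact float comparison -- spuriously fails for 0.1 + 0.2 vs 0.3.
    return actual == expected

def tolerant_check(actual, expected, tol=1e-9):
    # Comparing within a tolerance removes the spurious failure.
    return abs(actual - expected) <= tol

result = 0.1 + 0.2  # stand-in for output from the system under test
print(naive_check(result, 0.3))     # False: the tool cries wolf
print(tolerant_check(result, 0.3))  # True: no real problem exists
```

The point is not the arithmetic itself but that a tester who understands how the tool reaches its verdict can tell a genuine defect from noise.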
Other issues we explored included the propensity for some managers and teams to see a tool’s value only in the reporting and metrics it provides, rather than in the testing itself and the information that testing can provide. Well-known enterprise-level toolsets are sold on the basis that they solve management problems rather than testing problems, and they do not always aid the tester. Metrics are important, but if the testing that produces them is flawed, then surely the metrics are flawed too?
As we progressed, we also discussed that testers need the right mindset to use a tool. Richard reminded us of something Dorothy Graham said at the CAST conference last year: “it takes intelligence to use a tool effectively…unless a tester is a mindless moron, no tool can replace them.” So, if we do not understand how to use a tool, why we should use it, and what problems it solves or may even introduce, it can cost us precious time, add no value to our testing and the information it provides to our customers, and potentially cost more money than it needs to.
One final theme that stood out was that tools can be great aids to collaboration, whether with developers or with other testers. Collaborative tools such as wikis have already been mentioned, but there are also tools with sharing features and integration with file sharing and cloud drive services.
Thanks to all the testers who took part in what was a dynamic and exciting discussion. Lots of great ideas, tools and thinking were shared. Hopefully you all took something from the session.
- WTEU-55 Session Transcript
- James Bach’s repository of test tools: http://www.satisfice.com/tools.shtml
- Zim – a desktop Wiki: http://zim-wiki.org/
- Alan Richardson’s “The Evil Tester’s guide to Technical Testing” webinar: http://testhuddle.com/resource/the-evil-testers-guide-to-technical-testing-with-alan-richardson/
- MindMup – a collaborative mind mapping tool: https://www.mindmup.com/
- Ministry of Testing – List of test tools: http://www.ministryoftesting.com/resources/software-testing-tools/