EWT01 – Experience Report

Date: 16th January 2010
Time: 15:30 – 18:00 GMT

Website Tested: http://www.splashup.com

Mission: To find a list of bugs in the SplashUp application. Please note down your findings and email them to europetesters@googlemail.com.

Testers: Anne-Marie Charrett, Phil Kirkham, Jeroen De Cock, Zeger Van Hese, Thomas Ponnet, Maik Nogens, Jassi, Anna Baik, Markus Gaertner

In the discussion facilitated by Ajay, each tester was asked about their experiences and how they felt afterwards.

Maik Nogens started. He explained that since he had participated in another Weekend Testing session earlier the same day, his concentration had suffered, and he assumed this reflected on his testing work and efficiency. Another trap Maik fell into was the format of the bug list. Despite this, it was a good learning exercise, and different ideas and questions were already arising for him. When asked what he would do differently if given another chance, Maik explained that he would think first instead of diving into the testing too early.

Zeger van Hese learned that he shouldn’t assume anything; asking questions instead helps a great deal in setting up the mission for focused testing. Zeger continued that he was probably biased since he knew the product, and that he might have focused on another part of the application if it had been unknown to him. He stuck to the Open & Save area of the application instead of exploring the features available. Without his bias he would have explored more of the functions.

Phil Kirkham dived in next. He explained that he was used to finding bugs within the first 10 minutes of getting a product in his hands; the application under test here seemed a bit more solid than that. He also lacked some domain knowledge for the application. Reflecting on the session, he realized he had been looking for the big showstopper and felt pressure arising as he wasn’t finding bugs that fast. He really liked the learning experience and found the first session very interesting.

Anne-Marie Charrett tested the application in reverse order. She assumed that most testers would start with the menu options on the left, so she started on the right side of the menu. Anne-Marie learned that the other testers asked great questions and that she often just needed to listen to the answers. She pointed out that directly asking about the trap could have helped her testing; she was so distracted looking for traps that she wasn’t doing much testing. She also explained that she was tempted to record her approach on video, but gave up on that after 5 minutes. In the next session she would probably keep taking videos, even if it meant finding fewer bugs. According to Anne-Marie, reporting bugs is worthless if the developer cannot reproduce and fix them.

Markus Gärtner continued with his experience. Among the important problems was an occasional loss of data. The product seems to compete with other products on the market, but provides features of lower quality; there were even some pretty questionable features in the product. Overall the application was interesting to test, and following the questions from the other testers was very interesting.

Anna Baik explained that she started to search for bug lists on the web, since the mission did not explicitly exclude it. She then fell into the trap of reading these bug reports instead of using them as a testing aid and moving on. She explained that pairing with another tester with more domain knowledge could have helped her mission a lot.

Thomas Ponnet explained that he learned using two PCs was a bad idea: since he used the wrong URL for testing, he wasted half of his time. For an improved learning experience he would like to pair up with someone next time around; one tester could do the note taking while the other explored the product. Thomas raised the point that a bad test environment hurts testing efficiency. Overall, one hour of testing seemed rather short, especially when combined with clarifying the testing mission and asking questions about the product itself.

Several points came up in the discussion, interleaved with the experience reports:

  • The mission was vague. One could have searched for an existing bug list on the web and provided that. Participants pointed out that such a list could also give additional information about weak parts of the application – even if the version from the bug list and the latest version differed.
  • Another point raised was Pair Testing, even done remotely. We could have split up the work among us, for example with each tester focusing on a different area of the application according to individual expertise.
  • One important thing worth remembering next time around is that Weekend Testing is not a competition. Feeling pressure to find bugs, and the accompanying fear, decreases bug-finding ability. Showing that some portions of the application are pretty stable is worthwhile information, too.
  • One hour of testing seemed rather short. Some of the participants struggled with giving quality-related information about the product that early. Domain expertise can help in this quest. The discussion around this topic branched into two threads: testing as information gathering, and what this information should look like after just a single hour of experience.
  • Another stream of discussion arose around whether or not to ship a product with problems in it. One suggestion was to release the product as a beta (which was actually done) and let reasonable users find the bugs that still existed.
  • The amount of pinpointing was discussed to some extent. Finding the right trade-off between a bug report that is useless and one that helps the developer find and fix the bug is crucial.
