WTA-74: Organizing Your Environment for a Mobile App Testing Session
Date: 8-13-2016 (1:00 p.m. – 3:00 p.m.)
Facilitator: Jean Ann Harrison
When conducting tests for a mobile app, the amount of test management can become overwhelming. Because you’re testing an entire system beyond the GUI, test ideas start to blend together and important tests can be lost. In this Weekend Testing Americas session, we will focus on using an organizational method to manage your testing of a mobile app.
We offer the following sample categories; feel free to suggest your own ideas during the session.
A. Functional testing
– Works as designed?
– Consistent with the purpose?
B. UX testing
– Perceivable? (Font size, contrast, screen size and rotation)
– Readable? (Text and images understandable and useful, hints, help links, etc.)
C. App performance
– App response and performance
– Multiple apps
– The app in background
– The app in sleep mode
D. Device performance with the app
– Battery levels
– Battery drain/charge
– CPU load and stress
– Hot/cold temperature
E. Storage and Memory
– Required to install
– Memory usage
– Starving conditions
– Cloud memory
F. Network
– Connectivity
– Speed
– WiFi access
– Cellular access
G. Workflow
– Start/end points
– Steps
– Loops
– Interruptions (push notifications, timeouts, etc.)
– Termination/restart
Testing Sessions
- Observation and learning: learning about the interface and functionality
- Usage scenarios: following common and uncommon usage scenarios
- Function and data: exploring aspects of a function’s behavior (valid/invalid steps, conditions, data boundaries, etc.) and trying applicable data combinations
- Bug hunt: deliberately trying erroneous/invalid operations and conditions
Product under test:
MapQuest (Maps, Navigation, and Directions)
Recommended: have the app installed prior to the session.
Mission: Based on the sample category coverage and the testing session types, come up with a test strategy for the selected product. Assume that each session is one hour long and that you have a budget of eight hours.
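As one illustration of what the mission might produce, here is a minimal sketch in Python of a time-boxed plan. The category names come from the sample list above, but the hour allocations are purely hypothetical; choosing and defending them is the point of the exercise.

# Hypothetical allocation of the eight-hour budget across the sample
# categories above. The numbers are illustrative only; deciding them
# (and defending them to stakeholders) is the exercise itself.
SESSION_LENGTH_HOURS = 1
BUDGET_HOURS = 8

plan = {
    "A. Functional testing": 2,
    "B. UX testing": 1,
    "C. App performance": 1,
    "D. Device performance with the app": 1,
    "E. Storage and Memory": 1,
    "F. Network": 1,
    "G. Workflow": 1,
}

# Guard against over- or under-committing the budget.
assert sum(plan.values()) == BUDGET_HOURS, "plan must fit the eight-hour budget"

for category, hours in plan.items():
    print(f"{category}: {hours // SESSION_LENGTH_HOURS} one-hour session(s)")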
NOTE:
To participate in this WTA session, send us an email (WTAmericas AT gmail DOT com) and/or send a contact request to Skype ID “WeekendTestersAmericas”. Once you indicate you wish to participate, we will add your Skype ID to the session setup.
The day of the session, please contact us 20 minutes prior to the start of the session, so that we can add you to the group (if you email or Skype us in advance, we will add you based on that RSVP).
Hi,
I’ve somehow missed this session. I would really appreciate it if I could get a recording of it. Also, are you planning to conduct a meetup on mobile testing in the near future?
Experience Report for WTA-74 session
Introductions revealed a mix of experience levels among attendees, which can make for a rousing session of questions and new perspectives. This session didn’t disappoint. Our mission was to try a method in which we create categories for testing a mobile app and then plan those categorical test sessions within a time frame: come up with a test strategy for a GPS mobile app and work out the plan within an eight-hour budget.
We worked from a predetermined set of categories: functional tests, user experience (UX) tests, performance (including device performance factors), network communication, and general workflow. We started by time-boxing the functional tests and defining what counts as a functional test versus a UX test. Breaking the tests down also meant deciding which tests are most significant based on priority for the user. What followed was a discussion of types of performance, during which one attendee offered a mind map to start visualizing the categories of tests to consider.
In breaking down the functional tests, questions came up about installation, user permissions, and security among objects in the GPS app. One attendee decided that this coverage could be done within an hour. The discussion then moved on to topics like globalization, loss of connectivity, switching from WiFi to cellular and back again, and incoming calls while using the app. Much of this came down to general UX conditions, yet we also realized the UX category needed further breakdown into sub-categories. One very important question came up: “Did anyone notice that grouping the categories could be done in more than one way?” This question strongly shaped the session’s discoveries. In answering it, one attendee suggested not focusing on groups any broader than installation itself. This depends on the criticality of the app and the type of users, but the point is to cover as much testing as possible in an efficient way rather than with a haphazard, ad-hoc approach.
The discussion then turned to prioritizing the tests based on UX and on what a company wants to accomplish by deploying such an app. It’s important for testers to think about how an app will be viewed by its users and about potential perceptions not only of the app itself but of the company brand, so efficiency in test coverage is vital to the test strategy. Breaking the tests down to include the hardware and operating-system conditions that affect software behavior requires testers to negotiate for more time to provide that coverage.
One key factor that came up in discussing categories of tests was combining tests: taking notes on what is happening in more than one area at once. For example, while performing a particular function, how significant is the battery drain? What is the temperature of the device while performing functions, and also while charging? These observations can all be combined into one test and do not have to be performed separately. Combining test areas was a key takeaway from this session, and attendees found it applies not only to mobile testing but to non-mobile testing as well.
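As a rough illustration of that idea (this sketch is ours, not something produced during the session), a single note taken in one session can be tagged with every category it touches, so battery and temperature observations recorded during a functional session can be reused when the device-performance session comes around. All names and values below are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class SessionNote:
    """One observation, tagged with every test category it touches."""
    function_under_test: str
    observation: str
    categories: List[str] = field(default_factory=list)
    battery_pct: Optional[int] = None   # read manually off the device
    device_temp: Optional[str] = None   # e.g. "noticeably warm"
    timestamp: datetime = field(default_factory=datetime.now)

# A note taken during a functional session that also feeds the
# device-performance category later.
note = SessionNote(
    function_under_test="turn-by-turn navigation",
    observation="battery dropped 9% in 20 minutes while navigating",
    categories=["Functional testing", "Device performance with the app"],
    battery_pct=71,
    device_temp="noticeably warm",
)

def notes_for(category: str, notes: List[SessionNote]) -> List[SessionNote]:
    """Collect every note relevant to a category, regardless of which
    session it was originally recorded in."""
    return [n for n in notes if category in n.categories]

print(notes_for("Device performance with the app", [note]))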
Another observation was the realization that mobile testing is far more complex, in that testers need to think about situations different from what is typical for a desktop, web, or server application. Testing a mobile app does not automatically translate into needing less time for acceptable test coverage. Breaking the tests down into a test strategy helps prioritize the tests required to reach acceptable coverage from the perspective of both the user and the company deploying the app.
By breaking tests down into categories and assigning time-boxed values, two benefits follow:
1) testers have leverage to negotiate more time for testing
2) testers can share more information about specific risks, helping stakeholders make more informed decisions about releasing the product or whether more testing is required.
In discussing the organization of our tests for a mobile app, the conversation evolved into how to incorporate an efficient process for reporting bugs, how to take part in the resolution of bug fixes, and how testers need to work more directly with developers to help build quality into the product design rather than waiting to test after the product is built. During the discussion we realized that a separate category for “Bug Reporting” had to be part of organizing our environment.
Some quoted conclusions from attendees:
1) “Idea of combining manual checks: we seem to be running under the assumption that checks are being run manually and take nontrivial time… so we need to parallelize things like ‘walking around outside.’”
2) “What I want to share is about testers’ initiative. Yes, if they have to, PM and other stakeholders may tell you what to test. But why take a passive position? Yes, we often don’t have enough time. But we’re never going to get more if we don’t negotiate. And negotiation includes coverage and risks talk – so gotta know them well!”
3) “Highlight risks while planning test strategy. – Estimation = Negotiation” and “Categories for mobile app tests are closely related, cannot be isolated. So essential to make regular notes within sessions such that they can be later used in other categories.”
4) “I think that, given the fact that most mobile app development projects are in Agile environments, the teams (testers/devs/product owners) are also agile enough to collaborate.”
5) “Yeah, it is a little different world as I can see.”
Attendees experienced first-hand that mobile testing is whole-system testing: hardware and operating-system conditions affect software behavior and can adversely affect the user unless testing reveals that behavior. Mobile test coverage is much more complex, and a solid test strategy needs to be developed before jumping into testing.