
UiWatchers + Factory Pattern UI Fragmentation handling = Device lab lock screen Jeet Kune Do

In 2013 at a Google-hosted event in New York called the "Google Test Automation Conference", or GTAC for short, a team of Google engineers tried to sell the attendees on the idea that a device lab consisting of real-world devices is not as maintainable or optimal a test bed as a vast server farm running emulators.

That's right.

Google said that in order to avoid the tyranny of dealing with the real world everyone should have a Google-scale server farm for testing their apps (video, slides).

And then 2 months later they bought Appurify.

Those are pretty epic mixed signals.
Example of less paradoxical signaling.
Now don't get me wrong, I would LOVE to have Google's server resources for my automated testing. But like many companies whose primary business is not mobile apps, or even software in general, my current employer seems to prefer to invest more, shall we say, "modestly" in their app test automation environment. But don't miss the point that Google, for all their nearly infinite server resources, still finds live-device-based test automation so important that they bought a company clearly doing it better than they were.

Maybe it is because I like working with my own two hands. Maybe it is because I find a zen-like relaxation when up to my earlobes in cable-wrangling. Or maybe I just like to see all the different screens going blinky blinky at the same time. Beyond their practical value as fully realistic representations of the actual users' experience, I just prefer running automation on live devices. So the cautions in that GTAC 2013 presentation do not fall on deaf ears with me. I am battle-tested. I know their pain all too well. It seems like every week there's some new oddball device issue keeping your tests from running completely smoothly.

The irksome thing bothering me these days is that my tests sometimes start on devices with a locked screen. Which is to say, they aren't really running at all. Sure, ADB will dutifully install the app and start the uiautomator test suite, but that damned locked screen gets in the way of, well, everything.
My test runs be all like...
Nothing is worse than spending all that time and effort building your device lab, meticulously enabling developer mode's "stay awake while charging" on each device, getting a scalable CI service up and running with good end-to-end scenarios, then seeing that series of test runs failing at 100% and throwing off all your metrics. Invariably this happens when you or anyone else is not available to simply walk over and unlock the screen. Because life is pain.

And because we noble automators like to approach our task by selecting a variety of devices, the methods for undoing those tricksy locks can vary. But before you interrupt me, I know what you're thinking. You're thinking, "wait, they're YOUR devices, why not just disable the lock screen and start each test with a little UiDevice.getInstance().wakeUp()?" Well, before you get to feeling too clever, I may have mentioned before that my present employer doesn't exactly invest in mobile device testing labs. The "lab" in question may or may not be on or around my desk, so it is unsafe to skip basic security like PIN codes and screen timeouts. On top of that, I firmly support enabling all reasonable security on easily removed hardware that is running unreleased code. Seriously, we've been over this before.

In the time since I first wrote suggesting the use of UiAutomator to deal with the locked screen issue, I've learned a lot about writing tests with UiAutomator and about automation in general, and I recently wrote about handling the variances in device system UIs using the Factory Design Pattern. My first shot at this involved a dummy test case that ran the UiWatcher and unlocked the device. That's cool and all, but with UiAutomator you can't necessarily dictate the order in which tests run. Furthermore, each individual test should be written without any dependency on any other test. This makes debugging them for logic flaws easier, makes re-running failed tests more useful, and generally keeps the code much cleaner and more maintainable. Seriously, do it. And as I'll briefly describe, it really isn't that hard. It's not, like, rocket surgery or something.
Selfies are like, totes rocket surgery right?
Before we begin, please review the previous posts on using UiWatchers to unlock your devices and the Factory Design Pattern to simplify your system UI interactions. Go ahead, we'll wait...

...Now that it is fresh in your mind, here's the secret to my lock screen Jeet Kune Do:
  1. Create a separate WatchersHelper class and define a lock screen watcher just like you would if you were including a more specific watcher in your test class.
  2. However, for all UI interactions, use the factory design pattern approach: call the create method from your factory to instantiate an interface object and reference all UI interactions through its methods.
  3. Remember to use the system fingerprint in your factory to select the appropriate device-specific lock screen helper class that implements the required UI interaction methods.
  4. Then in your test classes, just add the lock screen watcher, register it during setUp(), and remove it during tearDown().
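Steps 2 and 3 can be sketched in plain Java. Since android.os.Build and the UiAutomator classes only exist on a device, the fingerprint is passed in as a string here, and all the names (LockScreenHelper, TouchWizLockScreenHelper, and so on) are my own illustrative assumptions, not code from the earlier posts:

```java
// Interface the factory hands back; tests and watchers only ever see this type.
interface LockScreenHelper {
    void unlock();
}

// Helper for stock Android keyguards (hypothetical implementation).
class StockLockScreenHelper implements LockScreenHelper {
    public void unlock() {
        // On-device, this would drive UiDevice/UiObject: swipe up, enter the PIN.
        System.out.println("stock keyguard: swipe up, enter PIN");
    }
}

// Helper for Samsung's TouchWiz keyguard (hypothetical implementation).
class TouchWizLockScreenHelper implements LockScreenHelper {
    public void unlock() {
        System.out.println("TouchWiz keyguard: swipe anywhere, enter PIN");
    }
}

class LockScreenHelperFactory {
    // On a real device you would pass android.os.Build.FINGERPRINT here.
    static LockScreenHelper create(String fingerprint) {
        if (fingerprint.startsWith("samsung/")) {
            return new TouchWizLockScreenHelper();
        }
        // Reasonable default: treat unknown devices as stock Android.
        return new StockLockScreenHelper();
    }
}
```

The point of routing everything through the interface is that the watcher never needs to know which skin it is unlocking; adding a new OEM is one new class plus one branch in the factory.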
So the summary is: the lock screen UiWatcher in your WatchersHelper class is generic. All your tests reference it without paying any specific attention to their environments; the watcher merely calls the factory through the appropriate interface. It doesn't take long to set this up once you're comfortable building UiWatchers in the first place, but the first time it saves your bacon on an unexpectedly locked screen, it will have paid for itself in recovered test time. Each time afterwards is just gravy. And hey, on top of that you've built the foundation for dealing with any lock screen-based tests (since lock screen notifications are now stock in Lollipop, you can anticipate needing to handle that UI). If there's interest, I can post some sample code showing where to organize this in your project. Until then, keep it groovy, Gotham.
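To make the register/run/remove lifecycle concrete without an attached device, here is a plain-Java stand-in for the watcher registry. The real API lives on com.android.uiautomator.core.UiDevice (registerWatcher, removeWatcher, runWatchers), and a real UiWatcher's checkForCondition() returns true when it handled something; this WatchersHelper class and its signatures only mirror that shape and are not the actual API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// Toy stand-in for UiDevice's watcher registry. Each watcher is a named check
// that returns true when it detected and handled its condition (e.g. a lock
// screen), mimicking UiWatcher.checkForCondition().
class WatchersHelper {
    private final Map<String, BooleanSupplier> watchers = new LinkedHashMap<>();

    // Mirrors UiDevice.registerWatcher(name, watcher): call this in setUp().
    void registerWatcher(String name, BooleanSupplier watcher) {
        watchers.put(name, watcher);
    }

    // Mirrors UiDevice.removeWatcher(name): call this in tearDown().
    void removeWatcher(String name) {
        watchers.remove(name);
    }

    // Mirrors UiDevice.runWatchers(): run every registered watcher and
    // report whether any of them fired.
    boolean runWatchers() {
        boolean handled = false;
        for (BooleanSupplier w : watchers.values()) {
            handled |= w.getAsBoolean();
        }
        return handled;
    }
}
```

In a real test class, the "lockScreen" watcher's body would ask the factory for the device-appropriate helper and call its unlock method, so every test gets the same one-line registration in setUp() regardless of which device it lands on.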

Activate the Bat Hand Jive, Robin!
