
Deploying Android Automation in Parallel

So here we go, right off the deep end.

One of the things that has bugged me about Android since I first started with it 18 months ago is the half-complete state of the features and tools supporting on-device automation at any kind of enterprise scale (as opposed to just a few developers and devices in some garage somewhere). So much of the platform integrates beautifully with the other scalable components of a build and test system that it felt like a hack when we finally got parallelization up and running in our device lab. Hey, at least the platform is open and flexible enough to allow it in the first place, right? A startup in Portland is currently building their business model off this functionality (holla, AppThwack, go on with your bad selves!).

So it is there for the clever. I just wish it didn't take being clever to manage running and monitoring multiple, simultaneous jobs on actual, live devices. Why? Well, because my cleverness is not nearly as strong as my laziness (one enables the other, I suppose). Knowing it is there isn't all that reassuring, because there are plenty of snares along the road from single-device testing to scalable parallelization.

The tools we use in the Seattle Deloitte Digital studio are open source for the most part, except for the single most important piece of the loop. Let's start by talking about the loop itself, though. For the unfamiliar, continuous integration usually involves the following pieces:

  1. source control - in our case, GitHub
  2. build server - in our case, Jenkins
  3. test framework - in our case, JUnit in Android
    1. third-party JAR from Polidea for XML output
    2. third-party JAR from Jayway (the ever-popular Robotium library) for easy UI automation
    3. a custom device I/O script hosted on the lab server
  4. reporting and IDE integration - in our case, Jenkins and a Jenkins Eclipse plug-in, respectively

There are plenty of blog posts about most of those pieces except for 3.3, and that's what I'll focus on here. With that script and the use of slave node configs on Jenkins, we can parallelize test jobs, queue jobs for specific devices, and build comprehensive coverage passes with multiconfig jobs. Jenkins slave node handling is well documented, as is multiconfig job setup. So let's get to that script already, eh?

Without pasting the whole thing in here (since your mileage may vary), let me just break it down by functional step; a minimal sketch follows the list. I am positive you, dear reader, will easily think of how to put this together in a shell script, and your implementation will likely be pretty snazzy by comparison. You're awesome like that and we both know it. So awesome that the following functions probably leap off the screen in perfect syntactic BASH within seconds of viewing them...

  1. Collect parameters passed by Jenkins into local variables. These are defined in the Jenkins job config and in the slave node config.
  2. Define on-device directories for file output: screenshots, test-result XML, and code coverage.
  3. Clean up the device in prep for a test run.
  4. Install the new build and test APKs.
  5. Run parameterized tests based on the variables from step 1. Typical parameters are: package name, test suite, XML and EMMA code coverage filenames and output directories, test runner, and any boolean flags enabling required features.
  6. Collect test artifacts and copy them back to Jenkins.
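
Just in case they don't, here's a minimal sketch of what those steps boil down to. Treat it as illustrative, not gospel: the parameter order, on-device paths, and package names are hypothetical placeholders, not our actual values.

```bash
#!/bin/bash
# Minimal sketch of the device I/O script described above. Parameter
# order, paths, and names are hypothetical -- adjust to your own jobs.
set -e

DEVICE_ID="$1"    # serial as shown by 'adb devices' (from the slave node env)
APP_APK="$2"      # path to the application APK in the Jenkins workspace
TEST_APK="$3"     # path to the instrumentation (test) APK
APP_PKG="$4"      # application package name
TEST_PKG="$5"     # test package name
RUNNER="$6"       # instrumentation runner class, e.g. the Polidea XML runner

ADB="adb -s $DEVICE_ID"
DEVICE_OUT="/sdcard/test-artifacts"  # on-device dir for screenshots/XML/coverage

# Steps 2 and 3: define the on-device output directory and clean up
# anything left over from the previous run.
$ADB shell rm -rf "$DEVICE_OUT"
$ADB shell mkdir "$DEVICE_OUT"
$ADB uninstall "$APP_PKG" >/dev/null 2>&1 || true
$ADB uninstall "$TEST_PKG" >/dev/null 2>&1 || true

# Step 4: install the fresh build and test APKs.
$ADB install -r "$APP_APK"
$ADB install -r "$TEST_APK"

# Step 5: run the instrumentation, passing the parameterized -e extras.
$ADB shell am instrument -w \
    -e coverage true \
    -e coverageFile "$DEVICE_OUT/coverage.ec" \
    "$TEST_PKG/$RUNNER"

# Step 6: pull artifacts back into the Jenkins workspace ($WORKSPACE is
# set by Jenkins) so the XML and coverage reports get picked up.
$ADB pull "$DEVICE_OUT" "$WORKSPACE/test-artifacts"
```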

Because we've configured device-oriented slave nodes on the Jenkins server managing the test jobs, we can use a common SSH config for the lab machine to which all the devices are connected, then simply supply the device ID you'd get from running 'adb devices' as an environment variable for that slave node. With that in place, each test job config on the Jenkins server calls something like '/working/directory/path/and/testShellScript.sh parameter1 parameter2 ... parameterN'.
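
For concreteness, a build step on one of those device-bound nodes might look like the following. DEVICE_ID is my hypothetical name for that node-level environment variable, and the APK paths and runner class are illustrative:

```bash
# Hypothetical 'Execute shell' build step on a device-bound slave node.
# DEVICE_ID comes from the node's environment config; verify the runner
# class name against the Polidea JAR you actually ship.
/working/directory/path/and/testShellScript.sh \
    "$DEVICE_ID" \
    "$WORKSPACE/app/bin/MyApp.apk" \
    "$WORKSPACE/app-tests/bin/MyAppTests.apk" \
    com.example.myapp \
    com.example.myapp.tests \
    pl.polidea.instrumentation.PolideaInstrumentationTestRunner
```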

Jenkins' own slave node job queuing and job reporting dashboards handle the rest.

If you really want to get tricksie, watch this blog for future posts on how we do the same things for iOS devices.
