So here we go, right off the deep end.
One of the things that has bugged me about Android since I first started with it 18 months ago is the half-complete features and tools supporting on-device automation at any kind of enterprise scale (as opposed to just a few developers and devices in some garage somewhere). So much of the platform just integrates so beautifully with other scalable components of a build and test system that it felt like a hack when we managed to get parallelization up and running in our device lab. Hey, at least the platform is open and flexible enough to do so in the first place, right? A startup in Portland is currently building their business model off this functionality (holla, AppThwack, go on with your bad selves!).
So it is there for the clever. I just wish it didn't take being clever to manage running and monitoring multiple, simultaneous jobs on actual, live devices. Why? Well because my cleverness is not nearly as strong as my laziness (one enables the other, I suppose). Knowing it is there isn't necessarily all that reassuring because there are plenty of snares along the road from single device testing to scalable parallelization.
The tools that we use in the Seattle Deloitte Digital studio are open source for the most part except for the single most important piece of the loop. Let's start by talking about the loop itself though. For the unfamiliar, continuous integration usually involves the following pieces:
1. source control - in our case, Github
2. build server - in our case, Jenkins
3. test framework - in our case, JUnit in Android
   3.1. third party JAR from Polidea for XML output
   3.2. third party JAR from Jayway (the ever-popular Robotium library) for easy UI automation
   3.3. a custom device I/O script hosted on the lab server
4. reporting and IDE integration - in our case, Jenkins and a Jenkins Eclipse plug-in, respectively.
There are plenty of blog posts about most of those pieces except for the custom device I/O script, and that's what I'll focus on here. With that script and the use of slave node configs on Jenkins, we can parallelize test jobs, queue jobs for specific devices, and build comprehensive coverage passes with multiconfig jobs. Jenkins slave node handling is well documented, as is multiconfig job setup. So let's get to that script already, eh?
Without pasting the whole thing in here (since your mileage may vary), let me just break it down by functional step. I am positive you, dear reader, will easily think of how to put this together in a shell script and your implementation will likely be pretty snazzy by comparison. You're awesome like that and we both know it. So awesome that the following functions probably leap off the screen in perfect syntactic BASH within seconds of viewing them...
- Collect parameters passed by Jenkins into local variables. These are defined here and also here.
- Define directories on device for file output for screenshots, test results xml, and code coverage output.
- Clean up the device in prep for a test run.
- Install new build and test APKs.
- Run parameterized tests based on the variables from step 1. Typical parameters are: package name, test suite, XML and emma code coverage filenames and output directories, test runner, and any boolean flags enabling required features.
- Collect test artifacts and copy them to Jenkins.
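The steps above might sketch out roughly like this. To be clear, this is my back-of-the-napkin version, not our actual lab script: every path, directory name, and helper function here is hypothetical, and the parameters simply arrive on the command line from Jenkins as described in step 1.

```shell
#!/bin/bash
# Hypothetical sketch of a device I/O script; all names and paths are illustrative.
# Usage: testShellScript.sh <device_id> <app_apk> <test_apk> <coverage_file> <runner>

# Step 2: on-device output locations (assumed layout)
RESULTS_DIR="/sdcard/test-results"
SCREENSHOT_DIR="/sdcard/test-screenshots"

adb_dev() {
    # Route every adb call to the device this job owns
    adb -s "$DEVICE_ID" "$@"
}

instrument_cmd() {
    # Step 5: compose the instrumentation command from job parameters
    # $1 = on-device coverage file, $2 = test runner component
    printf 'am instrument -w -e coverage true -e coverageFile %s %s' "$1" "$2"
}

run_job() {
    DEVICE_ID="$1"; APP_APK="$2"; TEST_APK="$3"
    COVERAGE_FILE="$4"; RUNNER="$5"

    # Step 3: clean up the device before the run
    adb_dev shell rm -rf "$RESULTS_DIR" "$SCREENSHOT_DIR"
    adb_dev shell mkdir -p "$RESULTS_DIR" "$SCREENSHOT_DIR"

    # Step 4: install fresh build and test APKs
    adb_dev install -r "$APP_APK"
    adb_dev install -r "$TEST_APK"

    # Step 5: run the parameterized tests
    adb_dev shell "$(instrument_cmd "$COVERAGE_FILE" "$RUNNER")"

    # Step 6: pull artifacts back into the Jenkins workspace
    adb_dev pull "$RESULTS_DIR" ./test-results
    adb_dev pull "$COVERAGE_FILE" ./coverage.ec
}

# Step 1: parameters arrive from Jenkins on the command line
if [ "$#" -ge 5 ]; then
    run_job "$@"
fi
```

The real thing needs more error handling (a hung device mid-run is a fact of life in a lab), but the shape is the same.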
Because we've configured device-oriented slave nodes on the Jenkins server managing the test jobs, we can use a common server SSH config for the machine in the lab to which all the devices are connected, and then simply supply the device ID you'd get from running 'adb devices' as an environment variable for that slave node. With that in place, each test job config on the Jenkins server calls something like '/working/directory/path/and/testShellScript.sh parameter1 parameter2 ... parameterN'.
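As a concrete (and entirely made-up) example of that wiring, a per-node build step might look like the following; the serial number, APK names, package, and runner are all placeholders, and only the script path comes from above.

```shell
# Hypothetical Jenkins build step on a device-specific slave node.
# DEVICE_ID would normally come from that node's environment config;
# the serial below is made up for illustration.
DEVICE_ID="${DEVICE_ID:-015d2856d8122a0e}"

# Each job supplies its own parameters to the shared script
# (APK names, package, and runner here are placeholders):
CMD="/working/directory/path/and/testShellScript.sh $DEVICE_ID app.apk tests.apk com.example.app com.example.runner"
echo "$CMD"
```

Because the device ID lives in the node config rather than the job, the same script serves every device in the lab without modification.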
Jenkins' own slave node job queuing and job reporting dashboards handle the rest.
If you really want to get tricksie, watch this blog for future posts on how we do the same things for iOS devices.