
Jenkins + Devices + AndroidJUnitRunner

New Android build system and test runner, same goose chase using undocumented features and hacks


Oh, you thought they were ready to support full enterprise scale CI just because they moved to Gradle?

As I've posted before, I am a big fan of Jenkins. It is extremely flexible, open source, and supported by a staggering array of plugins actively developed by engineers running over 100,000 instances of the server worldwide. With its distributed node model, you can even build your own device cloud for hosting enterprise-scale automation, economizing hardware investments by sharing resources across multiple projects as well as speeding up automation by parallelizing test runs. I had been using a Jenkins-based system in the past to support instrumentation automation with Robotium quite happily. For the last couple of years, however, my work hasn't required that as much, and I've found myself doing a lot more manual testing and using UiAutomator, which didn't require tight integration between the product codebase and the test code. As a result, I've been slow to adopt the new build system. A combination of changing roles at work and the release of the Android Testing Support Library, which merges the three major Google-supported Android automation frameworks (basic JUnit, Espresso, and UiAutomator) under a single test runner, has brought me back to building tests in the same space as the project code. In that time, Android moved to a new build system, forcing me to re-learn how to build and deploy instrumentation automation. From an automation framework perspective, things have never been better. From a build-test-report Jenkins CI workflow perspective, well, let's just get into it.

Android Test Automation and Gradle

When Android Studio left Beta and went prime time, it shipped with a new Gradle-based build system designed to replace Eclipse as both IDE and build tool, along with the other popular Android build tools, Ant and Maven. Through the Android Plugin, Gradle was already capable of running your on-device tests in parallel using a task called "connectedCheck" (which is now "connectedAndroidTest"). Unfortunately, it doesn't fit automatically into the distributed device node scalability model I use in Jenkins. The provided instrumentation test task, "connectedAndroidTest", will automatically use all detected devices and emulators on the adb instance of the machine running that Gradle task (though there is a work-around for running on a specific device). Furthermore, it is still a build task, which means you'd be invoking the compiler just to run your tests (and you'd need the entire project structure in place on your downstream job to accommodate that). Finally, it runs all tests on all devices instead of performing a more fine-tuned and scalable run that parallelizes with shards. Compounding this "run all the tests" scenario is the fact that my current project includes Robolectric-based tests, and my Gradle-FU is not yet strong enough to let that framework co-exist while still leaving me a Gradle task I can run with no flags. No, for now it is clear that what I really need Gradle to be is still just a compiler.
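
For illustration, here's roughly what that single-device work-around looks like; the serial is a placeholder, and ANDROID_SERIAL support is an assumption you should verify against your Android Gradle plugin version:

# Restrict adb (and therefore the connected test task) to one device.
# ANDROID_SERIAL support here is an assumption; the serial is a placeholder.
ANDROID_SERIAL=emulator-5554 ./gradlew connectedAndroidTest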

Getting the *-test-unaligned.apk

So we like Gradle for its versatility in building projects, but the built-in testing tasks just don't cut it for what we need. Instead, I went back to looking at running tests directly via shell commands. At first, I was confused about whether the test APK and product APK were merged into a single entity the way the test project and the product project were in the new build system. It turns out that the old instrumented test APK is still around as a separate entity. With Ant, the dependency on the main project was explicit, so when you went to build both the main and test projects, you could call "ant" from the shell against the test project's build.xml file and both APKs would get built automatically. In Gradle, there is no separate test project build file. Additionally, the test project isn't compiled by default when you call build on the root project's build.gradle file. The trick here is to call the "assembleDebugTest" task, which will produce the familiar test APK (albeit named *-test-unaligned.apk, because why zipalign a test APK anyway?) along with the debug version of your build.
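
A minimal sketch of the upstream build step, assuming a default single-module project (the output paths below are an assumption; adjust for your module name and plugin version):

./gradlew assembleDebug assembleDebugTest
# Typical outputs (assumed default layout):
#   app/build/outputs/apk/app-debug.apk
#   app/build/outputs/apk/app-debug-test-unaligned.apk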

Running Instrumentation on Device

Now that you've got your Jenkins job building your test and main APKs, it's time to push them to a device and run your tests. I prefer to do this in a downstream job so that it can queue at the slave node with any other jobs sharing that resource, and also fail separately from the build phase in the CI process. Per my post on enterprise scale automation, this results in a unique job per device target (I'll get into how to leverage this model for parallelization using shards later on). After using "adb install" to install BOTH the main and test APKs on your intended device target, Android can run your instrumentation tests using the old familiar adb shell command "am instrument". The new AndroidJUnitRunner supports a large variety of flags allowing for fine-grained control over what and how your test job will run. In my case, because my test project package includes Robolectric-based unit tests that are executed in the JVM on the build phase, I use the supported flags for specifying test classes and/or annotations to focus simply on the instrumentation tests for on-device testing. Because the new test runner supports both Espresso and UiAutomator tests, you can combine them in the same test runs as needed depending on the scope of your test plan which simplifies a lot of things including reporting.
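
Here's a sketch of those shell steps; the device serial, APK names, and test class are placeholders, and the component name matches the example later in this post:

# Install both APKs on the target device
adb -s emulator-5554 install -r app-debug.apk
adb -s emulator-5554 install -r app-debug-test-unaligned.apk

# Run only a specific test class (annotation filtering works similarly via -e annotation)
adb -s emulator-5554 shell am instrument -w -r \
  -e class com.android.foo.test.LoginUiTest \
  com.android.foo/android.support.test.runner.AndroidJUnitRunner

# Or parallelize across devices with shards, one shardIndex per Jenkins job
adb -s emulator-5554 shell am instrument -w -r \
  -e numShards 4 -e shardIndex 0 \
  com.android.foo/android.support.test.runner.AndroidJUnitRunner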

Publishing JUnit Reports on Jenkins

Jenkins prefers a specific format of XML for reporting JUnit test results. This format is output automatically when you run the Gradle task "connectedAndroidTest" but isn't built into the new test runner directly. Additionally, the complexity of the new test runner means you can't simply wrap it the way the excellent Polidea XML test runner did with AndroidTestRunner. That means that you are still stuck trying to parse your terminal output from the "am instrument" command into the desired XML format. Unfortunately, the AndroidJUnitRunner's output from your "am instrument" shell command is an awkward non-standard text output.
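
For reference, the format Jenkins' JUnit publisher consumes looks roughly like this (a minimal hand-written sample, not actual runner output):

<testsuite name="com.android.foo.test.LoginUiTest" tests="2" failures="1" errors="0" time="4.2">
  <testcase classname="com.android.foo.test.LoginUiTest" name="testLogin" time="2.1"/>
  <testcase classname="com.android.foo.test.LoginUiTest" name="testLogout" time="2.1">
    <failure message="login button not found">junit.framework.AssertionFailedError: ...</failure>
  </testcase>
</testsuite>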

IMPORTANT: you need to add the "-r" flag for raw output to capture anything meaningful to parse for your XML in the first place.

Thankfully, once you've set that flag, the output should look familiar-ish if you've been using Danny Preussler's handy UiAutomator output parser for your previous UiAutomator work. The parser won't work out of the box, though, since there is a section at the end of the AndroidJUnitRunner output recounting the stack traces of all failed tests. That section can simply be deleted before parsing as it contains redundant information. I pipe stdout to a text file, then trim that file's redundant section using the following (OSX-friendly) sed command:

sed '/Time: /,$d' testResults.txt > trimmed_testResults.txt

From there I simply call the parser and pass it that trimmed results text file which will produce the XML in a format Jenkins likes.
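
The invocation is something like the following; the jar name and version are assumptions, so check the converter project's README for the release you pull:

# Writes a JUnit-style XML file next to the input (exact naming depends on the converter version)
java -jar automator-log-converter-1.5.0.jar trimmed_testResults.txt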

Summary

In the end, updating my previous experience with Ant-based builds and the Polidea test runner has been a surprisingly challenging process. I really need to spend some quality time learning Gradle in ways I never had to with Ant. There have been a few blockers along the way that resulted from undocumented features that don't play well together. I won't even go into how long I spent poking around at DDMLIB and Trade Federation to see if they could be used to solve some of my problems (hint: yes, and not really). The good news is that once I stopped looking at the challenging or curiosity-provoking solutions and started looking at the fast solutions, I had my problems solved in an hour or two. The bad news is I spent an embarrassing amount of time in challenging/curiosity-provoking investigations. The result is more or less the following (a consolidated sketch of the downstream job's shell step follows the list):
  • use the "assembleDebugTest" Gradle task in your upstream build job on Jenkins to generate the test APK when you're not going to take advantage of the "connectedAndroidTest" task.
  • install both the test and main APKs on your target device then launch instrumentation using "adb shell am instrument -w -r com.android.foo/android.support.test.runner.AndroidJUnitRunner > test_output.txt"
  • trim the test_output.txt file using sed so that it is readily consumable by Danny Preussler's UiAutomator log output parser
  • generate your Jenkins-friendly XML by passing your trimmed_test_output.txt to the parser
  • write lots of useful, debuggable, awesome, confidence-inspiring test automation.
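
Putting those steps together, a sketch of the downstream job's "Execute shell" build step might look like this, with placeholder serials, APK names, and parser jar version:

#!/bin/bash
set -e

SERIAL=emulator-5554  # the device bound to this Jenkins node/job

# 1. Install the APKs copied in from the upstream build job
adb -s "$SERIAL" install -r app-debug.apk
adb -s "$SERIAL" install -r app-debug-test-unaligned.apk

# 2. Run the instrumentation tests, capturing raw (-r) output on the host
adb -s "$SERIAL" shell am instrument -w -r \
  com.android.foo/android.support.test.runner.AndroidJUnitRunner > test_output.txt

# 3. Trim the redundant stack-trace summary (unnecessary with current parser versions; see the update below)
sed '/Time: /,$d' test_output.txt > trimmed_test_output.txt

# 4. Convert to Jenkins-friendly JUnit XML (jar name is an assumption)
java -jar automator-log-converter-1.5.0.jar trimmed_test_output.txt

Point the "Publish JUnit test result report" post-build action at the generated XML and Jenkins takes care of the rest.
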
UPDATE: Danny and other contributors have updated that parser to suit the new test runner's output, so you don't need to do any trimming. Danny has also changed the project so that it no longer stores snapshots, so you might want to create a job on your Jenkins instance that pulls updates from that project, builds it, and provides the compiled parser as a jar to your other projects.

Comments

  1. Thanks, this was super useful! A couple of points:
    1. The current version of automator-log-converter doesn't seem to mind the final summary section, so no need for the sed trimming
    2. The output XML generated by automator-log-converter is not compliant with the Jenkins xUnit plugin (which can also parse JUnit reports). Specifically, it's missing the "tests" and "errors" attributes on the "testsuite" element. Adding them should be a simple matter though (simply count the "testcase" elements for the former, and filter to those that have an internal "failures" element for the latter). See: https://github.com/dpreussler/automator-log-converter/issues/11

    1. I have seen a bunch of pull requests on that project lately, so keep checking back; it may have already been fixed. You're right about there being no need to trim the results summary anymore. I have updated my servers several times over the last couple of months to pick up the updates from that awesome project and just forgot to update this blog post.

      I'm glad you found the post useful! Let me know if there are any other topics you'd like to see covered.

  2. Dude, after some days struggling with the same problems you had, this post has been a life saver for me. Really, thank you a lot!

    1. That's really good to hear. I should spend more time documenting the details of the process we use but this covers a lot of the gotchas that I suffered through. Google have really stepped up their game (well, relatively speaking) in terms of developer outreach for a lot of their issues but the lion's share of their work has been built around the concept of their core audience being individual or very small company developers. Certainly nobody needing scalability. That's changing slowly though.

