
Boom! Screenshot! Level up your test debugging with RunListener

UI automation is like a dumb computer user; a really, really, profoundly dumb computer user. It will fail, and it will be obvious that it is failing, but it won't always be obvious why. Sometimes just observing its failures by means of a screenshot at the moment of failure is all it takes.



As you may recall from my last post on the topic, screenshots on failure are a potent way to quickly answer the question of why a UI test case failed. Google's Android Test Tools team likes screenshots enough to have included the capability in the UiAutomator framework. Square likes them enough to have built them into its Spoon framework. And as we all remember, Robotium has been doing this for a long, long time. The main advantage of the approach I described previously is that screenshots happen automatically on errors of any kind in the test lifecycle.

By now, you're all probably migrating, or have already migrated, to the newer Android Testing Support Library. Among the many advantages of the new framework is the ability to use JUnit 4. This breaks the pattern I published previously, so it's time for an update, and this time we're getting even fancier. Whereas before I had you including an overridden runTest() method in all of your test case classes, some of you may have gotten clever and included it once in a base class, then extended that base class for all of your test case classes of a similar type (this reduces boilerplate for things you commonly do in your setUp() and tearDown() methods too). In JUnit 4, the approach is even better: it reduces your boilerplate further and lets you swap in a variety of screenshot approaches dynamically when you call your tests, by passing a parameter to am instrument from the shell. And we'll do all of this using a RunListener.

Run listeners can be specified at execution time as a flag to am instrument in the device shell. From the runner documentation: to specify one or more RunListeners to observe the test run, pass -e listener com.foo.Listener,com.foo.Listener2
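As a concrete sketch, a full invocation might look like the following (the package and listener names here are placeholders; substitute your own test package and runner):

```shell
## Hypothetical package/class names -- substitute your own.
adb shell am instrument -w \
  -e listener com.foo.test.ScreenshotFailureListener \
  com.foo.test/android.support.test.runner.AndroidJUnitRunner
```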

To implement your own listener, write a single class that extends RunListener, which you can then enable or leave dormant as desired. When capturing a screenshot, I like to name the file after the test case's fully qualified name so that I can attach it to the failure results via a Jenkins plugin. You can get the class and method names from the Description object. Here is some sample code for you to study and try out:
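The original sample snippet did not survive this copy of the post, so here is a minimal sketch of the idea: a RunListener that, on test failure, writes a UiAutomator screenshot named after the fully qualified test name. It assumes the Testing Support Library and UiAutomator 2.0 are on the classpath; the class name and output directory are illustrative.

```java
import android.support.test.InstrumentationRegistry;
import android.support.test.uiautomator.UiDevice;
import org.junit.runner.Description;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

import java.io.File;

public class ScreenshotFailureListener extends RunListener {

    // Hard-coded on-device path; see the comments below for the tradeoffs.
    private static final String OUTPUT_DIR = "/sdcard/test_output_files/";

    @Override
    public void testFailure(Failure failure) {
        Description description = failure.getDescription();
        // Name the file after the fully qualified test name, e.g.
        // com.foo.MyTest.testSomething.png, so Jenkins can attach it
        // to the matching test result.
        String fileName = description.getClassName() + "."
                + description.getMethodName() + ".png";

        File outputDir = new File(OUTPUT_DIR);
        if (!outputDir.exists()) {
            outputDir.mkdirs();
        }

        UiDevice device = UiDevice.getInstance(
                InstrumentationRegistry.getInstrumentation());
        device.takeScreenshot(new File(outputDir, fileName));
    }
}
```

Because the listener is chosen at invocation time with -e listener, you can leave it out for fast local runs and switch it on only in CI, with no changes to the test classes themselves.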


Happy testing!

Comments

  1. Not sure I understand this, but is it possible to use it to take a screenshot of the device, even if activity is not on the foreground?
    Is it possible to take a screenshot within the app, of other apps?

    ReplyDelete
    Replies
    1. You might be familiar with the calling parameters for the screenshot methods in Spoon and Robotium, both of which require passing in the view. With UiAutomator, it's much simpler (as you can see in my code snippet above). The reality is, the whole point of UiAutomator is to break free from the limitation of working solely within the application context of your own debuggable app. So yes, you can take a screenshot while your test is running through another app if you're so inclined. The only limitation is if any view in the current hierarchy has its layout flagged "secure" (FLAG_SECURE). That blocks all screenshot methods as far as I know (you know, apart from holding another camera over your device screen).

      Reference: http://commonsware.com/blog/2012/01/16/secure-against-screenshots.html

      Delete
  2. How do you pull the screenshots off in an automated fashion for Jenkins?

    ReplyDelete
    Replies
    1. I run Jenkins nodes on Mac Minis, so I use Unix shell commands for most of my device I/O via adb. In the code you see above, I call out "/sdcard/test_output_files/" as the on-device path where I store the screenshots. It isn't the most elegant approach: I could use a more environment-aware method that checks for external storage based on the SDK level (which is how Spoon works). I hard-code that path, but it could just as well be a parameter you pass to the test runner when you call it.

      Fragile hardcoding aside, here's a quick look at the shell commands you'd use to pull the files, clean up the directory on the device, check whether any files are PNGs, and sort them into per-class folders so that the JUnit Attachments plugin for Jenkins can include them at the class level with your test results. Note, however, that you have to handle the case where you have no screenshots at all (that's what the nullglob option below is for). I also like to use the Workspace Cleanup Plugin to delete my workspace before each run, so there are no leftover artifacts for these scripts to pick up incorrectly.


      ## PULL SCREENSHOTS AND LOGS FROM DEVICE
      adb -s $DEVICE_SERIAL pull /sdcard/test_output_files/

      ## CLEAN UP TEST FILES FROM DEVICE
      adb -s $DEVICE_SERIAL shell rm -rf /sdcard/test_output_files/

      ## SET UP DIRS FOR JUNIT ATTACHMENTS PLUGIN: https://wiki.jenkins-ci.org/display/JENKINS/JUnit+Attachments+Plugin
      shopt -s nullglob
      for file in *.png; do
        dir=${file%.*.png}
        mkdir -p "$dir"
        mv "$file" "$dir"
      done

      Delete
  3. Because I'm inattentive, apparently, I missed that this approach suddenly stopped working when I ran the tests against a device running Android 6.0. I opened the following bug with the AOSP here: https://code.google.com/p/android/issues/detail?id=195927

    ReplyDelete
  4. Okay, not time to panic yet. Turns out this is breaking as intended. Now I have to decide between forcing the permission to be enabled at the outset of testing based on the SDK level of the target device OR using a different method to select the output directory for the test data (logcat and screenshots). I updated the ticket with further testing.

    https://code.google.com/p/android/issues/detail?id=195927

    ReplyDelete
  5. I've had some significant "key learnings" today regarding permissions on Marshmallow. I'll be adding a NEW post to cover the permissions stuff here instead of just updating this post.

    ReplyDelete
  6. This comment has been removed by the author.

    ReplyDelete
  7. Change targetSdkVersion from 23 to 22 in Gradle; stupid permission bug.

    ReplyDelete
  8. Thanks for the guide, it was great help!

    A note, though: if you want to run the test suite through Gradle rather than through instrumentation directly, you can either pass a command-line argument to Gradle, as mentioned in the 1.3.0 notes here:
    http://tools.android.com/tech-docs/new-build-system

    or by adding

    testInstrumentationRunnerArguments listener: "com.foo.package.MyListener"

    to build.gradle, either in defaultConfig or in a flavour-specific config.
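    For anyone else wiring this up, the placement in build.gradle looks roughly like this (the runner and listener class names are placeholders; substitute your own):

    ```groovy
    android {
        defaultConfig {
            testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
            // Passed to the runner exactly as if you had used
            // "-e listener ..." with am instrument.
            testInstrumentationRunnerArguments listener: "com.foo.package.MyListener"
        }
    }
    ```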

    ReplyDelete

