
UiAutomator.jar: What happened when Android's JUnit and MonkeyRunner got drunk and hooked up

"Drunkenness does not create vice; it merely brings it into view" ~Seneca

So Jelly Bean 4.2 landed with much fanfare and tucked in amongst the neat new OS and SDK features (hello, multi-user tablets!) was this little gem for testers: UiAutomator.jar. I have it on good authority that it snuck in amongst the updates in the preview tools and OS updates sometime around 4.1 with r3 of the platform. As a code-monkey of a tester, I was intrigued. One of the best ways Google can support developers struggling with platform fragmentation is to make their OS more testable so I hold high hopes with every release to see effort spent in that area. I have spent a couple days testing out the new UiAutomator API and the best way I can think of describing it is that Android's JUnit and MonkeyRunner got drunk and had a code baby. Let me explain what I mean before that phrase sinks down into "mental image" territory.

JUnit, for all its power and access to every interface, every service, every nook and cranny in your app, is limited to only running within your app's context. It carries the full heft of all the years of *Unit development elsewhere in test automation but it remains too close to its unit test roots to be serviceably fast and agile to develop in for UI automation. Moreover the limitation of only being able to test the associated product app prevents testers from fully exploring the interactions between applications and systems elsewhere on the device.

Conversely Android's MonkeyRunner is based in a Java adaptation of Python (Jython, no kidding) and it has access to a whole host of interfaces outside of the particular application you're trying to test. It can be used for mobile web testing, capturing screenshots of any app interaction, and since it is based in Python, is fairly quick to script. Its shortcomings are pretty severe when it comes to deep integration with application architecture so you are left clicking sometimes blindly in order to generate events (sendkeys, x/y coordinates, d-pad actions, oh my!).

It is easy to see where these two tools could complement each other well if combined. Deep application integration with simpler scripting and full-device access, not just sandboxed inside a single app's context could be really powerful. I say "could be" for a reason though and that's after a few days' mucking around with the tool. Let's go over the good aspects first.

The Good:

  1. Since UIAutomator still uses JUnit as the basis for its test runner, most of the familiar test case structures are available in all the best ways. 
  2. Of special importance are UiWatchers which are like async police whom you can configure to lurk outside of test cases to catch common difficulties affecting tests (such as dialogs and alerts) or embedded within tests themselves for more specific triggers.
  3. The XML hierarchy dump tool in Monitor/DDMS is amazingly fast by comparison to the old Hierarchy Viewer, and gets you everything you need as a tester to identify the specific UI elements your test will need without the distractions. Brilliant.
  4. The tests compile as a separate JAR which you push to the device/emulator in shared filespace so that the application-under-test-sandboxing of former JUnit test projects will be a thing of the past. Even better, the JAR still executes on the device allowing for massive parallelization just like before (sure, I am tempted to brag about having a parallelization problem but I'll take the high road this time)
  5. Simple, repeatable syntax for getting objects from the UI to interact with means you spend less time coding and more time constructing useful tests (at least, that's the hope, when it works)
  6. Use of Accessibility labels enforces good coding practices. Just like iOS's UI Automation, this tool takes advantage of some oft-overlooked aspects of complete code and so testers get convenient UI hooks, and the sight-impaired get better apps. Win, meet my friend Win. You two have lots to talk about.
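To make items 2 and 5 concrete, here's a minimal sketch of a watcher plus the chained-selector syntax. This is illustrative only: the class name, watcher name, and button text are made up, and it assumes the API-17-era `com.android.uiautomator` classes (it has to be built into a test JAR and run on a device, so treat it as a sketch rather than a drop-in test):

```java
import com.android.uiautomator.core.UiObject;
import com.android.uiautomator.core.UiObjectNotFoundException;
import com.android.uiautomator.core.UiSelector;
import com.android.uiautomator.core.UiWatcher;
import com.android.uiautomator.testrunner.UiAutomatorTestCase;

public class ExampleTest extends UiAutomatorTestCase {

    public void testWithDialogWatcher() throws UiObjectNotFoundException {
        // A watcher lurking outside the test flow: when a selector lookup
        // stalls, the framework asks registered watchers to check for (and
        // clear) known obstructions like dialogs.
        getUiDevice().registerWatcher("dismissDialogs", new UiWatcher() {
            @Override
            public boolean checkForCondition() {
                UiObject okButton = new UiObject(new UiSelector().text("OK"));
                if (okButton.exists()) {
                    try {
                        okButton.click();
                        return true; // condition was found and handled
                    } catch (UiObjectNotFoundException ignored) {
                    }
                }
                return false; // nothing to handle this time
            }
        });

        // The "simple, repeatable syntax": chain selector criteria, then act.
        UiObject settings = new UiObject(new UiSelector()
                .className("android.widget.TextView")
                .text("Settings"));
        settings.clickAndWaitForNewWindow();
    }
}
```

The selector criteria you chain here (className, text, description, and so on) map directly onto what the XML hierarchy dump in Monitor/DDMS shows you, which is why item 3 above matters so much in practice.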
The Bad:
  1. Just like the former JUnit, this one lacks portable XML test-results output, which makes it feel like Google don't like good, thorough reporting.
  2. Furthermore, on the reporting side of life, the lack of XML output is compounded by the lack of Eclipse integration for running the tests. You will spend a lot of time with the command line. As you're aware from my previous posts, we've built an extensive CI system and hooked it up to a device lab which is accessible even from within Eclipse. This tool is not there yet.
  3. JUnit's overly verbose coding style is still present meaning writing tests is still complex and you need to know a lot about device limitations, timing of events in the UI, and other kinds of non-trivial, deeper than scripting, test automation heavy lifting. I would say this is still a 4 out of 5 in terms of code complexity. Maybe a 3 if your app plays nicely. I would say my UIAutomator test cases are likely to contain 80% to 120% as many lines of code as my straight up JUnit-based UI test cases. No real gains here (yet).
  4. This tool only works on devices and emulators graced with Android 4.1 (or higher). This is because the tests are run via an app included with the OS on the device. Over the next few years this will allow you to begin getting wider device coverage with tests written using this tool but until then you can't use the UIAutomator for compatibility tests. Fragmentation's a mean ol' dog who won't be put down easily.
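For reference, the command-line workflow item 2 above complains about looks roughly like this (project and class names here are made up; this assumes the r17-era SDK tooling with ant builds):

```shell
# Generate the build scaffolding for a UiAutomator test project
android create uitest-project -n MyUiTests -t android-17 -p .

# Build the test JAR and push it to shared space on the device
ant build
adb push bin/MyUiTests.jar /data/local/tmp/

# Run the tests on the device -- results come back as plain text
# on stdout, not as JUnit XML
adb shell uiautomator runtest MyUiTests.jar -c com.example.tests.LoginTest

# Bonus: dump the current UI hierarchy as XML for element discovery
adb shell uiautomator dump /data/local/tmp/uidump.xml
```

Every one of those steps lives outside Eclipse, which is exactly why a CI system with device access ends up doing the heavy lifting.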
So what are we left with when we add this all up? I'd say while UIAutomator won't revolutionize the way I write UI test automation overnight, it shows tremendous promise. When this little code baby matures over the next year or two, with some more support from the Open Source community, I can see this going way beyond tools like Robotium. For those of you who are very comfortable in Java and JUnit, this will get you writing UI Automation faster than Robotium did, and with its ability to break out of the sandbox, you'll be getting more creative with what you're willing to test. 

What's the TL;DR?
Why you want to use this: no more app sandbox for JUnit-based tests, better coding practices, more automatable scenarios with deeper device integration.
Why you'll want to pull your hair out: still no native XML output, no Eclipse integration, still too verbose to be clean and fast, and it only works on devices/emulators running Jelly Bean 4.1 and above.

Final Words
Writing comprehensive automation for UI tests on devices is still really hard. There is huge support in the Android community for brave testers who like a challenge and Google have provided us another option. If Android's JUnit and MonkeyRunner's drunken tryst resulted in all this already, I say we all buy Google's Android test developers another round.

Comments

  1. Thanks for the review Russell! It sounds like, for the time being anyway, this tool's biggest weakness is that it only runs on 4.2. One of the biggest wins for automation (especially on Android) is crossing multiple configs, and this tool isn't the one for that. Like you said, maybe after the baby grows up a bit.

    ReplyDelete
  2. Hi Brian. Thanks for the comment. I've corrected my post to state that it is supported on devices running 4.1 and forward but even with that, Jelly Bean is less than 5% of total Android marketshare. One of the huge advantages to this tool however is its ability to test anything the device can do. I imagine if you've got a 4.1 phone or tablet, you can even test other people's apps without access to their source code like you'd need with regular JUnit.

    ReplyDelete
  3. Great tool. It's nice to see UiAutomator covered here.
    Could you tell me whether UiAutomator supports Android WebViews, like hybrid apps?

    ReplyDelete
  4. I posted some similar findings here:
    https://plus.google.com/103712615773524578393/posts/dzqtF7BYyG4

    I don't think the verbosity is wholly a by-product of being JUnit -- for example, libraries like Mockito manage to keep test code shorter by having nice methods you can statically import and using plenty of method chaining. That's missing in UI Automator.

    As for the lack of JUnit XML support, I've already asked on the adt-dev mailing list about adding support to the excellent "android-junit-report" project, but with little luck so far:
    https://groups.google.com/forum/?fromgroups=#!topic/adt-dev/3McUOQ8JrKE

    The big win over Robotium is that it supports crossing package boundaries. Robotium can only test the Activities that live in the package under test. UI Automator works across the whole system.

    The biggest downside for me is that it's tied so closely to the system, therefore updates and bug fixes seem likely to be released infrequently, i.e. once per Android OS update.

    ReplyDelete
    Replies
    1. Is it possible to modify the android-junit-report project ourselves...instead of waiting for Google to support it?

      Delete
  5. Thanks for the comments, Christopher.

    The verbosity is a Java thing. With projects like Cucumber and other Ruby-style BDD test semantics libraries out there, I would hope that future updates to tools designed to make automating the UI easier would try and move that direction. Sure, you can port Cucumber to Java but without native support for that level of semantics, you're just asking people to add an unnecessarily ungainly semantic translation layer themselves and that gets hard to support quickly.

    Which ties back to your other concerns about XML and just support in general. I think the pace of development on the platform itself is already so fast that these kinds of efforts are all measured against how much of a return they'd provide. It just seems like Google aren't convinced doing so within the Android project is worth their effort, especially with the open source community doing their best already to fill in the gaps.

    We use Robotium already and get our JUnit XML output from Polidea's XML test runner JAR. A quick build-time android:uses-permission insertion in the Manifest to add SDcard access to debug builds gives us a place on disk to write test results for easy retrieval. We'd probably just use "adb run-as" for most purposes but Robotium's screenshot tool points to a non /data/data/your.package.name directory so I kill two birds with one stone and pipe all file output there and pull a single directory back to the CI system. I have a few posts about it elsewhere in the blog.

    As for crossing package boundaries, I completely agree. This is an incredibly powerful tool for scenarios that require whole-device manipulation (so long as that doesn't involve needing access inside WebViews apparently). I look forward to expanding my test automation support coverage accordingly.

    ReplyDelete
  6. Thank you for your great writeup! My first impression is that it's a lot quicker to write testcases using this framework, as compared to Robotium. The webview limitation is as expected, but you probably can still inject javascript like with Robotium.

    But, I could not get the UiWatcher to work, it does not ever seem to be called even when the UiSelector does not match anything. Do you per chance have a sample you want to share?

    ReplyDelete
    Replies
    1. Thanks for the comment, jmk.

      I just wrote up a quick demo test JAR to walk through the use of Watchers. I'll post it and an article about using them tomorrow. Stay tuned!

      Delete
  7. Hey Russell, great write up, pretty useful.
    Thanks...cheers!

    ReplyDelete
  8. Hi Russell,
    I am facing method not found error while running uiautomator test cases on my device.
    I get the following error result on my command prompt while running the test cases.

    "Error in testDemo2:
    java.lang.NoSuchMethodError: com.android.uiautomator.core.UiSelector.textMatches

    at mh_test.MainClass.testDemo2(MainClass.java:61)
    at java.lang.reflect.Method.invokeNative(Native Method)
    at com.android.uiautomator.testrunner.UiAutomatorTestRunner.start(UiAutomatorTestRunner.java:121)
    at com.android.uiautomator.testrunner.UiAutomatorTestRunner.run(UiAutomatorTestRunner.java:82)
    at com.android.commands.uiautomator.RunTestCommand.run(RunTestCommand.java:76)
    at com.android.commands.uiautomator.Launcher.main(Launcher.java:83)
    at com.android.internal.os.RuntimeInit.nativeFinishInit(Native Method)
    at com.android.internal.os.RuntimeInit.main(RuntimeInit.java:237)
    at dalvik.system.NativeStart.main(Native Method)

    INSTRUMENTATION_STATUS: numtests=2
    FAILURES!!!
    Tests run: 2, Failures: 0, Errors: 1

    I have googled around, and in Google's forums I found that such errors occur when running test cases on an emulator, but I am running on a device, so please help me figure out how to get rid of this error. An early reply would be much appreciated because I badly need the solution.
    My code is as follows:

    package mh_test;

    import com.android.uiautomator.core.*;
    import com.android.uiautomator.testrunner.UiAutomatorTestCase;




    public class MainClass extends UiAutomatorTestCase {

        public void testDemo2() throws UiObjectNotFoundException {

            // Set the swiping mode to horizontal (the default is vertical)
            // appViews.setAsHorizontalList();

            UiObject eulaobject = new UiObject(new UiSelector()
                    .className("android.widget.CheckBox"));
            eulaobject.click();

            // Validate that the package name is the expected one
            UiObject eulaValidation = new UiObject(new UiSelector()
                    .textMatches("I agree to the Terms and Conditions"));
            assertTrue("Eula does not match", eulaValidation.exists());
        }
    }

    Thanks,
    Ritima

    ReplyDelete
    Replies
    1. It depends on your device. Since this is a new tool and it depends on the software deployed to the device in the device OS, you might find that unless you're running 4.2 on your device, you don't have all the updated methods in the tool. This was something I noticed as well and made comment towards in my main post. For now, try running your code in an emulator running rev 17 of the SDK and proving whether or not it works at all under the best case scenario. Once that's established, double-check the OS on your device. If it isn't 4.2, you may need to be satisfied with running your tests on the emulator for now.

      Delete
    2. Hi Russell
      I was able to identify the cause of the error: I was using API 16, and some of the functions do not work in API 16.
      Thanks for the reply. I would also like to know more about UiWatchers; I read your blog post on watchers and it was pretty good, but if you could describe them in more depth with a more complex example, that would be great. Thanks :)

      Delete
  9. Hi Russell
    I am facing one more problem while using uiautomator: I want to check whether there is an active internet connection on my device before a particular test case runs. I searched around but could not find the desired results.

    ReplyDelete
  10. Hey
    I am unable to press the 'Go' button after searching for something on a Nexus 7 tablet. I tried the following:
    //Search something say "fun"
    new UiObject(new UiSelector().text("Enter URL or Search & Win")).setText("fun");
    getUiDevice().pressEnter();
    OR getUiDevice().pressSearch();

    Also tried :
    getUiDevice().pressKeyCode(66); //for enter
    getUiDevice().pressKeyCode(84); // for search

    Could you help me out with this?

    ReplyDelete
  11. Hey Russel
    Can you please tell me which class is encountered first when uiautomator is called? As far as I understand, we need not write a main Activity class when using uiautomator for testing.

    ReplyDelete
  12. Russel,

    Can you tell me where to find the new changes to uiautomator?
    I want to see the changes that have come in API Level 18 and Level 19 (KitKat).

    Thanks

    ReplyDelete
  13. While running UiAutomator, the results are shown on the command prompt with failure and pass traces. Where can I get a complete report? Is there any way to generate an XML report using UiAutomator?

    ReplyDelete
  14. This is common to all of Google's provided test tools. Internally, Google uses a custom test tracking tool and thus uses custom parsing rules to suit their needs. The rest of us who appreciate basic things like generic XML reporting are left to look for things like the Polidea XML Test Runner for JUnit (e.g. Robotium) tests. In the case of the UiAutomator's output, I found this to be quite useful:
    https://github.com/dpreussler/automator-log-converter/tree/master/snapshots

    I hope that helps.

    ReplyDelete
    Replies
    1. Thanks Russell...As of Today...Do you still use UIAutomator?...Or is there a better tool that you prefer?

      Delete
    2. Suuuuuuper late reply. I still use UiAutomator. I've got an upcoming blog post comparing it with Espresso that you might enjoy.

      Delete
  15. Not 4.1 and above, just 4.1. Each scrollForward method, and methods like it, keeps returning false when asked whether it can scroll again. This breaks scrollIntoView, and you can kiss goodbye to any testing on small devices or anything that scrolls!

    Swipe is also broken in the same way on Android 4.3!

    ReplyDelete

