The UI tests have long been a thorn in the side of Chrome
developers. The pain is most acute when a developer commits a change
and a UI test starts failing, only to pass on the next cycle. Looking
through recently failing UI tests, it seems they fail for the
following reasons:

. A bug in Chrome that the UI test occasionally hits. For example,
  BrowserTest.JavascriptAlertActivatesTab occasionally triggers a
  NOTREACHED in AtExitManager::RegisterCallback. This doesn't appear
  to be a bug in the test, but rather a real bug in Chrome that is
  tricky to fix. These failures are the most frustrating, as they are
  typically hard to reproduce outside of the buildbot.

. The UI test expects something to happen within a certain amount of
  time and it doesn't. This can happen for any number of reasons, most
  often because the test doesn't allow enough time for a condition to
  occur before asserting that it occurred.

To address these pains, I propose the following:

. Ignore any failures that are the result of:
    [580:1948:983267734:FATAL:object_watcher.cc(62)] Check failed: false. RegisterWaitForSingleObject failed: 1299
  This seems to be the cause of a number of UI test failures and
  needs to be investigated. Until then, we shouldn't show failures
  caused by it.

. Have the bots maintain a flaky test list. The list will initially
  contain the set of tests Niranjan has identified. If a test on the
  list fails, the buildbot turns yellow, and only turns red if the
  test fails again during the next run. We may want to require three
  failures in a row, but two is a good place to start. The flaky UI
  test list will live in the repository and can be modified over time
  as necessary. Similarly, if the UI tests time out during an
  exception, only make the tree red if they time out again. Hopefully
  this can leverage the same scripts used for tests_fixable.

. Analyze the set of tests that have recently failed in hopes of
  fixing the bugs or tests. Niranjan has compiled a list of these
  bugs. This won't be easy, as many of them are hard to reproduce
  outside of the bots.

. Some of the UI tests poll until some criterion is met, for example
  until the number of tabs becomes X. This makes for tests that take
  longer than they need to. 250ms doesn't seem like much, but it
  quickly adds up as our test base grows. Instead, the test should
  issue a command and have the browser send back a response when it's
  done, so that tests can run as fast as possible. A rough sketch of
  both approaches follows this list.
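
To make the difference concrete, here is a minimal sketch of the two
approaches. The BrowserProxy interface and the WaitForTabCountChange
call are made-up names purely for illustration; the real automation
API would look different.

  #include <windows.h>

  // Hypothetical interface, purely for illustration.
  class BrowserProxy {
   public:
    virtual ~BrowserProxy() {}
    virtual int GetTabCount() = 0;
    // Blocks until the browser reports a tab-count change or the test
    // timeout expires; returns false on timeout.
    virtual bool WaitForTabCountChange() = 0;
  };

  // Today: poll every 250ms until the tab count matches. Even when
  // the condition becomes true almost immediately, each check can
  // cost up to a full 250ms.
  bool WaitForTabCountByPolling(BrowserProxy* browser, int expected,
                                int timeout_ms) {
    for (int waited = 0; ; waited += 250) {
      if (browser->GetTabCount() == expected)
        return true;
      if (waited >= timeout_ms)
        return false;
      Sleep(250);
    }
  }

  // Proposed: the browser sends back a response when the tab count
  // changes, so the test resumes as soon as the condition holds.
  bool WaitForTabCount(BrowserProxy* browser, int expected) {
    while (browser->GetTabCount() != expected) {
      if (!browser->WaitForTabCountChange())
        return false;  // Timed out waiting for a notification.
    }
    return true;
  }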

These recommendations make for a greener tree, but they won't make it
any easier to write, debug and maintain UI tests. The UI tests are
end-to-end tests and have their place, but for the most part we've
written UI tests because we haven't had a more convenient way to bring
up the browser in a unit test setting. To that end I propose we expand
the unit tests to enable bringing up the browser in the unit test,
with the unit test having direct handles to all live objects.
Additionally, we'll write mocks for prominent classes so that you can
create a Browser with only the parts you care about testing (for
example, no window, or no renderer...). A rough sketch of what such a
test might look like follows the list of advantages below. This has
the following advantages:

. Easier to debug.
. You have handles to real objects, so there is no need to add
  automation messages for new functionality you want to test.
. Tests should run faster, as there can be fewer delays.
. You can mock out the parts you don't need, resulting in more focused
  tests; for example, you need not have any history or bookmarks if
  you're not going to test them.
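
As a very rough sketch of what such a test might look like: the class
names and constructors below (a Browser taking an injected profile and
window, InMemoryProfile, NullBrowserWindow) don't exist today and only
illustrate the kind of seams the refactoring would need to expose; the
harness is gtest, which the unit tests already use.

  #include "testing/gtest/include/gtest/gtest.h"

  // Hypothetical mock: BrowserWindow here stands for whatever window
  // interface Browser ends up depending on. All operations are
  // swallowed so the test never shows real UI.
  class NullBrowserWindow : public BrowserWindow {
   public:
    virtual void Show() {}
    virtual void Close() {}
  };

  TEST(BrowserUnitTest, ClosingLastTabLeavesNoTabs) {
    // Bring up a Browser with only the parts this test cares about:
    // no real window, no renderer, no history or bookmarks services.
    InMemoryProfile profile;
    NullBrowserWindow window;
    Browser browser(&profile, &window);

    browser.AddBlankTab();
    EXPECT_EQ(1, browser.tab_count());

    // Direct handle to the live object: no automation message is
    // needed to drive or inspect it.
    browser.CloseTabAt(0);
    EXPECT_EQ(0, browser.tab_count());
  }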

There are disadvantages to this approach, though:

. If a test crashes, no tests after the crashing test will run (unit
  tests already have this property, but not the existing ui tests).
. The test runs on the UI thread, so you still need to write code that
  waits for a condition to be met. Helpers will need to be written to
  ease this (a sketch of one such helper follows this list).
. The startup process needs to be refactored to allow for ui tests to
  use it.
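
For the condition-waiting helper, I'm imagining something along these
lines. RunUIThreadUntil is a made-up name, and the exact base calls
(MessageLoop::current()->RunAllPending(), PlatformThread::Sleep) are
assumptions about what's available to the test:

  #include "base/message_loop.h"
  #include "base/platform_thread.h"
  #include "base/time.h"

  // Sketch of a helper that spins the test's UI-thread message loop
  // until |condition| returns true or |timeout| elapses. Returns
  // whether the condition was met.
  template <class Condition>
  bool RunUIThreadUntil(const Condition& condition,
                        base::TimeDelta timeout) {
    base::Time deadline = base::Time::Now() + timeout;
    while (!condition()) {
      if (base::Time::Now() > deadline)
        return false;
      // Let queued tasks (IPC responses, notifications, timers) run so
      // the condition has a chance to become true, then back off
      // briefly rather than busy-waiting.
      MessageLoop::current()->RunAllPending();
      PlatformThread::Sleep(10);
    }
    return true;
  }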

Nonetheless, I think the advantages outweigh the negatives.

Thoughts?

  -Scott
