I've been behind on a lot of things, but i am starting to get caught
up. This report touches several areas: modal web dialog support,
fixing the wait logic, unit testing approaches, moving to Subversion,
modal Windows dialogs, and reliability.

The biggest thing i've been behind on is the support for modal web
dialogs in Watir. But i mostly have that working now. I've committed
the code and the test. See modal_dialog_test.rb for an example.

This is rough code. But it works. And i'm happy to get bug reports on
it and fix them. And i know i need to change the interface, and am
happy to receive suggestions.
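
To give a flavor of it without digging into the test file, here is
roughly the kind of interaction modal_dialog_test.rb exercises. The
names below (modal_dialog, click_no_wait, the page itself) are
placeholders of mine, not the committed interface, which as noted is
likely to change:

    require 'watir'

    # a hypothetical page with a button that calls showModalDialog; the click
    # blocks until the dialog is dismissed, so fire it without waiting
    ie = Watir::IE.start('http://localhost/modal_launcher.html')
    ie.button(:value, 'Launch Dialog').click_no_wait
    dialog = ie.modal_dialog                        # placeholder accessor for the dialog
    dialog.text_field(:name, 'answer').set('yes')
    dialog.button(:value, 'OK').click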

People have been asking about this for a long time. I've been "real
close" since last summer. This required programming in C. I was able
to get Yaxin Wang to do most of it -- he's a colleague at ThoughtWorks.
But to finish it up, i needed to get my own C programming environment
set up and make a couple of changes. And this took a lot of time, what
with a job change -- from ThoughtWorks to DataCert -- and all.

It's been more than just a job change. For the first time in five
years, i'm no longer a consultant. And i don't really need to speak or
teach or write any more. And i don't need to be a public contributor
to open-source software.

I became a consultant, to a large degree, so that these kinds of
things would be part of my job. I like doing them. But they had in
their own way become a chore. So i've been enjoying doing these things
only out of joy, rather than obligation. And the C programming hurdle
just took me a long time to get over. At the same time, i've retained
my commitment to completing the modal web dialog support. I knew i was
close; we needed it at almost all our clients when i was at
ThoughtWorks; and in fact i need it now if i am to use Watir at my new
job (which i need to do). Indeed, after a while, i stopped letting
myself work on practically anything Watir-related other than this.

Now that it is committed, i can get back to a number of long-deferred
items.

Last night, i checked in a "fix" to a problem that Jonathan and Dmitri
had both reported with synchronization when using goto. I say
"fix" because i now realize that i've introduced another problem: we
can now see exceptions and/or hangs when, say, an
ie.button(...).click closes the browser window.
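
Concretely, the troublesome case is something like this (the page and
button here are made up):

    # imagine a page whose button calls window.close() in its onclick handler
    ie = Watir::IE.attach(:title, 'Logout Page')
    ie.button(:value, 'Close Window').click   # the click works and the window
                                              # goes away, but the wait logic then
                                              # polls a browser that no longer exists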

I've been uncomfortable with the lack of unit tests for the wait
method. Now that i know i have to rework this code, i have new
impetus to write these tests. I see several approaches.

Almost all of our unit tests are what might better be termed
feature tests because they test Watir in the full context of a
browser. The only significant difference from full-scale system tests
is that the app they test is "mocked" by a set of interlinked html
files. They work very well as feature tests, but the local pages load
quickly, so they don't really exercise the wait logic.
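
A typical one looks something like this (the page, button and
assertion are invented for illustration):

    require 'watir'
    require 'test/unit'

    class ButtonFeatureTest < Test::Unit::TestCase
      def test_click_navigates_to_result_page
        # a local page from the interlinked set of test html files
        ie = Watir::IE.start('file:///c:/watir/unittests/html/buttons.html')
        ie.button(:value, 'Submit').click
        assert(ie.text.include?('Submitted'))   # the linked page loads almost instantly
        ie.close
      end
    end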

You might call what we have "unit integration tests." There is another
style of unit testing, "unit isolation tests," that makes much heavier
use of mock objects. In the case of testing IE#wait (this
is the standard way in Ruby to refer to the wait method of the IE
class -- it is a documenting convention and not an executable syntax),
this would mean mocking out the other things that method interacts
with: the document object, the busy method, the framesets. The great
thing about interaction testing is that it can easily provide
deterministic tests for code that may have to operate in a
non-deterministic environment. Right now, the only Watir test that
uses the isolation testing approach is ie_test.rb (using
ie_mock.rb). This was written by Elisabeth Hendrickson and
Indianapolis Mike Kelly.
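
For the wait logic, an isolation-style test could look roughly like
the sketch below. The fake object and the tiny helper are mine, purely
to show the shape of the style; the real IE#wait and ie_mock.rb are
more involved:

    require 'test/unit'

    # stands in for the small slice of the browser that the wait logic touches
    class FakeBrowser
      def initialize(busy_states)
        @busy_states = busy_states   # scripted sequence of values for busy
      end
      def busy
        @busy_states.shift           # each call consumes the next scripted value
      end
    end

    # a deliberately tiny wait helper, standing in for the code under test
    def wait_until_not_busy(browser, max_polls = 10)
      polls = 0
      while browser.busy
        polls += 1
        raise 'gave up waiting' if polls >= max_polls
      end
      polls
    end

    class WaitIsolationTest < Test::Unit::TestCase
      def test_wait_returns_once_browser_reports_not_busy
        browser = FakeBrowser.new([true, true, false])
        assert_equal 2, wait_until_not_busy(browser)   # deterministic: exactly two busy polls
      end
    end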

Another approach to testing the wait logic would be to embed
JavaScript-based delays in the event handlers of a standard test html
page. A third approach would be to create a WEBrick app and put the
intentional delays on the server side.
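
A minimal sketch of that third approach, using the WEBrick server in
Ruby's standard library (the port, path and delay are arbitrary):

    require 'webrick'

    server = WEBrick::HTTPServer.new(:Port => 2000)

    # a page that only responds after an intentional server-side delay,
    # so the wait logic has something real to wait for
    server.mount_proc('/slow') do |request, response|
      sleep 3
      response['Content-Type'] = 'text/html'
      response.body = '<html><body>finally loaded</body></html>'
    end

    trap('INT') { server.shutdown }
    server.start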

One of the long-deferred items has been our move to Subversion and
OpenQA. One of the great things about SVN (which is how Subversion is
abbreviated) is that every commit point is a release. This means that
anyone can easily download the particular software configuration as it
was after any particular commit. This may be convenient if i continue
to introduce bugs as i rework the wait code: i could ask a user to try
release X to see whether it works with their test suite. With CVS,
creating releases is a time-consuming and manual process, and we don't
do it much.
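
For example, pulling the tree exactly as it stood after a given commit
will be a one-line checkout pinned to that revision (the revision
number and repository URL here are placeholders):

    svn checkout -r 1234 <repository-url>/trunk watir-r1234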

I strongly believe that getting this wait logic right and not forcing
users to think about it has been a characteristic distinguishing Watir
from other tools. Selenium doesn't do this. It has separate Click and
ClickAndWait methods. You use the latter if the click initiates a page
load, and the former if it doesn't. The problem i helped fix (we use
Selenium at DataCert) is that sometimes the ClickAndWait would
unexpectedly hit a client-side error that would prevent the page load
from happening. This caused the test to hang, while the ClickAndWait
method waited for a page load that was never going to happen. We fixed
this by adding a timeout, but i do believe that Watir's approach is
superior. It waits a little while, looks to see if a page load has
started, and if so, waits for it to finish.
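
In pseudocode terms, the strategy amounts to something like the
following. This is a simplified paraphrase, not the actual IE#wait
source; the ie here is the raw InternetExplorer COM object, and the
grace period and timeout values are made up:

    # give a page load a moment to start, then, if one is underway, wait for it
    def wait_for_possible_navigation(ie, grace_period = 0.5, timeout = 60)
      sleep grace_period
      start = Time.now
      while ie.Busy || ie.ReadyState != 4       # 4 == READYSTATE_COMPLETE
        return if Time.now - start > timeout    # don't hang if the load never finishes
        sleep 0.1
      end
    end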

I will be reworking Watir's support for Windows Dialogs shortly. I've
been embarrassed by all the problems we've had with the WinClicker and
have instead been working down a new path that requires neither an
inverted flow of control nor passing control to an external
script. You can see my current progress in dialog_test.rb.
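
The rough idea is to drive the dialog directly through the Windows API
from the same Ruby process, instead of handing control to a separate
script. A sketch of the general technique, using the Win32API class
(the helper name and constants are mine; the real code in
dialog_test.rb may look quite different):

    require 'Win32API'

    FindWindow   = Win32API.new('user32', 'FindWindow',   ['P', 'P'], 'L')
    FindWindowEx = Win32API.new('user32', 'FindWindowEx', ['L', 'L', 'P', 'P'], 'L')
    SendMessage  = Win32API.new('user32', 'SendMessage',  ['L', 'L', 'L', 'L'], 'L')
    BM_CLICK = 0x00F5

    # poll for a dialog by title, then click one of its buttons by caption
    def click_dialog_button(dialog_title, button_caption, timeout = 10)
      deadline = Time.now + timeout
      hwnd = FindWindow.call(nil, dialog_title)
      while hwnd == 0
        raise "dialog '#{dialog_title}' never appeared" if Time.now > deadline
        sleep 0.1
        hwnd = FindWindow.call(nil, dialog_title)
      end
      button = FindWindowEx.call(hwnd, 0, 'Button', button_caption)
      raise "no button '#{button_caption}' on the dialog" if button == 0
      SendMessage.call(button, BM_CLICK, 0, 0)
    end

Because the click that pops the dialog blocks until the dialog is
dismissed, a helper like this would typically be started on its own
Thread just before that click.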

This has been an area where we've had awkward and unreliable
solutions, often characterized by hard-coded sleeps. To me, one of the
worst possible bugs in a testing tool is when it runs a test fine
sometimes but fails at other times, with no apparent difference
in the behavior of the software under test. Last year, one of our users
shared a side-by-side comparison of Watir with Silk, and one of the
columns was a reliability score. Watir tests worked 20 out of 20
times, but the various Silk tests were getting scores of 17 or 18 out
of 20.

More later...
