It's been MUCH more than a month, but I thought I would report back on what
we tried and where we met with success.

We tried the following:

1> A manual tester wrote test cases and did manual testing. An
"automation tester" wrote the methods needed to exercise functionality and
handed them over to the manual tester, who then had to "convert" manual test
cases into test data for the Watir script (i.e., populate an Excel sheet).

This approach didn't work AT ALL. The automation tester got focused on the
beauty of well-written code, and the manual tester got tied up in knots
trying to populate the Excel sheet of data, which seemed really alien.

2> The next approach we tried was to have an automation tester automate
product functionality for which development had finished and which wasn't
actively being worked on.

IMO, this proved to be fairly wasteful and pointless.

3> The approach that we currently follow: Dev/Test work in two week
iterations. While devs work on developing a story, testers work on
understanding the functionality and generating test ideas. When a story is
dev complete, the tester first tests manually and after reporting bugs, if
there are no blockers, works to automate the tests.

This approach works pretty well because, IMO, we have managed to create a
structure that is simple enough for non-programmers to work with. We use the
Ruby Test::Unit framework and use asserts to write tests that look
ridiculously simple (see below). Each iteration we manage to automate some
percentage of our tests, which is much better than 100% manual testing all
the time.

class TestCreateNewEmployee < Test::Unit::TestCase
  include NewEmployeePayslip_Methods

  def test_0010_create_new_employee_mandatory_fields
    tc = '0010'

    test_data = get_input_data(tc, EMP_SHEET)  # gets a row of data from an Excel sheet
    create_new_employee(tc, test_data)         # populates the screen with data from the Excel sheet
  end
end

The main con of this approach is that some manual testers have a mental
barrier to automation (since it involves writing "code").

4> The approach that we are currently exploring is to see if we can use
Cucumber instead of Excel to document our test scenarios.
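To make item 4 concrete, a Cucumber scenario for the same flow might look
something like this (the feature wording, step names, and fields are
hypothetical, not our actual scenarios):

```gherkin
# Hypothetical feature file, illustrative only
Feature: Create new employee
  Scenario: Create a new employee with mandatory fields only
    Given I am logged in as a payroll administrator
    When I create a new employee with first name "Jane" and last name "Doe"
    Then the employee "Jane Doe" should appear in the employee list
```

The appeal is that the test data lives in readable English steps instead of an
Excel sheet, while the step definitions underneath can still call the same
Watir methods.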

Can somebody point me to a good, current Cucumber/Watir tutorial, beyond the
ones I've looked at already?


On Thu, Aug 7, 2008 at 2:52 AM, Jeff Fry <> wrote:

> I think this is very context-specific. You're quite right to wonder about
> the cost of automating an unstable interface. That said, I've found specific
> situations where I've been very happy to use watir on early code. In fact,
> watir is light-weight enough that I regularly generate 'throw-away' code
> for a specific test, because the cost is low enough. For example, if I want
> to explore what'll happen if a discussion thread has 100s of posts on it,
> I can quickly use watir to generate that state. That might be a script I
> want to hold onto and maintain, but it might also be one that I'd be happy
> to not maintain.
> I'll also add that watir lets you drive a web app through a variety of
> means. Certain changes (e.g. the placement of a button on the page, or the
> addition of a new button) may not break existing scripts. I'd probably start
> by discussing with the programmers what if anything I can expect to be
> constant. They might be perfectly willing to support some API for you, for
> example putting IDs on elements, and keeping those IDs the same even if
> they're changing other things.
> Without knowing more of what your context is, I'd suggest starting with a
> very small investment to begin, and see what works for you and what doesn't.
> ...Then if you're game, write us a little report a month from now about
> what worked for you!
> Cheers,
> Jeff
> On Wed, Aug 6, 2008 at 6:06 AM, Bhavna Kumar <> wrote:
>> Hi,
>> This is more of a process query than a technical one. We have introduced
>> WATiR scripting in our org and are working on getting testers familiar with
>> doing test cases in WATiR.
>> Within the org, we are debating whether to use WATiR to build a
>> smoke/regression testing suite (which basically means manual testing for
>> releases, followed by post facto WATiR development) or to use it for actual
>> testing. Using it for actual testing would mean that scripts would break as
>> UI changes happen (and this happens quite a bit till functionality
>> stabilizes).
>> I'm curious to know at what point you folks begin WATiR testing in the
>> development cycle? If used for actual testing, then what has your experience
>> been?
>> Thanks in advance,
>> Bhavna
> --
> Jeff Fry
> Member, Association for Software Testing Board of Directors

You received this message because you are subscribed to the Google Groups 
"Watir General" group.