Luke Bayes wrote:

> Even being the best, I'm pretty sure we can all agree that it's not quite as simple as you've stated above. How deep should those tests go?

You test behavior. The majority of HTML is structural, and submits to visual inspection.

> How fragile are they?

You run the tests as often as possible, while changing the code. This shakes out the fragility. A test that fails too often you make "fuzzier". For example, this assertion detects an input field inside a form:

  assert_xpath '//form/input[@name = "foo"]'

If someone rearranges the graphics inside the form, you relax the restriction that the input is an immediate child of the form:

  assert_xpath '//form/descendant::input[@name = "foo"]'

assert_xpath provides the Symbol syntax (:) as a shortcut for that:

  assert_xpath :form do
    assert_xpath :'input[@name = "foo"]'
    # more assertions here
  end
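If the shortcut means what the long form suggests (an assumption about assert_xpath's internals), the nested block just re-roots a descendant:: search at the outer match. Stdlib REXML can demonstrate that equivalence without the gem:

```ruby
require 'rexml/document'

html = '<form><div><input name="foo"/></div></form>'
doc = REXML::Document.new(html)

# Long form: one absolute path, fuzzy about depth.
long_way = REXML::XPath.first(doc, '//form/descendant::input[@name = "foo"]')

# Nested form: find the form, then search relative to it,
# mirroring the Symbol-block shortcut above.
form = REXML::XPath.first(doc, '//form')
nested = REXML::XPath.first(form, 'descendant::input[@name = "foo"]')

raise 'not the same node' unless long_way.equal?(nested)
```

Both queries land on the same node object, so rearranging markup between the form and the input breaks neither.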

> Which HTML snippets should be verified?

Roughly speaking, the ones which contain <%= %> should all be test-firsted, to ensure the business logic provides the correct outputs into the views.
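A minimal sketch of that idea, using stdlib ERB and REXML instead of the full Rails stack (the template, User struct, and attribute names here are invented for illustration):

```ruby
require 'erb'
require 'rexml/document'

# Hypothetical view snippet: the <%= %> output is the behavior under test;
# the surrounding structure is just decoration.
template = ERB.new(
  '<form><p><input name="user_name" value="<%= user.name %>"/></p></form>')

User = Struct.new(:name)
user = User.new('Phlip')
html = template.result(binding)

doc = REXML::Document.new(html)

# Fuzzy assertion: the input may sit anywhere inside the form.
input = REXML::XPath.first(doc, '//form/descendant::input[@name = "user_name"]')
raise 'missing input' unless input
raise 'wrong payload' unless input.attributes['value'] == 'Phlip'
```

The test pins down the <%= %> payload and stays indifferent to the structural HTML around it.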

> Which should not?

Things which are easy to spot and fix if they are wrong.

> How do we teach others to write those tests?

Pair programming. A newb at my day job recently called out our colleague, who'd been there for a year, for not TDDing enough. I'm so proud...

> What does this mean in an AJAX world?

The exact same thing. Where assert_xpath uses Hpricot, REXML, or Libxml to extract the payload from HTML strings, assert_javascript uses Javascript-PurePerl to convert Javascript into XML describing its parse tree. Then assert_xpath can attack that.

  assert_xpath './/input[ @type = "image" ]' do |input|
    assert_javascript input.onclick
    assert_xpath './/Statement[2]' do
      assert_js_remote_function '/user/inventory/' do
        json = assert_js_argument(2)
        params = assert_params('uri?' + json[:parameters]).last
        assert_equal @user.id.to_s, params[:user_id]
        assert_equal @user.inventory.first.id.to_s, params[:inventory_id]
      end
    end
  end

That tests the onclick has an Ajax.Updater that calls the inventory action with the user ID and the first inventory item ID. Those IDs are the payload, so the test must parse its way to them and isolate them. Anything else can change.
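The payload-isolation idea can be sketched without the assert_javascript machinery, using stdlib CGI to pull just the IDs out of a hypothetical onclick string (the string format and names here are invented for illustration):

```ruby
require 'cgi'

# Hypothetical onclick payload: only the two IDs are the payload;
# everything else about the call may change without breaking the test.
onclick = "new Ajax.Updater('inv', '/user/inventory/', " \
          "{parameters:'user_id=42&inventory_id=7'})"

# Parse our way to the payload and isolate it.
query = onclick[/parameters:'([^']*)'/, 1]
params = CGI.parse(query)

raise unless params['user_id'].first == '42'
raise unless params['inventory_id'].first == '7'
```

Asserting only the extracted parameters leaves the rest of the generated JavaScript free to change.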

http://assertxpath.rubyforge.org/classes/AssertJavaScript.html

> Even though we aren't responsible for browser bugs, it is our job to work around them, shouldn't our tests verify this work too?

You can't use tests to prove the absence of bugs. The nightmare of browser compatibility illustrates this just as absolutely as tiny quantum-mechanical glitches in CPUs would.

Yet tests ensure that once you identify a bug you can safely fix it - test-first - without too much concern that the fix will break something!

> My understanding, which certainly could be flawed as I don't have any real experience with it, is that RSpec basically gives us 'permission' to write tests at a less granular level. By essentially creating that DSL and naming it a 'specification', we can write tests against object interactions in addition to isolated object behavior. This has the obvious side effect of compromising defect localization, but it seems like that may be a useful tradeoff in the context of GUI testing.

I don't understand why anyone would think that. We are not talking about "unit tests", which are a QA concept. A pure unit test must test objects in isolation, so when one fails you need inspect only one unit to find the problem.

We are discussing "developer tests". If you run them after every edit, then if they fail you only need to revert the last edit to fix them. So if you only write code if you can get a test to fail, and if a given edit connects two (or 15) objects, then a test must cover that interaction.
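For example (hypothetical classes), the edit that wires two objects together earns one developer test against the pair, not a pure isolated test per class:

```ruby
# The edit under test connects Cart to Pricer; the test covers
# that interaction, so reverting the edit is the fix if it fails.
class Pricer
  def price(sku)
    { 'apple' => 3, 'pear' => 5 }.fetch(sku)
  end
end

class Cart
  def initialize(pricer)
    @pricer = pricer
    @skus = []
  end

  def add(sku)
    @skus << sku
  end

  def total
    @skus.sum { |sku| @pricer.price(sku) }
  end
end

cart = Cart.new(Pricer.new)
cart.add('apple')
cart.add('pear')
raise 'interaction broken' unless cart.total == 8
```

Defect localization comes from the tiny edit-test cycle, not from isolating every object behind mocks.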

So why the hell does RSpec keep getting the good press here??

--
  Phlip
