6) we should use DbUnit or something similar to set the database up
into a consistent state for the respective tests, and tear it down
again. Running through the web interface in tearDown is time-consuming
and error-prone (if you add a new test that doesn't tear
itself down, a different test fails).
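To sketch what item 6 is after: seed a known dataset straight into the database before each test and wipe it afterwards, rather than cleaning up through the web UI. The real suite is Java with DbUnit; this is just the pattern in Python using the stdlib (`sqlite3`, `unittest`), and the table, fixture rows, and test name are all made up for illustration.

```python
import sqlite3
import unittest

# A known-good dataset, analogous to a DbUnit flat-XML fixture file.
FIXTURE_ROWS = [
    ("admin", "admin@example.org"),
    ("brett", "brett@example.org"),
]

class UserQueryTest(unittest.TestCase):
    def setUp(self):
        # CLEAN_INSERT semantics: start from a fresh table, then load
        # the fixture, so every test sees the same database state.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
        self.conn.executemany("INSERT INTO users VALUES (?, ?)", FIXTURE_ROWS)

    def tearDown(self):
        # Tear down at the database level, not through the web UI:
        # fast, and a test that forgets its own cleanup can't poison
        # the next one, because setUp rebuilds the state anyway.
        self.conn.close()

    def test_admin_exists(self):
        row = self.conn.execute(
            "SELECT email FROM users WHERE name = ?", ("admin",)).fetchone()
        self.assertEqual(row[0], "admin@example.org")
```

Because setUp owns the state, adding a sloppy new test can no longer break an unrelated one.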
On 27/12/2006, at 3:57 PM, Brett Porter wrote:
see below
On 27/12/2006, at 3:09 PM, Brett Porter wrote:
On 27/12/2006, at 2:08 PM, Brett Porter wrote:
Hi,
A few observations on these. Does anyone else have outstanding
"todos" in this area? I'd like to gather them up and get them
resolved to make them useful.
1) these need to be run regularly to be really useful. They
aren't part of the main build (a good idea, since it requires a
UI and takes forever). Is there a way to run them in Rhino so we
can run them as part of the main build, and then turn on the other
profiles when we have multiple platforms to test on?
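One way this split could look (a hypothetical sketch, not our actual POM): keep the browser-based tests out of the default build and hang them off a Maven profile, so the main build stays headless and `-Pselenium` opts in. The profile id and the `*SeleniumTest` naming pattern are assumptions.

```xml
<!-- Hypothetical profile: run the browser-based Selenium tests only
     when -Pselenium is given, keeping the default build headless. -->
<profiles>
  <profile>
    <id>selenium</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <includes>
              <include>**/*SeleniumTest.java</include>
            </includes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

Per-platform browser profiles could then be added alongside this one without touching the default build.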
2) they currently all fail - Franz says it's due to UI changes we
haven't caught up to. See #1 :) Are the UI changes abstracted
sufficiently that this will be a quick fix, or is it going to
be a big search-and-replace job? They fail due to "user
authenticated" assertions failing.
Fixed the fundamental problem, now it's just UI changes.
Down to 14 :)
Down to 5 (AccountSecurityTest, ProjectGroupTest). I'll look later.
3) Is there a way to get it to stop after a certain number of
failures? 39 open Firefox browser instances caused my Mac to
kernel panic.
4) I understand that the plexus-security related tests are
shared across both webapps. Should we put some helper code into
plexus-security that can be used by these tests, so that changes
there can be addressed there (preferably using the example webapp)?
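One shape such a shared helper could take (everything here is hypothetical, sketched in Python rather than the suite's Java): a single module, living alongside plexus-security, that both webapps' tests call for the login-page assertions, so a security UI change is fixed in one place instead of two.

```python
# Hypothetical shared helper ("plexus-security test support"): both
# webapp test suites call this instead of duplicating the "user
# authenticated" assertion, so a UI change is fixed in one place.

def assert_user_authenticated(page_text, username):
    """Raise AssertionError unless the page shows `username` as logged in.

    `page_text` stands in for whatever the real Selenium helper would
    read from the browser; the banner text below is an assumption.
    """
    banner = "Current User: %s" % username
    if banner not in page_text:
        raise AssertionError(
            "user %r not authenticated; page was: %r" % (username, page_text))
    return True
```

When the security pages change, only this helper needs updating, and the failures in #2 would have been a one-file fix.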
I think this is the list of things to get done - I can put them
in JIRA if there isn't anything extra or anything I've missed in
the list.
- brett
5) the Continuum tearDown should not swallow exceptions (rethrow
them, but that means changing the abstract Selenium test case to
throw them too)