Linda,

Can you use <not> to achieve what you want?

That is, rather than having the step fail and the remaining steps be 
skipped, if you expect (in fact, *demand*) a failure, you can wrap 
the assertion in a <not>, at which point it succeeds (I think ... 
I'm new to this as well), and all is good.
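
Something like this, maybe (untested, and the step names/URL are just 
placeholders for whatever you're actually checking):

```xml
<!-- If you *expect* the verify to fail, invert it with <not>:
     the wrapped step failing makes the <not> step pass, so the
     remaining steps still run. -->
<not>
  <verifyText text="Welcome"/>
</not>
<clickButton label="Next"/>
```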

Others will correct me if I'm mistaken, I hope.
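
For the "report on all the buttons at once" part, splitting each button 
check into its own <webtest> might also do it -- the documentation you 
quote says processing continues with the next <webtest> even when 
haltonfailure is "false". A rough, untested sketch (host, ports, and 
page names are made up):

```xml
<target name="menu-buttons">
  <!-- One <webtest> per button: with haltonfailure="false", a
       failure skips only that webtest's remaining steps, and the
       next <webtest> still runs, so the report covers every
       button, pass or fail. -->
  <webtest name="home button">
    <config host="myhost" port="8080" protocol="http"
            basepath="myapp" haltonfailure="false"/>
    <steps>
      <invoke url="home.html"/>
      <verifyTitle text="Home"/>
    </steps>
  </webtest>
  <webtest name="about button">
    <config host="myhost" port="8080" protocol="http"
            basepath="myapp" haltonfailure="false"/>
    <steps>
      <invoke url="about.html"/>
      <verifyTitle text="About"/>
    </steps>
  </webtest>
</target>
```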

Amy!
On Thu, 23 Apr 2009 21:24:03 -0400, Linda de Boer wrote:
> G'day
> 
> I found that even using the "haltonfailure" config option, I still 
> could not get the test to continue to the next step.
> 
>  According to the documentation "Even when set to "false" all of the 
> trailing <step>s of the current <webtest> will be skipped but 
> processing will continue with the next <webtest>.". If I understand 
> this and all the other threads I've read today, I need to start a new 
> "<webtest>" for the next test step. If this is so, then I need a new 
> invoke. This has also been noted by another fellow in one of the 
> threads, but I can't find it again.
> 
> I am thinking that I have misunderstood something because this is a 
> very well organized package. I figure I've got to have missed 
> something. Is there a way around this that I have not found?
> 
> What I am currently doing is just testing menu buttons and verifying 
> pages. A simple starter. But I'd like a full report of all the 
> buttons, pass or fail, all at once. Not one or two at a time 
> depending upon when it failed. Am I trying to do something that I 
> should not?
> 
> Thanks much......;-)
> 
> --
> ldb
> _______________________________________________
> WebTest mailing list
> [email protected]
> http://lists.canoo.com/mailman/listinfo/webtest