https://bz.apache.org/bugzilla/show_bug.cgi?id=59152

--- Comment #11 from Sebb <[email protected]> ---
(In reply to Vladimir Sitnikov from comment #10)
> Milamber> It's very easy to conclude that this load test is successful
> (only 25% errors on 1 page), but in reality this is a bad load test,
> because my target load is reduced by 25%, and the target server has
> been tested at only 75% of the load.
> 
> Technically speaking, "validation" of a test report should include not only
> "% of errors" validation, but "planned throughput vs actual throughput",
> "planned response times vs actual response times".

It's also important to know whether any tests were skipped.
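That kind of report validation could be sketched roughly as follows. This is an illustrative check, not anything JMeter provides; the function name, field names, and thresholds are all hypothetical:

```python
# Hypothetical post-run validation: a run only "passes" if the achieved
# load matched the plan, not merely if the error rate looked low.
def validate_run(planned_throughput, actual_throughput,
                 planned_p95_ms, actual_p95_ms,
                 error_rate, skipped_samples,
                 throughput_tolerance=0.05, max_error_rate=0.01):
    """Return a list of validation failures (empty list = run is valid)."""
    failures = []
    # Planned vs actual throughput: a 25% error rate that silently
    # reduced the offered load shows up here, not just in %errors.
    if actual_throughput < planned_throughput * (1 - throughput_tolerance):
        failures.append("throughput below plan: %.1f < %.1f req/s"
                        % (actual_throughput, planned_throughput))
    # Planned vs actual response times.
    if actual_p95_ms > planned_p95_ms:
        failures.append("p95 response time over plan: %d ms > %d ms"
                        % (actual_p95_ms, planned_p95_ms))
    # Error-rate check (the only check many reports rely on today).
    if error_rate > max_error_rate:
        failures.append("error rate too high: %.1f%%" % (error_rate * 100))
    # Skipped samples mean parts of the plan were never exercised.
    if skipped_samples > 0:
        failures.append("%d samples were skipped" % skipped_samples)
    return failures
```

For the scenario above, a run that planned 100 req/s but achieved only 75 req/s with 25% errors would be flagged on both counts, even though the server "survived" the reduced load.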

> 
> If a test is hard to setup (e.g. lots of steps), then it might be better to
> start new iteration to try one's best to achieve "throughput goal".
> 
> At the end of the day, real users do restart from scratch in case of failure.

More likely, they will redo the step that failed.
After a couple of such failures they will go away and try another time.

> 
> Note: a high failure rate would be misleading, since it shows the
> "consequence", while it is much more useful to know the "root cause".
> In that case, it would be good to restart after the first failure, and
> %error would show exactly the step that failed.

That depends on the exact scenario.
Some failures may not be fatal to the loop or the test, e.g. an image
download failure.

Only the test designer knows what the correct on-error behaviour is, and
that will vary between test plans and between parts of a plan.

I think the only sensible default is Continue on Error, partly because
that is the original setting, and partly because it ensures that the
full test plan is exercised by default.
