Hi All,

Just chiming in with some thoughts on this.

While I'm a firm believer in the KISS principle, and in your specific case the step implementation strategy may well be the simplest solution, I do think there is room for new features in the story/scenario specification area, notably to allow grouping of scenarios (and hence also repetition).

I've created a new issue to start focusing the discussion: http://jira.codehaus.org/browse/JBEHAVE-873

Feel free to contribute thoughts and requirements.

Cheers

On 18/12/2012 14:41, Mary Walshe wrote:
Thanks for your replies. I agree that it does not need to be something JBehave handles; I was just wondering if it did.

It is a difficult case, and it looks like we are going to get requirements with the same criteria in the future. Taking everything into account, the test may end up taking a long time to run, which wouldn't fit with our CI and the idea of fast feedback.

We are going to look at taking the statistic out of the story and seeing whether our CI can handle tests that allow a certain failure percentage. There just does not seem to be a silver bullet for this type of requirement.

Thank you for all your suggestions. I will pass them on to the developers. If we come to a nice solution for this case, I'll follow up if you are interested.


On Tue, Dec 18, 2012 at 11:53 AM, Alexander Lehmann <[email protected]> wrote:

    I think it would be possible to write a step, similar to a
    composite step, that runs the necessary steps repeatedly and
    evaluates the result. However, this has the disadvantage that
    the steps are hidden in the step implementation rather than
    visible in the story file (and I think they are not reported
    individually), though GivenStories may be a possibility.

    When running individual steps with a statistical outcome, it
    would be necessary to keep the state somewhere so that the
    probability can be evaluated at the end, possibly in the page
    object, but this is difficult when results from different tests
    have to be accounted for under multi-threading.
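
    For example, the state could live in a small thread-safe holder
    keyed by scenario, so that concurrently running scenarios do
    not mix up their counts (all names here are illustrative):

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.concurrent.atomic.AtomicInteger;

        public class FailureCounts {

            private final ConcurrentMap<String, AtomicInteger> failures =
                    new ConcurrentHashMap<String, AtomicInteger>();

            // record one failure for the given scenario
            public void recordFailure(String scenarioId) {
                AtomicInteger count = failures.get(scenarioId);
                if (count == null) {
                    AtomicInteger fresh = new AtomicInteger();
                    count = failures.putIfAbsent(scenarioId, fresh);
                    if (count == null) {
                        count = fresh; // we won the race, use our counter
                    }
                }
                count.incrementAndGet();
            }

            // read the failure count for the final evaluation
            public int failuresFor(String scenarioId) {
                AtomicInteger count = failures.get(scenarioId);
                return count == null ? 0 : count.get();
            }
        }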

    One last thing I'd like to point out is that the evaluation of
    such a test is not deterministic unless you have a mock service
    that actually returns a failure every n calls. When running the
    test 20 times, it would sometimes be perfectly valid to see 0
    or 2+ failures, and it would be misleading for the test to fail
    in those cases. It might be feasible to run the 20 tests more
    than once and calculate the average, but it is still flaky to
    fail a test when the expected value is not reached exactly:
    either you need an estimator function and a confidence
    interval, or you can just report the result as a warning with
    the calculated value.
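
    As a rough sketch of the confidence-interval idea (purely
    illustrative, all thresholds invented): fail the test only when
    the observed failure count would be very unlikely under the
    expected rate.

        public class BinomialCheck {

            // P(X >= k) for X ~ Binomial(n, p)
            static double upperTail(int n, int k, double p) {
                double tail = 0.0;
                for (int i = k; i <= n; i++) {
                    tail += choose(n, i) * Math.pow(p, i) * Math.pow(1 - p, n - i);
                }
                return tail;
            }

            // binomial coefficient n over k, computed in doubles
            static double choose(int n, int k) {
                double c = 1.0;
                for (int i = 0; i < k; i++) {
                    c = c * (n - i) / (i + 1);
                }
                return c;
            }

            public static void main(String[] args) {
                // with an expected rate of 5% over 20 runs, seeing 2 failures
                // is not unusual (P(X >= 2) is about 0.26), but 5 or more
                // would be (about 0.003), so only the latter should fail
                System.out.println(upperTail(20, 2, 0.05));
                System.out.println(upperTail(20, 5, 0.05));
            }
        }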




