My apologies for picking up this old thread. This is a feature that we are really 
interested in.

My question is for Julian: how do you randomly pick the stories?

Once you have the stories, how do you tell JBehave to run them again? (We only 
feed the stories once at the beginning of execution and haven't found a way to 
re-feed them to JBehave.)

How do you extract the errors from the log? Do you parse the log file, or do you 
access a log object, and if so, how do you do that?

I'll update the issue you mention below, Mauro, once we get some focus with our 
requirements.

Thanks,
Enrique

From: Mauro Talevi [mailto:[email protected]]
Sent: Sunday, December 23, 2012 7:05 AM
To: [email protected]
Subject: Re: [jbehave-user] Re: Repeating scenarios

Hi All,

just chiming in for some thoughts on this.

While I'm a firm believer in the KISS principle, and in your specific case the 
step implementation strategy may be the simplest solution, I do think that 
there is a space for new features in the story/scenario specification area, 
notably to allow for their grouping (and hence also repetitions).

I've created a new issue to start gathering some focus:  
http://jira.codehaus.org/browse/JBEHAVE-873

Feel free to contribute thoughts and requirements.

Cheers

From: Iulian Greculescu [mailto:[email protected]]
Sent: Tuesday, December 18, 2012 1:46 PM
To: [email protected]
Subject: Re: [jbehave-user] Re: Repeating scenarios

At the time of writing this I am dealing with a similar problem: stressing the 
system to its limits and seeing how it behaves under pressure by randomly running 
all the stories it supports, millions of times during the night. At the end of 
the n-hour run it triggers the execution of a final "step" that produces a 
detailed report about the load and the results, extracts error entries from the 
logs and calculates some statistics. This report is used the next day by humans 
to assess the system's health.
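A minimal sketch of the kind of overnight loop described above: pick a story at random, run it until a deadline, and tally failures for the final report. The `runStory` method here is a hypothetical stand-in for however the real runner executes a story (e.g. an Embedder call in JBehave), not anything from the JBehave API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class RandomStorySoak {
    private static final Random RANDOM = new Random();

    // Hypothetical stand-in for real story execution; returns true on success.
    // In a real setup this would hand the story path back to the runner.
    static boolean runStory(String storyPath) {
        return true;
    }

    // Run randomly chosen stories until the deadline, tallying failures per story.
    public static Map<String, Integer> soak(List<String> storyPaths, long endTimeMillis) {
        Map<String, Integer> failuresPerStory = new HashMap<>();
        while (System.currentTimeMillis() < endTimeMillis) {
            String story = storyPaths.get(RANDOM.nextInt(storyPaths.size()));
            if (!runStory(story)) {
                failuresPerStory.merge(story, 1, Integer::sum);
            }
        }
        return failuresPerStory;
    }
}
```

The returned map would feed the final report step; a real version would also capture timings and log excerpts per run.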

Just an idea about how we did it, but of course there is always room for 
improvement.

Cheers,
Julian

--- On Wed, 19/12/12, Mary Walshe <[email protected]> wrote:

From: Mary Walshe <[email protected]>
Subject: Re: [jbehave-user] Re: Repeating scenarios
To: [email protected]
Received: Wednesday, 19 December, 2012, 12:41 AM
Thanks for your replies. I agree that it does not need to be something JBehave 
handles; I was just wondering if it did.

It is a difficult case, and it looks like we are going to get requirements with 
the same criteria in the future. Taking everything into account, the test may 
end up taking a long time to run, which wouldn't fit with our CI and the idea 
of fast feedback.

We are going to look at taking the statistic out of the story and seeing if we 
can use our CI to handle tests that allow a certain failure percentage. There 
just does not seem to be a silver bullet for this type of requirement.

Thank you for all your suggestions. I will pass them on to the developers. If 
we come to a nice solution for this case I'll follow up, if you are interested.

On Tue, Dec 18, 2012 at 11:53 AM, Alexander Lehmann 
<[email protected]<mailto:[email protected]>> wrote:
I think it would be possible to write a step, similar to a composite step, that 
runs the necessary steps repeatedly and evaluates the result. However, this has 
the disadvantage that the steps are hidden in the implementation of the step 
rather than in the story file (and I think the steps are not reported 
individually), though maybe given stories could be a possibility.
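The "repeat inside a step" idea could be sketched roughly like this: one step method runs the inner action n times and records the outcomes, a second step evaluates the aggregate. This is an illustrative sketch, not JBehave-specific code; in a real step class the two public methods would carry @When/@Then annotations, and `performAction` stands in for the real action under test.

```java
import java.util.ArrayList;
import java.util.List;

public class RepeatingSteps {
    private final List<Boolean> outcomes = new ArrayList<>();

    // Hypothetical stand-in for the real action under test; always succeeds here.
    boolean performAction() {
        return true;
    }

    // In real JBehave this would be annotated, e.g. @When("I perform the action $n times").
    public void whenIPerformTheActionTimes(int n) {
        outcomes.clear();
        for (int i = 0; i < n; i++) {
            outcomes.add(performAction());
        }
    }

    // In real JBehave, e.g. @Then("the failure rate is at most $percent percent").
    public void thenTheFailureRateIsAtMost(double percent) {
        long failures = outcomes.stream().filter(ok -> !ok).count();
        double rate = 100.0 * failures / outcomes.size();
        if (rate > percent) {
            throw new AssertionError("failure rate " + rate + "% exceeds " + percent + "%");
        }
    }

    public int runs() {
        return outcomes.size();
    }
}
```

As noted, the story file would then show only the two outer steps, not the repeated inner action.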

When running individual steps with a statistical outcome, it would be necessary 
to keep the state somewhere to be able to evaluate the probability at the end, 
possibly in the page object, but this is difficult to do if there are different 
tests to be accounted for when multi-threading.

One last thing I'd like to point out is that the evaluation of such a test is 
not deterministic unless you have a mock service that actually returns a 
failure every n calls. When running the test 20 times, it would sometimes be 
valid to have 0 or 2+ failures, in which case it would be misleading to have 
the test fail. Maybe it would be feasible to run the 20 tests more than once in 
that case and calculate the average, but it is still a bit flaky to have a test 
fail when the expected value is not reached (either you need an estimator 
function and a confidence interval, or you can just report the test result as a 
warning with the calculated value).
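The confidence-interval idea could look like this rough sketch: a normal-approximation (Wald) interval around the observed failure proportion, with the test passing when the expected rate falls inside it. The 95% z-value of 1.96 and the class/method names are assumptions for illustration; with few runs a Wilson interval would be more accurate.

```java
public class FailureRateCheck {
    // ~95% normal-approximation (Wald) interval for p = failures/runs,
    // clamped to [0, 1]. 1.96 is the z-value for 95% confidence.
    public static double[] wald95(int failures, int runs) {
        double p = (double) failures / runs;
        double halfWidth = 1.96 * Math.sqrt(p * (1 - p) / runs);
        return new double[] { Math.max(0.0, p - halfWidth), Math.min(1.0, p + halfWidth) };
    }

    // Pass if the expected rate is inside the interval; otherwise flag it
    // (as a failure or, per the suggestion above, merely a warning).
    public static boolean withinExpectation(int failures, int runs, double expectedRate) {
        double[] ci = wald95(failures, runs);
        return expectedRate >= ci[0] && expectedRate <= ci[1];
    }
}
```

With 1 failure in 20 runs and an expected 5% rate this passes, while 10 failures in 20 runs would be flagged.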





---------------------------------------------------------------------
To unsubscribe from this list, please visit:

   http://xircles.codehaus.org/manage_email



