Hi Hans,

Thanks for starting this discussion. It is rather useful.

I tend to agree with most of the points below but not all.

Notably, I think stories should be independently executable, declaring via GivenStories all the preconditions they need. Scenarios, on the other hand, are not necessarily independent, and crucially they will not always run against a blank state: that assumption works for simple demo scenarios, but not for complex testing strategies. Where necessary, a scenario should declare its own state and preconditions (again via GivenStories, possibly depending on one specific scenario, or via Lifecycle Before steps, e.g. to reset state), but it may also legitimately depend on the state left behind by the previous scenario.
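
To sketch both mechanisms (the story path and steps below are invented for illustration): the story declares a precondition story via GivenStories, which runs before its scenarios, while the Lifecycle Before steps run before each scenario:

    !-- sign_in.story
    GivenStories: preconditions/user_is_registered.story

    Lifecycle:
    Before:
    Given the user starts from a fresh session

    Scenario: Successful sign-in
    When the user signs in with valid credentials
    Then the user can access the account dashboard

    Scenario: Rejected sign-in
    When the user signs in with invalid credentials
    Then the user is shown a sign-in error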

Also, with regard to point 6, imposing an arbitrary time limit on scenario execution is not a priori recommendable. True, one needs to be aware of time issues, because if execution takes too long it will not be performed as often as it should; but time considerations are tied to the nature of the system under test, and some scenarios will legitimately run for longer than a few minutes. A better solution is to structure the stories so that they can run in parallel where possible.

If you want, you could start a new doc page contribution that we can evolve over time.

Feel free to create a JIRA issue and provide a pull request adding a new page under https://github.com/jbehave/jbehave-core/tree/master/distribution/src/site/content

Cheers

On 27/11/2013 07:58, Hans Schwäbli wrote:
I would especially like to discuss this issue:

    /3. Each scenario must make sense and be able to be executed
    independently of any other scenario. When writing a scenario,
    always assume that it will run against the system in a default,
    blank state./

I quoted that from "The Cucumber Book". It sounds good initially, but I am not so sure about it. By the way, the system is almost never in a "blank state"; it only is at the very beginning, after the first rollout.

If this best practice is applied, it can cause very long story executions in some environments, because each scenario first has to create some data (which can be a lot) in order to perform the actual test. The best practice seems to make sense if you have control over the test data in the database which the system under test (SUT) accesses. Then you could create a basic test data set in the SUT for various purposes, and in the stories pick the data from which you want to start your test. So you could cherry-pick data against which you can perform high-level tests without first having to create the required data.

But if you have no control over the test data in the SUT, then you have to create a lot of data in each scenario before you can perform the actual test. This applies, for instance, if you have to use a copy of the production data as your test data. That data is created in a very complex way by many subsystems, so there is no way to design a basic (common) test data set for the tests. So I thought that in such an environment, where you have no control over the test data set, it might be better for scenarios not to be independent of each other, in order to optimize story execution time and repeat less data creation.

Maybe a solution would be a feature I have seen in Cucumber which is similar to a feature in JUnit: you can define a "Background" for all your scenarios. This is a kind of test fixture, like what you do in a JUnit test method annotated with @BeforeClass or @Before. I could not figure out whether it is executed just once for all scenarios or once per scenario. It would only help with the problem I mentioned if it were executed once for all scenarios (similar in purpose to @BeforeClass in JUnit).

What do you think about the problems I see with the best practice quoted above, and how would you solve them in an environment where you have to use production data as test data and have almost no control over it?
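
If I understand the JBehave documentation correctly, story-level GivenStories are executed once, before all scenarios of a story (similar in purpose to JUnit's @BeforeClass), while Lifecycle Before steps are executed before each scenario (similar to @Before). So perhaps a sketch like the following, with invented story paths and steps, could avoid repeating the expensive data creation:

    !-- Executed once, before all scenarios in this story (like @BeforeClass)
    GivenStories: preconditions/create_complex_test_data.story

    !-- Executed before each scenario (like @Before)
    Lifecycle:
    Before:
    Given the prepared test data is selected

    Scenario: An order can be placed with the prepared data
    When the user places an order for the prepared account
    Then the order is accepted

    Scenario: The order can be cancelled again
    When the user cancels the order for the prepared account
    Then the order is marked as cancelled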


2013/11/22 Hans Schwäbli <[email protected]>

    I would like to discuss best practices for story writing with
    JBehave/BDD. As a JBehave/BDD beginner, I will assert some of
    them now.

    Some of them I discovered online (various sources). I have left
    out the justifications.

    What do you think about them? Do you have any additional best
    practices for story writing with JBehave?

     1. Stories may be dependent on each other. If so, they must
        declare their dependencies.
     2. Each story typically has somewhere between five and twenty
        scenarios, each describing different examples of how that
        feature should behave in different circumstances.
     3. Each scenario must make sense and be able to be executed
        independently of any other scenario. When writing a scenario,
        always assume that it will run against the system in a
        default, blank state.
     4. Each scenario typically has somewhere between 5 and 15 steps
        (not considering step multiplication by example tables).
     5. A scenario should consist of steps of both types: action
        ("Given" or "When") and verification ("Then").
     6. Each scenario, including its example table, should not run
        for longer than 3 minutes.
     7. Steps of type "Given" or "When" should not perform
        verifications, and steps of type "Then" should not perform actions.
     8. Step names should not contain GUI information but be expressed
        in a client-neutral way wherever possible. Instead of "/*Then*
        a popup window appears where a user can sign in/" it would be
        better to use "/*Then* the user can sign in/". Only use GUI
        words in step names if you intend to specifically test the GUI
        layer.
     9. Step names should not contain technical details but be written
        in business language terms.
    10. Use declarative style for your steps instead of imperative
        (see the example in "The Cucumber Book", pages 91-93, and
        the first sketch after this list).
    11. Choose an appropriate language. If your requirements
        specification is in French for instance and most of the
        business analysts, programmers and testers speak French, write
        the stories in that language.
    12. Don't mix languages in stories.
    13. Use comments sparingly in stories.
    14. Avoid overly detailed steps such as "/*When* user enters street name/".
    15. Don't use step aliases for different languages. Instead choose
        just one language for all your stories.
    16. Use step name aliases sparingly.
    17. Prioritize your stories using meta information so that only
        high-priority stories are executed when required (see the
        second sketch after this list).
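
    For point 10, this is the kind of rewrite I mean (steps invented
    for illustration):

        Scenario: Sign-in, imperative style (avoid)
        Given the user is on the sign-in page
        When the user enters "bob" in the username field
        And the user enters "secret" in the password field
        And the user clicks the "Sign in" button
        Then the user sees the account dashboard

        Scenario: Sign-in, declarative style (prefer)
        Given a registered user
        When the user signs in with valid credentials
        Then the account dashboard is shown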
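
    And for point 17, I imagine story meta information like this,
    which could then be selected with a meta filter such as
    "+priority high" (the property name is my own invention):

        Meta:
        @priority high

        Scenario: Payment is confirmed for a valid order
        Given a registered user with a valid payment method
        When the user pays for an order
        Then the payment is confirmed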


