Hi Joachim,

let's forget for a second the implementation details and focus on the textual story. You have a scenario:

Given a user registers with any profile with username John and password topsecret
And the system is using any locale
When the user logs in using username John and password topsecret
Then the user is greeted with the message containing the name John

Your use case, if I understand correctly, is that you want to repeat this scenario any number of times for different locales (or any other parameter), and cover the combinations of these parameters? If that is the case, it's a question of finding a way to express these combinations in the story itself.
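
For example, with JBehave's existing parametrised scenarios the combinations can be written as an Examples table in the story itself (the profile and locale values below are only illustrative):

Given a user registers with <profile> with username John and password topsecret
And the system is using <locale>
When the user logs in using username John and password topsecret
Then the user is greeted with the message containing the name John

Examples:
|profile|locale|
|admin|sv_SE|
|admin|en_GB|
|user|sv_SE|
|user|en_GB|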

What I find confusing is that your "software architects" would specify the parameters as Java methods. To me that partly defeats the point of BDD, where the whole specification should be expressed in the story itself.

Cheers

On 13/04/2012 16:45, joachim.nils...@molybden.se wrote:

Hi all
I would like to trigger a discussion regarding different types of data-driven testing.

What people normally mean by a data-driven test is a test where the test designer sets up a number of data scenarios to be tested. This is nicely supported by parametrised scenarios in JBehave. I have tried it and think it is an excellent help in reducing test case writing when we have known test designs. And I think that for 90% of all products this is more than enough.

However, I have seen complex products developed by hundreds of developers and testers at the same time, supported by several software architects working in parallel. In such products, test gaps may be created by feature interference: the different development and test teams do not have deep enough knowledge of the other teams' features to make a full test coverage analysis and implementation. Even with good tools available, it is hard to be sure that all branches in the code are covered.

The reason I want to raise this discussion is that I have an idea for a solution, and I want feedback, improvements, or simply reasons why the solution is a bad idea.

My thought is to let the software architects be responsible for defining the existing features of the system. From a JBehave perspective, this definition means tagging methods with an annotation indicating that the method is responsible for providing the possible variants belonging to a specified category.
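
A rough sketch of the annotation itself could look like the following (nothing like this exists in JBehave today; the runtime retention is my assumption, so that the provider methods can be discovered via reflection):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks a method as the provider of one possible value for a named data category.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface DataCategory {
    String value(); // the category name, e.g. "locale" or "profile"
}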

Say, for example, that you have a system where people can be registered. On registration they can be assigned different profiles, which contain different sets of permissions to the system.

Some features are still valid for all of the users, regardless of which profile they have, for example login where the user is greeted by name.

I will try to give the example as a scenario, and I will use a new keyword 'any' which is used to look up a data category:


Given a user registers with any profile with username John and password topsecret
And the system is using any locale
When the user logs in using username John and password topsecret
Then the user is greeted with the message containing the name John


The implementing method annotations will look like:

@Given("a user registers with $profile with username $name and password $password")
@Given("the system is using $locale")
@When("the user logs in using username $name and password $password")
@Then("the user is greeted with the message containing the name $name")

And the system architect provides the methods:
  @DataCategory("locale")
  public String se_SV(){
      return "se_SV";
  }
  @DataCategory("locale")
  public String en_EN(){
      return "en_EN";
  }
  @DataCategory("profile")
  public String profile_admin(){
      return Profile.admin;
  }
  @DataCategory("profile")
  public Profile profile_user(){
      return Profile.user;
  }

An idea is also to let JBehave decide at runtime how many tests to perform, by defining the test strategy via meta tags. For example, a random strategy will give only one test case, but with random values, while an exhaustive strategy will give all variants (four different tests based on the definition above).
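
To make the exhaustive case concrete, here is a standalone sketch of the expansion I have in mind (the values are hard-coded purely for illustration; the real implementation would collect them from the @DataCategory methods):

import java.util.Arrays;
import java.util.List;

// Exhaustive strategy sketch: the cartesian product of the two categories
// yields 2 x 2 = 4 generated test cases.
public class ExhaustiveExpansionSketch {
    public static void main(String[] args) {
        List<String> locales = Arrays.asList("sv_SE", "en_GB");
        List<String> profiles = Arrays.asList("admin", "user");
        for (String locale : locales) {
            for (String profile : profiles) {
                System.out.printf("scenario with locale=%s, profile=%s%n", locale, profile);
            }
        }
    }
}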

The benefit of this feature shows when the software architect decides to add support for another language by adding a new data provider; JBehave will then create and run the already defined tests for the new value:

  @DataCategory("locale")
  public String fr_CA(){
      return "fr_CA";
  }

The danger in this approach is obvious: the number of generated test cases can become very large. But that is also where we can add some more real value, by implementing test strategies like pairwise, or strategies based on execution history or on which development team committed a specific feature.

I have sent Mauro a proof-of-concept patch containing the implementation of the 'random' and 'exhaustive' strategies, since they were the easiest to implement.

This post is to gather your feedback and input: is this a feature that will improve JBehave, or will it obfuscate it and make it harder to use?

Best regards,
Joachim Nilsson


