Adam Murdoch wrote
> Currently, a test framework needs to provide the following pieces:
>
> 1. A set of framework-specific options to expose to the DSL.
> 2. A test detector implementation that takes these options and the test
>    classpath, and builds a sequence of test execution requests.
> 3. A test processor implementation that is instantiated in a worker
>    process, and that takes a sequence of test execution requests, executes
>    the tests as appropriate, and generates a sequence of test events as
>    things execute.
>
> How do you think these pieces would look for cucumber? Does this structure
> even make sense for cucumber?
There are four things cucumber needs to know in order to execute any tests
(the last two are optional):

1. where the features are
2. where the "glue" is (aka the code that executes when a step runs)
3. (optional) which tags to run (where scenarios & features can have 0-n tags)
4. (optional) regexes against which to test scenario names (only running the
   ones that match)

The first two are just paths (absolute, relative or classpath) and sensible
defaults would be something like:

1. = the resources in the test sourceset
2. = the testRuntime configuration plus the output of the test sourceset
3 & 4 = not set

Tags are really analogous to TestNG groups. (There's a rough sketch of what
these options might look like at the end of this mail.)

Given these params, cucumber:

- looks for files in the specified locations & parses them using Gherkin into
  an object model representing the scenario/feature/step tree
- filters each against the tag and name options (if set)
- executes anything left

The current model (building a file tree, visiting each entry, determining
whether it is a test, and if so executing it) would result in one invocation
of cucumber per feature file that matches the specified file name filters.
Each invocation would then result in 0-n test executions, depending on
whether any tag or name filters (options 3 & 4 above) had been set.

Alternatively, the execution request could be made at the directory level:
entire directories (that contain feature files) are distributed to each VM,
and cucumber is invoked once per directory, leaving it to use its own
file-tree parsing to work out what to execute. One downside to this approach
is that it is less efficient when a parallelisation strategy based on tags is
in use (which I suspect is a more natural fit for cucumber). In that case,
every execution request would have to go to every VM, and cucumber would have
to be invoked for every single feature file in every VM. The only way around
that would be to use the Gherkin parser inside gradle, but that means
duplicating work already done in cucumber. I don't know how much overhead
there is to invoking cucumber in-process; probably not much per invocation,
but it can still add up with thousands of tests.

One note on parallelisation... I think gradle should expose a function that
translates a test execution request into the ID of the worker VM (or thread)
that will execute that request. Gradle could then provide sensible built-in
strategies like it does now (e.g. testNumber % noOfThreads for round robin).
There's a sketch of this at the end of the mail too.

Adam Murdoch wrote
> The test report currently uses #1. This format is useful for those who
> don't care about why a test was executed, but instead care about what
> thing actually failed.

Doesn't #2 tell you that as well? In the current case of JUnit or TestNG
there isn't much difference anyway, as far as I can see (except that #1 hides
some useful info), so I'd vote for just moving to #2.

Cheers
Matt
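PS - to make point 1 of your list concrete, here's a rough sketch of what the
framework-specific options for cucumber might look like, mirroring the four
inputs above. All of the names (CucumberOptions, featureDir, glue, etc.) are
made up for illustration; none of this is existing gradle or cucumber API:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Hypothetical options object exposed to the test DSL. The plugin would
    // wire up the defaults described above: featureDirs -> the test
    // sourceset's resources, gluePaths -> the testRuntime configuration plus
    // the test sourceset's output, tags and nameFilters -> not set.
    public class CucumberOptions {
        private final List<File> featureDirs = new ArrayList<File>();       // 1. where the features are
        private final List<String> gluePaths = new ArrayList<String>();     // 2. where the "glue" is
        private final List<String> tags = new ArrayList<String>();          // 3. (optional) tags to run
        private final List<String> nameFilters = new ArrayList<String>();   // 4. (optional) scenario name regexes

        public void featureDir(File dir) { featureDirs.add(dir); }
        public void glue(String path) { gluePaths.add(path); }
        public void tags(String... tagExpressions) { tags.addAll(Arrays.asList(tagExpressions)); }
        public void name(String regex) { nameFilters.add(regex); }

        public List<File> getFeatureDirs() { return featureDirs; }
        public List<String> getGluePaths() { return gluePaths; }
        public List<String> getTags() { return tags; }
        public List<String> getNameFilters() { return nameFilters; }
    }

A test detector would then take these options plus the test classpath and
turn each matching feature file (or directory, per the alternative above)
into an execution request.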

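PPS - and a sketch of the worker-assignment function from the parallelisation
note. Again, the names are my invention, and TestExecutionRequest is just a
stand-in for whatever gradle's request type ends up being:

    import java.util.concurrent.atomic.AtomicLong;

    // Stand-in for gradle's test execution request type.
    interface TestExecutionRequest {}

    // Hypothetical hook: given a request and the number of workers, answer
    // which worker VM (or thread) should execute it.
    interface WorkerAssignmentStrategy {
        int workerFor(TestExecutionRequest request, int workerCount);
    }

    // The built-in round-robin example from above: testNumber % noOfThreads.
    class RoundRobinAssignment implements WorkerAssignmentStrategy {
        private final AtomicLong testNumber = new AtomicLong();

        public int workerFor(TestExecutionRequest request, int workerCount) {
            return (int) (testNumber.getAndIncrement() % workerCount);
        }
    }

A tag-based strategy for cucumber could implement the same interface by
hashing on a scenario's tags, so that all scenarios carrying a given tag land
in the same VM.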