Hi Adam

I might not have understood every detail you've described, but one thing I
took away is that the cleaner solution seems to be to create a domain object
that describes what the variants produce. What worries me is that if I write
a Gradle script that requires variation in behavior, I might end up having to
write such domain objects myself, which sounds like a big obstacle if I just
want to get my build working (no reuse or generalization required). Hence, an
approach like Hans' sounds appealing to me, since as a script writer I can
easily grasp and implement it.

Or maybe I just did not understand your explanation well enough, and your
solution is very easy and intuitive for script writers to apply.

Etienne



On 11.10.2011, at 01:17, Adam Murdoch [via Gradle] wrote:

> 
> On 09/10/2011, at 10:39 PM, Hans Dockter wrote:
> 
>> A common use case in Gradle is that you want different variations of
>> task behavior. In general, our goal is to avoid the use of multiple
>> actions per task to achieve this. In any case, let's look at an example
>> of such a variation: you want a test task that runs with and without
>> coverage. Often this variation does not affect the behavior of the
>> executed task itself, but rather the wiring.
> 
> Why would you want to run the tests without coverage?
> 
> One possibility I can think of is for performance reasons: if the coverage 
> report is not required for this build, then don't bother running the tests 
> against the instrumented production classes, as this is a touch faster.
> 
> I'd say the solution to this use case is not to have different variants of 
> the test task. Instead, it is to have some logic in the coverage plugin so 
> that if the coverage report needs to be built, then the tests are run against 
> the instrumented classes (with the appropriate wiring to make this happen), 
> and if the coverage report does not need to be built, then the tests are run 
> against the non-instrumented classes.
> 
> That is, whether or not coverage is included is a function of the desired 
> outputs of the build, not of how the build happened to be launched.
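> 
> Purely as a sketch, and with made-up task names ('instrument' and
> 'coverageReport'), that plugin logic might look something like this:
> 
> test.dependsOn instrument
> coverageReport.dependsOn test
> 
> gradle.taskGraph.whenReady { graph ->
>     // only pay the instrumentation cost when the report was requested
>     def wantCoverage = graph.hasTask(coverageReport)
>     instrument.enabled = wantCoverage
>     if (wantCoverage) {
>         // run the tests against the instrumented classes instead
>         test.classpath = instrument.outputs.files + test.classpath
>     }
> }
> 
> Flipping the enabled flag and the classpath is still fine at that point,
> whereas adding new task dependencies once the graph is ready would be too
> late.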
> 
>> For example, after a coverage
>> plugin is applied, the test task depends on instrumentation and is
>> finalized by the coverage report generation (a finalizer is a concept
>> that is not yet implemented). Or you might have two variations of an
>> integTest task: one runs just the tests, the other also starts and
>> shuts down Jetty.
> 
> Again, why would you want to sometimes run the setup for the tests, and 
> sometimes not?
> 
> The only option I can think of is that sometimes the setup is done outside 
> the current build, either manually or by another build. In that case I 
> wonder whether these are actually 2 different test suites, with separate 
> results, rather than 2 variants of the same test task.
> 
> I think the solution to this use case is to explicitly model the test suite 
> as a domain object, and in particular to model the test setup. The integTest 
> task would depend on integTestSuite.classes and integTestSuite.classpath; it 
> would produce integTestSuite.testResults; it would be initialised by 
> integTestSuite.setup and finalised by integTestSuite.tearDown. Both 
> integTestSuite.setup and integTestSuite.tearDown would either be some 
> buildable domain object, or possibly just a set of tasks.
> 
> This would allow you to model either 2 integ test suites, one with setup and 
> one without, or a single test suite where you can tweak the setup and/or 
> teardown.
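> 
> To make that a bit more concrete - everything below is hypothetical, no
> such type exists in Gradle, and jettyRun/jettyStop are just the jetty
> plugin tasks from your example:
> 
> class TestSuite {
>     String name
>     FileCollection classes     // the compiled test classes
>     FileCollection classpath   // runtime classpath for the tests
>     Set<Task> setup            // or some buildable domain object
>     Set<Task> tearDown
> }
> 
> def integTestSuite = new TestSuite(name: 'integTest',
>         setup: [jettyRun] as Set, tearDown: [jettyStop] as Set)
> 
> task integTest(type: Test) {
>     dependsOn { integTestSuite.setup }
>     // finalizedBy { integTestSuite.tearDown }  // once finalisers exist
> }
> 
> The Jetty flavour is then just a suite whose setup and tearDown happen to
> contain the jetty tasks, rather than a variant of the test task.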
> 
>> 
>> One approach would be to use marker tasks that depend on the actual
>> task. For example, a task integTestWithJetty that depends on Jetty.
>> Executing integTestWithJetty would trigger a different wiring of the
>> integTest task: jettyRun < integTest < jettyStop < integTestAll
>> (sketched further below). You could play the same game for a test and
>> a testNoCoverage task. It does not feel fully correct to use marker
>> tasks like this. To have both integTest and integTestAll in the output
>> feels strange. They are not lifecycle tasks but rather variations of
>> test and integTest. What about introducing the concept of a task
>> variation? Something like:
>> 
>> task integTest(type: Test) {
>>   variation {
>>      integTestAll {
>>         dependsOn jettyRun
>>         finalizedBy jettyStop
>>      }
>>   }
>> }
>> 
>> or
>> 
>> task integTest(type: Test) {
>>   variation {
>>      all {
>>         dependsOn jettyRun
>>         finalizedBy jettyStop
>>      }
>>   }
>> }
>> 
>> They could be executed either as integTestAll or integTest --all. They
>> would have the same actions as the actual tasks. They just differ in
>> the wiring.
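>>
>> Roughly, the marker task wiring described above could be done today with
>> a check on the requested task names - which is exactly the kind of thing
>> that feels wrong. integTest, jettyRun and jettyStop are assumed to exist
>> already:
>>
>> task integTestAll(dependsOn: jettyStop)
>>
>> // fragile: breaks for abbreviated or fully qualified task names
>> if (gradle.startParameter.taskNames.contains('integTestAll')) {
>>     integTest.dependsOn jettyRun
>>     jettyStop.dependsOn integTest
>> }
>>
>> Running integTest on its own then leaves Jetty out of the picture.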
>> 
>> Thoughts?
> 
> I'm not that excited by the idea of variants. I think it's the wrong 
> direction to go as a general strategy. The basic problem is this: what 
> happens when I need to execute both variants in the same build?
> 
> Certainly for the example use cases you gave, I think a better solution is to 
> introduce domain objects that describe what those variants produce (i.e. a 
> coverage report, or test suites with setup and teardown). This way, by using 
> domain objects, we solve the problem in terms of what the build should 
> produce. Using task variants instead attempts to solve the problem in terms 
> of how the build works. Describing 'what' always wins over describing 'how', 
> I think.
> 
> 
> --
> Adam Murdoch
> Gradle Co-founder
> http://www.gradle.org
> VP of Engineering, Gradleware Inc. - Gradle Training, Support, Consulting
> http://www.gradleware.com
> 
> 
> 


