On 04/02/2013, at 8:48 PM, Luke Daley wrote:

> 
> On 03/02/2013, at 8:05 PM, Adam Murdoch <[email protected]> wrote:
> 
>> 
>> On 03/02/2013, at 9:50 AM, Adam Murdoch wrote:
>> 
>>> Hi,
>>> 
>>> Something I'd like to try to achieve as part of the work to introduce 
>>> multiple outputs for JVM-based projects is to create fewer task instances 
>>> at configuration time. There are 2 main parts to this:
>>> 
>>> 1. Don't create tasks that will never be required. For example, if there is 
>>> no src/main/resources directory, and nothing generates it, then don't 
>>> create a processResources task.
>>> 2. Don't create tasks that are not required for the current build. For 
>>> example, if I'm generating the javadoc, don't create the test task or any 
>>> of its dependencies.
>>> 
>>> Ignore for the moment how we might do this. Let's just say it's possible to 
>>> infer both #1 and #2.
>>> 
>>> Let's also say that if such a task is directly referenced in the build 
>>> logic, then it's "required". For example, if I execute 
>>> `myTask.dependsOn(processResources)` in my build script, then 
>>> processResources is required and must be created. The same possibly applies 
>>> to the command line, so that if I run `gradle processResources`, then it is 
>>> required.
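>>> 
>>> To make the distinction concrete, here's a rough build script sketch (the 
>>> task names are just the conventional java plugin ones, the rest is 
>>> illustrative):
>>> 
>>>     // Direct reference: evaluating `processResources` here resolves the 
>>>     // task instance at configuration time, so it must be created.
>>>     task dist(type: Zip) {
>>>         dependsOn processResources
>>>     }
>>> 
>>>     // Reference by name: nothing needs the instance until execution, so 
>>>     // its creation could be deferred.
>>>     task distByName(type: Zip) {
>>>         dependsOn 'processResources'
>>>     }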
>>> 
>>> The question is whether we consider it backwards compatible to stop 
>>> creating certain tasks that used to always be created.
>>> 
>>> At first glance, it doesn't look too bad. The tasks don't do anything and 
>>> nothing references them from code, so it's fine to leave them out. There 
>>> are a few important benefits to doing this:
>>> 
>>> - There's less work to do at configuration time.
>>> - There's less heap required to maintain the model through the whole build.
>>> - We wouldn't log the execution of tasks that don't do anything. For 
>>> example, if there is no test source, you wouldn't see 'testClasses', 
>>> 'compileTestJava', 'processTestResources' etc.
>>> - It pushes the decoupling of 'what' and 'how', which means the 'how' can 
>>> vary across different executions of the same build. For example, generated 
>>> source might have to be downloaded when building on one platform but can be 
>>> generated locally on another. Or I might run the tests against a 
>>> distribution built by CI.
>>> 
>>> Unfortunately, there are a few places where the tasks leak out without 
>>> being referenced by name or path (sketched below):
>>> 
>>> - Via the events on TaskContainer, such as tasks.all { } or 
>>> tasks.whenTaskAdded { }
>>> - Iterating over the various collections of tasks, such as 
>>> project.getTasksByName(name, true) or tasks.withType(type).
>>> - Via the execution events, such as gradle.taskGraph.beforeTask { }
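>>> 
>>> For concreteness, these are the kinds of build script snippets I have in 
>>> mind (purely illustrative):
>>> 
>>>     // Container events: the action runs for every task instance, so every 
>>>     // possible task would have to be created for the events to fire.
>>>     tasks.all { task -> println "configured $task.name" }
>>>     tasks.whenTaskAdded { task -> task.group = 'build' }
>>> 
>>>     // Iterating over the task collections: would force every matching 
>>>     // task to exist.
>>>     tasks.withType(Test) { maxParallelForks = 2 }
>>> 
>>>     // Execution events: only ever see the tasks that were created.
>>>     gradle.taskGraph.beforeTask { task -> println "about to run $task.path" }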
>>> 
>>> Strictly speaking, we'd still be following the contract for these things. 
>>> In practice, though, I think there's an implicit contract that says that we 
>>> must continue to create all possible tasks.
>>> 
>>> Some options:
>>> 
>>> 1. Don't create the tasks, and simply document this as a potential breaking 
>>> change (or, more likely, as a series of changes).
>>> 2. Create alternatives to the above leakage points, but with stronger 
>>> contracts. If you use one of the above, then we always create every 
>>> possible task.
>>> 3. Toggle the strategy based on whether you're using the Gradle 2.0 
>>> configuration model or not.
>> 
>> 4. Provide this as a general capability, so that the new language plugins 
>> (and other new/incubating plugins) can take advantage of it. The old 
>> language plugins would extend the new plugins and simply mark their tasks as 
>> always required. Later deprecate and remove the old language plugins.
>> 5. As for #4, but instead of removing the old plugins, change them in Gradle 
>> 2.0 so that they no longer mark the tasks as required.
> 
> #4 seems compelling. This seems like it could become a watershed moment, for 
> better/worse. We could build new plugins from the ground up that embrace the 
> new dependency management (incl. publishing) worlds and make them available 
> in parallel. However, I'm not sure about this. It's appealing because it will 
> decrease the time to market for new features and generally make our lives 
> easier (I think)

One thing that might not be so apparent about #4 is that it will make plugin 
authors' lives easier. In particular, part of the work would be to tighten up 
some rules about when things can be considered 'configured' and to provide some 
infrastructure that takes advantage of this. This would mean that most plugin 
implementations won't need to worry about laziness, but only if we guarantee 
that the plugin will only ever run against the new inference model.
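
To make that a bit more concrete, the plugin-facing infrastructure might look 
something like the sketch below. The method names ('declare' and 'requiredIf') 
are invented purely for illustration, not a proposal for the actual API:

    // Hypothetical: the plugin describes the task and its configuration up 
    // front, but the instance is only created if something requires it.
    tasks.declare('processResources', Copy) {
        from 'src/main/resources'
        into sourceSets.main.output.resourcesDir
    }

    // Hypothetical: a rule that decides whether the task is needed at all.
    tasks.requiredIf('processResources') {
        !fileTree('src/main/resources').empty
    }

The point being that the plugin declares everything up front and the 
infrastructure decides when (and whether) to realise it, so the plugin author 
never has to hand-roll the laziness.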

Another implication of this work would be better warning and error messages 
when you attempt to configure something that has already been 'used' and the 
configuration would therefore have no effect. This will be the case whether 
using the new stuff is a per-plugin or a per-build-invocation decision. Not 
sure if this is an argument for #3 or #4, actually.

> , I'm not sure it's the most user friendly. 

Just to explore this a bit more: What do you think would make it less user 
friendly, compared to, say, option #3?


> 
> At the moment I'm leaning towards #2 and #3. That is, 
> 
> 1. Improve the API over time to provide equivalent functionality that is 
> compatible with this inferencing
> 2. Move the existing plugins over to this API
> 3. Pre Gradle 2.0, make the new behaviour opt-in
> 
> This is the longer, more difficult road, but safer and more user friendly, I 
> think.
> 
> -- 
> Luke Daley
> Principal Engineer, Gradleware 
> http://gradleware.com
> 
> 


--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
VP of Engineering, Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com
