On 28/08/2012, at 9:23 PM, Adam Murdoch <[email protected]> wrote:
>
> On 29/08/2012, at 12:02 AM, Luke Daley wrote:
>
>>
>> On 28/08/2012, at 3:20 AM, Adam Murdoch wrote:
>>
>>> Hi,
>>>
>>> One thing that the new build comparison/migration stuff needs to do is run
>>> a Gradle build and then assemble a description of what the build produced,
>>> in order to compare it.
>>>
>>> We're currently using the tooling API for this, but it's a little awkward.
>>> First we run the build using a BuildLauncher. Then we ask for the
>>> ProjectOutcomes model. One problem with this is that it's not entirely
>>> accurate, as the model builder can only guess what the build would have
>>> done. Another problem is that it's potentially quite slow, as we have to
>>> configure the build model twice.
>>>
>>> Both these problems would be addressed if we had some way to run the build
>>> and assemble the model in one operation. We have a few options about how we
>>> might model this.
>>>
>>> Here are some use cases we want to aim for (for the following, 'build
>>> model' means the version-specific Gradle model):
>>>
>>> * Request the Eclipse model: configure the build model, apply the eclipse
>>> plugin, assemble the Eclipse model.
>>> * Request the project tasks model: configure the build model, assemble the
>>> project tasks model.
>>> * Run a build from the IDE: run the selected tasks, assemble a build result
>>> model (successful/failed).
>>> * Run a build for comparison: run the selected tasks, assemble the outcomes
>>> model from the DAG (outcomes model extends build result model above).
>>> * Run a build from an integration test: inject classpath from test process,
>>> run the selected tasks, assemble a test build result model.
>>> * Configure a build from an integration test: inject classpath from test
>>> process, configure the model, make some assertions, assemble a test build
>>> result model.
>>> * Newer consumer fills out missing pieces of model provided by older
>>> provider: inject classpath from consumer process, invoke client-provided
>>> action around the existing behaviour; the client action decorates the
>>> result.
>>> * Create a new Gradle project from the IDE: configure the build model,
>>> apply the build-initialize plugin, run some tasks, assemble a build result
>>> model.
>>> * Tooling API client builds its own model: inject classpath from client
>>> process, invoke a client-provided action, serialise the result back. This
>>> allows, for example, an IDE to opt in to being able to ask any question of
>>> the Gradle model, but in a version-specific way.
>>>
>>> What we want to sort out for the 1.2 release is the minimum set of consumer
>>> <-> provider protocol changes we need to make now, so that we can later
>>> evolve towards supporting these use cases. Clearly, we don't want all this
>>> stuff for the 1.2 release.
>>>
>>> Something else to consider is how notifications might be delivered to the
>>> client. Here are some use cases:
>>>
>>> * IDE is notified when a change to the Eclipse model is made (either by a
>>> local change or a change in the set of available dependencies).
>>> * IDE is notified when an updated version of a dependency is available.
>>> * For the Gradle 'keep up-to-date' use case, the client is notified when
>>> the inputs of the target output change.
>>>
>>> Another thing to consider here is support for end-of-life for various
>>> (consumer, provider) combinations.
>>>
>>> There's a lot of stuff here. I think it pretty much comes down to a single
>>> operation on the consumer <-> provider connection: build request comes in,
>>> and build result comes out.
>>>
>>> The build request would specify (most of this stuff is optional):
>>> - Client provided logging settings: log level, stdin/stdout/stderr and
>>> progress listener, etc.
>>> - Build environment: Java home, JVM args, daemon configuration, Gradle user
>>> home, etc.
>>> - Build parameters: project dir, command-line args, etc.
>>> - A set of tasks to run. Need to distinguish between 'don't run any tasks',
>>> 'run the default tasks', and 'run these tasks'.
>>> - A client provided action to run. This is probably a classpath, and a
>>> serialised action of some kind. Doesn't matter exactly what.
>>> - A listener to be notified when the requested model changes.
>>>
>>> The build result would return:
>>> - The failures, if any (the failure might be 'this request is no longer
>>> supported').
>>> - The model of type T.
>>> - Whether the request is deprecated, and why.
>>> - Perhaps some additional diagnostics.
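>>>
>>> To make that concrete, a static version of the two types might look
>>> roughly like this (every name below is illustrative only, not a proposal):
>>>
>>> import java.io.File;
>>> import java.io.InputStream;
>>> import java.io.OutputStream;
>>> import java.util.List;
>>>
>>> interface BuildRequest {
>>>     // Client provided logging settings.
>>>     OutputStream getStandardOutput();
>>>     OutputStream getStandardError();
>>>     InputStream getStandardInput();
>>>     // Build environment.
>>>     File getJavaHome();
>>>     List<String> getJvmArguments();
>>>     // Build parameters.
>>>     File getProjectDir();
>>>     List<String> getCommandLineArguments();
>>>     // null means 'don't run any tasks', an empty list means 'run the
>>>     // default tasks', a non-empty list means 'run these tasks'.
>>>     List<String> getTasks();
>>> }
>>>
>>> interface BuildResult<T> {
>>>     // The failures, if any.
>>>     List<? extends Throwable> getFailures();
>>>     // The model of type T, or null if the build failed.
>>>     T getModel();
>>>     // Non-null when this kind of request is deprecated; says why.
>>>     String getDeprecationReason();
>>> }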
>>>
>>> So, given that we only want a subset of the above for 1.2, we need to come
>>> up with a strategy for evolving. The current strategy is probably
>>> sufficient. We currently have something like this:
>>>
>>> <T> T getTheModel(Class<T> type,
>>>         BuildOperationParametersVersion1 operationParameters);
>>>
>>> The provider dynamically inspects the operationParameters instance. So, for
>>> example, if it has a getStandardOutput() method, then the provider assumes
>>> that it should use this to get the OutputStream to write the build's
>>> standard output to.
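>>>
>>> For example, on the provider side that probing can be done with plain
>>> reflection, along these lines (just a sketch, with the error handling
>>> trimmed right down):
>>>
>>> import java.io.OutputStream;
>>> import java.lang.reflect.Method;
>>>
>>> class ParameterProbe {
>>>     // Probe the parameters object for an optional getter. A consumer that
>>>     // is too old to know about this feature simply won't have the method.
>>>     OutputStream findStandardOutput(Object operationParameters) {
>>>         try {
>>>             Method getter = operationParameters.getClass().getMethod("getStandardOutput");
>>>             return (OutputStream) getter.invoke(operationParameters);
>>>         } catch (NoSuchMethodException e) {
>>>             return null; // Old consumer - the feature is simply ignored.
>>>         } catch (Exception e) {
>>>             throw new RuntimeException(e);
>>>         }
>>>     }
>>> }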
>>>
>>> This means that an old provider will ignore the build request features that
>>> it does not understand. To deal with this, the consumer queries the
>>> provider version and uses it to decide whether the provider supports a
>>> given feature (the consumer knows in which provider version each feature
>>> was added).
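>>>
>>> On the consumer side, that check is just a version comparison, something
>>> like the sketch below (the cut-off version used here is made up):
>>>
>>> import org.gradle.util.GradleVersion;
>>>
>>> class FeatureCheck {
>>>     // The consumer knows the provider version each feature appeared in.
>>>     boolean providerSupportsStandardOutput(String providerVersion) {
>>>         return GradleVersion.version(providerVersion)
>>>                 .compareTo(GradleVersion.version("1.0-milestone-8")) >= 0;
>>>     }
>>> }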
>>>
>>> To implement this, I think we want to add a new interface, detangled from
>>> the old interfaces, perhaps something like:
>>>
>>> interface ProviderConnection extends InternalProtocolInterface {
>>>     <T> BuildResult<T> build(Class<T> type, BuildRequest request);
>>> }
>>>
>>> On the provider side, DefaultConnection would implement both the old and
>>> new interface. On the consumer side, AdaptedConnection would prefer the new
>>> interface over the old interface.
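>>>
>>> That preference might boil down to something like this (a sketch only;
>>> LegacyConnection stands in for the existing connection interface, whose
>>> real name I'm glossing over, and ConnectionAdapter is a made-up wrapper):
>>>
>>> interface LegacyConnection {
>>>     <T> T getTheModel(Class<T> type, BuildOperationParametersVersion1 p);
>>> }
>>>
>>> class ConnectionAdapter {
>>>     <T> T runBuild(Object connection, Class<T> type, BuildRequest request,
>>>                    BuildOperationParametersVersion1 legacyParameters) {
>>>         if (connection instanceof ProviderConnection) {
>>>             // New provider: run the build and assemble the model in one
>>>             // operation.
>>>             return ((ProviderConnection) connection).build(type, request).getModel();
>>>         }
>>>         // Old provider: fall back to the existing model operation.
>>>         return ((LegacyConnection) connection).getTheModel(type, legacyParameters);
>>>     }
>>> }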
>>>
>>> For BuildResult and BuildRequest, we could go entirely dynamic, so that
>>> these interfaces have (almost) no methods. Or we could go static, with
>>> methods for the stuff we need now and dynamic lookup for new stuff. I'm
>>> tempted to go dynamic.
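>>>
>>> The fully dynamic flavour would be closer to this (again just a sketch):
>>>
>>> interface BuildRequest {
>>>     // Everything is looked up by name. The provider ignores names it
>>>     // doesn't understand, and the consumer checks the provider version
>>>     // before relying on a given name being understood.
>>>     Object get(String name);
>>> }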
>>
>> There may also be a case for providing parameters to the model builder.
>>
>> At the moment, the provider hardcodes that the only artifacts we care about
>> are the ones on the “archives” configuration. I think it is reasonable to
>> expect that one day we will want to give the user control over this, so
>> they can deal with multiple configurations.
>>
>> Even if we decide we can hardcode “archives” forever, it seems not
>> unreasonable that you might want to provide a spec to the model builder to
>> influence it. This seems similar to executing tests and injecting a
>> classpath (or, really, to providing anything from the client to the
>> provider).
>
> Possibly. Do you have some examples?
Not beyond "here is the extra class path I want you to run with" at this time.
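
Shape-wise, though, I imagine it would end up as something like this (all of
these names are invented):

import java.util.Set;

// A parameters object the client could hand to the outcomes model builder,
// instead of the provider hardcoding "archives".
interface ProjectOutcomesParameters {
    // Configurations whose artifacts should appear in the outcomes model.
    Set<String> getConfigurationNames();
}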