Nicola Ken Barozzi wrote:

Stephen McConnell wrote:

Nicola Ken Barozzi wrote:

...

Gump metadata != Gump being set up.


Gump meta-data is insufficient.


It sure is. But it can be enhanced without having Gump barf on extra tags.

In order to create a functionally sufficient expression of path information, you would need six separate gump project descriptors per project:

   build
   test
   runtime-api
   runtime-spi
   runtime-impl
   runtime-composite


Gump uses the word "project" in an improper way; it's really more of a project descriptor.

You can do the above in Gump by creating avalon, avalon-test, avalon-api, etc... If you look at the descriptors, this is, for example, what Ant and many other projects do.

Going the direction of multiple gump files means invoking a build multiple times. This is massively inefficient - do a build to generate the classes directory and a jar file, do another build to run the test cases, but then when you need the above information for the generation of a build artifact - well, you're sunk. You cannot do it with gump as it is today.


The solution is to do to gump what Sam did to the Ant community .. he basically said, "hey .. there is an application that knows more about the classpath information than you do", and from that intervention Ant added the ability to override the classloader definition that Ant uses.
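
To make that kind of override concrete - a rough sketch, assuming the build.sysclasspath mechanism is the one in question; the driver class, build file path and target name are illustrative only, not actual Gump code:

   // Rough sketch only - assumes the build.sysclasspath mechanism and uses
   // illustrative file and target names; this is not actual Gump driver code.
   import java.io.File;
   import org.apache.tools.ant.Project;
   import org.apache.tools.ant.ProjectHelper;

   public class ExternalPathDriver {
       public static void main(String[] args) {
           Project project = new Project();
           project.init();
           // Tell Ant to use only the classpath the invoking JVM was started
           // with, ignoring the path declarations inside the build file - the
           // external system (gump in this case) owns the classpath.
           project.setProperty("build.sysclasspath", "only");
           ProjectHelper.configureProject(project, new File("build.xml"));
           project.executeTarget("jar");
       }
   }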

Apply this same logic to gump - there is a build system that knows more about the class loading requirements than gump does - and gump needs to delegate responsibility to that system - just as Ant delegates responsibility to gump.


I.e. gump is very focused on pure compile scenarios and does not deal with the realities of test and runtime environments that load plugins dynamically.


You cannot create fixed metadata for dynamically loaded plugins (components), unless you decide to declare them, and the above is sufficient.

Consider the problem of generating the meta-data for a multi-staged classloader containing API, SPI and IMPL separation, based on one or multiple gump definitions .. you could write a special task to handle the phased buildup of data, and another task to consolidate it, and progressively - over three gump build cycles - you could produce the meta-data. Or, you could just say to magic - <artifact/> - and if gump is opened up a bit .. the generated artifact will be totally linked in to gump-generated resources - which means that subsequent builds that use the plugin are running against the gump content.
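
For the sake of illustration - a minimal Java sketch of the staged API/SPI/IMPL classloader chain being described; the jar names and the plugin class are made up, the point is only the parent/child structure that the metadata would have to capture:

   // Illustrative only - the jar names and the plugin class are made up; the
   // point is that each tier gets its own loader, parented by the tier above.
   import java.net.URL;
   import java.net.URLClassLoader;

   public class StagedLoaderDemo {
       public static void main(String[] args) throws Exception {
           ClassLoader boot = StagedLoaderDemo.class.getClassLoader();

           // API tier: the shared public interfaces.
           URLClassLoader api = new URLClassLoader(
               new URL[] { new URL("file:lib/demo-api.jar") }, boot);

           // SPI tier: extension contracts, can see the API.
           URLClassLoader spi = new URLClassLoader(
               new URL[] { new URL("file:lib/demo-spi.jar") }, api);

           // IMPL tier: the plugin implementation, can see SPI and API but is
           // invisible to them - this is the structure the metadata has to
           // describe for every pluggable artifact.
           URLClassLoader impl = new URLClassLoader(
               new URL[] { new URL("file:lib/demo-impl.jar") }, spi);

           Class<?> plugin = impl.loadClass("demo.PluginImpl");
           System.out.println("loaded " + plugin.getName());
       }
   }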


The point is that gump build information is not sufficiently rich when it comes down to really using a repository in a productive manner when dealing with pluggable artifacts (and this covers both build and runtime concerns). How does this affect Depot? Simply that gump project descriptors should be considered an application-specific descriptor - not a generic solution.

Cheers, Steve.

p.s.

Re. gump management - I'm currently playing around with the notion of one gump project covering all of avalon - a single project definition, generated by magic, that declares the external dependencies (about 8 artifacts) and the Avalon-produced artifacts (about 60 or more). The magic build will generate everything, including plugins and metadata, and publish this back to gump.

SJM

--

|---------------------------------------|
| Magic by Merlin                       |
| Production by Avalon                  |
|                                       |
| http://avalon.apache.org              |
|---------------------------------------|
