Hi all, I'm picking up a project that requires a configurable, large-scale deployment: something on the order of five concurrent development branches of HEAD, each with about five servers that the various projects in a branch will need to deploy to. Ideally, everything would be contained in the Maven build, such that CI could deploy automatically and individuals with a sufficiently robust settings.xml could deploy manually from a Maven invocation.
The servers being deployed to require a mix of deployment strategies (one per packaging), and more than one artifact will need to be deployed to each server over the course of a reactor build. Closed-source m2 plugins, provided by the target container vendor, have been developed for each of these package types. Each plugin is basically a REST client that wouldn't be hard to rewrite, but it would be advantageous to use the vendor's plugins.

Having read http://docs.codehaus.org/display/MAVEN/Dynamic+POM+Build+Sections, I set off with the plan that executions might be added to a running build by a plugin in the initialize phase. For instance, if an artifact A needed to be deployed using plugin X to five separate machines for a branch, X would not be statically configured in the POM with five separate executions (which get transformed into entries in the goal queue); rather, some plugin would be developed to insert the five new goals at the right place in the goal queue dynamically. But it turns out that m2 pre-generates the list of goals before the first plugin runs, and in any case the list is not accessible from the mojo context, so this appears to be impossible. I haven't had a chance to check whether this is still true in m3.

I took a look at Cargo, and it would probably work if Deployer implementations were developed, but again, local requirements strongly prefer using the vendor's m2 plugins, and Cargo doesn't have a means to wrap an m2 plugin in a Cargo Deployer, for pretty obvious reasons.

I can easily see that the plugins could be configured in a parent build, with executions statically defined, one per target machine, in each build that needs a particular kind of deployment. This creates a lot of configuration volume, though, something I was hoping to avoid by creating named groups for the servers and possibly storing them in LDAP.
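For concreteness, here is roughly what the static approach would look like for one artifact deployed to the five servers of a single branch. The plugin coordinates, goal name, and `target` parameter are hypothetical stand-ins for whatever the vendor's plugin actually exposes:

```xml
<!-- Sketch only: com.vendor coordinates, the "deploy" goal, and the
     <target> parameter are placeholders for the vendor plugin's real API. -->
<plugin>
  <groupId>com.vendor</groupId>
  <artifactId>vendor-deploy-maven-plugin</artifactId>
  <version>1.0</version>
  <executions>
    <execution>
      <id>deploy-branch-a-server-1</id>
      <phase>deploy</phase>
      <goals>
        <goal>deploy</goal>
      </goals>
      <configuration>
        <target>server1.branch-a.example.com</target>
      </configuration>
    </execution>
    <!-- ...repeated for server2 through server5, and again in every
         module that needs this packaging's deployment strategy... -->
  </executions>
</plugin>
```

Multiplied across five branches, five servers per branch, and several packagings, each needing its own plugin, this is the configuration volume I'd like to avoid.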
This is especially important in the production cluster, where there are about 100 servers; the expansion of the POMs to cover all of these would not be well received (merging the branches to HEAD prior to a release would be very tricky for the POMs, besides just being unwieldy).

It may be that I am missing something obvious, or it may be that Maven isn't ideal for this job. Can anyone who's tried this before share their thoughts on it?

Kind regards,
Brian
