ship is focused on having a script or wagon URL that you ship your releases
to, and on being able to run that script against a specified release of your
project, even if the project on disk is a different version
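As a rough illustration of that idea, a POM fragment along these lines could bind the plugin. Note that the `shipScript` and `version` parameter names below are hypothetical placeholders, not the plugin's documented configuration:

```xml
<!-- Hypothetical sketch only: parameter names are illustrative guesses,
     not the ship-maven-plugin's documented configuration. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>ship-maven-plugin</artifactId>
  <configuration>
    <!-- script (or wagon URL) that performs the actual shipping -->
    <shipScript>${basedir}/src/ship/ship.groovy</shipScript>
    <!-- the release to ship, independent of the version on disk -->
    <version>1.2.3</version>
  </configuration>
</plugin>
```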

- Stephen

---
Sent from my Android phone, so random spelling mistakes, random nonsense
words and other nonsense are a direct result of using swype to type on the
screen
On 25 Jun 2011 20:41, "Brian Topping" <[email protected]> wrote:
> Thanks Stephen, these are good leads. I had seen both of these projects
> some time in the past, but hadn't strongly considered them.
>
> A primary concern is reusing existing maven plugins. Ship seems focused
> on general script execution, no? I'm still left with getting a groovy
> script to execute a maven plugin. Due to differences in component
> resolution between plugins, maven itself is likely to be the only
> reliable container for a plugin to run within.
>
> The concept of invoker reminds me of a blog entry I read somewhere
> where an "n+1" project is used to distribute all the software. But
> unfortunately, that project would have to be tuned every time a new
> project is added.
>
> Still searching...
>
> Cheers, Brian
>
> On Jun 25, 2011, at 12:09 AM, Stephen Connolly wrote:
>
>> 1. have a look at the ship-maven-plugin (i wrote it, but i think it
>> has some good ideas for continuous delivery as well as delivery in
>> general... i call it ship to have a different term from deploy so as
>> not to confuse the maven lifecycle)
>>
>> 2. you might have some luck with, eg the maven-invoker-plugin, or at
>> least use maven-invoker from your own plugin...
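For the maven-invoker-plugin route, a minimal sketch might look like the following, forking one small Maven build per project found under a directory; the `src/it` layout and the goals chosen here are just examples:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-invoker-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <!-- directory of small Maven projects, one per deployment target -->
        <projectsDirectory>src/it</projectsDirectory>
        <!-- goals to run in each forked build -->
        <goals>
          <goal>deploy</goal>
        </goals>
      </configuration>
    </execution>
  </executions>
</plugin>
```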
>>
>> - Stephen
>>
>> ---
>> Sent from my Android phone, so random spelling mistakes, random
>> nonsense words and other nonsense are a direct result of using swype
>> to type on the screen
>> On 25 Jun 2011 07:57, "Brian Topping" <[email protected]> wrote:
>>> Hi all,
>>>
>>> I'm picking up a project that requires a configurable large scale
>>> deployment, something on the order of five concurrent development
>>> branches of HEAD, each with about five servers that the various
>>> projects in a branch will need to deploy to. Ideally, everything
>>> will be contained in the Maven build such that CI could deploy
>>> automatically and individuals with a sufficiently robust
>>> settings.xml could deploy manually from a Maven invocation.
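A settings.xml fragment for that manual-deploy case might look like the following; the server id and credentials are examples only:

```xml
<!-- settings.xml: credentials keyed by server id; the id here is an
     example, matched by id from the POM's deployment configuration. -->
<settings>
  <servers>
    <server>
      <id>branch1-app-server</id>
      <username>deployer</username>
      <password>{encrypted-password}</password>
    </server>
  </servers>
</settings>
```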
>>>
>>> The servers that are being deployed to will require a mix of
>>> different deployment strategies (one per packaging), and more than
>>> one artifact will need to be deployed to each server over the course
>>> of a reactor build. Closed-source m2 plugins have been developed for
>>> each of these package types and are provided by the target container
>>> vendor. Each plugin is basically a REST client that wouldn't be hard
>>> to rewrite, but it would be advantageous to use the vendor's
>>> plugins.
>>>
>>> Having read
>>> http://docs.codehaus.org/display/MAVEN/Dynamic+POM+Build+Sections,
>>> I set off with the plan that executions might be added to a running
>>> build by a plugin in the initialize phase. For instance, if an
>>> artifact A needed to be deployed using plugin X to five separate
>>> machines for a branch, X would not be statically configured in the
>>> POM with five separate executions (which get transformed into
>>> entries in the goal queue); rather, some plugin would be developed
>>> to insert the five new goals at the right place in the goal queue
>>> dynamically.
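For contrast, the static configuration being avoided would look roughly like this; the vendor plugin coordinates and the `targetServer` parameter are hypothetical stand-ins for whatever the vendor's plugin actually exposes:

```xml
<!-- Hypothetical vendor plugin; one execution per target machine. -->
<plugin>
  <groupId>com.example.vendor</groupId>
  <artifactId>vendor-deploy-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>deploy-server-1</id>
      <phase>deploy</phase>
      <goals><goal>deploy</goal></goals>
      <configuration>
        <targetServer>https://server1.example.com/api</targetServer>
      </configuration>
    </execution>
    <execution>
      <id>deploy-server-2</id>
      <phase>deploy</phase>
      <goals><goal>deploy</goal></goals>
      <configuration>
        <targetServer>https://server2.example.com/api</targetServer>
      </configuration>
    </execution>
    <!-- ...and so on, one execution per server in the group -->
  </executions>
</plugin>
```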
>>>
>>> But it turns out that m2 pre-generates the list of goals before the
>>> first plugin runs, and in any case the goal queue is not accessible
>>> from the mojo context, so this appears to be impossible. I haven't
>>> had a chance to check whether this is still true in m3.
>>>
>>> I took a look at Cargo, and it would probably work if Deployer
>>> implementations were developed, but again, local requirements
>>> strongly prefer using the vendor's m2 plugins, and Cargo doesn't
>>> have a means to wrap an m2 plugin in a Cargo Deployer, for pretty
>>> obvious reasons.
>>>
>>> I can easily see that plugins could be configured in a parent build
>>> and executions statically defined, one per target machine in each
>>> build that needs a particular kind of deployment. This creates a lot
>>> of configuration volume though, something I was hoping to avoid by
>>> creating named groups for the servers and possibly storing them in
>>> LDAP. This is especially important in the production cluster, where
>>> there are about 100 servers, and the expansion of the POMs for all
>>> these servers would not be well-received (merging the branches to
>>> HEAD prior to a release would be very tricky for the POMs, besides
>>> just being unwieldy).
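One sketch of the named-group idea is a profile per group whose property carries that group's server list; all names here are illustrative, and note that this alone only substitutes values — it does not by itself multiply executions per server:

```xml
<!-- Sketch: one profile per named server group. Property names are
     illustrative; the production value is a placeholder for whatever
     an LDAP-backed lookup would supply. -->
<profiles>
  <profile>
    <id>branch-a</id>
    <properties>
      <deploy.servers>host1,host2,host3,host4,host5</deploy.servers>
    </properties>
  </profile>
  <profile>
    <id>production</id>
    <properties>
      <deploy.servers>${ldap.lookup.production.servers}</deploy.servers>
    </properties>
  </profile>
</profiles>
```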
>>>
>>> It may be that I am missing something obvious, or it may be that
>>> Maven isn't ideal for this job. Can anyone who's tried this before
>>> share their thoughts on it?
>>>
>>> Kind regards, Brian
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: [email protected]
>>> For additional commands, e-mail: [email protected]
>>>
>
>
