My 6 cents :-)

1. Do not rely on Docker Hub to host a modified/customized image.
2. Try to use the maven image out of the box.
3. Have a top-level Dockerfile that extends the official maven image and do a docker build on it (see the sketch below). The first time this will take some time, but the docker layers will be cached, making any future docker builds a noop and fast.
4. For each sub-project, do the docker run using the local image you just built once.
5. Always use --rm so you don't leave containers hanging around.
6. The docker run will produce the artifacts but will not publish them; you have two options:
   6.a. Use a maven Jenkins job to have mvn drive the docker build and the iterations of docker runs for the builds, and lastly the publishing.
   6.b. As already mentioned, use a freestyle job and publish with the typical tools from a script.
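To make 3-5 concrete, here is a rough sketch; the image name, tag and paths are only placeholders, adapt them to your project:

    # Dockerfile at the top level, extending the official maven image
    FROM maven:3.5-jdk-8
    WORKDIR /usr/src/app
    # copy the pom first so the dependency layer stays cached between builds
    COPY pom.xml .
    RUN mvn -B dependency:go-offline

    # build the local image once; cached layers keep later builds fast
    docker build -t local/maven-build .

    # per sub-project, run the build in a throwaway container (--rm),
    # mounting the sources and the local ~/.m2 repository
    docker run --rm \
      -v "$PWD":/usr/src/app \
      -v "$HOME/.m2":/root/.m2 \
      -w /usr/src/app \
      local/maven-build mvn -B clean install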
Since I don't have the details, context or constraints, none of this may be applicable, so feel free to just ignore it.

On Wed, Nov 15, 2017 at 8:53 AM Thomas Bouron <[email protected]> wrote:

> Thanks for the replies Jean, Allen.
>
> I agree with the pipeline approach, I want to do that for Brooklyn but
> would like first to dockerize everything. It is a conservative approach but
> it might be controversial in our community, therefore baby steps :)
>
> > All of the projects I’m involved with only ever use freestyle jobs. Soooo…
> > :)
>
> Fair enough :)
>
> > This is one of the few jobs that Apache Hadoop doesn’t have dockerized. I
> > think I know what needs to happen (import the global maven settings) but I
> > just haven’t gotten around to building the bits around it yet. I’ll
> > probably write something up and add it to the Apache Yetus toolbox.
>
> So, I did some testing and turns out that you only need the global maven
> `settings.xml`. As I mount the `~/.m2` folder on my docker image,
> `mvn deploy` works like a charm!
> However, one thing was missing: using the maven style project allows the
> job to automatically archive artifacts and deploy them only at the end. I
> managed to replicate this behaviour by using a conditional build step
> (executed only on success) which call the same docker image but does a
> `mvn deploy -DskipTest`. Granted it is not deploying the same artifacts
> that have been tested (as it rebuilds them) but I think it is good enough
> for now.
>
> > I’m personally not a fan of depending upon docker hub for images. I’d
> > rather build the images as part of the QA pipeline to verify they always
> > work, and if the versions of bits aren’t pinned, to test against the
> > latest. This also allows the Dockerfile to get precommit testing.
> >
> > It’s worth mentioning that all of the projects I’m involved with use Yetus
> > to automate a lot of this stuff. Patch testing uses the same base images
> > as full builds. So if your tests run frequently enough, they’ll stay cached
> > and the build time becomes negligible over the course of the week.
>
> I get the point, but it sounds counter-productive to build the image on
> each run. Need to do more testing to see if the cache would be enough for
> us or not.
>
> Best.
> --
>
> Thomas Bouron • Senior Software Engineer @ Cloudsoft Corporation •
> https://cloudsoft.io/
> Github: https://github.com/tbouron
> Twitter: https://twitter.com/eltibouron
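PS: for what it's worth, the conditional deploy step Thomas describes could be wired as a post-build shell step (e.g. via the Conditional BuildStep plugin in a freestyle job) along these lines; the image name and mounts are only placeholders:

    # executed only when the previous build/test step succeeded
    docker run --rm \
      -v "$PWD":/usr/src/app \
      -v "$HOME/.m2":/root/.m2 \
      -w /usr/src/app \
      local/maven-build mvn -B deploy -DskipTests
    # note: the surefire/failsafe property is skipTests, i.e. -DskipTests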
