You're right, Jenkins is highly tied to its remoting agent. We have workarounds for some calls (like launching a process), but generally speaking we need this agent running on the slave.
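To make that concrete, here is a minimal, hypothetical sketch of the stdin/stdout channel idea (not the plugin's actual code): in docker-slaves the child process would be something like `docker run -i <image> java -jar slave.jar`; a trivial echo child stands in here so the sketch runs anywhere.

```python
import subprocess

# Hypothetical sketch: use a child process's stdin/stdout as the
# master<->slave channel, with no sshd and no JNLP callback URL.
# In docker-slaves the command would be roughly:
#   docker run -i <image> java -jar slave.jar
# A trivial Python echo child stands in so the example is self-contained.
child = subprocess.Popen(
    ["python3", "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
out, _ = child.communicate(b"remoting handshake")
print(out.decode())  # the channel carried our bytes back unchanged
```

The point of the sketch: the master owns both ends of the pipe, so the slave never needs to reach back to a Jenkins URL.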
Yoann and I have developed an alternate approach based on a set of containers (aka a "pod"), where one of them comes with a JVM and slave.jar and handles the remoting stuff, while another hosts the build command. They share the network and the workspace as a volume. More containers can be added to this set, for example to run a Selenium browser, without needing an xvnc hack during the build.

We also use plain docker run stdin/stdout as a channel between master and slave. There is no need for sshd in the docker image, nor for a callback JNLP URL - which would require Jenkins to be reachable from the slave. Removing this Launcher complexity makes it trivial to run a docker container as a slave. This approach offers great flexibility; see https://github.com/ndeloof/docker-slaves-plugin for details.

We have also considered a possible optimisation: when Jenkins starts, create a docker container from a base image + JVM, inject slave.jar (docker cp, as you suggested) as well as the Jenkins jars into the remoting cache, then commit the image. This image could then be used for all builds, would perfectly match the Jenkins installation, and as a result remoting would start immediately without any need for class exchange. This is just an idea, not a requirement, but something we have in mind for the future.

The one-shot executor logic has been designed for this exact scenario. It was initially mixed into docker-slaves, but as it could benefit other plugins (Kubernetes, Amazon ECS, maybe Mesos as well) it made sense to extract it. It is feature complete, but the implementation details need some polish and new hooks into jenkins-core. It is usable today: you can wait for a future release for a "cleaner" implementation, but the API is well defined.

2016-03-07 2:33 GMT+01:00 Ben Navetta <[email protected]>:

> After looking into Jenkins' slave system a bit more, I think this would
> need to have access to the Docker container to run slave.jar.
> That would mean that any Docker images people want to execute on would
> have to have a JVM in a reasonably well-defined location. Adding slave.jar
> in the pipeline code with docker cp could work with that, so there
> wouldn't be too many restrictions on image selection. Nicolas, do you
> think some of the one-shot executor logic could be reused for this, or is
> it more targeted at a standard build?
>
> --
> You received this message because you are subscribed to the Google Groups "Jenkins Developers" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
> To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-dev/CANMVJzn3hhY81CXUR5r8bSp_%3DgsZzYTzPUR0PqYDveM3DRXX6A%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.
