At my company, we use Jenkins to run builds with lots of parallel tasks,
where the slaves for each task are provisioned from a private Kubernetes
cluster. We have a very specific problem with these provisioned slaves:
we'd like to reduce the time overhead of a Kubernetes slave to match that
of a physical slave (or get as close as possible). Since our slave
container itself has a non-trivial start-up time (after provisioning, but
before registering with the Jenkins master), we're thinking of maintaining
a Kubernetes deployment of 'ready' slaves that register themselves with the
master, and then are removed from the deployment when they're assigned a
job; the rest of the lifecycle remains the same (that is, the slaves are
still used only once). This ensures that we have a continuous supply of
ready slaves, and we can also use pool size auto-scaling to keep up with demand.
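For illustration, the pool bookkeeping we have in mind looks roughly like the sketch below. The names (`ReadyPool`, the target size) are hypothetical; the real implementation talks to the Kubernetes API rather than an in-memory list, and "taking" a slave corresponds to removing its pod from the deployment so the ReplicaSet provisions a fresh replacement:

```python
from dataclasses import dataclass, field
import itertools

@dataclass
class ReadyPool:
    """Illustrative model of a pool of pre-started, registered slaves.

    In the real setup the pool is a Kubernetes deployment; taking a
    slave would detach its pod from the deployment so a replacement
    is scheduled, keeping the pool at its target size.
    """
    target_size: int
    ready: list = field(default_factory=list)
    _ids: itertools.count = field(default_factory=itertools.count)

    def replenish(self):
        # Provision new slaves until the pool is back at target size.
        while len(self.ready) < self.target_size:
            self.ready.append(f"slave-{next(self._ids)}")

    def take(self):
        # Hand a ready slave to a job. Slaves remain single-use, so it
        # never returns to the pool; we immediately top the pool up.
        slave = self.ready.pop(0)
        self.replenish()
        return slave

pool = ReadyPool(target_size=3)
pool.replenish()
first = pool.take()  # handed out with no container start-up wait
```

The point of the design is that the start-up cost is paid continuously in the background rather than on the critical path of a job.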
We've tried this out internally by modifying the Kubernetes plugin slightly
to be able to support this system, and are reasonably satisfied with the
results. I have a couple of questions with regard to this:
1. Is there a better way to reduce overhead? In our case, overhead
essentially consists of provisioning request time + pod scheduling time +
container setup + slave connect-back.
2. Does this use case fall within the scope of the Kubernetes plugin, or would
it be better developed as a separate plugin that depends on it?
Looking forward to feedback from y'all!
Thanks and regards,
You received this message because you are subscribed to the Google Groups
"Jenkins Developers" group.