On Wednesday, 23 January 2019 16:06:03 UTC+1, Jesse Glick wrote:
> It sounds nice (I have not tried it). Some technical questions / comments: 

> make backup of jobs history before Jenkins master pod deletion and restore it in new pod 

> You do not just set a custom build record directory (IIRC this is a 
> system property) and use a persistent volume claim? See Evergreen for 
> example. (Beware however that there is some more true state besides 
> build history; it is safest to persist the entire `$JENKINS_HOME`, and 
> let JCasC + `job-dsl` overwrite old configurations.) 

Making a backup of ~50 GB is not a good idea, and I don't want to create a 
snowflake. When a new Jenkins master comes up it should start with an empty 
`$JENKINS_HOME`, and jenkins-operator has to configure it to the desired state. 
The backup is triggered before the Jenkins master pod is deleted, so it has to 
be as quick as possible; its duration directly affects how quickly the new 
Jenkins master can start up.
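To sketch the idea of a quick, selective backup (rather than snapshotting the whole ~50 GB `$JENKINS_HOME`), here is a minimal Python example that archives only the per-job build records. The directory layout (`jobs/<name>/builds`) matches standard Jenkins; the function name and archive format are my own illustration, not part of jenkins-operator.

```python
import os
import tarfile

def backup_build_records(jenkins_home: str, archive_path: str) -> int:
    """Archive only the per-job build records instead of the whole
    $JENKINS_HOME (hypothetical selective-backup sketch).
    Returns the number of build directories archived."""
    count = 0
    jobs_root = os.path.join(jenkins_home, "jobs")
    with tarfile.open(archive_path, "w:gz") as tar:
        for job in sorted(os.listdir(jobs_root)):
            builds = os.path.join(jobs_root, job, "builds")
            if os.path.isdir(builds):
                # store paths relative to $JENKINS_HOME so restore is trivial
                tar.add(builds, arcname=os.path.join("jobs", job, "builds"))
                count += 1
    return count
```

Restoring is then just unpacking the archive into the fresh `$JENKINS_HOME` after JCasC and the seed jobs have recreated the job configurations.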

> every single plugin and its dependent plugins have to be installed with a 
specific version 

> Good, I have long fought against plugin management scripts/tools which 
> try to be helpful by picking the latest plugins from the update 
> center, or adding dependencies automatically! A complete and explicit 
> list of plugins is safer, but you then run the risk of leaving lots of 
> unused plugins behind after configuration changes. Thoughts: 

> https://issues.jenkins-ci.org/browse/JENKINS-53506?focusedCommentId=348904&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-348904

> (And it would be great if we could get your system synchronized with Evergreen, at least optionally.) 

I forgot to mention that the plugins are separated into two groups: (A) required 
by jenkins-operator (used to configure Jenkins), and (B) required by the user.
A:
- they are installed by jenkins-operator before Jenkins starts up
- every single change requires a restart of the Jenkins master pod (which I 
want to avoid)
- the user can override the versions
- the user can add more plugins, but that causes a restart of the Jenkins 
master pod
B:
- they are installed via JCasC
- changes require a restart of Jenkins, but not of the Kubernetes pod
- this is the mechanism users should use to install plugins
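As a concrete illustration, the group-A list could be maintained in the `plugin-id:version` format used by the Jenkins Docker image's plugin-installation tooling. The plugin names below are real, but the versions are placeholders, not a recommendation:

```
# Group A: installed by jenkins-operator before Jenkins starts up
# (versions are illustrative; pin to whatever you have validated)
kubernetes:1.14.3
workflow-aggregator:2.6
configuration-as-code:1.4
job-dsl:1.70
git:3.9.1
```

An explicit, fully pinned list like this matches Jesse's point above: no automatic dependency resolution or "latest from the update center" surprises.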

> use Kubernetes plugin to build jobs (every new job spins up new pod in Kubernetes) 

> With master executor count set to zero, I hope.

Jenkins builds run on slaves only (Kubernetes pods), so the master executor is 
not used. However, we have to run some jobs on the master to configure it 
(applying Groovy scripts, backup/restore, and seed job creation).
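For reference, setting the master executor count to zero, as Jesse suggests, can be expressed declaratively through JCasC with a minimal fragment like the one below (though in our setup the configuration jobs mentioned above still need an executor somewhere):

```yaml
jenkins:
  numExecutors: 0
```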

> improve JNLP agent to handle Jenkins master pod restarts 

> Not sure where you are going with this. An agent cannot connect to two 
> different master sessions; it is just not possible without completely 
> rewriting Remoting. Pipeline builds do however attempt to continue 
> running, even in the middle of a `sh` step, across master restarts so 
> long as the actual machine / VM / container running a “JNLP” agent 
> survives: the agent will try to reconnect to the same master URL at 
> intervals and start a new Remoting channel, at which point the 
> Pipeline logic detects a connected agent with the same name as the 
> build was using before and the same filesystem layout and proceeds. 

> https://github.com/jenkinsci/kubernetes-plugin/blob/ef177b3b1297a928b22292644afabb1556b5da68/src/test/java/org/csanchez/jenkins/plugins/kubernetes/pipeline/RestartPipelineTest.java#L179-L195

> Similarly, a build using an agent launched from the master can be 
> resumed so long as the new master session can reconnect to the same 
> container somehow. 

We had problems where, when the Jenkins master goes down, the JNLP agent tries 
to reconnect and gives up after a timeout. Maybe increasing the timeout in the 
agent could solve this issue.
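To make the timeout idea concrete: the agent's retry window has to outlast the master's replacement time. This is a standalone Python sketch of retry-with-exponential-backoff against a deadline, not the actual Remoting reconnect code; all names are illustrative.

```python
import time

def reconnect(connect, deadline_s: float, base_delay_s: float = 1.0,
              max_delay_s: float = 30.0, clock=time.monotonic, sleep=time.sleep):
    """Retry `connect()` with exponential backoff until it succeeds or
    `deadline_s` elapses. The deadline plays the role of the agent's
    reconnect timeout: it must exceed the master's replacement time."""
    start = clock()
    delay = base_delay_s
    attempts = 0
    while clock() - start < deadline_s:
        attempts += 1
        try:
            return connect(), attempts
        except ConnectionError:
            sleep(min(delay, max_delay_s))
            delay *= 2  # back off, capped at max_delay_s
    raise TimeoutError(f"gave up after {attempts} attempts")
```

If the master takes, say, two minutes to come back and the agent gives up after one, the build is lost even though Pipeline could have resumed it; a deadline comfortably longer than the worst-case pod replacement avoids that.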

> In the current solution only one Jenkins master can run at a time: the old 
one goes down, then the new one can be created. 
> I thought we could spin up a new Jenkins master pod when the old one has to 
be killed, and then switch traffic to the new one. 

> This is a very hard problem in Jenkins generally, since many features 
> in core & plugins assume that the Jenkins service is stateful, the 
> `$JENKINS_HOME` directory is that state, and only one process accesses 
> it. While workarounds are certainly possible if you accept 
> restrictions on the features in use, providing a satisfactory and 
> general solution is one of the goals of the SIG: 

> https://jenkins.io/sigs/cloud-native/#cloud-native-jenkins 

I am aware of this limitation. My idea:
- Jenkins master goes down
- disconnect all slaves from the master
- perform a new backup
- spin up a new master
- configure it and restore the backup
- reconnect the slaves to the new Jenkins
- Jenkins resumes the builds from the old Jenkins
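The sequence above can be sketched as a single orchestration function. This is a hypothetical illustration of the operator logic, not jenkins-operator code; each step is an injected callable so the ordering stays testable.

```python
def replace_master(disconnect_slaves, backup, start_new_master,
                   restore, reconnect_slaves, resume_builds):
    """Hypothetical sketch of the master-replacement sequence; each step
    is an injected callable standing in for real operator logic."""
    disconnect_slaves()          # detach agents from the dying master
    backup()                     # quick backup of build records
    master = start_new_master()  # spin up the replacement pod
    restore(master)              # configure it and restore the backup
    reconnect_slaves(master)     # point agents at the new master
    resume_builds(master)        # Pipeline builds pick up where they left off
    return master
```

The key ordering constraint is that the backup happens after the agents are disconnected but before the new pod starts, which is why its duration directly bounds the switchover time.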

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/3bd7177f-76cc-4e69-91e4-d872cb05daec%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
