The problem I see is that the DevOps infrastructure doesn't stay fixed over time. 
It really depends on your needs, but building an old version on a recent Jenkins 
seems more realistic to me than having to revert to an old Jenkins to build an 
old version. If you need special commands to build, those belong in your source 
solution or makefile. Jenkins should merely call a single command with the given 
options.
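As a sketch of that idea, the Jenkinsfile only sequences high-level steps and delegates the actual build logic to the project's makefile (the `make` target names and the `BUILD_TYPE` option here are hypothetical):

```groovy
// Declarative pipeline sketch: Jenkins is only a sequencer; the build
// knowledge lives in the project's own Makefile, so any CI (or a plain
// shell) can reproduce the build with the same single command.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // One command, options passed through as make variables.
                sh 'make build BUILD_TYPE=Release'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
}
```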

If you break backward compatibility, version your pipeline script and branch it. 
Each pipeline branch corresponds to a source branch/version.

Building an old version whose DevOps scripts no longer work will happen; keeping 
them together ties them to each other. That might be what you need if you can 
always provide the same environment and build system (OS, and Docker for the 
service). But Jenkins might break backward compatibility one day, or you might 
move away from Jenkins entirely. Having the CI in the same repo locks you into 
reusing a CI with the same exact API to perform your build.

I started back when the pipeline feature was new and the API was changing fast, 
so this separation helped me a lot. Things are more stable now, but I still do 
it so I can build/deploy an old version on new infrastructure easily. But again, 
it depends on your use case and how your software is deployed and used.

My DevOps code mostly evolves as I add new things, like more linters, code 
coverage, etc. This lets me run those on older versions as well, to compare. The 
build sequence belongs in the makefile and solution scripts and should not be 
part of the pipeline. The pipeline is more a sequencer of high-level things to 
do; how to do each one specifically should be a script or make target in your 
project.

This is how I split up my code; other use cases might vary.

From: [email protected] <[email protected]> On 
Behalf Of jeremy mordkoff
Sent: August 14, 2020 2:52 PM
To: Jenkins Users <[email protected]>
Subject: Re: Pipeline design question

How do you maintain and verify backwards compatibility with older releases if 
you keep your devops code in a separate repo? I keep my devops code in the same 
repo as the product code so that I know I can always go back and rebuild an 
older release and get exactly the same results.

The only exception to this is the code and config files for accessing the 
infrastructure servers, such as my Debian mirrors and Docker repo in 
Artifactory, including the URLs, GPG keys and CA certs needed. For this I 
generate a base docker image from which all of my other images are derived. 
This base image has all of these things pre-configured plus the latest security 
updates for all packages (debian, pypi and npm). I can go a month without 
rebuilding it and then rebuild it 5 times in a week. I run a build and test for 
all active branches once a week just to be sure any changes to this base image 
haven't introduced any new issues. It is very rare that this breaks an old 
branch and not master; in fact I only remember it happening once.

My point is that my build has changed dramatically over the years, and if I had 
my devops code in its own repo, I would have to create branches to match the 
product branches anyway, so why not keep it all together?
On Friday, August 14, 2020 at 1:30:49 PM UTC-4 Jérôme Godbout wrote:
**************
So you add all these repositories to your jobs and then they are run each time 
one of those repositories is updated, right?

Well, I have unit-test builds run every night as scheduled builds via the 
Jenkins pipeline options, while distribution builds are done manually with a 
tag number injected by the user.
I do this because I don't have enough machine capacity to run a build on every 
commit. In an ideal world, though, I would put a webhook on my repos to trigger 
the Jenkins build when a push lands on the right branch. The Jenkins shared 
libraries always take the head of the master branch; this should always work and 
have the most recent functions and bug fixes (I do not break backward 
compatibility, and if I do, I update all the Jenkinsfiles right away). The setup 
and deployment are not large enough over here to bother just yet, but you could 
easily use branches to prevent backward-compatibility issues with the shared 
library.
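As a sketch of that branch-pinning idea, a Jenkinsfile can request the shared library at a specific branch or tag instead of tracking master (the library name `amotus-shared-lib` and the branch name are hypothetical; the library itself must be registered in the Jenkins global configuration):

```groovy
// Pin the shared library to a release branch instead of master, so an
// old source branch keeps building against a compatible library API.
@Library('amotus-shared-lib@release/1.x') _

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
    }
}
```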

**************
How do things work on slaves? Is each repo cloned into its own directory in the 
workspace directory?

The master checks out the pipeline repo to fetch the Jenkinsfile from SCM. 
That repo only contains that file (or several of them), along with 
build-specific Groovy/shell scripts that help the build process. The first thing 
the Jenkinsfile does on the slave node is check out the common tools and import 
the needed ones inside a subfolder (Amotus_Jenkins/).

Once the tools are loaded, I check out the source and start the build as it 
normally would run. I use env vars to set the branch and other options used to 
build that repo. Those env vars are injected by Jenkins parameters.
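A minimal sketch of injecting the branch as a build parameter in a scripted pipeline (the parameter name matches the `params.AMOTUS_BRANCH` used further below; the default value is an assumption):

```groovy
// Declare a string parameter on the job; the user's choice is exposed
// as params.AMOTUS_BRANCH and can be forwarded to the checkout step.
properties([
    parameters([
        string(name: 'AMOTUS_BRANCH',
               defaultValue: 'master',
               description: 'Source branch to check out and build')
    ])
]);
```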

The resulting artifacts are sent to Artifactory/App Center/FTP..., and test 
results are analyzed right in Jenkins.

That way, the only things Jenkins knows are the credentials to the repos, the 
pipeline repo, and the parameters asked of the user. The rest is done inside the 
pipeline repo (unit test/build/deploy Jenkinsfiles). The source repo doesn't 
even know whether any CI exists for it, so if I want to build an old version 
with a recent CI, or change my CI, it will still work.

The checkout syntax in a Jenkinsfile is a bit painful if you have git 
submodules, but it will track the changes with the right plugins. My first few 
stages mostly look like this; it's a shame that nearly all my Jenkinsfiles look 
like this, but it's straightforward once you've seen it once:

node("PHP && PHPComposer") {
    // This dictionary holds the generic tool modules shared by all projects.
    def amotusModules = [:];
    def amotusRepos = [
        [
            name: 'Repos Name'
            , url: 'https://bitbucket.org/repos2.git'
            , branch: "${params.AMOTUS_BRANCH}"
            , path: 'SourceRepos2'
        ]
    ];

    stage('Checkout Tools') {
        dir("Amotus_Jenkins") {
            checkout([$class: 'GitSCM'
                , branches: [[name: 'master']]
                , browser: [$class: 'BitbucketWeb', repoUrl: 'https://bitbucket.org/repos.git']
                , doGenerateSubmoduleConfigurations: false
                , extensions: [[$class: 'SubmoduleOption', disableSubmodules: false,
                    parentCredentials: true, recursiveSubmodules: true, reference: '',
                    trackingSubmodules: false], [$class: 'CleanCheckout']]
                , submoduleCfg: []
                , userRemoteConfigs: [[credentialsId: 'BitBucketAmotus', url: 'https://bitbucket.org/repo.git']]
            ]);
        }
    }

    stage('Load option') {
        dir(pwd() + "/Amotus_Jenkins/") {
            // Load the basic file first; it will then load all the other
            // option modules along with their dependencies.
            load('JenkinsBasic.Groovy').InitModules(amotusModules);
            amotusModules['basic'].LoadFiles([
                'JenkinsPhp.Groovy'
                , 'JenkinsPhpComposer.Groovy'
            ]);
        }
    }

    stage('Checkout Repos') {
        amotusRepos.each { repos ->
            dir(repos['path']) {
                checkout([$class: 'GitSCM'
                    , branches: [[name: amotusModules['basic'].ValueOrDefault(repos['branch'], 'master')]]
                    , browser: [$class: 'BitbucketWeb', repoUrl: repos['url']]
                    , doGenerateSubmoduleConfigurations: false
                    , extensions: [[$class: 'SubmoduleOption', disableSubmodules: false,
                        parentCredentials: true, recursiveSubmodules: true, reference: '',
                        trackingSubmodules: false], [$class: 'CleanCheckout']]
                    , submoduleCfg: []
                    , userRemoteConfigs: [[credentialsId: 'BitBucketAmotus', url: repos['url']]]
                ]);
            }
        }
    }

    // Perform the build/test stages from here
}

That gives a good idea of how things are executed on the slave node. I also use 
node env vars to override the default paths for tools when they're not installed 
in the default location or the OS has a special path (I'm looking at you, 
macOS).
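A sketch of that override pattern (the `PHP_BIN` variable name and default path are hypothetical; the node-level env var would be set in that agent's Jenkins node configuration):

```groovy
// Use the node's own tool path if it defines one, otherwise fall back
// to the usual location -- e.g. Homebrew's non-standard prefix on macOS.
def phpBin = env.PHP_BIN ?: '/usr/bin/php';
sh "${phpBin} --version";
```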

There is still quite some room for improvement, but I get so little time 
allotted for my DevOps...

-----Original Message-----
From: [email protected] <[email protected]> On Behalf Of 
Sébastien Hinderer
Sent: August 14, 2020 11:29 AM
To: [email protected]
Subject: Re: Pipeline design question

Hello Jérôme, thanks a lot for your response.

Jérôme Godbout (2020/08/11 16:00 +0000):
> Hi,
> this is my point of view only, but using a single script (that you put
> into your repo) makes it easier to perform the build; I put my pipeline
> script into a separate folder. But you need to make sure your script
> is verbose enough to see where it failed if anything goes wrong;
> with a silent long script that produces no output, it will be hard to
> understand where the issue is.

Indeed. Generally speaking, we activate the -e and -x shell options to have 
commands displayed and scripts stop on the first error.
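In a Jenkinsfile sh step, that convention would look something like this minimal sketch (the make targets are hypothetical):

```groovy
// set -e: abort on the first failing command.
// set -x: echo each command before running it, so the console log
// shows exactly which step a failure came from.
sh '''
    set -ex
    make build
    make test
'''
```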

[...]

> I for one, use 3+ repos.
> 1- The source code repos
> 2- The pipeline and build script repos (this can evolve aside from the
> source, so my build method can change and be applied to older source
> versions; I use a branch/tag when backward compatibility is broken or a
> specific version is needed for a particular source branch)
> 3- My common groovy scripts, tooling shared between my repos
> 4- (optional) my unit tests are aside and can be run on multiple
> versions

That's a very interesting workflow, thanks!

So you add all these repositories to your jobs and then they are run each time 
one of those repositories is updated, right?

How do things work on slaves? Is each repo cloned into its own directory in the 
workspace directory?

> Hope this can help you decide or plan your build architecture.

It helps a lot! Thanks!

Sébastien.

--
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/20200814152915.GA143147%40om.localdomain.

