Hi folks

I’m trying to get a handle on our use of Jenkins to test PRs before they are 
merged. When we first discussed this, it was my impression that our objective 
was to screen PRs to catch any errors caused by differences in environment and 
to avoid regressions. However, it appears that the tests keep changing without 
warning, leading to the impression that we are using Jenkins as a “mini-MTT” 
testing tool.

So I think we need to come to consensus on the purpose of the Jenkins testing. 
If it is to screen for regressions, then the tests need to remain stable. A PR 
that does not introduce any new problems might not address old ones, but that 
is no reason to flag it as an “error”.

On the other hand, if the objective is to use Jenkins as a “mini-MTT”, then we 
need to agree on how and when a PR is ready to be merged. Insisting that 
nothing be merged until even a mini-MTT run is perfectly clean is probably too 
restrictive: it would require that the entire community (and not just the 
person proposing the PR) take responsibility for cleaning up the code base 
against any and all imposed tests.

So I would welcome opinions on this: are we using Jenkins as a screening tool 
on changes, or as a test for overall correctness of the code base?

Ralph
