Hello Emanuele,

On 21.09.22 at 12:01, Emanuele Rocca wrote:
Well but that's the whole point of automated testing. There's no *need*
to do it locally if it's already done by Salsa for you. What is already
automated and working pretty well is:

- amd64 build
- i386 build
- source build
- autopkgtest
- blhc
- lintian
- piuparts
- reprotest
- arm64 crossbuild

That's a pretty time consuming list of things to go through for a human!

Sure, that's a killer argument that I can't really argue against. But that is not the point for me.

For all these checks we already have existing infrastructure; running the same checks again in a pipeline job doesn't help at all if it isn't clear how to handle the fallout (as we have already seen in other places!).

As Sandro and Arnaud have pointed out, it's probably mostly a matter of the workflow for a package upload. And for me, the CI pipeline right now doesn't really fit into the package upload workflow that is typically used.
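To be clear, enabling the pipeline itself is the easy part. As far as I understand the Salsa CI team setup (the include URL and variable names below are from memory, please verify them against the pipeline docs), a per-package debian/salsa-ci.yml is typically just:

    # debian/salsa-ci.yml -- pulls in the standard Salsa CI pipeline
    # (URL and variable names as I remember them from the salsa-ci-team
    # documentation; treat this as a sketch, not a definitive reference)
    include:
      - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

    variables:
      RELEASE: 'unstable'

So the cost of turning it on is small; what is missing is the agreed workflow for dealing with the failures it produces.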

Using the CI in your own namespace is perfectly fine, and I use this option from time to time. There I also make heavy use of force pushing so I don't blow up the git tree with dozens of fixup commits! In the 'official' git tree that is of course a no-go.
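To illustrate what I mean (the remote and branch names here are just examples, adjust them to your own setup):

    # push work-in-progress to my personal Salsa namespace and let CI run there
    git push myfork debian/master

    # once CI is green, squash the fixup commits into clean, atomic commits
    git rebase -i origin/debian/master

    # rewrite my personal branch only -- never the team repository
    git push --force-with-lease myfork debian/master

That kind of history rewriting is exactly what must not happen in the team namespace.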

Nobody is perfect, and every Python package will have its own small differences. As long as we don't have *one* Debian way to build packages and a helpful way to deal with breakage in any of the test runs, it will always be a waste of time and energy to run CI for all packages at all times!

If the decision is to do this step, I will simply need to ignore any errors that are not RC.

The only work left to be done on your machine is a binary build to see
if the packages look good, perhaps some specific manual testing [1],
source build and upload. Isn't that better?

I do all package builds locally as an all/any build run (see the sketch below).
As written above, I like atomic git commits that do things "correctly", so that by looking at a commit it is clear why it was made. I already have to "fight" enough in my day job with colleagues who commit every forward and backward step without cleaning up locally before pushing, so I spend a lot of time working out what their changes actually mean. We would end up the same way with the packages here, as people would commit again and again to fix up the packages.
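For the record, my local run is nothing fancy; roughly like this (a minimal sketch, the exact options depend on the package):

    # build both arch-dependent and arch-independent binary packages locally
    dpkg-buildpackage -us -uc -b

    # then a source-only build for the actual upload
    dpkg-buildpackage -S
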

I stand on my thinking, it's not helpful to enable a global CI for all packages. Doing this from case to case is absolutely fine to me.

Arnaud Ferraris has written about the use of a CI option in Debian Mobile etc. His writing confirms my view, as I have the same experience within the PureOS ecosystem. People there work the same way as I described: packages are prepared in their own namespace, and only once the CI runs successfully there is a push to the team namespace done.

--
Regards
Carsten
