Re: Streamlining the use of Salsa CI on team packages

2019-10-15 Thread Peter Pentchev
On Mon, Oct 14, 2019 at 01:51:57PM +, PICCA Frederic-Emmanuel wrote:
> Hello,
> 
> And what if, in the end, upstream took care of the Debian packaging by
> adding a .salsa-ci.yml in the upstream directory, in order to have
> feedback with nice badges?

That would mean that part of the Debian packaging is in the upstream
source, with all the undesired consequences of that: any time a change to
the Debian packaging is needed, either the upstream author has to release
a new version, or a patch needs to be added to the Debian package, making
the change not trivial to see.

G'luck,
Peter

-- 
Peter Pentchev  roam@{ringlet.net,debian.org,FreeBSD.org} p...@storpool.com
PGP key:http://people.FreeBSD.org/~roam/roam.key.asc
Key fingerprint 2EE7 A7A5 17FC 124C F115  C354 651E EFB0 2527 DF13


signature.asc
Description: PGP signature


Re: Streamlining the use of Salsa CI on team packages

2019-10-14 Thread Thomas Goirand
On 9/16/19 10:03 PM, Hans-Christoph Steiner wrote:
>>> I know the Ruby team also decided to use debian/salsa-ci.yml instead of
>>> debian/gitlab-ci.yml [2]. I guess we should also do the same.
> This is still an open question:
> https://salsa.debian.org/salsa-ci-team/pipeline/issues/86
> 
> Debian has a bad habit of customizing things that do not need to be
> customized.  That raises the barrier for contributors ever higher, in a
> "death by a thousand papercuts" kind of way.  I think we should stick to
> the standard file name for GitLab CI.

The issue is that, from a packaging standpoint, we cannot add a file if
it's not in the debian folder, because that would make changes to the
upstream files. So, no choice...

Thomas Goirand (zigo)



Re: Streamlining the use of Salsa CI on team packages

2019-10-11 Thread Louis-Philippe Véronneau
On 19-09-15 20 h 31, Louis-Philippe Véronneau wrote:
> On 19-09-05 01 h 40, Louis-Philippe Véronneau wrote:
>> Hello folks!
>>
>> I'd like to propose we start using Salsa CI for all the team packages. I
>> think using a good CI for all our packages will help us find packaging
>> bugs and fix errors before uploads :)
>>
>> I also think that when possible, we should be using the same CI jobs for
>> our packages. The Salsa CI Team's default pipeline [1] is a good common
>> ground, as currently it:
>>
>> * builds the package
>> * runs piuparts
>> * runs autopkgtest
>> * runs lintian
>> * runs reprotest
>> * and does more!
>>
>> I don't think a failing CI should be a blocker for an upload, but I
>> think it's a good red flag and should be taken in account.
>>
>> I know the Ruby team also decided to use debian/salsa-ci.yml instead of
>> debian/gitlab-ci.yml [2]. I guess we should also do the same.
>>
>> Thoughts? If we decide to go ahead with this, I guess we should modify
>> the policy accordingly and contact the Salsa Team to see if adding this
>> additional load is OK with them.
>>
>> [1] https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
>> [2] https://salsa.debian.org/salsa-ci-team/pipeline/issues/86#note_106245
> These are the steps I see going forward with this:
> 
> --
> 1. Agree on a default pipeline we should be using on the DPMT & PAPT
> packages.
> 
> 2. Draft a proposed modification to the DPMT and the PAPT policies
> 
> 3. Adopt that proposed modification once we reach consensus
> 
> 4. Wait for the "All Clear" from the Salsa Team
> 
> 5. Commit the previously agreed upon CI pipeline to all the DPMT & PAPT
> packages, while making sure the CI isn't run on that push ("-o ci.skip")
> --
> 
> For step 1, I proposed we use the "Salsa Pipeline" [1], as I feel it is
> the most mature solution, has more contributors and has more features
> (including reprotest and piuparts). This option seems to have had the
> most support so far.
> 
> Hans-Christoph Steiner proposed we use "ci-image-git-buildpackage"
> instead [2]. The code behind it is simpler and the way it's built makes
> it possible for maintainers to modify the CI for their packages.
> 
> For step 2, so far people seemed to agree that:
> 
> * having a standardised CI pipeline is a good idea
> * the CI should be used as a tool to help us improve our packages, and
> not be required to pass
> 
> Please contribute to this discussion if you care about this issue! I'll
> try to draft something more concrete in the next few days.
> 
> [1] https://salsa.debian.org/salsa-ci-team/pipeline
> [2] https://salsa.debian.org/salsa-ci-team/ci-image-git-buildpackage
> 

As promised, here's a draft proposal on CI usage for the team:

https://salsa.debian.org/python-team/tools/python-modules/merge_requests/12/

-- 
  ⢀⣴⠾⠻⢶⣦⠀
  ⣾⠁⢠⠒⠀⣿⡁  Louis-Philippe Véronneau
  ⢿⡄⠘⠷⠚⠋   po...@debian.org / veronneau.org
  ⠈⠳⣄



signature.asc
Description: OpenPGP digital signature


Re: Streamlining the use of Salsa CI on team packages

2019-09-17 Thread Hans-Christoph Steiner
Raphael Hertzog:
> Hi,
> 
> On Sun, 15 Sep 2019, Louis-Philippe Véronneau wrote:
>> For step 1, I proposed we use the "Salsa Pipeline" [1], as I feel it is
>> the most mature solution, has more contributors and has more features
>> (including reprotest and piuparts). This option seems to have had the
>> most support so far.
> 
> Ack. I also deployed this pipeline on 500 Kali packages at
> https://gitlab.com/kalilinux/packages/
> and it has been working relatively well. There are a couple
> of remaining issues but the project is evolving quickly and I'm
> confident that we can get past them.
> 
> One of the issues is that the CI build does not bump the version; it can
> conflict with the version in the archive and will often confuse piuparts.
> https://salsa.debian.org/salsa-ci-team/pipeline/issues/78
> 
> The project is a bit lacking in terms of leadership/guidance and there are
> pending issues that should have been resolved more quickly to avoid
> confusion and better define the rules:
> https://salsa.debian.org/salsa-ci-team/pipeline/issues/84
> https://salsa.debian.org/salsa-ci-team/pipeline/issues/76
> https://salsa.debian.org/salsa-ci-team/pipeline/issues/86
> 
>> For step 2, so far people seemed to agree that:
>>
>> * having a standardised CI pipeline is a good idea
>> * the CI should be used as a tool to help us improve our packages, and
>> not be required to pass
> 
> On this, I disagree. The CI should pass, but it's perfectly OK to
> disable some of the failing tests to make it pass. We want merge requests
> to run the CI and we want them to succeed to prove that they are not
> making the package regress compared to the current situation.
> 
> Consider that the package tracker is likely to display the CI status at
> some point.
> 
> Note that for merge requests, it won't really work until
> https://gitlab.com/gitlab-org/gitlab/issues/30242 gets fixed in GitLab.

That's a good point. The tests that are there should be required to
pass, otherwise they can be removed, run manually, run in dev branches,
or set to "allow_failure: true" in a specific job.

That reminds me of a related topic I've been thinking about:  should
salsa's GitLab-CI setup be considered its own test tool, or just purely
a conduit for the other standard Debian test methods (lintian,
autopkgtest, piuparts, reprotest, etc).  I'm on the fence about this, so
I opened this discussion:
https://salsa.debian.org/salsa-ci-team/pipeline/issues/111

.hc



Re: Streamlining the use of Salsa CI on team packages

2019-09-17 Thread Raphael Hertzog
Hi,

On Sun, 15 Sep 2019, Louis-Philippe Véronneau wrote:
> For step 1, I proposed we use the "Salsa Pipeline" [1], as I feel it is
> the most mature solution, has more contributors and has more features
> (including reprotest and piuparts). This option seems to have had the
> most support so far.

Ack. I also deployed this pipeline on 500 Kali packages at
https://gitlab.com/kalilinux/packages/
and it has been working relatively well. There are a couple
of remaining issues but the project is evolving quickly and I'm
confident that we can get past them.

One of the issues is that the CI build does not bump the version; it can
conflict with the version in the archive and will often confuse piuparts.
https://salsa.debian.org/salsa-ci-team/pipeline/issues/78

The project is a bit lacking in terms of leadership/guidance and there are
pending issues that should have been resolved more quickly to avoid
confusion and better define the rules:
https://salsa.debian.org/salsa-ci-team/pipeline/issues/84
https://salsa.debian.org/salsa-ci-team/pipeline/issues/76
https://salsa.debian.org/salsa-ci-team/pipeline/issues/86

> For step 2, so far people seemed to agree that:
> 
> * having a standardised CI pipeline is a good idea
> * the CI should be used as a tool to help us improve our packages, and
> not be required to pass

On this, I disagree. The CI should pass, but it's perfectly OK to
disable some of the failing tests to make it pass. We want merge requests
to run the CI and we want them to succeed to prove that they are not
making the package regress compared to the current situation.

Consider that the package tracker is likely to display the CI status at
some point.

Note that for merge requests, it won't really work until
https://gitlab.com/gitlab-org/gitlab/issues/30242 gets fixed in GitLab.

Cheers,
-- 
Raphaël Hertzog ◈ Debian Developer

Support Debian LTS: https://www.freexian.com/services/debian-lts.html
Learn to master Debian: https://debian-handbook.info/get/


signature.asc
Description: PGP signature


Re: Streamlining the use of Salsa CI on team packages

2019-09-16 Thread Hans-Christoph Steiner



Louis-Philippe Véronneau:
> On 19-09-05 01 h 40, Louis-Philippe Véronneau wrote:
>> Hello folks!
>>
>> I'd like to propose we start using Salsa CI for all the team packages. I
>> think using a good CI for all our packages will help us find packaging
>> bugs and fix errors before uploads :)
>>
>> I also think that when possible, we should be using the same CI jobs for
>> our packages. The Salsa CI Team's default pipeline [1] is a good common
>> ground, as currently it:
>>
>> * builds the package
>> * runs piuparts
>> * runs autopkgtest
>> * runs lintian
>> * runs reprotest
>> * and does more!
>>
>> I don't think a failing CI should be a blocker for an upload, but I
>> think it's a good red flag and should be taken in account.

Sounds good.

>> I know the Ruby team also decided to use debian/salsa-ci.yml instead of
>> debian/gitlab-ci.yml [2]. I guess we should also do the same.

This is still an open question:
https://salsa.debian.org/salsa-ci-team/pipeline/issues/86

Debian has a bad habit of customizing things that do not need to be
customized.  That raises the barrier for contributors ever higher, in a
"death by a thousand papercuts" kind of way.  I think we should stick to
the standard file name for GitLab CI.


>> Thoughts? If we decide to go ahead with this, I guess we should modify
>> the policy accordingly and contact the Salsa Team to see if adding this
>> additional load is OK with them.

I think people should add these manually as they see a need.  Then only
if the Salsa Team says they really have the capacity, add it to all
packages.


>> [1] https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
>> [2] https://salsa.debian.org/salsa-ci-team/pipeline/issues/86#note_106245
> These are the steps I see going forward with this:
> 
> --
> 1. Agree on a default pipeline we should be using on the DPMT & PAPT
> packages.
> 
> 2. Draft a proposed modification to the DPMT and the PAPT policies
> 
> 3. Adopt that proposed modification once we reach consensus
> 
> 4. Wait for the "All Clear" from the Salsa Team
> 
> 5. Commit the previously agreed upon CI pipeline to all the DPMT & PAPT
> packages, while making sure the CI isn't run on that push ("-o ci.skip")
> --
> 
> For step 1, I proposed we use the "Salsa Pipeline" [1], as I feel it is
> the most mature solution, has more contributors and has more features
> (including reprotest and piuparts). This option seems to have had the
> most support so far.

I just checked out salsa-pipeline's piuparts support; it should take me
a couple of hours to port it, if it's needed.

I think reprotest and piuparts are going to be too heavy for gitlab-ci,
unless their use is manually triggered, or only on releases.  For
example, this salsa-pipeline piuparts-only job timed out after 2 hours
without having completed:
https://salsa.debian.org/debian/dpdk/-/jobs/221036
The reprotest failed after 30 minutes:
https://salsa.debian.org/debian/dpdk/-/jobs/221032

For ruby-fog-aws, these were much shorter:
https://salsa.debian.org/ruby-team/ruby-fog-aws/pipelines/63361
piuparts: ~5 minutes
reprotest: ~10 minutes

The whole libcloud job (build, package, lintian, autopkgtest) takes <10
minutes:
https://salsa.debian.org/python-team/modules/libcloud/-/jobs/248526

The whole androguard pipeline took ~13 minutes:
https://salsa.debian.org/python-team/modules/androguard/pipelines

Seems like there needs to be some load testing before pushing heavy
processes like reprotest and piuparts.


> Hans-Christoph Steiner proposed we use "ci-image-git-buildpackage"
> instead [2]. The code behind it is simpler and the way it's built makes
> it possible for maintainers to modify the CI for their packages.
> 
> For step 2, so far people seemed to agree that:
> 
> * having a standardised CI pipeline is a good idea
> * the CI should be used as a tool to help us improve our packages, and
> not be required to pass
> 
> Please contribute to this discussion if you care about this issue! I'll
> try to draft something more concrete in the next few days.
> 
> [1] https://salsa.debian.org/salsa-ci-team/pipeline
> [2] https://salsa.debian.org/salsa-ci-team/ci-image-git-buildpackage

Thanks for keeping this moving!

.hc



Re: Streamlining the use of Salsa CI on team packages

2019-09-16 Thread Marcin Kulisz
On 15 September 2019 23:01:46 BST, Thomas Goirand  wrote:

snip

>This tells "instance_type: g1-small", which doesn't match any name at:
>https://cloud.google.com/compute/vm-instance-pricing
>
>Am I right that this is n1-standard-1, which is 1 VCPU and 3.75 GB?

Nope, that's incorrect; you're looking for this:
https://cloud.google.com/compute/docs/machine-types#sharedcore

snip

>Since we're talking about the smallest type of instance possible at
>Google, other people may well have experienced the lack of RAM.

g1-small is not the smallest, but the problem with it is that it is a
shared-CPU type with 1.7 GB of RAM.

I agree that this is not suitable for heavy package builds.

I personally would hope that packages built by Salsa are throwaway and
just for testing; the source is then uploaded and rebuilt by the buildds,
in which case there would be no need for root or any other heavy-handed
management of those.
But as some people have already stated, there is not much info from the
Salsa team about their plans in this regard.



Re: Streamlining the use of Salsa CI on team packages

2019-09-15 Thread Louis-Philippe Véronneau
On 19-09-05 01 h 40, Louis-Philippe Véronneau wrote:
> Hello folks!
> 
> I'd like to propose we start using Salsa CI for all the team packages. I
> think using a good CI for all our packages will help us find packaging
> bugs and fix errors before uploads :)
> 
> I also think that when possible, we should be using the same CI jobs for
> our packages. The Salsa CI Team's default pipeline [1] is a good common
> ground, as currently it:
> 
> * builds the package
> * runs piuparts
> * runs autopkgtest
> * runs lintian
> * runs reprotest
> * and does more!
> 
> I don't think a failing CI should be a blocker for an upload, but I
> think it's a good red flag and should be taken in account.
> 
> I know the Ruby team also decided to use debian/salsa-ci.yml instead of
> debian/gitlab-ci.yml [2]. I guess we should also do the same.
> 
> Thoughts? If we decide to go ahead with this, I guess we should modify
> the policy accordingly and contact the Salsa Team to see if adding this
> additional load is OK with them.
> 
> [1] https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
> [2] https://salsa.debian.org/salsa-ci-team/pipeline/issues/86#note_106245
These are the steps I see going forward with this:

--
1. Agree on a default pipeline we should be using on the DPMT & PAPT
packages.

2. Draft a proposed modification to the DPMT and the PAPT policies

3. Adopt that proposed modification once we reach consensus

4. Wait for the "All Clear" from the Salsa Team

5. Commit the previously agreed upon CI pipeline to all the DPMT & PAPT
packages, while making sure the CI isn't run on that push ("-o ci.skip")
--

For step 1, I proposed we use the "Salsa Pipeline" [1], as I feel it is
the most mature solution, has more contributors and has more features
(including reprotest and piuparts). This option seems to have had the
most support so far.

Hans-Christoph Steiner proposed we use "ci-image-git-buildpackage"
instead [2]. The code behind it is simpler and the way it's built makes
it possible for maintainers to modify the CI for their packages.

For step 2, so far people seemed to agree that:

* having a standardised CI pipeline is a good idea
* the CI should be used as a tool to help us improve our packages, and
not be required to pass

Please contribute to this discussion if you care about this issue! I'll
try to draft something more concrete in the next few days.

[1] https://salsa.debian.org/salsa-ci-team/pipeline
[2] https://salsa.debian.org/salsa-ci-team/ci-image-git-buildpackage

-- 
  ⢀⣴⠾⠻⢶⣦⠀
  ⣾⠁⢠⠒⠀⣿⡁  Louis-Philippe Véronneau
  ⢿⡄⠘⠷⠚⠋   po...@debian.org / veronneau.org
  ⠈⠳⣄



signature.asc
Description: OpenPGP digital signature


Re: Streamlining the use of Salsa CI on team packages

2019-09-15 Thread Louis-Philippe Véronneau
On 19-09-15 18 h 01, Thomas Goirand wrote:
> On 9/15/19 4:10 AM, Louis-Philippe Véronneau wrote:
>> On 19-09-14 17 h 35, Thomas Goirand wrote:
>>> On 9/13/19 11:08 PM, Louis-Philippe Véronneau wrote:
 On 19-09-13 05 h 57, Thomas Goirand wrote:
> On 9/5/19 7:40 AM, Louis-Philippe Véronneau wrote:
>> Hello folks!
>>
>> I'd like to propose we start using Salsa CI for all the team packages. I
>> think using a good CI for all our packages will help us find packaging
>> bugs and fix errors before uploads :)
>
> I would agree *IF* and only *IF* we find better runners than the one
> currently default in Salsa. The GCE runners are horribly slow (they are
> the smallest to avoid cost). As a result, some tests may just fail
> because of that, and it becomes just frustrating / annoying noise.

 I never experienced such timeouts, but I guess I don't work on very
 large packages or things that take more than a few minutes to build.
>>>
>>> The issue isn't build time. But when you have unit tests sensitive to
>>> timing. See for example openvswitch:
>>>
>>> https://salsa.debian.org/openstack-team/third-party/openvswitch/pipelines/61713
>>
>> Do you have similar issues running those CI tasks in a private runner?
>> (I'm really curious since I haven't had problems and the Salsa runners
>> don't seem slow compared to the private runners I run on my machines).
> 
> For this particular package, I even had issues with some buildds on some
> slow architectures like older MIPS. But with the Salsa default runners,
> it's a complete disaster where most of the tests fail, not just a few,
> because the runner is too slow.
> 
> What this shows is that we should *not* just blindly add the CI to all
> of the team's packages. Potentially, this will be a disaster. You may add
> the CI script here and there, but I am warning you: adding it to all
> packages at once is a recipe for a big disaster.
> 
>> Maybe one solution to your problem would be to provide a fast/responsive
>> shared runner to the Salsa Team and tag your CI pipelines to use that
>> runner exclusively [1]?
>>
>> [1] https://docs.gitlab.com/ee/ci/yaml/#tags
> 
>> Yes, that's what I've been saying from the beginning. We should try
> providing other runners for the team if possible.
> 
>> [2]
>> https://salsa.debian.org/salsa/salsa-terraform/blob/master/environments/prod/runner.tf
> 
> This tells "instance_type: g1-small", which doesn't match any name at:
> https://cloud.google.com/compute/vm-instance-pricing
> 
> Am I right that this is n1-standard-1, which is 1 VCPU and 3.75 GB?
> 
>> It's possible to push to Salsa without triggering a CI run with "git
>> push -o ci.skip" or by including "[ci-skip]" in the HEAD commit message.
>>
>> IIUC, the problem isn't the overall amount of repositories using the CI,
>> but adding 1000 of them at the same time and overloading the runners.
> 
> Ah, nice, good to know.
> 
>>> 1/ Take great care when adding jobs.
>>
>> I feel this is easily resolved by the "-o ci.skip" thing.
> 
> Good!
> 
>> I'm not 100% sure that's a good idea. The Salsa Team has pretty strict
>> requirements about shared runners (they require root on those machines
>> to make sure the .debs created by the runners can be trusted) and I'm
>> happy they do.
> 
> I didn't know that, and it makes me question the overall way it works,
> and it worries me a lot. I.e., we should be running on throwaway VMs,
> rather than having a VM we have to be able to trust. The way you
> describe things, I wonder how easy it would be to get root on these VMs
> by running a crafted CI job...

Well, runners aren't running the CI jobs directly: everything is run in
Docker, as that's the only available executor on the Salsa shared
runners. Even then, it uses some GitLab special sauce, so it's not even
straight-up Docker.

>> I really wonder how common the issues you've experienced with the Salsa
>> CI runners are. Has anyone here had similar problems?
> 
> Since we're talking about the smallest type of instance possible at
> Google, other people may well have experienced the lack of RAM.
> 
>> I'd be fine with 95% of our package using the same default pipeline and
>> the last 5% using something else or disabling it and adding a few
>> comments in d/gitlab-ci.yml explaining why.
> 
> The question is: how do you know which 5% needs closer attention?

I don't really think a failing CI is a big deal. It's not like this will
break any package anyway.

The way I see it, we'd push to all the repositories and then let folks
working on individual packages disable it if it's causing them trouble,
as long as they document why they disabled it.

Since the CI won't be run on the first push, we won't get an avalanche
of mails saying the CI failed for X Y Z reason.

>> FWIW, I've opened an issue on the Salsa Support issue tracker to see
>> what the Salsa team thinks of this whole discussion [3]
>>
>> [3]: https://salsa.debian.org/salsa/support/issues/170

Re: Streamlining the use of Salsa CI on team packages

2019-09-15 Thread Thomas Goirand
On 9/15/19 4:10 AM, Louis-Philippe Véronneau wrote:
> On 19-09-14 17 h 35, Thomas Goirand wrote:
>> On 9/13/19 11:08 PM, Louis-Philippe Véronneau wrote:
>>> On 19-09-13 05 h 57, Thomas Goirand wrote:
 On 9/5/19 7:40 AM, Louis-Philippe Véronneau wrote:
> Hello folks!
>
> I'd like to propose we start using Salsa CI for all the team packages. I
> think using a good CI for all our packages will help us find packaging
> bugs and fix errors before uploads :)

 I would agree *IF* and only *IF* we find better runners than the one
 currently default in Salsa. The GCE runners are horribly slow (they are
 the smallest to avoid cost). As a result, some tests may just fail
 because of that, and it becomes just frustrating / annoying noise.
>>>
>>> I never experienced such timeouts, but I guess I don't work on very
>>> large packages or things that take more than a few minutes to build.
>>
>> The issue isn't build time. But when you have unit tests sensitive to
>> timing. See for example openvswitch:
>>
>> https://salsa.debian.org/openstack-team/third-party/openvswitch/pipelines/61713
> 
> Do you have similar issues running those CI tasks in a private runner?
> (I'm really curious since I haven't had problems and the Salsa runners
> don't seem slow compared to the private runners I run on my machines).

For this particular package, I even had issues with some buildds on some
slow architectures like older MIPS. But with the Salsa default runners,
it's a complete disaster where most of the tests fail, not just a few,
because the runner is too slow.

What this shows is that we should *not* just blindly add the CI to all
of the team's packages. Potentially, this will be a disaster. You may add
the CI script here and there, but I am warning you: adding it to all
packages at once is a recipe for a big disaster.

> Maybe one solution to your problem would be to provide a fast/responsive
> shared runner to the Salsa Team and tag your CI pipelines to use that
> runner exclusively [1]?
> 
> [1] https://docs.gitlab.com/ee/ci/yaml/#tags

Yes, that's what I've been saying from the beginning. We should try
providing other runners for the team if possible.

> [2]
> https://salsa.debian.org/salsa/salsa-terraform/blob/master/environments/prod/runner.tf

This tells "instance_type: g1-small", which doesn't match any name at:
https://cloud.google.com/compute/vm-instance-pricing

Am I right that this is n1-standard-1, which is 1 VCPU and 3.75 GB?

> It's possible to push to Salsa without triggering a CI run with "git
> push -o ci.skip" or by including "[ci-skip]" in the HEAD commit message.
> 
> IIUC, the problem isn't the overall amount of repositories using the CI,
> but adding 1000 of them at the same time and overloading the runners.

Ah, nice, good to know.

>> 1/ Take great care when adding jobs.
> 
> I feel this is easily resolved by the "-o ci.skip" thing.

Good!

> I'm not 100% sure that's a good idea. The Salsa Team has pretty strict
> requirements about shared runners (they require root on those machines
> to make sure the .debs created by the runners can be trusted) and I'm
> happy they do.

I didn't know that, and it makes me question the overall way it works,
and it worries me a lot. I.e., we should be running on throwaway VMs,
rather than having a VM we have to be able to trust. The way you describe
things, I wonder how easy it would be to get root on these VMs by running
a crafted CI job...

> I really wonder how common the issues you've experienced with the Salsa
> CI runners are. Has anyone here had similar problems?

Since we're talking about the smallest type of instance possible at
Google, other people may well have experienced the lack of RAM.

> I'd be fine with 95% of our package using the same default pipeline and
> the last 5% using something else or disabling it and adding a few
> comments in d/gitlab-ci.yml explaining why.

The question is: how do you know which 5% needs closer attention?

> FWIW, I've opened an issue on the Salsa Support issue tracker to see
> what the Salsa team thinks of this whole discussion [3]
> 
> [3]: https://salsa.debian.org/salsa/support/issues/170

Thanks a lot for doing this, taking the time to communicate with the
Salsa people, etc.

I'm all for more CI, so feel free to ignore my remarks and go ahead; my
intention was just to bring your attention to things I've seen. If it
works well, then fantastic! :)

Cheers,

Thomas Goirand (zigo)



Re: Streamlining the use of Salsa CI on team packages

2019-09-14 Thread Louis-Philippe Véronneau
On 19-09-14 17 h 35, Thomas Goirand wrote:
> On 9/13/19 11:08 PM, Louis-Philippe Véronneau wrote:
>> On 19-09-13 05 h 57, Thomas Goirand wrote:
>>> On 9/5/19 7:40 AM, Louis-Philippe Véronneau wrote:
 Hello folks!

 I'd like to propose we start using Salsa CI for all the team packages. I
 think using a good CI for all our packages will help us find packaging
 bugs and fix errors before uploads :)
>>>
>>> I would agree *IF* and only *IF* we find better runners than the one
>>> currently default in Salsa. The GCE runners are horribly slow (they are
>>> the smallest to avoid cost). As a result, some tests may just fail
>>> because of that, and it becomes just frustrating / annoying noise.
>>
>> I never experienced such timeouts, but I guess I don't work on very
>> large packages or things that take more than a few minutes to build.
> 
> The issue isn't build time. But when you have unit tests sensitive to
> timing. See for example openvswitch:
> 
> https://salsa.debian.org/openstack-team/third-party/openvswitch/pipelines/61713

Do you have similar issues running those CI tasks in a private runner?
(I'm really curious since I haven't had problems and the Salsa runners
don't seem slow compared to the private runners I run on my machines).

Maybe one solution to your problem would be to provide a fast/responsive
shared runner to the Salsa Team and tag your CI pipelines to use that
runner exclusively [1]?

[1] https://docs.gitlab.com/ee/ci/yaml/#tags
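
Something along these lines, perhaps (just a sketch; "fast-runner" is a
made-up tag name, it assumes the pipeline's build job is called "build",
and every job you want to steer to that runner would need the same tag):

build:
  tags:
    - fast-runner   # hypothetical tag registered on the dedicated runner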

> Oh, in fact, don't ... Salsa doesn't keep artifacts / logs long enough,
> so you will see nothing. I wonder why the Salsa admins decided on saving
> them on Google infrastructure then! Or did I misunderstand the way it
> works? Hard to tell, because we get almost zero communication about this
> from the admins.
> 
>> If what you describe really is caused by the default runners not being
>> fast enough, why couldn't we ask the Salsa team for more powerful ones?
>> Have you talked to them about this?
>
> Since when are the Salsa admins receptive to criticism and/or suggestions? The
> last time we told them that using Google wasn't a good idea, they just
> ignored it. Do you even know how the default runners were provisioned?
> Who's paying? How much credit do we have? How much can we use them?

AFAIU, they are provisioned with terraform [2] and then configured with
ansible [3]. Generally most of the code used to run and maintain Salsa
is in their repository.

I don't know who is paying (my guess is Debian+some Google credits), but
I feel these are valid questions and I don't see why the Salsa Team
would not answer if you ask them.

I know they looked for CI runner sponsors for a while and I'm not sure
they were able to find enough to meet their requirements.

[2]
https://salsa.debian.org/salsa/salsa-terraform/blob/master/environments/prod/runner.tf
[3]
https://salsa.debian.org/salsa/salsa-ansible/tree/master/roles/gitlab-runner

>> It seems to me that spending money on QA like CI runners is very
>> profitable for the project, as it saves everyone a lot of time dealing
>> with unnecessary failures caused by lack of tests. It's not like Debian
>> is a very poor organisation...
> 
> Indeed. It'd be even better if we could have our own cloud, but nobody
> in power seems receptive to the idea.
> 
> Also, please consider what happened the last time someone added 1000+ CI
> jobs (ie: Salsa went kind of down, and the person who did that got kind
> of flamed by Salsa admins).

It's possible to push to Salsa without triggering a CI run with "git
push -o ci.skip" or by including "[ci-skip]" in the HEAD commit message.

IIUC, the problem isn't the overall amount of repositories using the CI,
but adding 1000 of them at the same time and overloading the runners.

> At this point in time, I really don't think Salsa is ready for what you
> proposed, unless we at least:
> 
> 1/ Take great care when adding jobs.

I feel this is easily resolved by the "-o ci.skip" thing.

> 2/ We have our own CI runners.

I'm not 100% sure that's a good idea. The Salsa Team has pretty strict
requirements about shared runners (they require root on those machines
to make sure the .debs created by the runners can be trusted) and I'm
happy they do.

Running GitLab CI runners takes a bit of work (I run multiple ones,
including a major shared one on 0xacab.org), and if we want people to
feel like they can download the artifacts, we need to make sure the CI
runners we use meet high standards.

Also, as I'm sure you are aware, maintaining stuff takes time and
effort, and if we can forgo that and let a dedicated team of volunteers
(the Salsa team) do it, I think it's a win.

I really wonder how common the issues you've experienced with the Salsa
CI runners are. Has anyone here had similar problems?

I'd be fine with 95% of our package using the same default pipeline and
the last 5% using something else or disabling it and adding a few
comments in d/gitlab-ci.yml explaining why.

> As 

Re: Streamlining the use of Salsa CI on team packages

2019-09-14 Thread Thomas Goirand
On 9/13/19 11:08 PM, Louis-Philippe Véronneau wrote:
> On 19-09-13 05 h 57, Thomas Goirand wrote:
>> On 9/5/19 7:40 AM, Louis-Philippe Véronneau wrote:
>>> Hello folks!
>>>
>>> I'd like to propose we start using Salsa CI for all the team packages. I
>>> think using a good CI for all our packages will help us find packaging
>>> bugs and fix errors before uploads :)
>>
>> I would agree *IF* and only *IF* we find better runners than the one
>> currently default in Salsa. The GCE runners are horribly slow (they are
>> the smallest to avoid cost). As a result, some tests may just fail
>> because of that, and it becomes just frustrating / annoying noise.
> 
> I never experienced such timeouts, but I guess I don't work on very
> large packages or things that take more than a few minutes to build.

The issue isn't build time. But when you have unit tests sensitive to
timing. See for example openvswitch:

https://salsa.debian.org/openstack-team/third-party/openvswitch/pipelines/61713

Oh, in fact, don't ... Salsa doesn't keep artifacts / logs long enough,
so you will see nothing. I wonder why the Salsa admins decided on saving
them on Google infrastructure then! Or did I misunderstand the way it
works? Hard to tell, because we get almost zero communication about this
from the admins.

> If what you describe really is caused by the default runners not being
> fast enough, why couldn't we ask the Salsa team for more powerful ones?
> Have you talked to them about this?

Since when are the Salsa admins receptive to criticism and/or suggestions? The
last time we told them that using Google wasn't a good idea, they just
ignored it. Do you even know how the default runners were provisioned?
Who's paying? How much credit do we have? How much can we use them?

> It seems to me that spending money on QA like CI runners is very
> profitable for the project, as it saves everyone a lot of time dealing
> with unnecessary failures caused by lack of tests. It's not like Debian
> is a very poor organisation...

Indeed. It'd be even better if we could have our own cloud, but nobody
in power seems receptive to the idea.

Also, please consider what happened the last time someone added 1000+ CI
jobs (ie: Salsa went kind of down, and the person who did that got kind
of flamed by Salsa admins).

At this point in time, I really don't think Salsa is ready for what you
proposed, unless we at least:

1/ Take great care when adding jobs.
2/ We have our own CI runners.

As I wrote, I believe my company could provide a few of these runners
(pending approval from my boss, who's currently on holiday). However,
it'd be nice if my company weren't the only sponsor... not only because
of financial issues, but also for redundancy.

Your thoughts?

Cheers,

Thomas Goirand (zigo)



Re: Streamlining the use of Salsa CI on team packages

2019-09-13 Thread Louis-Philippe Véronneau
On 19-09-13 05 h 57, Thomas Goirand wrote:
> On 9/5/19 7:40 AM, Louis-Philippe Véronneau wrote:
>> Hello folks!
>>
>> I'd like to propose we start using Salsa CI for all the team packages. I
>> think using a good CI for all our packages will help us find packaging
>> bugs and fix errors before uploads :)
> 
> I would agree *IF* and only *IF* we find better runners than the one
> currently default in Salsa. The GCE runners are horribly slow (they are
> the smallest to avoid cost). As a result, some tests may just fail
> because of that, and it becomes just frustrating / annoying noise.

I never experienced such timeouts, but I guess I don't work on very
large packages or things that take more than a few minutes to build.

If what you describe really is caused by the default runners not being
fast enough, why couldn't we ask the Salsa team for more powerful ones?
Have you talked to them about this?

It seems to me that spending money on QA like CI runners is very
profitable for the project, as it saves everyone a lot of time dealing
with unnecessary failures caused by lack of tests. It's not like Debian
is a very poor organisation...

-- 
  ⢀⣴⠾⠻⢶⣦⠀
  ⣾⠁⢠⠒⠀⣿⡁  Louis-Philippe Véronneau
  ⢿⡄⠘⠷⠚⠋   po...@debian.org / veronneau.org
  ⠈⠳⣄



signature.asc
Description: OpenPGP digital signature


Re: Streamlining the use of Salsa CI on team packages

2019-09-13 Thread Thomas Goirand
On 9/5/19 7:40 AM, Louis-Philippe Véronneau wrote:
> Hello folks!
> 
> I'd like to propose we start using Salsa CI for all the team packages. I
> think using a good CI for all our packages will help us find packaging
> bugs and fix errors before uploads :)

I would agree *IF* and only *IF* we find better runners than the one
currently default in Salsa. The GCE runners are horribly slow (they are
the smallest to avoid cost). As a result, some tests may just fail
because of that, and it becomes just frustrating / annoying noise.

Would anyone but me (ie: my company) be able to provide decent runners?

Cheers,

Thomas Goirand (zigo)



Re: Streamlining the use of Salsa CI on team packages

2019-09-12 Thread Louis-Philippe Véronneau
On 19-09-10 14 h 09, Hans-Christoph Steiner wrote:
> 
> 
> Gregor Riepl:
>>
>>> I am not a fan of pointing to a moving target with the "include" statement:
>>>
>>> include:
>>>   - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
>>>   -
>>> https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
>>>
>>> "master" will change, and that can break CI jobs where nothing in the
>>> local repo has changed.
>>
>> It does have pros and cons.
>>
>> The good: Additional build/verification steps or even automatic deployment
>> can be added by the Salsa team at some point without requiring changes to
>> each repository.
>>
>> The bad: As you mentioned, a moving target can be bad and cause inadvertent
>> build failures and other issues that are out of the hands of maintainers.
>>
>> The ugly: Pulling in external scripts always bears a certain risk. They may
>> go away at some point or cause potentially dangerous side effects.
>>
>> However, I do think that a standardised CI pipeline is very useful. Consider
>> that the buildd infrastructure also uses a standardised build process that
>> packages cannot simply get away from. If this process is replicated on Salsa,
>> with an external script or not, people will quickly get a "glimpse" of what
>> would happen on buildd. The need to manually adapt the CI script every time
>> something changes in the buildd process is a heavy burden to bear and will
>> easily lead to people "forgetting" to update their scripts. That kind of
>> defeats the purpose.
>>
>> Also, consider that the Salsa CI pipeline is not an absolute source of truth,
>> but a tool for developers and maintainers to quickly spot issues with their
>> packages. If an autobuild fails, it's not the end of the world. It just means
>> you have to go check what's going on.
>>
> 
> I totally agree about having a standardized build process and CI
> pipeline.  And I agree that the CI builds are a tool, not the final
> release build process.

I think we all agree on that :)

I'd like to start working on a draft modification to the DPMT and PAPT
policies to add a section about using GitLab CI, but I'm not sure what
the process to change those files is...

How were previous modifications dealt with?

-- 
  ⢀⣴⠾⠻⢶⣦⠀
  ⣾⠁⢠⠒⠀⣿⡁  Louis-Philippe Véronneau
  ⢿⡄⠘⠷⠚⠋   po...@debian.org / veronneau.org
  ⠈⠳⣄



signature.asc
Description: OpenPGP digital signature


Re: Streamlining the use of Salsa CI on team packages

2019-09-10 Thread Hans-Christoph Steiner



Gregor Riepl:
> 
>> I am not a fan of pointing to a moving target with the "include" statement:
>>
>> include:
>>   - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
>>   -
>> https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
>>
>> "master" will change, and that can break CI jobs where nothing in the
>> local repo has changed.
> 
> It does have pros and cons.
> 
> The good: Additional build/verification steps or even automatic deployment can
> be added by the Salsa team at some point without requiring changes to each
> repository.
> 
> The bad: As you mentioned, a moving target can be bad and cause inadvertent
> build failures and other issues that are out of the hands of maintainers.
> 
> The ugly: Pulling in external scripts always bears a certain risk. They may go
> away at some point or cause potentially dangerous side effects.
> 
> However, I do think that a standardised CI pipeline is very useful. Consider
> that the buildd infrastructure also uses a standardised build process that
> packages cannot simply get away from. If this process is replicated on Salsa,
> with an external script or not, people will quickly get a "glimpse" of what
> would happen on buildd. The need to manually adapt the CI script every time
> something changes in the buildd process is a heavy burden to bear and will
> easily lead to people "forgetting" to update their scripts. That kind of
> defeats the purpose.
> 
> Also, consider that the Salsa CI pipeline is not an absolute source of truth,
> but a tool for developers and maintainers to quickly spot issues with their
> packages. If an autobuild fails, it's not the end of the world. It just means
> you have to go check what's going on.
> 

I totally agree about having a standardized build process and CI
pipeline.  And I agree that the CI builds are a tool, not the final
release build process. As for updating that config, in Debian we already
have a well-known update mechanism: `apt-get upgrade`.  The CI builds
can use that same process; we don't need to introduce a new one just for
CI builds (e.g. dynamic links to files in GitLab).

These CI environment configs can be included in a Debian package.  This
has been my goal with ci-image-git-buildpackage.  The bits are all shell
scripts which can easily be included in a Debian package.  The mechanism
used in salsa-ci-team/pipeline is a mystery, even to me, and I've been
using GitLab-CI since the beginning (2015), and setting up CI systems
since 2006 (bash scripts!).  There is obviously a lot of great work in
salsa-ci-team/pipeline, I just question the interface between it and the
Debian Developer: how it's specified in the .gitlab-ci.yml file.

.hc



Re: Streamlining the use of Salsa CI on team packages

2019-09-05 Thread Gregor Riepl


> I am not a fan of pointing to a moving target with the "include" statement:
> 
> include:
>   - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
>   -
> https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
> 
> "master" will change, and that can break CI jobs where nothing in the
> local repo has changed.

It does have pros and cons.

The good: Additional build/verification steps or even automatic deployment can
be added by the Salsa team at some point without requiring changes to each
repository.

The bad: As you mentioned, a moving target can be bad and cause inadvertent
build failures and other issues that are out of the hands of maintainers.

The ugly: Pulling in external scripts always bears a certain risk. They may go
away at some point or cause potentially dangerous side effects.

However, I do think that a standardised CI pipeline is very useful. Consider
that the buildd infrastructure also uses a standardised build process that
packages cannot simply get away from. If this process is replicated on Salsa,
with an external script or not, people will quickly get a "glimpse" of what
would happen on buildd. The need to manually adapt the CI script every time
something changes in the buildd process is a heavy burden to bear and will
easily lead to people "forgetting" to update their scripts. That kind of
defeats the purpose.

Also, consider that the Salsa CI pipeline is not an absolute source of truth,
but a tool for developers and maintainers to quickly spot issues with their
packages. If an autobuild fails, it's not the end of the world. It just means
you have to go check what's going on.



Re: Streamlining the use of Salsa CI on team packages

2019-09-05 Thread Hans-Christoph Steiner


I think we should definitely use Gitlab-CI!  The
'salsa-ci-team/pipeline' project does have good coverage, with reprotest
and piuparts.  I'm the lead dev on another approach, also part of the
salsa-ci-team, called 'ci-image-git-buildpackage':
https://wiki.debian.org/Salsa/Doc#Running_Continuous_Integration_.28CI.29_tests

It has lintian, autopkgtest and more, but lacks piuparts and reprotest.
 It does have "aptly" support where it will automatically build and
deploy binary packages to an apt repo.  Then other CI builds can add
those repos for CI runs.  This is very helpful for complex suites of
packages that depend on each other.  We use it in the Android Tools Team
(click on any "pipeline" button to see it in action):
https://salsa.debian.org/android-tools-team/admin

'salsa-ci-team/pipeline' is quite simple for packages with simple
requirements.  One limitation with it is that you can't include a bash
script directly in the debian/.gitlab-ci.yml file, which is the normal
way of working with GitLab-CI.  That was a central requirement for
ci-image-git-buildpackage.  The automation is just bash commands, so you
can use them in a bash script as needed.  Or easily add multiple jobs,
or do anything you would normally do in GitLab-CI.  For example:
https://salsa.debian.org/android-tools-team/android-platform-system-core/blob/master/debian/.gitlab-ci.yml
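
To illustrate what I mean by "just bash commands", a generic sketch (this
is *not* the actual config from that repo; the image and commands are
only placeholders):

build:
  image: debian:sid                       # placeholder image
  script:
    # plain shell, written directly in the CI file
    - apt-get update
    - apt-get install --yes build-essential
    - apt-get build-dep --yes ./
    - dpkg-buildpackage -us -uc -b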

I am not a fan of pointing to a moving target with the "include" statement:

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml

"master" will change, and that can break CI jobs where nothing in the
local repo has changed.
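
For reference, pinning the include to a tag or commit of the pipeline
repository instead of master would look something like this (a sketch;
"<ref>" is a placeholder for whatever tag or commit the team agrees to
track):

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/<ref>/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/<ref>/pipeline-jobs.yml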


Louis-Philippe Véronneau:
> Hello folks!
> 
> I'd like to propose we start using Salsa CI for all the team packages. I
> think using a good CI for all our packages will help us find packaging
> bugs and fix errors before uploads :)
> 
> I also think that when possible, we should be using the same CI jobs for
> our packages. The Salsa CI Team's default pipeline [1] is a good common
> ground, as currently it:
> 
> * builds the package
> * runs piuparts
> * runs autopkgtest
> * runs lintian
> * runs reprotest
> * and does more!
> 
> I don't think a failing CI should be a blocker for an upload, but I
> think it's a good red flag and should be taken in account.
> 
> I know the Ruby team also decided to use debian/salsa-ci.yml instead of
> debian/gitlab-ci.yml [2]. I guess we should also do the same.
> 
> Thoughts? If we decide to go ahead with this, I guess we should modify
> the policy accordingly and contact the Salsa Team to see if adding this
> additional load is OK with them.
> 
> [1] https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
> [2] https://salsa.debian.org/salsa-ci-team/pipeline/issues/86#note_106245
>