Re: Streamlining the use of Salsa CI on team packages

2019-09-15 Thread Louis-Philippe Véronneau
On 19-09-05 01 h 40, Louis-Philippe Véronneau wrote:
> Hello folks!
> 
> I'd like to propose we start using Salsa CI for all the team packages. I
> think using a good CI for all our packages will help us find packaging
> bugs and fix errors before uploads :)
> 
> I also think that when possible, we should be using the same CI jobs for
> our packages. The Salsa CI Team's default pipeline [1] is a good common
> ground, as currently it:
> 
> * builds the package
> * runs piuparts
> * runs autopkgtest
> * runs lintian
> * runs reprotest
> * and does more!
> 
> I don't think a failing CI should be a blocker for an upload, but I
> think it's a good red flag and should be taken into account.
> 
> I know the Ruby team also decided to use debian/salsa-ci.yml instead of
> debian/gitlab-ci.yml [2]. I guess we should also do the same.
> 
> Thoughts? If we decide to go ahead with this, I guess we should modify
> the policy accordingly and contact the Salsa Team to see if adding this
> additional load is OK with them.
> 
> [1] https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
> [2] https://salsa.debian.org/salsa-ci-team/pipeline/issues/86#note_106245
These are the steps I see going forward with this:

--
1. Agree on a default pipeline we should be using on the DPMT & PAPT
packages.

2. Draft a proposed modification to the DPMT and the PAPT policies

3. Adopt that proposed modification once we reach consensus

4. Wait for the "All Clear" from the Salsa Team

5. Commit the previously agreed upon CI pipeline to all the DPMT & PAPT
packages, while making sure the CI isn't run on that push ("-o ci.skip";
see the example below)
--
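
As a concrete sketch of step 5 (the pipeline file itself being whatever we
agree on in step 1, and nothing else assumed about the repository):

  git add debian/salsa-ci.yml
  git commit -m "Add the team-wide Salsa CI pipeline"
  # Push without triggering a pipeline for this commit:
  git push -o ci.skip
  # Alternatively, include "[ci-skip]" in the HEAD commit message.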

For step 1, I proposed we use the "Salsa Pipeline" [1], as I feel it is
the most mature solution, has more contributors and has more features
(including reprotest and piuparts). This option seems to have had the
most support so far.
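
From memory, the "basic use" described in [1] boils down to a very small
debian/salsa-ci.yml that just includes the Salsa CI Team's definitions;
roughly the following, though please double-check against the pipeline's
README before committing anything, and remember that the project's CI
configuration path has to point at debian/salsa-ci.yml (IIRC under
Settings > CI/CD > General pipelines on Salsa):

  # debian/salsa-ci.yml: minimal pipeline (verify against [1] before use)
  include:
    - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
    - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml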

Hans-Christoph Steiner proposed we use "ci-image-git-buildpackage"
instead [2]. The code behind it is simpler and the way it's built makes
it possible for maintainers to modify the CI for their packages.

For step 2, so far people seemed to agree that:

* having a standardised CI pipeline is a good idea
* the CI should be used as a tool to help us improve our packages, and
not be required to pass

Please contribute to this discussion if you care about this issue! I'll
try to draft something more concrete in the next few days.

[1] https://salsa.debian.org/salsa-ci-team/pipeline
[2] https://salsa.debian.org/salsa-ci-team/ci-image-git-buildpackage

-- 
  ⢀⣴⠾⠻⢶⣦⠀
  ⣾⠁⢠⠒⠀⣿⡁  Louis-Philippe Véronneau
  ⢿⡄⠘⠷⠚⠋   po...@debian.org / veronneau.org
  ⠈⠳⣄





Re: What is the process to update the DPMT and PAPT policies?

2019-09-15 Thread Scott Kitterman



On September 15, 2019 11:59:08 PM UTC, "Louis-Philippe Véronneau" 
 wrote:
>Hi!
>
>What is the process to update the DPMT and PAPT policies? I feel the
>DPMT policy is pretty good and I feel the PAPT policy could copy a
>bunch
>of stuff from there.
>
>For example, the PAPT policy doesn't include a "Git Procedures"
>section.
>
>I'm guessing the way to go is to clearly propose a draft modification
>on
>the mailing list and see if it reaches consensus. Would that be
>acceptable?

My recollection is that is what we've done in the past.

Scott K



Re: Streamlining the use of Salsa CI on team packages

2019-09-15 Thread Louis-Philippe Véronneau
On 19-09-15 18 h 01, Thomas Goirand wrote:
> On 9/15/19 4:10 AM, Louis-Philippe Véronneau wrote:
>> On 19-09-14 17 h 35, Thomas Goirand wrote:
>>> On 9/13/19 11:08 PM, Louis-Philippe Véronneau wrote:
>>>> On 19-09-13 05 h 57, Thomas Goirand wrote:
>>>>> On 9/5/19 7:40 AM, Louis-Philippe Véronneau wrote:
>>>>>> Hello folks!
>>>>>>
>>>>>> I'd like to propose we start using Salsa CI for all the team packages. I
>>>>>> think using a good CI for all our packages will help us find packaging
>>>>>> bugs and fix errors before uploads :)
>>>>>
>>>>> I would agree *IF* and only *IF* we find better runners than the one
>>>>> currently default in Salsa. The GCE runners are horribly slow (they are
>>>>> the smallest to avoid cost). As a result, some tests may just fail
>>>>> because of that, and it becomes just frustrating / annoying noise.
>>>>
>>>> I never experienced such timeouts, but I guess I don't work on very
>>>> large packages or things that take more than a few minutes to build.
>>>
>>> The issue isn't build time. But when you have unit tests sensitive to
>>> timing. See for example openvswitch:
>>>
>>> https://salsa.debian.org/openstack-team/third-party/openvswitch/pipelines/61713
>>
>> Do you have similar issues running those CI tasks in a private runner?
>> (I'm really curious since I haven't had problems and the Salsa runners
>> don't seem slow compared to the private runners I run on my machines).
> 
> For this particular package, I even had issues with some buildds on some
> slow architectures like older MIPS. But with the Salsa default
> runners, it's a complete disaster where most of the tests fail, not
> just a few, because the runner is too slow.
> 
> What this shows is that we should *not* just blindly add the CI to all
> of the team's packages. Potentially, this will be a disaster. You may add
> the CI script here and there, but I am warning you: adding it to all
> packages at once is a recipe for a big disaster.
> 
>> Maybe one solution to your problem would be to provide a fast/responsive
>> shared runner to the Salsa Team and tag your CI pipelines to use that
>> runner exclusively [1]?
>>
>> [1] https://docs.gitlab.com/ee/ci/yaml/#tags
> 
> Yes, that's what I've been saying from the beginning. We should try
> providing other runners for the team if possible.
> 
>> [1]
>> https://salsa.debian.org/salsa/salsa-terraform/blob/master/environments/prod/runner.tf
> 
> This says "instance_type: g1-small", which doesn't match any name at:
> https://cloud.google.com/compute/vm-instance-pricing
> 
> Am I right that this is n1-standard-1, which is 1 vCPU and 3.75 GB?
> 
>> It's possible to push to Salsa without triggering a CI run with "git
>> push -o ci.skip" or by including "[ci-skip]" in the HEAD commit message.
>>
>> IIUC, the problem isn't the overall number of repositories using the CI,
>> but adding 1000 of them at the same time and overloading the runners.
> 
> Ah, nice, good to know.
> 
>>> 1/ Take great care when adding jobs.
>>
>> I feel this is easily resolved by the "-o ci.skip" thing.
> 
> Good!
> 
>> I'm not 100% sure that's a good idea. The Salsa Team has pretty strict
>> requirements about shared runners (they require root on those machines
>> to make sure the .debs created by the runners can be trusted) and I'm
>> happy they do.
> 
> I didn't know, and this makes me question the overall way it works, and
> worries me a lot, i.e. we should be running on throwaway VMs, rather than
> having a VM we should be able to trust. The way you describe things, I
> wonder how easy it would be to get root on these VMs by running a
> crafted CI job...

Well, runners aren't running the CI jobs directly: everything is run in
Docker, as that's the only available executor on the Salsa shared
runners. Even then, it uses a GitLab special sauce, so it's not even
straight up Docker.
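
For the curious, a Docker-executor runner is registered with something
roughly like the following in /etc/gitlab-runner/config.toml (a generic
sketch of gitlab-runner configuration, not Salsa's actual setup, which I
haven't seen; names and values are made up):

  [[runners]]
    name = "example-docker-runner"
    url = "https://salsa.debian.org/"
    token = "REDACTED"
    executor = "docker"          # jobs run inside throwaway containers
    [runners.docker]
      image = "debian:stable"    # default image when a job doesn't set one
      privileged = false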

>> I really wonder how common the issues you've experienced with the Salsa
>> CI runners are. Has anyone here had similar problems?
> 
> Since we're talking about the smallest type of instance possible at
> Google, other people may well have experienced a lack of RAM.
> 
>> I'd be fine with 95% of our packages using the same default pipeline and
>> the last 5% using something else or disabling it and adding a few
>> comments in d/gitlab-ci.yml explaining why.
> 
> The question is: how do you know which 5% need closer attention?
I don't really think a failing CI is a big deal. It's not like this will
break any package anyway.

The way I see it, we'd push to all the repositories and then let folks
working on individual packages disable it if it's causing them trouble,
as long as they document why they disabled it.

Since the CI won't be run on the first push, we won't get an avalanche
of mails saying the CI failed for X Y Z reason.

>> FWIW, I've opened an issue on the Salsa Support issue tracker to see
>> what the Salsa team thinks of this whole discussion [3]
>>
>> [3]: https://salsa.debian.org/salsa/support/issues/170

What is the process to update the DPMT and PAPT policies?

2019-09-15 Thread Louis-Philippe Véronneau
Hi!

What is the process to update the DPMT and PAPT policies? I feel the
DPMT policy is pretty good and I feel the PAPT policy could copy a bunch
of stuff from there.

For example, the PAPT policy doesn't include a "Git Procedures" section.

I'm guessing the way to go is to clearly propose a draft modification on
the mailing list and see if it reaches consensus. Would that be acceptable?

Cheers!

-- 
  ⢀⣴⠾⠻⢶⣦⠀
  ⣾⠁⢠⠒⠀⣿⡁  Louis-Philippe Véronneau
  ⢿⡄⠘⠷⠚⠋   po...@debian.org / veronneau.org
  ⠈⠳⣄





Re: Streamlining the use of Salsa CI on team packages

2019-09-15 Thread Thomas Goirand
On 9/15/19 4:10 AM, Louis-Philippe Véronneau wrote:
> On 19-09-14 17 h 35, Thomas Goirand wrote:
>> On 9/13/19 11:08 PM, Louis-Philippe Véronneau wrote:
>>> On 19-09-13 05 h 57, Thomas Goirand wrote:
>>>> On 9/5/19 7:40 AM, Louis-Philippe Véronneau wrote:
>>>>> Hello folks!
>>>>>
>>>>> I'd like to propose we start using Salsa CI for all the team packages. I
>>>>> think using a good CI for all our packages will help us find packaging
>>>>> bugs and fix errors before uploads :)
>>>>
>>>> I would agree *IF* and only *IF* we find better runners than the one
>>>> currently default in Salsa. The GCE runners are horribly slow (they are
>>>> the smallest to avoid cost). As a result, some tests may just fail
>>>> because of that, and it becomes just frustrating / annoying noise.
>>>
>>> I never experienced such timeouts, but I guess I don't work on very
>>> large packages or things that take more than a few minutes to build.
>>
>> The issue isn't build time. But when you have unit tests sensitive to
>> timing. See for example openvswitch:
>>
>> https://salsa.debian.org/openstack-team/third-party/openvswitch/pipelines/61713
> 
> Do you have similar issues running those CI tasks in a private runner?
> (I'm really curious since I haven't had problems and the Salsa runners
> don't seem slow compared to the private runners I run on my machines).

For this particular package, I even had issues with some buildds on some
slow architectures like older MIPS. But with the Salsa default
runners, it's a complete disaster where most of the tests fail, not
just a few, because the runner is too slow.

What this shows is that we should *not* just blindly add the CI to all
of the team's packages. Potentially, this will be a disaster. You may add
the CI script here and there, but I am warning you: adding it to all
packages at once is a recipe for a big disaster.

> Maybe one solution to your problem would be to provide a fast/responsive
> shared runner to the Salsa Team and tag your CI pipelines to use that
> runner exclusively [1]?
> 
> [1] https://docs.gitlab.com/ee/ci/yaml/#tags

Yes, that's what I've been saying from the beginning. We should try
providing other runners for the team if possible.
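
For reference, pinning jobs to such a runner would use GitLab's tags
keyword, roughly like this (a sketch; the job and tag names are made up
and the tag would have to match whatever the dedicated runner is
registered with):

  build:                        # or whichever job needs the faster runner
    tags:
      - team-fast-runner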

> [1]
> https://salsa.debian.org/salsa/salsa-terraform/blob/master/environments/prod/runner.tf

This says "instance_type: g1-small", which doesn't match any name at:
https://cloud.google.com/compute/vm-instance-pricing

Am I right that this is n1-standard-1, which is 1 vCPU and 3.75 GB?

> It's possible to push to Salsa without triggering a CI run with "git
> push -o ci.skip" or by including "[ci-skip]" in the HEAD commit message.
> 
> IIUC, the problem isn't the overall number of repositories using the CI,
> but adding 1000 of them at the same time and overloading the runners.

Ah, nice, good to know.

>> 1/ Take great care when adding jobs.
> 
> I feel this is easily resolved by the "-o ci.skip" thing.

Good!

> I'm not 100% sure that's a good idea. The Salsa Team has pretty strict
> requirements about shared runners (they require root on those machines
> to make sure the .debs created by the runners can be trusted) and I'm
> happy they do.

I didn't know, and this makes me question the overall way it works, and
worries me a lot, i.e. we should be running on throwaway VMs, rather than
having a VM we should be able to trust. The way you describe things, I
wonder how easy it would be to get root on these VMs by running a
crafted CI job...

> I really wonder how common the issues you've experienced with the Salsa
> CI runners are. Has anyone here had similar problems?

Since we're talking about the smallest type of instance possible at
Google, other people may well have experienced a lack of RAM.

> I'd be fine with 95% of our packages using the same default pipeline and
> the last 5% using something else or disabling it and adding a few
> comments in d/gitlab-ci.yml explaining why.

The question is: how do you know which 5% need closer attention?

> FWIW, I've opened an issue on the Salsa Support issue tracker to see
> what the Salsa team thinks of this whole discussion [3]
> 
> [3]: https://salsa.debian.org/salsa/support/issues/170

Thanks a lot for doing this, taking the time to communicate with the
Salsa people, etc.

I'm all for more CI, so feel free to ignore my remarks and go ahead; my
intention was just to bring your attention to things I've seen. If it works
well, then fantastic! :)

Cheers,

Thomas Goirand (zigo)



Re: Packages depending on python-testtools are now RC: is bzr still a thing?

2019-09-15 Thread Thomas Goirand
On 9/15/19 2:26 PM, Mattia Rizzolo wrote:
> Considering that this is bzr we are talking about, a package that is
> already entering the graveyard, I think it would be easiest to just
> disable the test suite and move on.
> 
> But I would be happier if Thomas at least checked the rdeps before
> dropping packages, at least evaluating if breaking things is alright if
> he really likes to break packages :/

You mean check *better*. Because I do carefully check each time, as much
as I can, but in this case, it looks like I didn't check well
enough. Mistakes unfortunately do happen when you work on a lot of
packages. Moreover, the current tooling we have at our disposal is kind
of broken: reverse-depends sometimes takes forever in Sid for a reason I
can't figure out, and if I'm not mistaken, it's the only tool we have
that can check reverse dependencies in a meaningful way. Or is there a
better way? I've read others using a dak command, how?
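
For what it's worth, the invocations I'm aware of look roughly like this
(untested as written, and I may be misremembering the exact flags; the dak
one has to be run on ftp-master):

  # Binary reverse dependencies in unstable (reverse-depends, ubuntu-dev-tools):
  reverse-depends -r sid python-testtools
  # Reverse build-dependencies (build-rdeps, devscripts):
  build-rdeps python-testtools
  # Dry-run removal with a reverse-dependency check (dak, on ftp-master):
  dak rm -Rn -s unstable python-testtools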

On 9/15/19 5:17 PM, Jelmer Vernooij wrote:
> It's not just bzr, it's also a bunch of plugins for bzr that we'd have
> to disable the testsuite for - as well as a bunch of other
non-bzr-related packages - python-subunit, python-fixtures,
python-testscenarios, python-daemon, python-fastimport.

I've already removed Python 2 support from subunit, python-fixtures, and
python-testscenarios. Now, as I wrote in the bug report, what worries me
isn't those, but the other packages that depend on python-daemon:

Packages without Python 3 support upstream:
- bcfg2 doesn't seem to have Python 3 support upstream, and neither does
the Debian package.
- nss-pam-ldapd isn't Python 3 ready upstream. Note that the Debian
maintainer is the same person as upstream.
- There's been zero upstream work on this repository:
https://github.com/Yubico/python-pyhsm
so the package has no chance of being Python 3 ready anytime soon.

Packages that simply need an upgrade to the latest upstream release:
- lavapdu-daemon should be upgraded to the latest upstream release to have
Python 3 support.
- mini-buildd in Experimental has been converted to Python 3, while the
version in Sid is still Python 2.
- I haven't been able to tell for lava-coordinator.

So, for bcfg2, nss-pam-ldapd, and python-pyhsm, I'm really not convinced
that waiting longer will help. That's a general problem that we need to
address, by the way: how are we going to deal with this? There's going to be
a lot of it, and we need to find a way out if we really are going to get
Python 2 out.

For the other three, it shouldn't be hard for the current maintainers to address.

I've raised the severity of #936189 #937165 #938069 #936819 #937049
#936818 to serious, and included them as Cc to this reply, in order to
warn the maintainers. I haven't done it for the BZR stuff, as obviously,
the package maintainer is aware now.

Again, sorry that it happened this way.

Cheers,

Thomas Goirand (zigo)



Re: Packages depending on python-testtools are now RC: is bzr still a thing?

2019-09-15 Thread Jelmer Vernooij
It's not just bzr, it's also a bunch of plugins for bzr that we'd have to 
disable the testsuite for - as well as a bunch of other non-bzr-related 
packages - python-subunit, python-fixtures, python-testscenarios, 
python-daemon, python-fastimport.

Jelmer

On 15 September 2019 14:26:35 CEST, Mattia Rizzolo  wrote:
>Considering that this is bzr we are talking about, a package that is
>already entering the graveyard, I think it would be easiest to just
>disable
>the test suite and move on.
>
>But I would be happier if Thomas at least checked the rdeps before
>dropping
>packages, at least evaluating if breaking things is alright if he really
>likes to break packages :/
>
>
>On Sun, 15 Sep 2019, 11:09 am Jelmer Vernooij, 
>wrote:
>
>>
>>
>> On 15 September 2019 01:15:11 CEST, Scott Kitterman
>
>> wrote:
>> >On Saturday, September 14, 2019 6:43:13 PM EDT Thomas Goirand wrote:
>> >> Hi,
>> >>
>> >> As I wrongly thought python-extras was used only by OpenStack
>stuff,
>> >I
>> >> removed Python 2 support for it. Then someone filed a bug against
>> >> python-testtools (ScottK, I believe) saying that it became RC.
>> >> Therefore, I went ahead and removed Python 2 support for
>testtools,
>> >but
>> >> now, this implies that a few packages which I didn't wish to
>impact
>> >are
>> >> also RC:
>> >>
>> >> * bzr-builddeb
>> >> * bzr-email
>> >> * bzr-fastimport
>> >> * bzr-git
>> >> * bzr-stats
>> >> * bzr-upload
>> >> * loggerhead
>> >>
>> >> So, basically, unfortunately, Bazaar has lost some of its build
>> >> dependencies.
>> >>
>> >> So, I went ahead, and looked what I could do for Bazaar.
>> >Unfortunately,
>> >> when looking at:
>> >> https://launchpad.net/bzr
>> >>
>> >> I can see no release since January 2016, no daily archive. The
>last
>> >> commit in the bzr repository of bzr is from 2017-03-17.
>> >>
>> >> Then I went to see how much Python 3 effort would be needed, and I
>> >> quickly gave up. It'd be A LOT of work, but nobody seems doing ANY
>> >work
>> >> on bzr anymore.
>> >>
>> >> So I wonder: is it time to remove bazaar from Debian? Or is there
>any
>> >> vague plan to make it work with Python 3? If we are to remove it
>from
>> >> Debian, then we'd better do it ASAP.
>> >
>> >As I understand it, bazaar (bzr) is dead and being replaced by
>breezy
>> >(brz),
>> >which is python3 compatible.
>> >
>> >https://www.breezy-vcs.org/
>> >
>> >My inference is that anything bzr specific can go, but I'm not
>involved
>> >in
>> >either project.
>>
>> Bzr maintainer / breezy upstream here.
>>
>> I'm planning to upload transitional packages to trigger upgrades from
>bzr
>> to Breezy.
>>
>> The packages for that are not ready yet though. Can we undo the
>dropping
>> of python-testtools in the meantime?
>>
>> Jelmer
>>
>>



Re: [Help] Re: Bug#939181: cycle: Python2 removal in sid/bullseye

2019-09-15 Thread peter green

> tmp = rt.encrypt('Cycle{}'.format(pickle.dumps(objSave)))
>
> Thanks to this hint

This hint was *wrong*: it will introduce garbage into the string, and the
"rotor" code is clearly designed to work with byte strings, not unicode strings.

Change it to

"tmp=rt.encrypt( b'Cycle'+pickle.dumps(objSave) )"




Re: Packages depending on python-testtools are now RC: is bzr still a thing?

2019-09-15 Thread Mattia Rizzolo
Considering that this is bzr we are talking about, a package that is
already entering the graveyard, I think it would be easiest to just disable
the test suite and move on.

But I would be happier if Thomas at least checked the rdeps before dropping
packages, at least evaluating if breaking things is alright if he really
likes to break packages :/


On Sun, 15 Sep 2019, 11:09 am Jelmer Vernooij,  wrote:

>
>
> On 15 September 2019 01:15:11 CEST, Scott Kitterman 
> wrote:
> >On Saturday, September 14, 2019 6:43:13 PM EDT Thomas Goirand wrote:
> >> Hi,
> >>
> >> As I wrongly thought python-extras was used only by OpenStack stuff,
> >I
> >> removed Python 2 support for it. Then someone filed a bug against
> >> python-testtools (ScottK, I believe) saying that it became RC.
> >> Therefore, I went ahead and removed Python 2 support for testtools,
> >but
> >> now, this implies that a few packages which I didn't wish to impact
> >are
> >> also RC:
> >>
> >> * bzr-builddeb
> >> * bzr-email
> >> * bzr-fastimport
> >> * bzr-git
> >> * bzr-stats
> >> * bzr-upload
> >> * loggerhead
> >>
> >> So, basically, unfortunately, Bazaar has lost some of its build
> >> dependencies.
> >>
> >> So, I went ahead, and looked what I could do for Bazaar.
> >Unfortunately,
> >> when looking at:
> >> https://launchpad.net/bzr
> >>
> >> I can see no release since January 2016, no daily archive. The last
> >> commit in the bzr repository of bzr is from 2017-03-17.
> >>
> >> Then I went to see how much Python 3 effort would be needed, and I
> >> quickly gave up. It'd be A LOT of work, but nobody seems doing ANY
> >work
> >> on bzr anymore.
> >>
> >> So I wonder: is it time to remove bazaar from Debian? Or is there any
> >> vague plan to make it work with Python 3? If we are to remove it from
> >> Debian, then we'd better do it ASAP.
> >
> >As I understand it, bazaar (bzr) is dead and being replaced by breezy
> >(brz),
> >which is python3 compatible.
> >
> >https://www.breezy-vcs.org/
> >
> >My inference is that anything bzr specific can go, but I'm not involved
> >in
> >either project.
>
> Bzr maintainer / breezy upstream here.
>
> I'm planning to upload transitional packages to trigger upgrades from bzr
> to Breezy.
>
> The packages for that are not ready yet though. Can we undo the dropping
> of python-testtools in the meantime?
>
> Jelmer
>
>


Re: Packages depending on python-testtools are now RC: is bzr still a thing?

2019-09-15 Thread Jelmer Vernooij



On 15 September 2019 01:15:11 CEST, Scott Kitterman  
wrote:
>On Saturday, September 14, 2019 6:43:13 PM EDT Thomas Goirand wrote:
>> Hi,
>> 
>> As I wrongly thought python-extras was used only by OpenStack stuff,
>I
>> removed Python 2 support for it. Then someone filed a bug against
>> python-testtools (ScottK, I believe) saying that it became RC.
>> Therefore, I went ahead and removed Python 2 support for testtools,
>but
>> now, this implies that a few packages which I didn't wish to impact
>are
>> also RC:
>> 
>> * bzr-builddeb
>> * bzr-email
>> * bzr-fastimport
>> * bzr-git
>> * bzr-stats
>> * bzr-upload
>> * loggerhead
>> 
>> So, basically, unfortunately, Bazaar has lost some of its build
>> dependencies.
>> 
>> So, I went ahead, and looked what I could do for Bazaar.
>Unfortunately,
>> when looking at:
>> https://launchpad.net/bzr
>> 
>> I can see no release since January 2016, no daily archive. The last
>> commit in the bzr repository of bzr is from 2017-03-17.
>> 
>> Then I went to see how much Python 3 effort would be needed, and I
>> quickly gave up. It'd be A LOT of work, but nobody seems doing ANY
>work
>> on bzr anymore.
>> 
>> So I wonder: is it time to remove bazaar from Debian? Or is there any
>> vague plan to make it work with Python 3? If we are to remove it from
>> Debian, then we'd better do it ASAP.
>
>As I understand it, bazaar (bzr) is dead and being replaced by breezy
>(brz), 
>which is python3 compatible.
>
>https://www.breezy-vcs.org/
>
>My inference is that anything bzr specific can go, but I'm not involved
>in 
>either project.

Bzr maintainer / breezy upstream here.

I'm planning to upload transitional packages to trigger upgrades from bzr to 
Breezy.

The packages for that are not ready yet though. Can we undo the dropping of 
python-testtools in the meantime? 

Jelmer