Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-30 Thread Jeremy Stanley
On 2020-06-30 09:15:47 +0200 (+0200), Thomas Goirand wrote:
[...]
> If there's some nasty NPM job behind it, then I'll probably just
> skip the dashboard, and expect deployments to get the dashboard from
> somewhere other than packages. What is included in the dashboard? Things like
> https://zuul.openstack.org/ ?

That's a white-labeled tenant of https://zuul.opendev.org/ but yes,
basically an interface for querying the REST API for in-progress
activity, configuration errors, build results, log browsing, config
exploration and so on. The result URLs it posts on tested changes
and pull/merge requests are also normally to a build result detail
page provided by the dashboard, though you should be able to
configure it to link directly to the job logs instead.
-- 
Jeremy Stanley




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-30 Thread Thomas Goirand
On 6/30/20 12:41 AM, Jeremy Stanley wrote:
> On 2020-06-29 23:55:49 +0200 (+0200), Thomas Goirand wrote:
> [...]
>> nodepool from OpenStack,
> 
> Well, *formerly* from OpenStack, these days Nodepool is a component
> of the Zuul project gating system, which is developed by an
> independent project/community (still represented by the OSF):
> 
> https://zuul-ci.org/
> https://opendev.org/zuul/nodepool/
> 
> You could probably run a Nodepool launcher daemon stand-alone
> (without a Zuul scheduler), but it's going to expect to be able to
> service node requests queued in a running Apache Zookeeper instance
> and usually the easiest way to generate those is with Zuul's
> scheduler. You might be better off just trying to run Nodepool along
> with Zuul, maybe even set up a GitLab connection to Salsa:
> 
> https://zuul-ci.org/docs/zuul/reference/drivers/gitlab.html
> 
>> and use instances donated by generous cloud providers (that's not
>> hard to find, really, I'm convinced that all the providers that
> are donating to OpenStack are likely to also donate compute
>> time to Debian).
> [...]
> 
> They probably would, I've approached some of them in the past when
> it sounded like the Salsa admins were willing to entertain other
> backend storage options than GCS for GitLab CI/CD artifacts. One of
> those resource donors (VEXXHOST) also has a Managed Zuul offering of
> their own, which they might be willing to hook you up with instead
> if you decide packaging all of Zuul is daunting (it looks like both
> you and hashar from WMF started work on that at various times in
> https://bugs.debian.org/705844 but more recently there are some
> JavaScript deps for its Web dashboard which could get gnarly to
> unwind in a Debian context).

Hi Jeremy,

I gave up a few times, because the reverse dependencies for it were not
aligned with what was in use in Debian at the time.

If there's some nasty NPM job behind it, then I'll probably just skip the
dashboard, and expect deployments to get the dashboard from somewhere
other than packages. What is included in the dashboard? Things like
https://zuul.openstack.org/ ?

Cheers,

Thomas Goirand (zigo)



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Jeremy Stanley
On 2020-06-29 23:55:49 +0200 (+0200), Thomas Goirand wrote:
[...]
> nodepool from OpenStack,

Well, *formerly* from OpenStack, these days Nodepool is a component
of the Zuul project gating system, which is developed by an
independent project/community (still represented by the OSF):

https://zuul-ci.org/
https://opendev.org/zuul/nodepool/

You could probably run a Nodepool launcher daemon stand-alone
(without a Zuul scheduler), but it's going to expect to be able to
service node requests queued in a running Apache Zookeeper instance
and usually the easiest way to generate those is with Zuul's
scheduler. You might be better off just trying to run Nodepool along
with Zuul, maybe even set up a GitLab connection to Salsa:

https://zuul-ci.org/docs/zuul/reference/drivers/gitlab.html
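For readers of the archive: the "node requests queued in ZooKeeper" setup above can be sketched as a minimal nodepool.yaml. This is an illustrative fragment only; the ZooKeeper host, provider name, cloud entry, and image/flavor names are placeholders, not anything discussed in the thread:

```yaml
# Hypothetical minimal Nodepool configuration.
# The launcher services node requests it finds in this ZooKeeper cluster.
zookeeper-servers:
  - host: zk01.example.org
    port: 2181

labels:
  - name: debian-sid
    min-ready: 1

providers:
  - name: donated-cloud
    driver: openstack
    cloud: donated-cloud      # matches an entry in clouds.yaml
    pools:
      - name: main
        max-servers: 150
        labels:
          - name: debian-sid
            flavor-name: m1.small   # placeholder flavor
            diskimage: debian-sid   # would need a matching diskimages: entry
```

Without a Zuul scheduler enqueueing requests into ZooKeeper, a launcher running this config would simply hold `min-ready` nodes and wait.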

> and use instances donated by generous cloud providers (that's not
> hard to find, really, I'm convinced that all the providers that
> are donating to OpenStack are likely to also donate compute
> time to Debian).
[...]

They probably would, I've approached some of them in the past when
it sounded like the Salsa admins were willing to entertain other
backend storage options than GCS for GitLab CI/CD artifacts. One of
those resource donors (VEXXHOST) also has a Managed Zuul offering of
their own, which they might be willing to hook you up with instead
if you decide packaging all of Zuul is daunting (it looks like both
you and hashar from WMF started work on that at various times in
https://bugs.debian.org/705844 but more recently there are some
JavaScript deps for its Web dashboard which could get gnarly to
unwind in a Debian context).
-- 
Jeremy Stanley




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Thomas Goirand
On 6/29/20 7:35 PM, Utkarsh Gupta wrote:
>> Running the script shows that 279 reverse (build?) dependencies are
>> affected by mock. This clearly isn't something one wants to run on a
>> personal computer, and even less something one wants to run sequentially.
> 
> Haha, right.
> What we (me and a couple others) do is run this build script on a
> server (via screen) and call it a night :P
> And we get the list of broken packages in the morning!
> 
> But of course, this is not a very "professional" way of doing it.

What we could do is use Nodepool from OpenStack, with instances
donated by generous cloud providers (that's not hard to find, really;
I'm convinced that all the providers donating to OpenStack
are likely to also donate compute time to Debian). We could then
launch 150 builds at a time on 150 VMs, so the time to wait is only
the time of the longest build.
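The "150 builds on 150 VMs" idea can be sketched as a tiny dispatcher. The `rebuild` stub and package names below are placeholders (a real runner would submit an sbuild/pbuilder job to a cloud VM instead of sleeping):

```python
import concurrent.futures
import time

def rebuild(package: str) -> tuple[str, bool]:
    # Placeholder for one no-change rebuild; a real runner would
    # dispatch an sbuild/pbuilder job to a cloud VM here.
    time.sleep(0.01)  # stands in for the actual build time
    return package, True

packages = [f"pkg{i:03d}" for i in range(150)]

# One worker per package: total wall time approaches the duration of
# the single longest build instead of the sum of all build times.
with concurrent.futures.ThreadPoolExecutor(max_workers=len(packages)) as pool:
    results = dict(pool.map(rebuild, packages))

failed = sorted(p for p, ok in results.items() if not ok)
print(f"{len(results)} rebuilt, {len(failed)} failed")
```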

>> Has any thought gone into having some kind of runners running on a cloud
>> to run these tests, and maybe plugging this into Salsa's CI to run it
>> automatically?
> 
> This seems to be a nice idea!
> I am not sure if someone had the time or energy to do this, but this
> is something we'd definitely love \o/

To make this happen, we have no other way but to use the power of some
kind of cloud / HTC.

>> I'd very much love to set this up, at least as a first
>> experiment on a bunch of packages of the DPMT.
> 
> Me too!

I shall resume packaging nodepool then...

Cheers,

Thomas Goirand (zigo)



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Utkarsh Gupta
Hello,

On Mon, Jun 29, 2020 at 8:24 PM Thomas Goirand  wrote:
> Nice! Thanks a lot for the pointer.

\o/

> I very much agree with you that the debate has to be drained of
> emotion if possible. My goal has never been to point fingers at anyone,
> but to try to fix a recurring situation which I would like to avoid.

Definitely! Everyone would love to avoid that!

> Running the script shows that 279 reverse (build?) dependencies are
> affected by mock. This clearly isn't something one wants to run on a
> personal computer, and even less something one wants to run sequentially.

Haha, right.
What we (me and a couple others) do is run this build script on a
server (via screen) and call it a night :P
And we get the list of broken packages in the morning!

But of course, this is not a very "professional" way of doing it.

> Has any thought gone into having some kind of runners running on a cloud
> to run these tests, and maybe plugging this into Salsa's CI to run it
> automatically?

This seems to be a nice idea!
I am not sure if someone had the time or energy to do this, but this
is something we'd definitely love \o/

> I'd very much love to set this up, at least as a first
> experiment on a bunch of packages of the DPMT.

Me too!


Best,
Utkarsh



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Sandro Tosi
> Running the script shows that 279 reverse (build?) dependencies are
> affected by mock. This clearly isn't something one wants to run on a
> personal computer, and even less something one wants to run sequentially.
>
> Has any thought gone into having some kind of runners running on a cloud
> to run these tests, and maybe plugging this into Salsa's CI to run it
> automatically?
>
> I'd very much love to set this up, at least as a first
> experiment on a bunch of packages of the DPMT.

I sent this some time ago to d-devel

https://lists.debian.org/debian-devel/2020/03/msg00342.html

it didn't get much traction

-- 
Sandro "morph" Tosi
My website: http://sandrotosi.me/
Me at Debian: http://wiki.debian.org/SandroTosi
Twitter: https://twitter.com/sandrotosi




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Thomas Goirand
On 6/29/20 2:33 PM, Utkarsh Gupta wrote:
> There exists such a thing which I use daily: ruby-team/meta[1].
> The meta/build script is (hopefully and exactly) what we need here!
> 
> It checks all the reverse(-build)-dependencies and lets you know what's
> going to break as soon as you dput.

Hi Utkarsh,

Nice! Thanks a lot for the pointer.

I very much agree with you that the debate has to be drained of
emotion if possible. My goal has never been to point fingers at anyone,
but to try to fix a recurring situation which I would like to avoid.

Running the script shows that 279 reverse (build?) dependencies are
affected by mock. This clearly isn't something one wants to run on a
personal computer, and even less something one wants to run sequentially.

Has any thought gone into having some kind of runners running on a cloud
to run these tests, and maybe plugging this into Salsa's CI to run it
automatically?

I'd very much love to set this up, at least as a first
experiment on a bunch of packages of the DPMT.

Your thoughts?

Cheers,

Thomas Goirand (zigo)



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Scott Kitterman
On Monday, June 29, 2020 7:53:46 AM EDT Thomas Goirand wrote:
> On 6/29/20 12:58 PM, Scott Kitterman wrote:
> > On June 29, 2020 10:12:49 AM UTC, Thomas Goirand  wrote:
> >> On 6/29/20 8:34 AM, Ondrej Novy wrote:
> >>> nope, this is not true. Using the newest debhelper compat level is
> >>> recommended, see man page. There is no reason to __not__ upgrade
> >>> debhelper compat level. I will always upgrade debhelper in my packages
> >>> to the newest debhelper as soon as possible. Please never downgrade
> >>> debhelper in my packages again without asking.
> >>
> >> I don't agree this is best practice when backports are to be expected.
> >
> > I'm substantially less enthusiastic about bumping compat levels than
> > Ondrej, but since debhelper 13 is available in buster-backports,
> > backporting is unrelated to whether it's a good idea or not.
> I'm not maintaining OpenStack through the official backports channel,
> because OpenStack users need to have access to all versions of OpenStack
> backported to the current Stable. These backports are available through
> a debian.net channel (available using extrepo).
> 
> Therefore, the debhelper backport is not available in my build
> environment unless I explicitly do some work to make this happen (and
> Ondrej is aware of that). Bumping the debhelper version without a good
> reason just adds unnecessary work maintaining the debhelper backport
> for me. By all means, let's bump to
> version 12. But why version 13 if you don't need the added features?
> This makes no sense to me.

Since you are maintaining an external backports repository, I think it's 
perfectly reasonable to expect packages that would work with Debian Backports 
to be supported.  One debhelper upload per compat level doesn't seem like 
enough work to be worth all this complaining.

Scott K



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Scott Kitterman
On Monday, June 29, 2020 10:17:57 AM EDT Scott Talbert wrote:
> On Mon, 29 Jun 2020, Scott Kitterman wrote:
> >>> More over, mock debhelper was upgraded to 13, for no apparent reason
> >>> (yet another "cosmetic fix" that isn't helping?). I'd like to remind
> >>> everyone that, increasing debhelper compat version to a number that
> >>> isn't in stable, without a specific reason (like the need of a specific
> >>> feature that wasn't there before) is just annoying for anyone
> >>> maintaining backports. That's true even for when debhelper itself is
> >>> backported to oldstable (it's always nicer to be able to build a
> >>> backport without requiring another backport at build time).
> >>>
> >>> nope, this is not true. Using the newest debhelper compat level is
> >>> recommended, see man page. There is no reason to __not__ upgrade
> >>> debhelper compat level. I will always upgrade debhelper in my packages
> >>> to the newest debhelper as soon as possible. Please never downgrade
> >>> debhelper in my packages again without asking.
> >>
> >> I don't agree this is best practice when backports are to be expected.
> >
> > I'm substantially less enthusiastic about bumping compat levels than
> > Ondrej, but since debhelper 13 is available in buster-backports,
> > backporting is unrelated to whether it's a good idea or not.
> 
> Can you elaborate on other reasons not to upgrade the compat levels?
> 
> Scott

This is a matter of personal preference, but since the behavior of debhelper 
changes between compat versions, I prefer not to change it unless I have time 
to thoroughly QA the changes in the package.  This generally means I don't 
change it often.

Unless there are issues with a specific compat level (hello compat 11) or the 
compat level has been deprecated, I tend not to do it, but I'm generally 
pretty minimalist in my package updates.  That doesn't mean someone else is 
wrong to do so if they've checked that the package is correct after the change.

Scott K



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Scott Talbert

On Mon, 29 Jun 2020, Scott Kitterman wrote:

>>> More over, mock debhelper was upgraded to 13, for no apparent reason
>>> (yet another "cosmetic fix" that isn't helping?). I'd like to remind
>>> everyone that, increasing debhelper compat version to a number that
>>> isn't in stable, without a specific reason (like the need of a specific
>>> feature that wasn't there before) is just annoying for anyone
>>> maintaining backports. That's true even for when debhelper itself is
>>> backported to oldstable (it's always nicer to be able to build a
>>> backport without requiring another backport at build time).
>>>
>>> nope, this is not true. Using the newest debhelper compat level is
>>> recommended, see man page. There is no reason to __not__ upgrade
>>> debhelper compat level. I will always upgrade debhelper in my packages
>>> to the newest debhelper as soon as possible. Please never downgrade
>>> debhelper in my packages again without asking.
>>
>> I don't agree this is best practice when backports are to be expected.
>
> I'm substantially less enthusiastic about bumping compat levels than
> Ondrej, but since debhelper 13 is available in buster-backports,
> backporting is unrelated to whether it's a good idea or not.

Can you elaborate on other reasons not to upgrade the compat levels?

Scott



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Utkarsh Gupta
On Sun, Jun 28, 2020 at 11:29 PM Sandro Tosi  wrote:
> OS is *just* another software we package for
> Debian; is it complex? sure, but it's not special, and it doesn't
> warrant any special treatment.

It worries me when you say this.
First of all, that's not completely true. But I don't want to go there.

What I want to emphasize is this:
It's okay if you don't want to treat packages in a "special" way,
that's totally fine!
But what's **not** fine is breaking other packages whilst updating
something.
That is just NOT...okay.

Sometimes it just happens (accidentally or whatever), but I think even
then, the person who does that should at least look at the broken
packages, try to fix them,
and if that lies outside their scope (because of time constraints,
etc.), the
least they can do is report this to the respective maintainer(s) or at
least raise it on the list so that people who can, will help!

If everyone uploaded what they felt like without taking care of what's breaking,
the whole of Debian would just be chaos.

And I think, that's not the way it should be. At all.
And I completely agree with Thomas' statement when he says,
"No, this is not how Debian works, it never was, and hopefully, never will."

I love Ondrej and I love Thomas. And this mail has nothing to do with them.
Instead, this is a mail to everyone.
And while at it, I'd also request everyone to be a little empathetic.
I really hope that's not much to ask, is it!?


> It'd be nice if we had a framework to be able to rebuild all reverse
> build-dependencies when we update a package. But currently, we don't have
> such CI. If one volunteers to write it, probably we can find some
> compute resources to make it happen. That's probably the way out, and
> IMO we should really all think about it.

There exists such a thing which I use daily: ruby-team/meta[1].
The meta/build script is (hopefully and exactly) what we need here!

It checks all the reverse(-build)-dependencies and lets you know what's
going to break as soon as you dput.


Best,
Utkarsh
---
[1]: https://salsa.debian.org/ruby-team/meta



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Thomas Goirand
On 6/29/20 12:58 PM, Scott Kitterman wrote:
> On June 29, 2020 10:12:49 AM UTC, Thomas Goirand  wrote:
>> On 6/29/20 8:34 AM, Ondrej Novy wrote:
>>> nope, this is not true. Using the newest debhelper compat level is
>>> recommended, see man page. There is no reason to __not__ upgrade
>>> debhelper compat level. I will always upgrade debhelper in my
>> packages
>>> to the newest debhelper as soon as possible. Please newer downgrade
>>> debhelper in my packages again without asking.
>>
>> I don't agree this is best practice when backports are to be expected.
> 
> I'm substantially less enthusiastic about bumping compat levels than
> Ondrej, but since debhelper 13 is available in buster-backports,
> backporting is unrelated to whether it's a good idea or not.

I'm not maintaining OpenStack through the official backports channel,
because OpenStack users need to have access to all versions of OpenStack
backported to the current Stable. These backports are available through
a debian.net channel (available using extrepo).

Therefore, the debhelper backport is not available in my build
environment unless I explicitly do some work to make this happen (and
Ondrej is aware of that). Bumping the debhelper version without a good
reason just adds unnecessary work maintaining the debhelper backport
for me. By all means, let's bump to
version 12. But why version 13 if you don't need the added features?
This makes no sense to me.
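For context, since debhelper 12 the compat level is pinned through the debhelper-compat virtual package in Build-Depends, and keeping it at a level shipped in stable is what lets a backport build without first backporting debhelper itself. The fragment below is illustrative only; apart from the debhelper-compat line, the field contents are placeholders, not the actual mock packaging:

```
Source: mock
Build-Depends: debhelper-compat (= 12),
               dh-python,
               python3-all,
               python3-setuptools
```

With `(= 13)` here instead, a buster backport would additionally need debhelper from buster-backports at build time.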

Cheers,

Thomas Goirand (zigo)



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Scott Kitterman



On June 29, 2020 10:12:49 AM UTC, Thomas Goirand  wrote:
>On 6/29/20 8:34 AM, Ondrej Novy wrote:
...
>> More over, mock debhelper was upgraded to 13, for no apparent reason
>> (yet another "cosmetic fix" that isn't helping?). I'd like to remind
>> everyone that, increasing debhelper compat version to a number that
>> isn't in stable, without a specific reason (like the need of a specific
>> feature that wasn't there before) is just annoying for anyone
>> maintaining backports. That's true even for when debhelper itself is
>> backported to oldstable (it's always nicer to be able to build a
>> backport without requiring another backport at build time).
>> 
>> nope, this is not true. Using the newest debhelper compat level is
>> recommended, see man page. There is no reason to __not__ upgrade
>> debhelper compat level. I will always upgrade debhelper in my packages
>> to the newest debhelper as soon as possible. Please never downgrade
>> debhelper in my packages again without asking.
>
>I don't agree this is best practice when backports are to be expected.

I'm substantially less enthusiastic about bumping compat levels than Ondrej, 
but since debhelper 13 is available in buster-backports, backporting is 
unrelated to whether it's a good idea or not.

Scott K



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Thomas Goirand
On 6/29/20 8:34 AM, Ondrej Novy wrote:
> Ondrej, you once cared for the OpenStack packages. Why are you now
> completely careless?
> 
> 
> because it's really hard to cooperate with you. I already tried to
> explain it to you but you didn't listen.

You're mixing 2 things: working on the OpenStack packages, and caring
not to break them. I'm just asking for the latter.

On 6/29/20 8:34 AM, Ondrej Novy wrote:
> yep, that's how it works. We need to move forward and don't keep old,
> buggy and unmaintained packages in Debian, right?

If I'm getting this right, not only do you break things (which is OK if
it isn't on purpose), but you now claim that this is the right thing to
do. No, this is not how Debian works, it never was, and hopefully, never will.

> More over, mock debhelper was upgraded to 13, for no apparent reason
> (yet another "cosmetic fix" that isn't helping?). I'd like to remind
> everyone that, increasing debhelper compat version to a number that
> isn't in stable, without a specific reason (like the need of a specific
> feature that wasn't there before) is just annoying for anyone
> maintaining backports. That's true even for when debhelper itself is
> backported to oldstable (it's always nicer to be able to build a
> backport without requiring another backport at build time).
> 
> nope, this is not true. Using the newest debhelper compat level is
> recommended, see man page. There is no reason to __not__ upgrade
> debhelper compat level. I will always upgrade debhelper in my packages
> to the newest debhelper as soon as possible. Please never downgrade
> debhelper in my packages again without asking.

I don't agree this is best practice when backports are to be expected.

Cheers,

Thomas Goirand (zigo)



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Ondrej Novy
Hi,

On Sun, 28 Jun 2020 at 16:48, Thomas Goirand  wrote:

> Hi,
>
> Under a single Github account, the below packages are maintained:
> - mock
> - subunit
> - testtools
> - fixtures
> - funcsigs (deprecated, py2 backport)
> - testresources
> - traceback2
> - testscenarios
> - testrepository
> - extras
> - linecache2
>
> Currently, these packages are maintained by a variety of DDs, and
> there's no uniform maintenance of them.
>

which is perfectly fine, that's how Debian works.

> The last upload of mock 4.0.2, by Ondrej, broke *at least*:
> - nova (see: #963339)
> - cloudkitty (see: #963069)
> - congress (see: #963312)
> - rally (see: #963381)
>
> All of the 4 packages above were able to build in Bullseye (ie: mock
> 3.0.5) and FTBFS in Sid (with mock 4.0.2).
>
> Well done! :(
>

yep, that's how it works. We need to move forward and not keep old, buggy
and unmaintained packages in Debian, right?

You should add autopkgtest to prevent this. Failed autopkgtest will block
migration. Or we should start using full transitions, which is a bad idea
imho.
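For reference, the autopkgtest hook suggested here lives in debian/tests/control; a minimal sketch follows. The test command is purely illustrative and is not taken from the actual mock packaging:

```
Test-Command: python3 -c "import mock; print(mock.__version__)"
Depends: python3-mock
Restrictions: superficial
```

A failing autopkgtest in a reverse dependency then blocks the new mock from migrating to testing, which is exactly the safety net being discussed.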

> Ondrej, you once cared for the OpenStack packages. Why are you now
> completely careless?
>

because it's really hard to cooperate with you. I already tried to explain
it to you but you didn't listen.


> More over, mock debhelper was upgraded to 13, for no apparent reason
> (yet another "cosmetic fix" that isn't helping?). I'd like to remind
> everyone that, increasing debhelper compat version to a number that
> isn't in stable, without a specific reason (like the need of a specific
> feature that wasn't there before) is just annoying for anyone
> maintaining backports. That's true even for when debhelper itself is
> backported to oldstable (it's always nicer to be able to build a
> backport without requiring another backport at build time).
>

nope, this is not true. Using the newest debhelper compat level is
recommended, see man page. There is no reason to __not__ upgrade debhelper
compat level. I will always upgrade debhelper in my packages to the newest
debhelper as soon as possible. Please never downgrade debhelper in my
packages again without asking.

> I don't want this to happen again. So I am hereby asking to take over
> the maintenance of these packages which aren't in the OpenStack team.
> They will be updated regularly, each 6 months, with the rest of
> OpenStack, following the upstream global-requirement pace. I'm confident
> it's going to work well for me and the OpenStack team, but as well for
> the rest of Debian.
>

For my packages (I'm an uploader): no, sorry.

Reasons:
1. I hate openstack-pkg-tools
2. I like pybuild
3. you hate pybuild and don't want to use it

-- 
Best regards
 Ondřej Nový


Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-28 Thread Thomas Goirand
On 6/28/20 7:59 PM, Sandro Tosi wrote:
>> Is anyone from the team opposing to this?
> 
> Yes, i'm against your proposal.
> 
>> If so, please explain the
>> drawbacks if the OpenStack team takes over.
> 
> 1. you're personally attacking Ondrej, who is one of the very few
> members of this team doing team-wide work, and that should be enough
> to reject it
> 2. this is clearly a hostile take-over (even if you frame it as a
> proposal), and that should be enough to reject it
> 3. you propose to only update those packages every 6 months, I don't
> find it appropriate: OS is *just* another software we package for
> Debian; is it complex? sure, but it's not special, and it doesn't
> warrant any special treatment.
> 4. you clearly want to have sole and absolute control of the packages
> in the openstack-team, because what would happen if a os-team member
> will upgrade one of those packages (in good faith) and things will
> break? will they get another "well done! :( " email from you?
> 4.1. You wonder why Ondrej "stopped caring" about OS, if that's the
> case, i could see why
> 5. consolidating packages *into* the DPMT/PAPT gives a lot of
> benefits, f.e. people basically got "free" handling of the py2removal
> process; moving packages out is actually detrimental for the python
> ecosystem (at least that's my opinion).
> 
> Thomas, this is not the first time your temperament and aggressive
> behavior is causing some troubles, please reassess how you interact
> and work with other fellow contributors.

Sandro,

I'm sorry if the tone was inappropriate. It probably was. Though it's
not *that* harsh toward Ondrej. At least, it's really FAR from the
hostile behavior he had toward me last summer during DebConf, after I
fixed 40 Django RC bugs (due to the Django Python 2 removal), for which
I was thanked with threats.

What I'm describing is what actually happened. Let me explain. In
OpenStack, we have this repository:

https://github.com/openstack/requirements/

in this, you'll see the upper-constraints.txt file. This sets pinning
for the current release of OpenStack, which evolves at the same time as
the project. It's updated often during a cycle of 6 months before a
release, then it is frozen for the release. Right now, the stable/ussuri
branch matches what we have in Sid (so one should be looking at that).
Ondrej used to carefully check for this before doing any upload, as I
mentored him to do so. Now he apparently does not care anymore. Call it
a personal attack if you wish; I still don't think this is right.

When you write that:
> "OS is *just* another software we package for Debian; is it complex?
> sure, but it's not special, and it doesnt warrant any special
> treatment."

I don't agree with you here. Absolutely all of the other distributions
that include OpenStack make sure that nothing breaks it through
careless uploads of incompatible releases of Python modules. Ubuntu
does it, Red Hat as well. Just in Debian, nobody cares but the
maintainers of OpenStack itself.

In fact, let me expand on this further, because it's not the first time
I'm raising this issue: we do not treat Python library updates as
candidates for transitions often enough, and countless uploads break the
world for many. One very good example would be Django, and in the past
we also had SQLAlchemy (though upstream got better for SQLA, so there
are fewer problems with that one). So yeah, OpenStack shouldn't have any
special treatment *IF* we care enough about not breaking things when we
update packages.

It'd be nice if we had a framework to be able to rebuild all reverse
build-dependencies when we update a package. But currently, we don't have
such CI. If one volunteers to write it, probably we can find some
compute resources to make it happen. That's probably the way out, and
IMO we should really all think about it.
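Such a framework could start out as small as the sketch below. It assumes devscripts' `build-rdeps` for the reverse-build-dependency query; the output parsing and the batching policy are my own guesses, not anything agreed in the thread:

```python
import subprocess

def reverse_build_deps(package: str) -> list[str]:
    # devscripts' build-rdeps prints the source packages that
    # Build-Depend on `package`; keep only plausible name lines
    # (the exact output format is assumed here).
    out = subprocess.check_output(["build-rdeps", package], text=True)
    return [line.strip() for line in out.splitlines()
            if line.strip() and not line.startswith(("Reverse", "Found", "-"))]

def waves(packages: list[str], vms: int) -> list[list[str]]:
    # With `vms` builders available, the rebuilds run in
    # ceil(len(packages) / vms) waves; each wave lasts only as
    # long as its slowest build.
    return [packages[i:i + vms] for i in range(0, len(packages), vms)]
```

With the 279 reverse build-dependencies of mock mentioned earlier in the thread and 150 donated VMs, `waves` would plan two rounds of no-change rebuilds.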

Now, please read what Jeremy wrote, and understand that these packages
are really related to OpenStack. Given how tightly coupled they are with
OpenStack, the proposal does make sense.

Also, given how often Debian is released (every 2 years, these days?),
updating packages every 6 months doesn't seem that bad, especially if
you consider the set of packages that I'm talking about. They aren't
updated that often upstream.

Please take a step back and understand what's going on. What I would
like to happen is making sure that things don't break, and currently,
this isn't the case with this set of packages. And this isn't the first
time. So I'm proposing to take measures to make this stop. If you feel
it's a hostile takeover, then OK, we shall find another way. But then
what is your proposal so that it doesn't happen anymore?

Cheers,

Thomas Goirand (zigo)



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-28 Thread Scott Kitterman
On Sunday, June 28, 2020 1:59:08 PM EDT Sandro Tosi wrote:
> 5. consolidating packages *into* the DPMT/PAPT gives a lot of
> benefits, f.e. people basically got "free" handling of the py2removal
> process; moving packages out is actually detrimental for the python
> ecosystem (at least that's my opinion).

Definitely this.

Scott K



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-28 Thread Sandro Tosi
> Is anyone from the team opposing to this?

Yes, i'm against your proposal.

> If so, please explain the
> drawbacks if the OpenStack team takes over.

1. you're personally attacking Ondrej, who is one of the very few
members of this team doing team-wide work, and that should be enough
to reject it
2. this is clearly a hostile take-over (even if you frame it as a
proposal), and that should be enough to reject it
3. you propose to only update those packages every 6 months, I don't
find it appropriate: OS is *just* another software we package for
Debian; is it complex? sure, but it's not special, and it doesn't
warrant any special treatment.
4. you clearly want to have sole and absolute control of the packages
in the openstack-team, because what would happen if a os-team member
will upgrade one of those packages (in good faith) and things will
break? will they get another "well done! :( " email from you?
4.1. You wonder why Ondrej "stopped caring" about OS, if that's the
case, i could see why
5. consolidating packages *into* the DPMT/PAPT gives a lot of
benefits, f.e. people basically got "free" handling of the py2removal
process; moving packages out is actually detrimental for the python
ecosystem (at least that's my opinion).

Thomas, this is not the first time your temperament and aggressive
behavior is causing some troubles, please reassess how you interact
and work with other fellow contributors.


--
Sandro "morph" Tosi
My website: http://sandrotosi.me/
Me at Debian: http://wiki.debian.org/SandroTosi
Twitter: https://twitter.com/sandrotosi



Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-28 Thread Jeremy Stanley
On 2020-06-28 16:48:02 +0200 (+0200), Thomas Goirand wrote:
[...]
> I don't want this to happen again. So I am hereby asking to take
> over the maintenance of these packages which aren't in the
> OpenStack team. They will be updated regularly, each 6 months,
> with the rest of OpenStack, following the upstream
> global-requirement pace. I'm confident it's going to work well for
> me and the OpenStack team, but as well for the rest of Debian.
> 
> Is anyone from the team opposing to this? If so, please explain
> the drawbacks if the OpenStack team takes over.

While I don't agree with Thomas's harsh tone in the bits of the
message I snipped (please Thomas, I'm sure everyone's trying their
best, there's no need to attack a fellow contributor personally over
technical issues), I did want to point out that the proposal makes
some sense. The Testing Cabal folk were heavily involved in
OpenStack and influential in shaping its quality assurance efforts;
so OpenStack relies much more heavily on these libraries than other
ecosystems of similar size, and OpenStack community members, present
and past, continue to collaborate upstream on their development.
-- 
Jeremy Stanley

