On December 23, 2021 12:24:16 AM UTC, Sandro Tosi <mo...@debian.org> wrote:
>> People are expected to do so (coordination/testing etc).
>>
>>
>> - Mistakes happen.
>>
>>
>> BUT:
>>
>>
>> - Apparently some people forgot this and deliberately don't follow it
>> (and I don't mean the accidents that can happen).
>>
>> (In the specific case I have in mind, the maintainer just added a Breaks:
>> without telling anyone,
>>
>> so "communicating" with d-d- c and/or failing autopkgtests..)
>
>There's also a problem of resources: let's take the example of numpy,
>which has 500+ rdeps. Am I expected to:
>
>* rebuild all its reverse dependencies with the new version
>* evaluate which packages failed, and whether those failures are due to the
>new version of numpy or to an already existing/independent cause
>* provide fixes that are compatible with the current version and the
>new one (because we can't break what we currently have and we need to
>prepare for the new version)
>* wait for all of the packages with issues to have applied the patch
>and been uploaded to unstable
>* finally upload to unstable the new version of numpy
>
>?
>
>that's unreasonably long, time-consuming and work-intensive, for several reasons
>
>* first and foremost, rebuilding 500 packages takes hardware resources not
>every DD is expected to have at hand (or pay for, like a cloud
>account), so until there's a ratt-as-a-service
>(https://github.com/Debian/ratt) kinda solution available to every DD,
>do not expect that for any sizable package, but maybe only for the
>ones with the smallest package "networks" (which are also the ones
>causing the least "damage" if something goes wrong),
>* one maintainer vs many maintainers, one for each affected pkg;
>distribute the load (pain?)
>* upload to experimental and use autopkgtests, you say? Sure, that's one
>way, but tracker.d.o currently doesn't show experimental excuses
>(#944737, #991237), so you don't immediately see which packages failed,
>and many packages still don't have an autopkgtest, so that's not really
>covering everything anyway
>* sometimes I ask Lucas to do an archive rebuild with a new version,
>but that's still relying on a single person to run the tests, parse
>the build logs, and open bugs for the failed packages; maybe most
>of it is automated, but not all of it (and you can't really do this for
>every pkg in Debian, because the archive rebuild tool needs 2 config
>files for each package you wanna test: 1. how to set up the build env
>to use the new package, 2. the list of packages to rebuild in that
>env).
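
(For what it's worth, a local ratt run over a freshly built numpy looks
roughly like this; the version string below is only illustrative and it
assumes a working sbuild setup:

  $ sbuild -d unstable numpy_1.22.0-1.dsc      # build the new numpy locally
  $ ratt numpy_1.22.0-1_amd64.changes          # rebuild the reverse-build-deps
                                               # against the just-built .debs

which is exactly the step that doesn't scale to 500+ rdeps on hardware
every DD has at hand.)
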
>
>What exactly are you expecting from other DDs?
>
>unstable is unstable for a reason, breakage will happen, and nobody wants
>to intentionally break (I hope?) other people's work/packages, but
>until we come up with a simple, effective technical solution to the
>"build the rdeps and see what breaks" issue, we will upload to
>unstable and see what breaks *right there*.
>
>Maybe it's just laziness on my part, but there needs to be a cutoff
>between making changes/progress on one side, and dealing with the
>consequences and walking on eggshells on the other, every time there's
>a new upstream release (or even a patch!) and you need to upload a new pkg.

It's not an either/or.

Generally, the Release Team should coordinate the timing of transitions.  New 
libraries should be staged in Experimental first.  Maintainers of rdeps 
should be alerted to the impending transition so they can check if they are 
ready.
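
Finding out who to alert doesn't need special infrastructure.  As a rough
sketch (using python3-numpy purely as the example binary package, and
assuming devscripts is installed), something like:

  $ build-rdeps python3-numpy                          # reverse build-dependencies
  $ apt-cache rdepends python3-numpy \
      | tail -n +3 | tr -d ' |' | sort -u > rdeps.txt  # runtime reverse dependencies
  $ dd-list -i < rdeps.txt                             # group them by maintainer

gives a maintainer-sorted list to use for the heads-up mail to the
affected teams.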

Debian is developed by a team and we should work together to move things 
forward.  Particularly for a big transition like numpy, we all need to work 
together to get the work done.

It's true that breakage will happen in unstable.  We shouldn't be afraid of it, 
but we should also work to keep it manageable.

Scott K
