Jose Luis Rivero wrote:
I would prefer to analyze the causes of the slacker arch (manpower, hardware, etc.) and, if we are not able to solve the problem in any way (asking for new devs, buying hardware, etc.), mark it as experimental and drop all stable keywords.

How is that better? Instead of dropping one stable package, you'd end up dropping all of them. A user could accept ~arch and get the same behavior without any need to mark every other package in the tree unstable. An arch could put a note to that effect in its installation handbook. The user could then choose between a very narrow core of stable packages and a wider universe of experimental ones.
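For concreteness, here's a minimal sketch of how a user would opt in, assuming the standard Portage configuration files (the package atom is hypothetical):

    # /etc/portage/make.conf -- accept ~arch globally, e.g. on alpha
    ACCEPT_KEYWORDS="~alpha"

    # or selectively, one atom per line in
    # /etc/portage/package.accept_keywords:
    app-misc/obscure-utility ~alpha

The per-package route lets a user keep the narrow stable core as the default and pull in testing versions only where they need them.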

I guess the question is whether package maintainers should be forced to maintain packages that are outdated by a significant period of time. Suppose something breaks a version that is three releases behind stable on all arches but one (where it is still the current stable). Should the package maintainer be required to fix it, rather than just delete it? I suspect the maintainer would be more likely to just leave it broken, which doesn't exactly leave things better off for the end users.

I'm sure the maintainers of qt/baselayout/coreutils/etc. will exercise discretion before removing stable versions of those packages. But for some obscure utility that next to nobody uses, does it really hurt to move it from stable back to unstable if the arch maintainers can't keep up?
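To be clear about the mechanics (a hypothetical ebuild snippet, with example arch names): demoting a lagging arch's stable keyword back to testing is a one-character change in the ebuild's KEYWORDS line, not a removal of the package:

    # before: stable on alpha, amd64, and x86
    KEYWORDS="alpha amd64 x86"

    # after: alpha dropped back to testing because its team can't keep up
    KEYWORDS="~alpha amd64 x86"

Users on that arch who accept ~arch (as above) would see no change at all.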

I guess it comes down to the driving issues. How big a problem are stale packages (with the recent move of a few platforms to experimental, is this already a solved problem)? How big a problem do arch teams see in keeping up within 30 days (or maybe 60/90)? What are the practical (rather than theoretical) ramifications?
