Re: De-vendoring gnulib in Debian packages

2024-05-12 Thread Theodore Ts'o
On Sun, May 12, 2024 at 04:27:06PM +0200, Simon Josefsson wrote:
> Going into detail, you use 'gzip -9n' but I use git-archive defaults
> which is the same as -n aka --no-name.  I agree adding -9 aka --best is
> an improvement.  Gnulib's maint.mk also add --rsyncable, would you agree
> that this is also an improvement?

I'm not convinced --rsyncable is an improvement.  It makes the
compressed object slightly larger, and in exchange, if the compressed
object changes slightly, it's possible that when you rsync the changed
file, the transfer might be more efficient.  But in the case of PGP
signed release tarballs, the file is constant; it's never going to
change, and even if there are only slight differences between, say,
e2fsprogs v1.47.0 and e2fsprogs v1.47.1, in practice this is not
something --rsyncable can take advantage of, unless you manually copy
e2fsprogs-v1.47.0.tar.gz to e2fsprogs-v1.47.1.tar.gz and then rsync
e2fsprogs-v1.47.1.tar.gz over it --- and I don't think anyone is doing
this, either automatically or manually.
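
Concretely, the knob in question is just an extra gzip flag; a quick
way to measure its size cost on any repo, assuming gzip 1.7 or later
for --rsyncable support:

    git archive --prefix=foo/ HEAD | gzip -9n > plain.tar.gz
    git archive --prefix=foo/ HEAD | gzip -9n --rsyncable > rsyncable.tar.gz
    ls -l plain.tar.gz rsyncable.tar.gz   # the --rsyncable one is slightly larger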

That being said, --rsyncable is mostly harmless, so I don't have
strong feelings about adding or removing it in someone's release
workflow.

> Right, there is no requirement for orig.tar.gz to be filtered.  But then
> the outcome depends on upstream, and I don't think we can convince all
> upstreams about these concerns.  Most upstream prefer to ship
> pre-generated and vendored files in their tarballs, and will continue to
> do so.

Well, your blog entry does recognize some of the strong reasons why
upstreams will probably want to continue shipping them.  First of all,
not all compilation targets are guaranteed to have autoconf, automake,
et al. installed.  E2fsprogs is portable to Windows, MacOS, AIX,
Solaris, HPUX, NetBSD, FreeBSD, and GNU/Hurd, in addition to Linux.
If a package subscribes to the view of 'all the world's Linux, and
nothing else exists / we have no interest in supporting anything
else', I'd ask the question, why are they using autoconf in the first
place?  :-)

Secondly, I have gotten burned by autoconf and the aclocal macros
changing in incompatible ways between versions.  So my practice is to
check into git the configure script as generated by autoconf on Debian
testing, which is my development system; and if it fails on anything
else, or when a new version of autoconf or automake, etc. causes my
configure script to break, I can curse and fix it myself, instead of
inflicting the breakage on people who are downloading and trying to
compile e2fsprogs.

> Let's assume upstream doesn't ship minimized tarballs that are
> free from vendored or pre-generated files.  That's the case for most
> upstream tarballs in Debian today (including e2fsprogs, openssh,
> coreutils).  Without filtering that tarball we won't fulfil the goals I
> mentioned in the beginning of my post.  The downsides with not filtering
> include (somewhat repeating myself):
>
> ...

Your arguments are made in a very general way --- there are potential
problems for _all_ autogenerated or vendored files.  However, I think
it's possible to simplify things by explicitly restricting the problem
domain to those files auto-generated by autoconf, automake, libtool,
etc.  For example, the argument that this opens things up for bugs
could be addressed by having common code in a debhelper script that
re-generates all of the autoconf and related files.  This addresses
your "tedious" and "fragile" arguments.

And if you are always regenerating those files, you don't need to
audit them, since they are going to be replaced anyway, no?  And the
generated files from autoconf and friends have well understood
licensing concerns.

And by the way, all of your concerns about vendored files, and all of
my arguments for why they're no big deal, apply to gnulib source files
too, no?  Why are you so insistent that upstream must never, ever ship
vendored files --- when, as far as I can tell, you are not making this
argument for gnulib?

Yes, it's simpler if we have procrustean rules of the form "everything
MUST be shared libraries" and "never, EVER have generated or vendored
files".  However, I think we're much better off with targeted
solutions which fix 80 to 90% of the cases.  We agree that gnulib
isn't going to be a shared library; but the argument in favor of that
means that there are exceptions, and I think we need to have similar
accommodations for files like configure and config.{guess,sub}.

Upstream *is* going to be shipping those files, and I don't think it's
worth it to deviate from upstream tarballs just to filter out those
files, even if it makes some things simpler from your perspective.  So
I do hear your arguments; it's just that, on balance, my opinion is
that it's not worth it.

Cheers,

- Ted



Re: De-vendoring gnulib in Debian packages

2024-05-12 Thread Theodore Ts'o
On Sat, May 11, 2024 at 04:09:23PM +0200, Simon Josefsson wrote:
>The current approach of running autoreconf -fi is based on a
>misunderstanding: autoreconf -fi is documented to not replace certain
>files with newer versions:
>https://lists.nongnu.org/archive/html/bug-gnulib/2024-04/msg00052.html

And the root cause of *this* is because historically, people put their
own custom autoconf macros in aclocal.m4, so if autoreconf -fi
overwrote aclocal.m4, things could break.  This also means that
programmatically always doing "rm -f aclocal.m4 ; aclocal --install"
will break some packages.
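
A regeneration script can at least check whether aclocal.m4 is
machine-generated before blowing it away --- a minimal sketch, relying
on the header comment that aclocal writes into the files it generates:

    if head -1 aclocal.m4 | grep -q 'generated automatically by aclocal'; then
        rm -f aclocal.m4
        aclocal --install
    else
        echo "aclocal.m4 looks hand-maintained; leaving it alone" >&2
    fi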

The best solution to this is to try to encourage people to put those
autoconf macros that they are manually maintaining (and that aclocal
can't supply) into acinclude.m4, which is now included by default by
autoconf in addition to aclocal.m4.  Personally, I think the two names
are confusing, and if it weren't for historical reasons, perhaps they
should have been swapped, but oh, well...

(For example, I have some custom local autoconf macros needed to
support MacOS in e2fsprogs's acinclude.m4.)

> 1) Use upstream's PGP signed git-archive tarball.

Here's how I do it in e2fsprogs which (a) makes the git-archive
tarball be bit-for-bit reproducible given a particular git commit ID,
and (b) minimizes the size of the tarball when stored using
pristine-tar:

https://github.com/tytso/e2fsprogs/blob/master/util/gen-git-tarball
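
The essential trick --- git archive's output is deterministic for a
given commit, and gzip's -n omits the timestamp from the compressed
stream --- boils down to a single pipeline (the script at the URL
handles the details; ${ver} and ${commit} are supplied by it):

    git archive --prefix=e2fsprogs-${ver}/ ${commit} | gzip -9n > e2fsprogs-${ver}.tar.gz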

> To reach our goals in the beginning of this post, this upstream tarball
> has to be filtered to remove all pre-generated artifacts and vendored
> code.  Use some mechanism, like the debian/copyright Files-Excluded
> mechanism to remove them.  If you used a git-archive upstream tarball,
> chances are higher that you won't have to do a lot of work especially
> for pre-generated scripts.

Why does it *have* to be filtered?  For the purposes of building, if
you really want to nuke all of the pre-generated files, you can just
move them out of the way at the beginning of the debian/rules run, and
then move them back as part of "debian/rules clean".  Then you can use
autoreconf -fi to your heart's content in debian/rules (modulo
possibly breaking things if you insist on nuking aclocal.m4 and
regenerating it without taking proper care, as discussed above).
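
A hedged sketch of the move-aside idea as debian/rules targets (the
file list and target names are hypothetical, and recipe lines need
tabs in a real rules file; hook stash-pregen in before autoreconf runs
and unstash-pregen into the clean target):

    PREGEN := configure aclocal.m4 config.guess config.sub

    stash-pregen:
            mkdir -p debian/pregen-stash
            mv $(PREGEN) debian/pregen-stash/

    unstash-pregen:
            mv debian/pregen-stash/* .
            rmdir debian/pregen-stash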

This also allows the *.orig.tar.gz to be the same as the upstream
signed PGP tarball, which you've said is the ideal, no?

> There is one design of gnulib that is important to understand: gnulib is
> a source-only library and is not versioned and has no release tarballs.
> Its release artifact is the git repository containing all the commits.
> Packages like coreutils, gzip, tar etc pin to one particular commit of
> gnulib.

Note that how we treat gnulib is a bit different from how we treat
other C shared libraries, where we claim that *all* libraries must be
dynamically linked, and that including source code by duplication is
against Debian Policy, precisely because of the toil needed to update
all of the binary packages should some security vulnerability get
discovered in a library which is either linked statically or
included by code duplication.

And yet, we seem to have given a pass for gnulib, probably because it
would be too awkward to enforce that rule *everywhere*, so apparently
we've turned a blind eye.

I personally think "everything must be dynamically linked" is not
really workable in real life, and should be an aspirational goal ---
and the fact that we treat gnulib differently is a great proof point
that the current Debian policy is not really doable in real life if it
were enforced strictly, everywhere, with no exceptions...
Certainly for languages like Rust, it *can't* be enforced, so again,
that's another place where that rule is not enforced consistently; if
it were, we wouldn't be able to ship Rust programs.

- Ted



Re: New supply-chain security tool: backseat-signed

2024-04-11 Thread Theodore Ts'o
On Thu, Apr 11, 2024 at 03:37:46PM +0100, Colin Watson wrote:
> 
> When was the last time this actually happened to you?  I certainly
> remember it being a problem in the early 2.5x days, but it's been well
> over a decade since this actually bit me.

I'd have to go through git archives, but I believe the last time was
when aclocal replaced one of the macros in aclocal.m4, and the updated
macro was not backwards compatible.

- Ted



Re: New supply-chain security tool: backseat-signed

2024-04-11 Thread Theodore Ts'o
On Sat, Apr 06, 2024 at 04:30:44PM +0100, Simon McVittie wrote:
> 
> But, it is conventional for Autotools projects to ship the generated
> ./configure script *as well* (for example this is what `make dist`
> outputs), to allow the project to be compiled on systems that do not
> have the complete Autotools system installed.

Or, because some upstream maintainers have learned through long,
bitter experience that newer versions of the autoconf tools may result
in the generated configure script being busted (sometimes subtly), and
so they distrust relying on blind autoreconf always working.

(For Debian, I always make sure that the upstream configure script is
generated by autoconf on a Debian testing system, and yes, I have had
to make adjustments to the "preferred form of modification" files so
that the resulting configure script works.  For me, it's not that the
configure file is the preferred form of modification, but rather, the
preferred form of distribution.)

Yes, I realize that the logical follow-on to this is that perhaps we
should just abandon autotools completely; unfortunately, I'm not quite
willing to make the assertion "all the world's Linux and I don't care
about portability to non-Linux systems" à la the position taken by the
systemd maintainers --- and for all its faults, autoconf still has
decades of portability work that is not easy to replace.

   - Ted



Re: xz backdoor

2024-04-01 Thread Theodore Ts'o
On Mon, Apr 01, 2024 at 04:47:05PM +0100, Colin Watson wrote:
> On Mon, Apr 01, 2024 at 08:13:58AM -0700, Russ Allbery wrote:
> > Bastian Blank  writes:
> > > I don't understand what you are trying to say.  If we add a hard check
> > > to lintian for m4/*, set it to auto-reject, then it is fully irrelevant
> > > if the upload is a tarball or git.
> > 
> > Er, well, there goes every C package for which I'm upstream, all of which
> > have M4 macros in m4/* that do not come from an external source.
> 
> Ditto.  And a bunch of the packages where I'm not upstream too, such as
> that famously enthusiastic adopter of all things GNU, OpenSSH.

For e2fsprogs, almost all the M4 macros come from an external source;
but I had to patch one of the macros so that it would work on *BSD
when using pmake as opposed to GNU make.  And in another case, I
copied the macro from another package's git repo to fix a portability
issue with Mac OS X.

So it's highly likely that if you added a hard check in Lintian, both
of these would trigger for e2fsprogs.

Portability is hard.  Let's go shopping!

- Ted



Re: Validating tarballs against git repositories

2024-04-01 Thread Theodore Ts'o
On Mon, Apr 01, 2024 at 06:36:30PM +0200, Vincent Bernat wrote:
>
> I think that if Debian was using git instead of the generated tarball, this
> part of the backdoor would have just been included in the git repository as
> well. If we were able to magically switch everything to git (and we won't,
> we are not even able to agree on simpler stuff), I don't think it would have
> prevented the attack.

I'm not sure how much it would have helped, but I think the theory
behind eliminating the gap between the release tarball and the git
tree is that in 2024, more developers are likely to be building and
testing against the git tree, and so the exploit might have been
noticed sooner.  After all, Jia Tan decided it was worthwhile to
check in 99% of the exploit in git, but to only enable it when it was
built from the release tarball.  If the exploit had always been active
when built from the git tree, perhaps someone might have noticed it
before Debian uploaded the trojan'ed binary package to unstable and,
a week or so later, had it promoted to testing.

I'm not sure how likely that would be for the specific case of
xz-utils, since it appears the number of developers (not just
Maintainers) was extremely small, but presumably Jia Tan decided to do
things in that way in the hopes of making it less likely that the
malware would be noticed.

- Ted



Re: Validating tarballs against git repositories

2024-04-01 Thread Theodore Ts'o
On Sat, Mar 30, 2024 at 08:44:36AM -0700, Russ Allbery wrote:
> Luca Boccassi  writes:
> 
> > In the end, massaged tarballs were needed to avoid rerunning autoconfery
> > on twelve thousands different proprietary and non-proprietary Unix
> > variants, back in the day. In 2024, we do dh_autoreconf by default so
> > it's all moot anyway.
> 
> This is true from Debian's perspective.  This is much less obviously true
> from upstream's perspective, and there are some advantages to aligning
> with upstream about what constitutes the release artifact.

My upstream perspective is that I've been burned repeatedly by
incompatible version changes in the autotools programs which cause my
configure.{in,ac} file to no longer create a working configure script,
or which cause subtle breakages.  So my practice is to run autoconf
on my Debian testing development system before checking in the
configure.ac and configure files --- but I ship the generated files
and I don't tell people to run autoreconf before running ./configure.
And if things break after they run autoreconf, I tell them, "you ran
autoreconf; you get to keep both pieces".

And there *have* been times when autoconf has gotten updated in Debian
testing, and the resulting configure script has broken, at which point
I curse at autotools, and fix the configure.ac and/or aclocal.m4
files, etc., and *then* check in the generated configure file and
autotool source files.

> Yes, perhaps it's time to switch to a different build system, although one
> of the reasons I've personally been putting this off is that I do a lot of
> feature probing for library APIs that have changed over time, and I'm not
> sure how one does that in the non-Autoconf build systems.  Meson's Porting
> from Autotools [1] page, for example, doesn't seem to address this use
> case at all.

The other problem is that many of the other build systems are much
slower than autoconf/make.  (Note: I don't use libtool, because it's
so d*mn slow.)  Or building with the alternate system might require a
major bootstrapping phase, or require downloading a JVM, etc.

> Maybe the answer is "you should give up on portability to older systems as
> the cost of having a cleaner build system," and that's not an entirely
> unreasonable thing to say, but that's going to be a hard sell for a lot of
> upstreams that care immensely about this.

Yeah, that too.  There are still people building e2fsprogs on AIX,
Solaris, and other legacy Unix systems, and I'd hate to break them, or
require a lot of pain for people who are building on MacPorts, et al.
It hasn't been *all* that long ago that I started requiring C99
compilers...

That being said, if someone were worried about a Jia Tan-style attack
on e2fsprogs: first of all, you can verify that configure corresponds
to what autoconf on Debian testing generated at the time the archive
was created, and the officially released tar file is generated via:

git archive --prefix=e2fsprogs-${ver}/ ${commit} | gzip -9n > $fn

... and the release tarballs are also in the pristine-tar branch of
e2fsprogs.  So even if the kernel.org (preferred) and sourceforge.net
(legacy) servers for the e2fsprogs tar files completely implode, and
you only have access to the git repo, you can still get the original
e2fsprogs tar files using pristine-tar.
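
For example, something like the following should reproduce a released
tarball from nothing but the git repo (the tarball name is
hypothetical; use whatever name is recorded in the pristine-tar
branch):

    git clone https://github.com/tytso/e2fsprogs.git
    cd e2fsprogs
    git branch pristine-tar origin/pristine-tar   # make the branch available locally
    pristine-tar checkout e2fsprogs-1.47.0.tar.gz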

- Ted



Re: 64-bit time_t transition in progress in unstable

2024-03-06 Thread Theodore Ts'o
On Wed, Mar 06, 2024 at 12:33:08PM -0800, Steve Langasek wrote:
> 
> Aside from the libuuid1t64 revert, for which binNMUs have been scheduled, I
> actually would expect unstable to be dist-upgradeable on non-32-bit archs:
> either the existing non-t64 library will be kept installed because nothing
> yet needs the t64 version, or something does want the t64 version and apt
> will accept it as a replacement for the non-t64 version because it Provides:
> the non-t64 name.
> 
> So once the libuuidt64 revert is done (later today?), if apt dist-upgrade is
> NOT working, I think we should want to see some apt output showing what's
> not working.

Sorry, I've been crazy busy, so I didn't have time to object to
libuuid1t64 as being completely unnecessary before it had rolled out
to unstable.  Similarly, libcom-err2 and libss2 don't use time_t, so
the rename to ...t64 was completely unnecessary.

On my todo list was to figure out how to revert them, but given that
libuuid1t64 has been causing problems and has required the revert, I
was planning on waiting for the dust to settle before trying to fix up
libcom-err2 and libss2.

(None of this is intended to be a criticism of the team working on the
time_t transition; I understand how it's hard to figure out whether a
library has a time_t exported in its interface.  Unfortunately, I had
less than a week to respond, and it happened while I was travelling,
so I didn't have time to review before I saw the upload to unstable,
and by then I figured it was too late for me to object.)

 - Ted



Re: Policy: should libraries depend on services (daemons) that they can speak to?

2024-01-15 Thread Theodore Ts'o
On Mon, Jan 08, 2024 at 11:18:09AM +, Simon McVittie wrote:
> On Mon, 08 Jan 2024 at 08:21:08 -, Sune Vuorela wrote:
> > Maybe the question is also a bit .. "it depends".
> ...
> > So that users actually likely get a system that works?
> 
> I think the fact that we argue about this every few years, with no simple
> conclusion, is adequate evidence that the answer is "it depends". We're
> balancing two competing factors: "make the system work by default" implies
> that *something* needs to be responsible for pulling in required services
> at least some of the time, while "make the system flexible" implies that
> we should not be pulling in all of the services all of the time.

I'll argue that best practice is that upstream should make the shared
library useful *without* the daemon, but if the daemon is present,
perhaps the shared library can do a better job.

For example, when I implemented libuuid: if you want to create a huge
number of UUID's very quickly, because you're a large enterprise
resource planning application, then the uuidd daemon will allow
multiple processes to request "chunks" of UUID space, and create
unique UUID's without having to go through some kind of locking
protocol using a single shared state file.

So libuuid works just fine without uuidd, but if you are populating a
large ERP system, then you very much will want uuidd to be installed.
So in that case, you can make the dependency relationship either a
Suggests or a Recommends, instead of a hard dependency.
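
(As an aside, util-linux's uuidd client mode can exercise exactly this
bulk-request path --- assuming a running daemon, something like:

    uuidd -t -n 1000    # request a batch of 1000 time-based UUIDs

is essentially what an ERP-scale consumer does through libuuid.)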

Of course, that's an upstream design consideration, and not all
upstreams are so forward looking... in their design.

> 
> Meanwhile, some distributions are more opinionated than Debian,
> have chosen a distro-wide preferred implementation for each swappable
> component, and make it quite difficult to exclude those components or
> swap them for alternatives. We probably don't want to do that either.

Well, right.  And if the distribution's primary market is enterprise
customers, such that using an Enterprise Resource Planning system is
highly likely (even if said ERP is proprietary software sold by a very
large German company), you might decide that it's worthwhile to
install uuidd by default, especially since it's a relatively small
daemon.  But if you're a distribution that thinks that every last
kilobyte matters, because you might be used in a docker context (for
example), then you might want to make different choices.

Cheers,

- Ted



Re: Linking coreutils against OpenSSL

2023-11-11 Thread Theodore Ts'o
On Sat, Nov 11, 2023 at 09:32:46AM +0200, Julian Andres Klode wrote:
> 
> WRT dlopen(), this is never an appealing solution because you do not
> get any type-checking, you have to make sourceful changes for an soname
> bump. It is an interface you can use for loading plugins (that is, you
> should be in control of what the interface is that you are loading from
> the library object), but it's not really usable for stuff that is outside
> of your control.

There are some caveats, yes, but I *have* made it work.  For example,
see [1].  This allows debugfs in e2fsprogs to use GNU readline (or
BSD's libedit), which is *super* convenient, without requiring forcing
GNU readline to be placed in emergency boot floppies or being added to
essential.

[1] https://github.com/tytso/e2fsprogs/blob/master/lib/ss/get_readline.c

For things which are optional, if there is a soname bump, things won't
stop working.  In the case of coreutils and OpenSSL, sha1sum will just
get a bit slower --- in fact, running at the speed it has today, which
has never been a problem for me, FWIW.  And yes, to accommodate a
soname bump you might need to make some sourceful changes, but it's
often not that hard to do.  At least for the libreadline functionality
used by debugfs, it's mainly doing a bit of testing, concluding "nope,
the ABI didn't change for the functions that libss cares about", and
just adding the additional SONAME to:

#define DEFAULT_LIBPATH 
"libreadline.so.8:libreadline.so.7:libreadline.so.6:libreadline.so.5:libreadline.so.4:libreadline.so:libedit.so.2:libedit.so:libeditline.so.0:libeditline.so"

:-)
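
To see which of those candidate sonames actually exist on a given
system, the runtime linker's cache can be queried:

    ldconfig -p | grep -E 'libreadline|libedit'

which is a quick sanity check of the search list above.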

As another example, I have a pending patch to e2fsprogs that I will
shortly be integrating which allows mke2fs to optionally use
libarchive, so that mke2fs -d can take a tarball, but again, without
expanding the dependencies for e2fsprogs.  It's not *that* bad.

- Ted



Re: Linking coreutils against OpenSSL

2023-11-10 Thread Theodore Ts'o
On Thu, Nov 09, 2023 at 11:13:51PM +, Luca Boccassi wrote:
> > Alternatively, what would you think about making sha256sum etc.
> > divertible and providing implementations both with and without the
> > OpenSSL dependency?
> 
> Please, no, no more diversion/alternatives/shenanigans, it's just huge
> and convoluted complications for no real gain.

Agreed, let's not.

If you can get upstream a patch so that coreutils could try to dlopen
OpenSSL and use it if it is available, but skip it if it is not, that
might be one way to avoid OpenSSL going into essential.  The challenge
is that OpenSSL is not known for its ability to maintain a stable ABI,
but if we only care about supporting a specific version of OpenSSL
(the one shipped in the same release as coreutils), and given that the
fallback is a slower sha256sum, which IMHO is *not* a disaster,
perhaps it's doable?

- Ted



Re: lpr/lpd

2023-09-22 Thread Theodore Ts'o
On Fri, Sep 22, 2023 at 11:07:39PM +0900, Simon Richter wrote:
> Yes and no. We're providing a better service than pulling the rug under the
> users, but we could do better by communicating that these packages are in
> need of new maintainers instead of waiting for them to bit-rot to a point
> where we have an excuse to remove them -- because all we're doing that way
> is justify the rug pull, but the impact for the users isn't minimized.

There are two things that we can call for, and we probably should try
to do both.  The first is making sure that we have an active *Debian*
maintainer.  The other is to see if we can find an *upstream*
maintainer.  And for that, we can take a look at a wider set of
potential maintainers.  For example, in the case of rlpr, it is also
packaged by FreeBSD and NetBSD.  So perhaps there is some joint
upstream collaboration that could be done with folks from different
distributions, or even different OS's.

Just a thought,

- Ted



Re: Potential MBF: packages failing to build twice in a row

2023-08-09 Thread Theodore Ts'o
On Tue, Aug 08, 2023 at 10:26:09AM +0200, Helmut Grohne wrote:
> As a minor data point, I also do not rely on `debian/rules clean` to
> work for reproducing the original source tree, because too many packages
> fail it.
> 
> Let me point out though that moving to git-based packaging is not the
> property that is relevant here. I expect that most developers use either
> sbuild or pbuilder for the majority of their builds. Both tools create a
> .dsc, copy that .dsc into a chroot, unpack, build, and dispose of it. So
> we effectively have at least three ways of cleaning source packages:
> 
> a) `debian/rules clean`
> b) Some VCS (and that's probably just git)
> c) Copy the source before build and dispose the entire copy

For what it's worth, my packages are managed using git, and sometimes
I'll use git-buildpackage (with sbuild as the backend), dgit (for
releases to unstable; for some reason it mysteriously fails when doing
uploads to backports), as well as dpkg-buildpackage in the git
repository.

Because I *do* run dpkg-buildpackage for my test builds, I actually
have an incentive to make "./debian/rules clean" work correctly:
running dpkg-buildpackage leaves modified files all over my
repository's working directory, and it's *useful* that "debian/rules
clean" gets my repository back to a clean state.  I could do "git
reset --hard", but sometimes I have locally modified files in the
working directory, and "git reset --hard" would blast all of that,
whereas "./debian/rules clean" does what I want.

Cheers,

- Ted



Re: snapshot.d.o has been in a bad state for several months

2023-08-09 Thread Theodore Ts'o
On Wed, Aug 09, 2023 at 08:31:09AM +0200, Bjørn Mork wrote:
> "Theodore Ts'o"  writes:
> 
> > I was curious about this, since I rely on snapshots.debian.org in
> > order to create repeatable builds for a file system test appliance, so
> > I started digging a bit.  Looking at the debian-bugs pseudo-package
> > "snapshot.debian.org":
> >
> > https://bugs.debian.org/cgi-bin/pkgreport.cgi?package=snapshot.debian.org
> >
> > the maintainer is listed as:
> >
> > "snapshot.debian.org Team "
> >
> > But according to lists.debian.org, "debian-shaphots" is a dead list,
> 
> "debian-shaphots" != "debian-shapshot" :-)
> 
> https://lists.debian.org/debian-snapshot/

Ah, I see.  I was looking for the mailing list under:

https://lists.debian.org/devel.html

and that seems to be where the old debian-snapshots list was.  I guess
at some point ten years ago that list was killed, and debian-snapshot
was created under:

   https://lists.debian.org/misc.html


BTW, it also looks like not only are some snapshots not being taken,
some of the snapshots are missing packages.   For example:

   https://snapshot.debian.org/archive/debian/20230806T091912Z/

is missing the package libc-dev-bin, and:

   https://snapshot.debian.org/archive/debian/20230807T150823Z/

is missing the package dbus.  Which is something that I'm finding when
I try building a kvm-xfstests VM using:

https://github.com/tytso/xfstests-bld/blob/master/test-appliance/gen-image

Ah, well, I guess I'll try the snapshot for 20230805T151946Z next...

   - Ted



Re: snapshot.d.o has been in a bad state for several months

2023-08-08 Thread Theodore Ts'o
On Wed, Aug 02, 2023 at 01:33:11PM +0200, Johannes Schauer Marin Rodrigues 
wrote:
> Hi,
> 
> snapshot.debian.org is getting worse again. There is not a single snapshot for
> August yet and the last days of July are spotty:
> 
> http://snapshot.debian.org/archive/debian/?year=2023&month=7
> 
> None for the 29. and only a single timestamp for the 26., 27., 28. and 30.
> There should be four per day. The situation is even worse for other archives.
> For debian-ports, for the month of July, there are only 22 snapshots overall:
> 
> http://snapshot.debian.org/archive/debian-ports/?year=2023&month=7
> 
> This problem has been known for half a year already:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1031628
> 
> But that bug got closed in favor of #1029744 which was filed because
> debian-ports had no snapshots at all for January and only three for February
> this year but there is no reply to that bug.
> 
> In #1031628 Julien said that there is "not much we can do about it at the
> moment".
> 
> What is the status of this problem? What is needed to fix it? Is this just a
> problem of computational and/or storage resources which an be fixed by the
> funds available to Debian?

I was curious about this, since I rely on snapshots.debian.org in
order to create repeatable builds for a file system test appliance, so
I started digging a bit.  Looking at the debian-bugs pseudo-package
"snapshot.debian.org":

https://bugs.debian.org/cgi-bin/pkgreport.cgi?package=snapshot.debian.org

the maintainer is listed as:

"snapshot.debian.org Team "

But according to lists.debian.org, "debian-shaphots" is a dead list,
and apparently the last archived message to the list is from September
2001:

https://lists.debian.org/debian-snapshots/

That seems unfortunate.

- Ted



Re: /usr-merge: continuous archive analysis

2023-07-12 Thread Theodore Ts'o
On Wed, Jul 12, 2023 at 03:34:38PM +0200, Helmut Grohne wrote:
> ## No good solution for bookworm-backports
> 
> There is one major issue where I don't have a good answer:
> bookworm-backports. When this originally surfaced, Luca Boccassi
> suggested that we do not canonicalize in backports. That's easier said
> than done as the support for split-/usr will soon vanish from packages

> ... Adding such intrusive changes to
> bookworm-backports and pulled by a significant fraction of backports
> sounds bad to me. The alternative here is that backporting will become a
> lot harder as those performing backports would have to undo the
> canonicalization.

For those packages that are likely to be backported, would it be
possible to provide some tools so that package maintainers can make it
easy to have the debian/rules file detect whether it is being built on
a distro version that might have split-/usr, and whether the package
needs to do various mitigations or not?
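
Something along these lines might do, as a hedged sketch in
debian/rules (the version test is an assumption on my part, and note
that testing/unstable carry no VERSION_ID in /etc/os-release, so they
fall through to the merged-/usr case):

    # bookworm is VERSION_ID 12; newer releases and testing/sid fall through.
    OSVER := $(shell . /etc/os-release 2>/dev/null && echo $${VERSION_ID:-99})
    ifeq ($(shell test $(OSVER) -le 12 && echo yes),yes)
    SPLIT_USR_MITIGATIONS := yes
    endif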

I've done that by hand, since for a while I was maintaining the debian
directory in e2fsprogs (yes, I know, bad bear, you're not supposed to
do that), but one of the reasons why I did this is that I had *one*
set of debian files that would successfully build on Debian stable,
Debian testing, Debian oldstable, Debian oldoldstable, some random
Ubuntu LTS versions, *and* Google's Prod-NG[1] variant.  That's
because I wanted to allow people to check out the latest version of
e2fsprogs from the git tree, and build it on a variety of distro
versions, even though e2fsprogs upstream had added some new binaries,
or some new config files, since Debian oldoldstable had been released.

[1] https://marc.merlins.org/linux/talks/ProdNG-LinuxCon2013/ProdNG.pdf

I haven't kept up with it, since it's not really needed any more
(Google has since migrated away from ProdNG to something else, and I
stopped caring about Ubuntu LTS :-), but the hardest part was dealing
with various different versions of debhelper.

The point is, before we lift the freeze, perhaps we can provide some
tools that make it easier for package maintainers to only "make
split-/usr support vanish" conditionally, so as to make life easier
for people who are doing the bookworm and bullseye backports?

I don't mind keeping some buster and bullseye and bookworm schroots
around, and doing test-builds of the packages I build, and then making
minor adjustments as necessary to make sure things still work.
Combined with some test automation so that we can test to see whether
a package about to be uploaded to bullseye-backports won't break on a
split-/usr machine, and this should be quite doable.

Of course, this may be more effort than people are willing to do...

- Ted



Re: i386 in the future (was Re: 64-bit time_t transition for 32-bit archs: a proposal)

2023-06-01 Thread Theodore Ts'o
On Wed, May 31, 2023 at 12:51:06AM +0200, Diederik de Haas wrote:
> 
> I would be VERY disappointed if Debian would abandon people who do NOT have 
> the means to just buy new equipment whenever they feel like it.

Debian is a Do-ocracy.  Which is to say, it's a volunteer project.
People work on what they feel like working on.  Trying to guilt-trip
people into working on something because they *should* often doesn't
work well.

If you'd like to make sure that i386 isn't abandoned, why don't you
roll up your sleeves, step forward, and volunteer to help?

Cheers,

- Ted



Re: Consultation on license documents

2023-03-17 Thread Theodore Ts'o
On Fri, Mar 17, 2023 at 09:09:22PM +0800, 刘涛 wrote:
> Hello, I have the following questions to consult and look forward to your 
> authoritative answers.
> 
> 1. Must various software packages in the Debian community contain a
> license file "license.txt"? Without this file, how does the users
> know about the license usage of the package?

Debian packages have licensing information in
/usr/share/doc/<package>/copyright.

There is no consensus in the global, upstream open source movement
about where the licensing information should be found in the source
distribution for an open source package.  I will typically look at the
COPYING file, and the README file, and I'd say that most of the time,
I can find the licensing information there.  However, we (the Debian
community) do not have the authority to mandate a standard place for
upstream software packages to place the licensing information.

It is the responsibility of the Debian maintainer, when they are
packaging a software package for Debian, to find the copyright and
licensing information and then arrange to make sure that when the
package is installed, the licensing information is installed in
/usr/share/doc/<package>/copyright, and in the debian/copyright file
in the Debian source package.

There is a proposed standard promulgated by the Linux Foundation
called SPDX[1], which has been standardized by the International
Organization for Standardization (ISO) as ISO/IEC 5962:2021.  This is
a scheme for tagging source files, which is important because
licensing information is very often much more fine-grained than the
level of a single package.  This is why the Debian copyright
format[2], DEP-5, can also provide copyright information on a
per-source-file basis.

[1] https://spdx.dev/
[2] https://dep-team.pages.debian.net/deps/dep5/

For companies interested in license compliance, this particular
article, "Open-Source License Compliance in Software Supply
Chains"[3], may be useful.  It was published in the book Towards
Engineering Free/Libre Open Source Software (FLOSS) Ecosystems for
Impact and Sustainability.

[3] 
https://dirkriehle.com/publications/2017-selected/license-clearance-in-software-product-governance/

These days, there is a lot of work by people interested in open source
supply chains who are now worrying about being able to track libraries
used in products and companies' production code, not just from the
perspective of copyright license compliance, but for security reasons
as well.  For example, at the 2022 Linux Foundation Member Summit[4],
there were four sessions, including two keynotes, on this subject.
Slides and video for the keynote talks are available; slides are
linked off of the session descriptions.  The video of the keynotes
is available here[5].

[4] 
https://events.linuxfoundation.org/archive/2022/lf-member-summit/program/schedule/
[5] https://www.youtube.com/watch?v=BltvpGfqz14


> 2. I found that each software package has a "Copyleft" document, and
> a lot of license information is also listed in this
> document. Therefore, I would like to ask, when the two documents
> "license.txt" and "Copyleft" exist in the software package at the
> same time, which one should the user take as the basis, and how to
> deal with the situation where the declared license information of
> the two documents is inconsistent, Which shall prevail?

I am not a lawyer, and even if I were a lawyer, I am not *your*
lawyer, so I am not in a position to give legal advice.  If you want
an authoritative opinion, you will need to find a lawyer who is
willing to give you formal legal advice, and they will very likely ask
to be paid in order to give you that opinion.

Best regards,

- Ted



Re: Help trying to debug an sbuild failure?

2022-12-28 Thread Theodore Ts'o
On Wed, Dec 28, 2022 at 12:10:51AM +0100, Johannes Schauer Marin Rodrigues 
wrote:
> Note, that if you keep upgrading a Debian unstable chroot across multiple
> releases, it will end up looking slightly different than a freshly
> debootstrapped Debian unstable chroot. So I think there is value in
> semi-regularly re-creating the build chroot from scratch. Maybe write a script
> that does what you need?

That's true, but the number of user/group id's that sbuild would
actually care about is probably quite small, and might very well just
be sbuild:sbuild, I would think, no?

> Finally, I think this is something that could be solved in sbuild. Ultimately,
> schroot is able to do things as the root user, so it should have sufficient
> permissions to fix up a chroot that carries incorrect permissions. Could you
> quickly note in a bug against sbuild on the Debian BTS which steps exactly you
> carried out so that I am able to reproduce your problem?

Sure, no problem.

> I'm making no promises that I'll find the time to improve the schroot backend
> of sbuild to survive the kind of chroot-rsync that you have performed but if
> this is important to you, then maybe we can make a trade and I implement this
> sbuild functionality and you have a look at pull requests
> https://github.com/tytso/e2fsprogs/pull/118 or #107 and leave some comments in
> return? :)

Thanks for the reminder, I'll take a look.  Most of the patch
proposals for e2fsprogs end up going to linux-e...@vger.kernel.org (so
that other ext4 developers can review them), and I sometimes forget to
look over the github pull requests.

I'm also about to go on a Panama Canal cruise, where my internet
access may be limited (which is why I was trying to get sbuild setup
on my laptop in the first place :-), and e-mail has the advantage of
being much easily cacheable using offlineimap...

- Ted



Re: Help trying to debug an sbuild failure?

2022-12-26 Thread Theodore Ts'o
On Mon, Dec 26, 2022 at 08:45:53PM +0100, Santiago Vila wrote:
> El 26/12/22 a las 20:29, Theodore Ts'o escribió:
> > I: The directory does not exist inside the chroot.
> 
> This is really a problem with schroot. I guess that this will not work either:
> 
> schroot -c the-chroot-name
> 
> This usually works when you are in your $HOME because this file:
> 
> /etc/schroot/default/fstab

Nope, that's not the issue; yes, /tmp and /home are missing from
/etc/schroot/sbuild/fstab, but that was true on the *working* setup as
well, and from what I can tell, that's deliberate.  It looks like the
goal is to keep things hermetic when building with sbuild, so it's a
feature that there aren't random directories leaking through from the
host to the sbuild environment.

I think I've figured out the issue.  The problem is that the user and
group id's for sbuild are different on my desktop and my laptop, and I
did an rsync to copy the /chroot directories from my desktop to my
laptop --- and it appears that sbuild is very sensitive about the
user id's being the same across the host and chroot environments.

Apparently sbuild copies the files for the package it is building into
a directory in /var/lib/sbuild/build, with the permissions being mode
770, and that is what sbuild bind mounts into the chroot.  If my
theory is correct, if the user/group id's are different between the
base /etc/passwd and the chroot's, then things go bad in a hurry.
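
A quick way to check for that kind of mismatch is to compare the
numeric ids on both sides (the chroot path here is hypothetical):

    grep '^sbuild:' /etc/passwd /etc/group
    grep '^sbuild:' /srv/chroot/unstable-amd64-sbuild/etc/passwd \
                    /srv/chroot/unstable-amd64-sbuild/etc/group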

From my working system (while gbp buildpackage was running sbuild):

% ls -al /var/lib/sbuild/build/
total 12
4 drwxrws--- 3 sbuild sbuild 4096 Dec 26 23:05 ./
4 drwxrws--- 4 sbuild sbuild 4096 Dec 19  2020 ../
4 drwxrwx--- 3 tytso  sbuild 4096 Dec 26 23:05 f2fs-tools-oT4KHz/

Amd on the broken (laptop) system:

# ls -al /var/lib/sbuild/build/
total 32
4 drwxrws--- 8 fwupd-refresh Debian-exim 4096 Dec 26 22:48 ./
4 drwxrws--- 4 sbuildsbuild  4096 Dec 25 14:45 ../
4 drwxrwx--- 2 tytso Debian-exim 4096 Dec 26 14:01 f2fs-tools-9QfprK/
4 drwxrwx--- 2 tytso Debian-exim 4096 Dec 26 16:01 f2fs-tools-btkVPv/
4 drwxrwx--- 2 tytso Debian-exim 4096 Dec 26 14:20 f2fs-tools-cVTRAh/
4 drwxrwx--- 2 tytso Debian-exim 4096 Dec 26 22:48 f2fs-tools-Myld8j/
...

Each of these was created by a failed sbuild invocation...  And
"Debian-exim" on my laptop has the same group id as "sbuild" on my
desktop (and which is the group id in my chroots).

This also explains the error message:

E: Failed to change to directory ‘/<>’: Permission denied

Oops.

So I guess I need to either manually juggle group id's in the chroots
(and/or my laptop's root directory --- but it's probably easier to do
it in the chroots, since there are fewer files to chgrp in the
chroots), or I could recreate the sbuild chroots from scratch using
sbuild-createchroot (but then I would need to recreate all of the
custom hacks so that ccache, eatmydata, apt-auto-proxy, etc. are
working correctly in the chroot).

What fun...

- Ted

p.s.  I guess if I had been planning ahead I would have made sure that
various system users and groups which are auto-created by packages at
install-time (and which are therefore different depending on install
order) were pre-created on my laptop with the same numerical id's as
on my desktop, since that would have avoided all *sorts* of random
problems, especially if I was going to play games with copying chroots
around --- or trying to use NFS --- and not getting taken by surprise
by this sort of thing.  Live and learn



Help trying to debug an sbuild failure?

2022-12-26 Thread Theodore Ts'o
Hi, I'm trying to figure out an sbuild failure on my laptop.  The
sbuild environment was replicated from my desktop, where things work
perfectly well.  But on my laptop, things are failing at the
"Setup apt archive" step with

   E: Failed to change to directory ‘/<>’: Permission denied

And I'm completely at a loss trying to figure out what might be going
wrong.  Can anyone give me some hints about what to look for?

Thanks,

- Ted



sbuild (Debian sbuild) 0.84.2 (08 December 2022) on letrec.thunk.org

+==+
| f2fs-tools 1.15.0-1 (amd64)  Mon, 26 Dec 2022 19:20:17 + |
+==+

Package: f2fs-tools
Version: 1.15.0-1
Source Version: 1.15.0-1
Distribution: unstable
Machine Architecture: amd64
Host Architecture: amd64
Build Architecture: amd64
Build Type: full

I: NOTICE: Log filtering will replace 
'var/run/schroot/mount/unstable-amd64-sbuild-a67a3165-6688-4368-a376-66e094e41dfa'
 with '<>'
I: NOTICE: Log filtering will replace 'build/f2fs-tools-cVTRAh/resolver-CwG6Va' 
with '<>'

+--+
| Update chroot|
+--+

Get:1 http://httpredir.debian.org/debian unstable InRelease [161 kB]
Get:2 http://httpredir.debian.org/debian unstable/main Sources.diff/Index [63.6 
kB]
Get:3 http://httpredir.debian.org/debian unstable/main amd64 
Packages.diff/Index [63.6 kB]
Get:4 http://httpredir.debian.org/debian unstable/main Sources 
T-2022-12-26-1404.34-F-2022-12-26-0804.45.pdiff [15.0 kB]
Get:4 http://httpredir.debian.org/debian unstable/main Sources 
T-2022-12-26-1404.34-F-2022-12-26-0804.45.pdiff [15.0 kB]
Get:5 http://httpredir.debian.org/debian unstable/main amd64 Packages 
T-2022-12-26-1404.34-F-2022-12-26-0804.45.pdiff [33.2 kB]
Get:5 http://httpredir.debian.org/debian unstable/main amd64 Packages 
T-2022-12-26-1404.34-F-2022-12-26-0804.45.pdiff [33.2 kB]
Fetched 337 kB in 1s (238 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

+--+
| Fetch source files   |
+--+


Local sources
-

/tmp/gbp/f2fs-tools_1.15.0-1.dsc exists in /tmp/gbp; copying to chroot
I: NOTICE: Log filtering will replace 
'build/f2fs-tools-cVTRAh/f2fs-tools-1.15.0' with '<>'
I: NOTICE: Log filtering will replace 'build/f2fs-tools-cVTRAh' with 
'<>'

+--+
| Install package build dependencies   |
+--+


Setup apt archive
-

Merged Build-Depends: debhelper-compat (= 13), libblkid-dev, libselinux1-dev, 
pkg-config, uuid-dev, build-essential, fakeroot
Filtered Build-Depends: debhelper-compat (= 13), libblkid-dev, libselinux1-dev, 
pkg-config, uuid-dev, build-essential, fakeroot
E: Failed to change to directory ‘/<>’: Permission denied
I: The directory does not exist inside the chroot.  Use the --directory option 
to run the command in a different directory.
Dummy package creation failed
E: Setting up apt archive failed

Setup apt archive
-

Merged Build-Depends: dose-distcheck
Filtered Build-Depends: dose-distcheck
E: Failed to change to directory ‘/<>’: Permission denied
I: The directory does not exist inside the chroot.  Use the --directory option 
to run the command in a different directory.
Dummy package creation failed
E: Setting up apt archive failed
E: Failed to explain bd-uninstallable

+--+
| Summary  |
+--+

Build Architecture: amd64
Build Type: full
Build-Space: n/a
Build-Time: 0
Distribution: unstable
Fail-Stage: explain-bd-uninstallable
Host Architecture: amd64
Install-Time: 0
Job: /tmp/gbp/f2fs-tools_1.15.0-1.dsc
Machine Architecture: amd64
Package: f2fs-tools
Package-Time: 0
Source-Version: 1.15.0-1
Space: n/a
Status: given-back
Version: 1.15.0-1

Finished at 2022-12-26T19:20:17Z
Build needed 00:00:00, no disk space



Bug#1021750: general: the nodelalloc mount option should be used by default for ext4 in /etc/fstab

2022-10-14 Thread Theodore Ts'o
On Fri, Oct 14, 2022 at 03:37:29PM +0200, Marco d'Itri wrote:
> On Oct 14, Vincent Lefevre  wrote:
> 
> > > This is obviously convenient on Guillem's part, but we have to optimize 
> > > systems by default for the general case and not just for dpkg -i.
> > This dpkg FAQ says that this is not beneficial for just dpkg,
> > but also "for any application in the system".
> [citation needed]
> 
> I hope that you understand why at this point I cannot trust as is the 
> opinions of the dpkg maintainer.

The dpkg FAQ is just wrong.  It relates to controversy which is over a
dozen years old.  For more information see, see Josef Sipek's blog
post from 2009, "O_PONIES & Other Assorted Wishes"

https://blahg.josefsipek.net/?p=364

The O_PONIES mention references a 2009 April Fool's patch:


https://lore.kernel.org/linux-fsdevel/20090401041843.gn19...@josefsipek.net/

Because buggy applications and clueless application programmers vastly
outnumber file system maintainers, at the 2009 LSF/MM workshop, a
number of the major file system developers agreed on the following
workaround.  If the application opens a pre-existing file with
O_TRUNC, or renames a newly created file on top of a pre-existing
file, we will force the delayed allocation to be automatically
resolved when the file is closed (in the first case) or renamed (in
the second case).  It does *not* force a file system commit.  So if
you crash within 5 seconds of the close(2) or rename(2), you will
still suffer data loss.

HOWEVER, this was always the case for buggy applications that refused
to call fsync(2) and were relying on the old ext3 file system
semantics.  It did not guarantee that things would work; it would just
*mostly* work, since *usually* you didn't crash within 5 seconds of
rewriting a file.
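
For reference, the crash-safe update sequence that this workaround
only approximates is write-new, flush, rename --- sketched here in
shell, with coreutils' sync -d (coreutils 8.24 or later) standing in
for the fsync(2) call that the buggy applications skip:

    tmp=$(mktemp ./config.XXXXXX)           # build the new version next to the target
    printf '%s\n' "new contents" > "$tmp"
    sync -d "$tmp"                          # flush the file data to disk
    mv "$tmp" config                        # rename(2) atomically over the old file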

So no, we're not going to be making the default change to /etc/fstab.

NACK.

- Ted



Re: Bug email is not getting to me

2022-09-26 Thread Theodore Ts'o
On Sun, Sep 25, 2022 at 07:00:38PM -0700, Russ Allbery wrote:
> Steven Robbins  writes:
> > On Sunday, September 25, 2022 4:57:19 P.M. CDT Russ Allbery wrote:
> 
> >> If someone sends mail from a domain that says all mail from that domain
> >> will always have good DKIM signatures, and if the signature isn't
> >> present or doesn't validate the mail should be rejected, and that
> >> message is forwarded through bugs.debian.org to someone whose mail
> >> server honors DMARC settings, the mail will be rejected.  That's
> >> because the process of modifying the message in the way that
> >> bugs.debian.org needs to do (adding the bug number to the Subject
> >> header, for instance) usually breaks the signature.
> 
> > So are you effectively confirming this is indeed the DMARC bug [1] filed
> > in 2014?
> 
> > [1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=754809
> 
> Yeah, that's my guess.  It's a very common problem.
> 
> The right solution is probably for bugs.debian.org to rewrite all incoming
> mail to change the From header to a debian.org address (probably the bug
> address) and add a Reply-To header pointing to the original sender.  This
> is the fix that Mailman mostly uses, and it seems to work (I've had this
> problem with multiple mailing lists I run and turning on this message
> mangling has fixed it).  But of course someone has to find time to
> implement this.

Yeah, what would be nice is if it massaged the From header to
something like this:

From: Theodore Tso 
Reply-to: Theodore Tso 

That makes things a bit friendlier for various mail user agents, while
bypassing the DKIM signature problem.  But as Russ said, the trick is
finding someone with time to implement the change in the Debian BTS...

- Ted



Re: Firmware - what are we going to do about it?

2022-05-29 Thread Theodore Ts'o
On Sun, May 29, 2022 at 05:33:21PM -0400, Bobby wrote:
> FWIW, as a 10+ years user (first time caller :p) I strongly support
> sticking with the status quo. There are plenty of systems that don't
> require firmware to work, and often when people say it doesn't "work"
> they really mean that its functionality is more limited.

Unfortunately, that's not true.  Without the firmware, in many cases
on modern laptops (for example, the Samsung Galaxy Book 360) the WiFi
and Ethernet devices will simply *not* *work*.  If the user has only
downloaded the Netinst installer onto a USB stick (since most modern
laptops also don't have DVD drives), they will not be able to install
their system.

This is a rather negative user experience.

> Further, there are security concerns with blobs. Yes, we can get
> microcode updates, but were those updates themselves actually audited?
> As far as I know, they are just as opaque as the code they're
> replacing. They could be making security worse, and we won't know
> until someone finds the exploits. The rare event where a microcode
> update is released and it increases security is far outweighed by the
> vast majority of the situations where installing opaque code is
> detrimental to security.

On many modern peripherals, the microcode updates are digitally signed
by the manufacturer.  So if you don't trust, say, the updated CPU
microcode for your Intel processor, why are you trusting the original
CPU microcode, which would have also come from Intel?

> If people are unhappy with the status quo, my proposal would be to
> encourage more people to work on free alternatives. There is an ocean
> of possibilities here, from open hardware to reverse engineering. My
> feeling is that a lot more could be done to better support hardware
> that doesn't involve non-free code at all. There are many free
> projects that have never made it to Debian.

Unfortunately, if you want a modern laptop, which supports the latest
WiFi standards, and which is thin and light, you're not going to find
one which is using purely free alternatives.  100% free laptop
alternatives do exist, but typically you will end up using
ten-year-old hardware, or the devices are significantly heavier and
more cumbersome.

And unfortunately, open hardware is significantly more difficult and
requires far more capital outlay than "open software".  Simply
encouraging more people to work on free alternatives is not going to
be enough unless someone is willing to bankroll these efforts to the
tune of millions of dollars.

If people want to use really awful, old hardware, all in the name of
"free software", they should certainly have the freedom to do so, and
it should be easy for them to make sure that the purity of their
system is not compromised.

However, if someone has already purchased the hardware, it's a rather
horrible user experience when they discover that Debian won't install
a working system on it, and to find that the non-free firmware is in a
locked filing cabinet stuck in a disused lavatory with a sign on the
door saying 'Beware of the leopard'.

Remember, the Debian Social Contract says that our priorities are our
users *and* free software.  Making it nearly impossible for a novice
user to install Debian on their brand new laptop, where Windows 10 and
Ubuntu just *work*, might not be the best way of balancing the
competing needs of the users and free software.

Best regards,

- Ted



Re: NEW processing friction

2022-02-07 Thread Theodore Ts'o
On Mon, Feb 07, 2022 at 07:05:59PM -0700, Sean Whitton wrote:
> Hello,
> 
> On Mon 07 Feb 2022 at 12:00PM -05, Theodore Ts'o wrote:
> 
> > On Mon, Feb 07, 2022 at 12:06:24AM -0700, Sean Whitton wrote:
> >>
> >> When we treat any of the above just like other RC bugs, we are accepting
> >> a lower likelihood that the bugs will be found, and also that they will
> >> be fixed
> >
> > Another part of this discussion which shouldn't be lost is the
> > probability that these bugs will even *exist* (since if they don't
> > exist, they can't be found :-P) in the case where there is a NEW
> > binary package caused by a shared library version bump (and so we have
> > libflakey12 added and libflakey11 dropped as binary packages) and a
> > NEW source package.
> 
> Which category of bugs do you mean?  I distinguished three.

The argument for why a package which has an upstream-induced shared
library version bump has to go through the entire NEW gauntlet is
that it is a Good Thing to check to see if it has any copyright or
licensing issues.  But if you have a different package which doesn't
have an upstream-induced shared library bump, it doesn't go through
the same kind of copyright and license hazing.  And I believe this
isn't fair.

Either we should force every single package to go through a manual
copyright/licensing recheck, because Debian Cares(tm) about copyright,
or "copyright/licensing concerns are an existential threat to the
project" (I disagree with both arguments), or a package such as
libflakey which is going through constant shared library version bumps
should not go through the NEW gauntlet just because it has new binary
packages (libflakey11, libflakey12, libflakey13, etc.) at every single
upstream release.

- Ted




Re: NEW processing friction

2022-02-07 Thread Theodore Ts'o
On Mon, Feb 07, 2022 at 12:06:24AM -0700, Sean Whitton wrote:
> 
> When we treat any of the above just like other RC bugs, we are accepting
> a lower likelihood that the bugs will be found, and also that they will
> be fixed

Another part of this discussion which shouldn't be lost is the
probability that these bugs will even *exist* (since if they don't
exist, they can't be found :-P) in the case where there is a NEW
binary package caused by a shared library version bump (and so we have
libflakey12 added and libflakey11 dropped as binary packages) and a
NEW source package.

If we can't do anything else, I suspect we can reduce project friction
a lot if we only subject packages to copyright hazing when it is a NEW
source package, and not when there is a NEW binary package caused by
some upstream maintainers not being able to maintain ABI backwards
compatibility.

Granted, I'm being selfish here since that's where I experience the
friction, but I'm a big believer in half a loaf being better than
none.

- Ted



Re: Lottery NEW queue (Re: Are libraries with bumped SONAME subject of inspection of ftpmaster or not

2022-01-21 Thread Theodore Ts'o
On Fri, Jan 21, 2022 at 01:28:54PM -0500, Scott Kitterman wrote:
> 
> 1.  When the SO name changes and the binary package name is adjusted 
> accordingly, it is not super rare for the maintainer to mess something up in 
> the renaming and end up with an empty binary package, which does no one any 
> good.  I note that for debhelper compat 15 there appears to be some related 
> work in progress.  Perhaps this is, or can be extended to be, sufficient to 
> eventually make this kind of error a thing of the past.

Can we have better automated tooling, either in Lintian or run when
source packages are rebuilt, that can take care of this?

The other thing that's perhaps worth considering here is that, unfortunately,
there are some upstreams that are extremely irresponsible with library
ABI backwards compatibility, where they bump the SONAME essentially at
every release.  I recall one extreme case a few years ago where there
were over ten(!) SONAME bumps for a particular library over 12 months.

The problem with this is that it makes for a massive headache when it
comes to security updates.  The claim for why we want to use shared
libraries, despite the library dependency hell problem, is that when a
security problem gets fixed, all we need to do is to upload a new
shared library package, and all of the packages which depend on it
automatically get updated.  Well, if during the course of a testing
release we have binary packages depending on libshaky3, libshaky5,
libshaky6, libshaky7 and libshaky8, and a long-standing
security bug gets fixed, it's not necessarily the case that when
the Debian maintainer uploads an updated libshaky source
package (which might result in binary packages libshaky-dev,
libshaky-bin, and libshaky8), there will be updates for
libshaky{3,5,6,7}.

Now that we are requiring source uploads for everything entering
testing, there's an easy answer to this --- which is to simply have an
automated system which rebuilds all of the packages that have a
build-depends on libshaky-dev, so all the packages will now have a
dependency on libshaky8.  Huzzah!
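
For concreteness, here's a minimal sketch of how that rebuild list
could be computed, assuming an uncompressed Sources index on disk and
the deb822 parser from the python3-debian package (the function name
is mine, and the dependency parsing is deliberately crude; real
tooling would have to handle alternatives, version constraints, and
architecture restrictions):

    from debian import deb822   # from the python3-debian package

    def reverse_build_deps(sources_index, devpkg="libshaky-dev"):
        """Return source packages whose Build-Depends mention devpkg."""
        rdeps = []
        with open(sources_index) as f:
            for src in deb822.Sources.iter_paragraphs(f):
                deps = src.get("Build-Depends", "")
                # Crude parse: keep the bare package name of each
                # comma-separated dependency, ignoring everything else.
                names = {d.strip().split()[0].split("(")[0]
                         for d in deps.split(",") if d.strip()}
                if devpkg in names:
                    rdeps.append(src["Package"])
        return rdeps

    # e.g.: schedule rebuilds for reverse_build_deps("Sources")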

But if we're going to do that, then we could also just support static
libraries, and just rebuild all of the packages that link statically
with libshaky, thus solving the security argument for shared
libraries.  This also avoids the fairness problem where some packages
are regularly going through ftpmaster review, and others aren't...

Just a thought

- Ted



Re: dpkg taking a bit too long ...

2021-10-05 Thread Theodore Ts'o
On Tue, Oct 05, 2021 at 04:09:12PM +0200, Jonathan Carter wrote:
> real  0m37.751s
> user  0m7.428s
> sys   0m12.374s
> 
> That's on ext4/nvme with no eatmydata. Perhaps time to perform a smart test
> on your disk?

Except Norbert was reporting 100% (and 15 minutes) of CPU time.

Norbert, what file system are you using?

- Ted



Re: next steps after usrmerge

2021-08-27 Thread Theodore Ts'o
On Fri, Aug 27, 2021 at 07:34:06PM +0200, Bastien ROUCARIES wrote:
> 
> See the proposal here of guillem:
> https://wiki.debian.org/Teams/Dpkg/Spec/MetadataTracking

This proposal doesn't directly address usrmerge, or the fact that new
Debian installations have been getting the top-level symlinks
for two stable releases now.

The only proposal I've seen from guillem which directly addresses
usrmerge is to attempt to roll back the clock by two stable
releases, in direct contravention of the decision made by the
technical committee.

- Ted



Re: next steps after usrunmess

2021-08-27 Thread Theodore Ts'o
On Fri, Aug 27, 2021 at 03:39:57AM +0100, Phil Morrell wrote:
> >   - reverting the changes in deboostrap in sid, bullseye (and ideally
> > in buster too),
> >   - reverting the notion that split-/usr is unsupported (which includes
> > the extremely confusing interpretation about this applying to
> > sid/testing too), and update documentation such as release-notes,
> 
> This bullet point response confuses me - and then what?
> 
> If I understand your position correctly, you don't want merged-/usr as
> an end-goal and you disagree with usrmerge transition as a hack. In
> order to achieve the result above without bypassing Debian processes,
> the formal method would to pass a GR overriding the tech-ctte minority.
> Is the only reason you haven't proposed that as a GR that you've already
> sunk too much energy into this? Or that you don't trust that process?

My question is the reverse.  If there is rough consensus that we as a
community *do* want to go forward with /usr unification in a way which
is compatible with all of the other distributions --- and Debian is
definitely on the trailing edge here --- and a very small number of
dpkg developers are refusing to help resolve these issues, are they
entitled to perform a pocket veto on /usr unification?

Simon and I have proposed technical paths forward which appear to be
sound, and I note that Guillem has not commented on them.  Which is
why I haven't really participated in this thread in the last couple of
days; I've said my piece, and if folks who essentially want to
roll back the clock by several years refuse to engage, simply
repeating my points doesn't seem to be a good use of electrons.

But the question remains --- how do we as a community move forward?
Debian is made up of volunteers, so we can't *force* the dpkg
developers to do anything they don't want to do.   So what then?

Does someone need to create patches to dpkg which attempt to teach it
that /bin/foo and /usr/bin/foo are the same file, if there exists a
symlink from /bin to /usr/bin?  And then with some kind of process,
maybe with the blessing of the technical committee, upload it as an
NMU over the objections of the dpkg developers if they continue to
refuse to engage with solutions that proceed forward with
/usr-unification?  That seems to be rather non-ideal from a community
perspective.  But what's the alternative?  Should a single DD have the
power to overturn a technical committee decision because they are the
maintainer of a highly important package?  That doesn't seem great, either.


As I've said before, I've never been a fan of /usr-unification; I
don't hate it, but I've never thought it was worth it in and of
itself, other than the "compatibility with the rest of the world" argument.
I'm not a huge fan of systemd, either, although I never hated it as
much as some.  But the entire Linux ecosystem has spoken, and so my
personal views aren't really important at this point.  Part of living
in a community is realizing that one doesn't always get one's own way,
and subsuming one's individual wants for the greater good.

So I repeat the question to the entire community --- what is to be
done?  How do we move forward?

- Ted



Re: Debian choice of upstream tarballs for packaging

2021-08-25 Thread Theodore Ts'o
On Wed, Aug 25, 2021 at 04:11:37PM +0200, Thomas Goirand wrote:
> 
> It's been *years* since I encounter a PyPi package that doesn't have a
> Git repo as its homepage (and unfortunately, 99% on Github).
> 
> I wrote this many times, but I don't see why we should use any "upstream
> tarball" when the Git repository itself contains the tarball with:
> 
> git archive --prefix=$(DEBPKGNAME)-$(VERSION)/ $(GIT_TAG) \
>   | xz >../$(DEBPKGNAME)_$(VERSION).orig.tar.xz
> 
> (which leads to a .xz, which is nicer)

Well, if we don't use an "upstream tarball", we do need to keep our
own private archive of the Git repository.  After all, there is no
guarantee that the upstream git repo won't disappear in the future.

Simon's proposal that we use a tarball of the bare git repo
containing all of the git objects needed leading up to the signed tag
works, but isn't necessarily the most efficient over time, since we
would be keeping multiple copies of redundant git repos in
snapshots.debian.org, or across multiple Debian versions in our ftp
archives.  But it at least guarantees that we will continue to have
access to the source even if the upstream git repo goes *poof*.

> Not only then, one only has to merge the upstream tag in the Debian
> branch to get the new release, but on top, no need to "gbp import" or
> "pristine-tar commit", and a single packaging branch becomes enough.
> 
> I very much wish this packaging workflow gained more traction, and the
> pristine-tar abomination dies...

Sure, but it implies that the git repos on salsa and/or dgit have to
become our official source of record for the purposes of GPL
compliance.  Which means we need to be a lot more careful about ever
allowing those git trees to be deleted or rewritten, even if the
goal is to remove files that might be found to be problematic from a
copyright licensing perspective.

- Ted



Re: Making the dpkg database correspond with reality (Was Re: merged /usr vs. symlink farms)

2021-08-24 Thread Theodore Ts'o
On Tue, Aug 24, 2021 at 11:57:27AM +0200, Simon Richter wrote:
> Hi,
> 
> On 8/24/21 2:48 AM, Theodore Ts'o wrote:
> 
> > So in theory, if we had a program which looked for the top-level
> > symlinks /{bin,lib,sbin} -> /usr/{bin,lib,sbin}, and if they exist,
> > scans the dpkg database looking for entries of the form
> > /{bin,lib,sbin}/$1, and updates them to /usr/{bin,lib,sbin}/$1, and
> > then in the future, if dpkg sees the top-level symlink, canonicalizes
> > any files referenced in the packages to /usr/{bin,lib,sbin}/$1, with a
> > fallback of searching for /{bin,lib,sbin}/$1 in the file system, this
> > would solve the problem.
> 
> Yes. To apply the transformation, this would likely have to happen in the
> dpkg package itself, so the one-time transformation is applied only when
> dpkg can maintain the workaround from that point on.

Sure, since we're talking about dpkg database surgery, it's better
that the sources of such a program be located in the dpkg sources, and
regardless of who does the work, the dpkg maintainers should be
involved in the code review of any such change.

> That is the half that is missing from my proposal, as I was focusing on how
> to transition non-usrmerged systems from within dpkg.

I've been more focused on the usrmerged systems since nearly all of my
personal systems are already usrmerged, since I tend to reinstall my
systems whenever I upgrade my hardware, and debootstrap has been
installing systems with the usrmerge top-level symlinks since Buster.
Furthermore, people have been arguing strenuously that the possibility
of file loss is real for these usrmerged systems, so fixing this
seemed to be high priority, regardless of how many systems use the
usrmerge setup.

So protecting against lost files was the higher priority in my mind.
Whether or not the people arguing that it's rare (because Ubuntu users
haven't been losing files) are right, I accept the argument that if it
can happen (and you've demonstrated via adversarial test cases that it
*can*), we should fix that bug, whether it's considered an RC bug or not.

(Although my personal belief is that we should be a lot more open to
fixing less critical bugs in stable releases, so if we have a fix, I'd
be all for rolling it out early to Bullseye regardless of whether it's
considered "RC" or not.)

> > Let's ignore how we would deploy this helper program and the updated
> > dpkg from a stable upgrade perspective, but in terms of preventing
> > potential problems during the testing window, getting an update to
> > dpkg which included the database fixup program and which ran from the
> > maintainer script would be a potential solution path.
> 
> Yes-ish. We'd also need to make sure that installing the usrmerge package
> after that dpkg upgrade does not make things inconsistent again, so it would
> make sense for the dpkg source package to provide a "usrmerge" package that
> is well integrated, and for dpkg to conflict with older versions of
> usrmerge.

I'm less worried about whether usrmerge is part of dpkg or not, since
most of my usrmerged systems are due to them being reinstalled since
Debian Buster has been around, and the definition of usrmerged is
relatively well understood (symlinks for /{bin,lib,sbin} to
/usr/{bin,lib,sbin}).  But it wouldn't hurt for dpkg to provide its
own "usrmerge" functionality, and I certainly wouldn't argue against it.

> > Furthermore, if dpkg knew to always canonicalize filenames from
> > /{bin,lib,sbin}/XXX to /usr/{bin,lib,sbin}/XXX when adding to the
> > database, and when it looked for files in the file system looked first
> > in /usr/{bin,lib,sbin}/XXX, with a fallback to /{bin,lib,sbin}/XXX,
> > the file names in the package would not need to change at all, since all
> > of the magic fixup work would be happening inside dpkg.
> 
> Yes. In an ideal world, we wouldn't hardcode the list of symlinks in dpkg
> though, that's why I proposed shipping symlinks in base-files and having
> dpkg recognize the symlink-vs-directory conflict as intent to move a
> filesystem tree around.

That's certainly the more general solution, although again, I think
the definition of usrmerge is well understood, especially since Debian
has been on the trailing edge of the usrmerge transition across the
Linux ecosystem, and so the high priority should be fixing the
specific case of moving the contents of /{bin,lib,sbin} to
/usr/{bin,lib,sbin} and leaving symlinks behind for the directories.

If we want to support people who want to, say, move /usr/bin to
/u1/bin, and /usr/lib to /u2/lib, or even /usr/lib/X11 to
/funky/dir/X11, great!  Personally, I wouldn't consider that a high
priority item on the requirements list, though, unless it comes
essentially for free.

Cheers,

- Ted



Making the dpkg database correspond with reality (Was Re: merged /usr vs. symlink farms)

2021-08-23 Thread Theodore Ts'o
I want to ask a potentially stupid question.

As I understand things, in a usrmerge'd file
system where we have the top-level symlinks /{bin,lib,sbin} which
point at /usr/{bin,lib,sbin}, the problem is that if we have a package
which contains the file /sbin/blart, it gets installed as
/usr/sbin/blart thanks to the symlink, but the dpkg database has an
entry for /sbin/blart, and that mismatch is the problem.

So in theory, if we had a program which looked for the top-level
symlinks /{bin,lib,sbin} -> /usr/{bin,lib,sbin}, and if they exist,
scans the dpkg database looking for entries of the form
/{bin,lib,sbin}/$1, and updates them to /usr/{bin,lib,sbin}/$1, and
then in the future, if dpkg sees the top-level symlink, canonicalizes
any files referenced in the packages to /usr/{bin,lib,sbin}/$1, with a
fallback of searching for /{bin,lib,sbin}/$1 in the file system, this
would solve the problem.
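
To make that concrete, here is a minimal sketch of the one-time fixup,
assuming that dpkg's per-package file lists in
/var/lib/dpkg/info/*.list are the only state that needs rewriting (the
helper is hypothetical; the real fix would live inside dpkg itself and
would also have to worry about diversions, locking, and atomic
rewrites):

    import glob, os

    ALIASES = {"/bin/": "/usr/bin/", "/lib/": "/usr/lib/",
               "/sbin/": "/usr/sbin/"}

    def canonicalize(path):
        # Rewrite /bin/foo -> /usr/bin/foo, etc.; leave other paths alone.
        for old, new in ALIASES.items():
            if path.startswith(old):
                return new + path[len(old):]
        return path

    def fixup_dpkg_lists(infodir="/var/lib/dpkg/info"):
        # Only rewrite anything if the top-level symlinks actually exist.
        if not all(os.path.islink(d) for d in ("/bin", "/lib", "/sbin")):
            return
        for listfile in glob.glob(os.path.join(infodir, "*.list")):
            lines = open(listfile).read().splitlines()
            fixed = [canonicalize(line) for line in lines]
            if fixed != lines:
                with open(listfile, "w") as f:
                    f.write("\n".join(fixed) + "\n")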

Let's ignore how we would deploy this helper program and the updated
dpkg from a stable upgrade perspective, but in terms of preventing
potential problems during the testing window, getting an update to
dpkg which included the database fixup program and which ran from the
maintainer script would be a potential solution path.

Furthermore, if dpkg knew to always canonicalize filenames from
/{bin,lib,sbin}/XXX to /usr/{bin,lib,sbin}/XXX when adding to the
database, and when it looked for files in the file system looked first
in /usr/{bin,lib,sbin}/XXX, with a fallback to /{bin,lib,sbin}/XXX,
the file names in the package would not need to change at all, since all
of the magic fixup work would be happening inside dpkg.
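
The lookup side might then look something like this sketch (reusing
canonicalize() from the sketch above; the function names are mine, not
dpkg's):

    def find_installed(path):
        canon = canonicalize(path)       # e.g. /bin/foo -> /usr/bin/foo
        for candidate in (canon, path):  # look under /usr first, then /
            if os.path.lexists(candidate):
                return candidate
        return None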

Is this a viable path forward, or am I missing something?

Thanks,

- Ted



Re: merged /usr vs. symlink farms

2021-08-22 Thread Theodore Ts'o
On Sun, Aug 22, 2021 at 12:26:46PM +0200, David Kalnischkies wrote:
> 
> So, when did you last log into your build chroot to upgrade dpkg and
> apt first? And while at that, did you follow the release notes – from
> the future, as they have yet to be written for the release you are
> arguably upgrading to already?

Personally, I never upgrade build chroots between major versions.  I
just use a tool like sbuild-createchroot to create them from scratch.
That's mainly because I need to keep the Buster chroot around for
backports, so I'll just create a new Bullseye chroot when I need it.

And of course my unstable chroot is continuously being upgraded but
that avoids most of the concerns people have about the Debian N to N+1
stable upgrade path.

> But okay, lets assume you actually do: apt and dpkg tend not to be
> statically linked, so they end up having dependencies

So on my "pet" production systems (on my "cattle" systems I just
recreate them from scratch, e.g. "gce-xfstests create-image"[1] which
uses deboostrap), what I do is update /etc/apt/sources.list, and then
do "apt update ; apt-get install dpkg ; apt-get install apt" before I
run "apt-get dist-upgrade".  This handles the dependencies for me, and
while a few packages will get upgraded using the old dpkg and apt, the
vast majority of the packages will get upgraded using the latest
stable version of dpkg and apt.  This seems like relatively cheap
insurance; it doesn't hurt, and it might help avoid some nasty corner
cases that got overlooked during testing.

[1] https://thunk.org/gce-xfstests

> Don't get me wrong, as an apt dev I would love if we could do that. It
> is kinda annoying to work around issues you have fixed years ago, but
> aren't available in (soon) oldstable.

Well, I'll observe that if we told people to upgrade dpkg and apt
first using "apt-get install ...", this automatically handles the
dependencies, and while it doesn't make *all* of the potential issues
go away, I would think it would reduce the potential corner cases and
hence make it easier to test to make sure the right thing happens on
an update from Debian vN to vN+1, would it not?


> P.S.: As someone will ask: Ubuntu splits the user base in two: Those who
> run their release upgrader which runs outside of the packaging system and
> largely can do whatever (including bring in a standalone apt/dpkg just
> dealing with an upgrade – they usually resign to much simpler things
> through) and those who don't like for example chroots and containers who
> effectively use whatever an upgrade path 'apt dist-upgrade' gives you.

... and in my opinion, that's a *fine* strategy.  Sure, it would be
nice if "apt dist-upgrade" always worked smoothly, and that's a great
aspirational goal, but if we can reduce risks to users ---
particularly non-technical/non-expert users --- by using a release
upgrader which upgrades apt and dpkg first, we should do that.  So
let's take a page from Ubuntu, by all means!

- Ted



Re: merged-/usr vs. partially-symlink-farmed-root

2021-08-22 Thread Theodore Ts'o
On Sun, Aug 22, 2021 at 10:24:56PM +0100, Luca Boccassi wrote:
> The point of the migration is that /usr/bin will be identical to /bin,
> etc. If they are not identical, then it's not usrmerge as it is
> understood and has been adopted by many upstreams for a decade, it's
> something else that is incompatible with it.

I'll note that people on this thread are using usrmerge in different
senses of the word.

For simplicity's sake, what I've tried to do in my posts is to refer
to "usrmerge" as meaning the creation of top-level symlinks at
/{bin,lib,sbin} pointing at /usr/{bin,lib,sbin}.  This is the specific
proposal made here:

https://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/

... and in Debian, see:

https://wiki.debian.org/UsrMerge

So I think there is some justification for using usrmerge only to
refer to the top-level symlinks approach.



A more general term might be "/usr unification", quoting from a
2012(!) LWN article:

"/usr unification" (or simply "usrmove") is the idea of moving
the contents of /bin, /lib, and related root-level directories
into their equivalents under /usr.
- https://lwn.net/Articles/483921/

After moving the contents of /{bin,lib,sbin}/* to their equivalents
under /usr, the next question is how we stop things from breaking:
by using top-level symlinks, which many call "usrmerge", or by creating
symlink farms in the directories /{bin,lib,sbin}, for which I try to
use the more awkward construction, "/usr unification via symlink
farms"?

Admittedly, "/usr unification via symlink farms" is awkward, but I've
been hoping we can declare consensus that using symlink farms is an
undesirable way of trying to achieve /usr unification, since it
wastes a lot of on-disk inodes, and there is complexity involved in
needing to keep the symlink farm up-to-date as new files are created
in /usr/{bin,lib,sbin}.

But in any case, perhaps it would simplify the discussion if we try to
stick with consistent terminology?  So what if we were to use usrmerge
to unambiguously mean achieving /usr unification via the top-level
/{bin,lib,sbin} to /usr/{bin,lib,sbin} symlinks, and consider symlink
farms as being another (although IMHO, inferior) way of accomplishing
the goal of /usr unification?

- Ted



Re: merged /usr vs. symlink farms

2021-08-21 Thread Theodore Ts'o
On Sun, Aug 22, 2021 at 02:15:31AM +0200, Simon Richter wrote:
> 
> The latter is what brought us into a situation where it is no longer safe to
> move files between packages and between aliased directories in the same
> upgrade, and because users will be expected to upgrade in a single step
> between stable releases, that means these two types of changes are mutually
> exclusive for the entire release cycle.

So with the goal of trying to enumerate possible solutions, it sounds
like some combination of:

(a) disallowing moving problematic files between packages, with possibly some
QA tools to enforce this
(b) keeping the next release cycle *short*, say only a year
(c) requiring that dpkg be upgraded first, and having dpkg and
related tools understand the concept of usrmerge and the
fact that /{bin,lib,sbin} and /usr/{bin,lib,sbin} are identical
for usrmerged systems

might be possible paths forward.  Do you agree?  What are other
possible solutions?

- Ted



Re: merged /usr vs. symlink farms

2021-08-21 Thread Theodore Ts'o
On Sat, Aug 21, 2021 at 10:26:13AM +0200, Wouter Verhelst wrote:
> It bothers me that you believe "we've been doing this for a while and it
> didn't cause any problems, so let's just continue doing things that way
> even if the people who actually wrote the damn code say that path is
> littered with minefields and they're scared of what could happen when we
> finish the tranition this way" is a valid strategy. It goes against
> everything I was taught to do to write reliable software.

So as an expert, what's your recommendation about what is to be done?
Personally, I *don't* have a problem about telling people to manually
update dpkg, apt, and/or apt-get before they do the next major stable
release (maybe it's because this is something I do as a matter of
course; it's not that much extra effort, and I'm a paranoid s.o.b.,
and I know that's the most tested path given how Debian testing
works).

Other people think that is a terrible idea, to be avoided at all
costs.  I don't understand why that is such a terrible outcome, since
I do it already, but perhaps I can be educated on that point.

In any case, I believe the downsides of the symlink farm alternative are
greater than the downsides of the other options:

* just living with the risk of potential corner cases which
  might affect users when they do the bullseye -> bookworm
  upgrade, since apparently the sky hasn't fallen with Ubuntu, or

* advising users to upgrade dpkg/apt/apt-get first when they do
  the next release, with the strength of that advice depending
  on how likely users are to suffer from a "mine" causing
  their system to lose a limb or two.

Heck, if the minefield is that dangerous (and if so, why the *heck*
aren't Ubuntu users screaming from data loss, system instability,
etc.?), perhaps you should advise the release managers that dpkg needs
to have fixes pushed out to Bullseye NOW! NOW! NOW! to eliminate the
potential imminent damage that you seem to be so fearful our users
might get hit with.

Can you give more details about real life scenarios which are
triggering your fears, and whether there are ways we can mitigate
against those scenarios?

Best regards,

- Ted



Re: merged /usr vs. symlink farms

2021-08-20 Thread Theodore Ts'o
On Fri, Aug 20, 2021 at 07:56:33AM -0600, Sam Hartman wrote:
> As you know, one of the ways we can see  how close we are on consensus
> is to look at what happens when someone proposes a summary like you did.

Thanks, that was my goal: trying to see if we could move the
discussion towards some kind of community consensus.

> Simon Richter tried to challenge your summary effectively saying that we
> couldn't have an informed consensus because there were open technical
> issues that had not been addressed.  This was roundly rejected by
> yourself, Philip Hands and Luca Boccassi.
> 
> Simon's position seemed to be that we need a dpkg update  in order to
> move forward and that we cannot depend on that mid-release.
> 
> You talked about ways we could get a dpkg update at the beginning of the
> release process.
> Luca and Philip made more structural objections.
> 
> Simon did not clearly explain *why* we need a dpkg update.

Fair enough.  I am unclear on whether we need a dpkg update; I do
believe, though, that even if we needed a dpkg update for some reason,
we should explore options other than /usr unification via
symlink farms, which I continue to believe is a highly undesirable
choice.

> I can see two arguments why we might need a dpkg update:
> 
> 1)  To fix bugs related to directory aliasing.
> 
> I don't think that there is a consensus those bugs need to be fixed to
> move forward.  (Put another way it's not clear the community agrees they
> are RC).

Actually, I think we can make a stronger statement than that.  Even if
the dpkg bugs relating to directory aliasing are release
critical in terms of severity (and it's unclear to me whether they
rise to that level), the specifically germane question is whether they
have to be fixed *before* proceeding with the bullseye->bookworm
update.  After all, it might be that say, "dpkg-query -S" can fail[1],
but even *if* this were considered priority "serious" --- which I do
not believe --- if it doesn't break upgrading systems, then it can be
fixed when dpkg is upgraded, and we don't have to upgrade dpkg first.

(And again, is it *really* that bad if we tell users that it is
advisable to upgrade dpkg first?  TBH, I very often will upgrade dpkg
and apt by hand, before running "apt dist-upgrade" to this day,
because of past experience from upgrades long, long ago...  Maybe it's
a cargo cult practice, like typing "sync" three times, but it's not
that hard!)

So perhaps one path forward is to examine the breakages listed in
[1], and consider how likely any of them are to break upgrades.
I will admit that the concerns around update-alternatives and dpkg-divert
sound the scariest --- but I've been using a usrmerged system for a
while, and nothing has broken for me.  And as others have pointed out,
Ubuntu has been using usrmerged systems exclusively for a while now,
"going behind dpkg's back", and there doesn't seem to have been a lot
of reports of disaster befalling Ubuntu.  Are there things they are
doing to mitigate the potential problems around dpkg-divert and
update-alternatives?  What can we learn from the Ubuntu experience?

[1] https://wiki.debian.org/Teams/Dpkg/MergedUsr

> However, there are proposed solutions under development that in terms of
> being favored in a consensus discussion  are preferable to
> usr-merge-via-symlink farms:
> 
> A) An extraordinary upgrade process.  For example requiring that if you
> are running on a system that is not usrmerged already, you need to
> install usrmerge at the beginning of the upgrade.
> (it could still be transitively essential, but explicitly asking people
> where it matters to install early).
> 
> b) Require that bookworm packages work on non-usrmerged systems and
> support non-usrmerged build chroots in the bookworm cycle.
> 
> None of these solutions are ideal.
> There is still technical work to do, and  there a absolutely are open
> technical issues.

Agreed.  And if the folks who are working on this can let us know how
we can help come to some kind of resolution on those
open technical issues, that would be great.  Letting things drag out
isn't going to be helpful, so once we have mature options to consider,
hopefully we can weigh the pros and the cons and come to some kind of
group "hum" about the best path forward.

(And I'm having flashbacks to my days as an IETF working group chair.  :-)

  - Ted



Re: merged /usr vs. symlink farms

2021-08-19 Thread Theodore Ts'o
On Thu, Aug 19, 2021 at 10:39:45PM +0200, Simon Richter wrote:
> 
> I think no one likes that idea, but it's the only solution that doesn't
> immediately fail because it requires a dpkg update that hasn't shipped with
> the current stable release, breaks local packages (kernel modules, firmware,
> site-wide systemd configuration), or both.

This could be solved if we could somehow require dpkg to be updated
before any other packages during the next update, no?

Breaking this constraint means that we can't make "apt-get
dist-upgrade" work seamlessly --- but what if we were to change the
documented procedure for doing a major update?

That's not ideal, granted, but how does that compare against the other
alternatives?

- Ted

P.S.  I had a vague memory that there was some update in the long
distant past where we did require a manual upgrade of dpkg first.  Or
is my memory playing tricks on me?  I do know that a manual update of
dpkg is the first step in a crossgrade.



Re: merged /usr vs. symlink farms

2021-08-19 Thread Theodore Ts'o
On Thu, Aug 19, 2021 at 11:17:17AM +0100, Simon McVittie wrote:
> In this specific case, I think the thing you're having a problem with is
> the gradual, file-by-file migration of executables into /usr by individual
> packages and individual packages' maintainers. That's not merged-/usr:
> merged-/usr does the migration all at once, by creating the aliasing
> symlinks (and then we can clean up the contents of data.tar.* to put all
> /usr-like files below /usr at our leisure, during the next release cycle,
> without needing maintainer script glue).

FWIW, from following the discussion, I've become more and more
convinced that a symlink farm is *not* the right answer, regardless of
whether it is done centrally or via individual packages moving files
and creating symlinks if necessary in individual maintainer scripts.

The symlink farm idea seems to be pushed by the dpkg team, because
it's clear that supporting directory aliasing by having /bin ->
/usr/bin, /lib -> /usr/lib, etc., top-level symlinks does create more
work for the dpkg team, and they seem to be put off by the fact that
they hadn't agreed to do that work, and they appear to claim that they
weren't consulted in advance.

But if we are going to follow how Fedora, Solaris, etc., have been
moving to eliminate the traditional /{bin,sbin,lib} and
/usr/{bin,sbin,lib} split, directory aliasing the way those
distributions have done things is the only way to go.

Perhaps the dpkg team should have been consulted earlier, and if they
could have convincingly argued that this was a show stopper, or they
had demanded that someone else should have provided the engineering
effort to make dpkg handle the directory aliasing *first*, perhaps we
shouldn't have even started on the /{bin,sbin,lib} ->
/usr/{bin,sbin,lib} unification journey, despite the fact that all of
the other distributions have gone down that path.

Speaking personally, I'm not super excited about /usr unification.
But then again, I don't work on projects such as embedded systems,
containerized systems, etc., which seem to benefit from /usr
unification, and there *is* value in being similar to other Linux
distributions.

In any case, that's water under the bridge.  We are where we are, and
stopping midway through the /usr unification journey would be a far
worse outcome.  And given that we've already lost the benefits of the
split /usr architecture (specifically, the ability to boot without
/usr being mounted, which I recognize is not as useful in the 21st
century) --- we should push on and finish the job.

Given that symlink farms have all sorts of downsides, the best path
forward seems to be to teach dpkg about the top-level directory
aliasing, and simply handle this appropriately.  Issues such as the
/bin/sh vs. /usr/bin/sh unification causing problems with /etc/shells
are ones all distributions have to deal with anyway, and we can
look to see how they have handled it.

Cheers,

- Ted



Re: WARNING: dh_installsystemd is moving unit files to /usr/lib/systemd/system

2021-08-19 Thread Theodore Ts'o
On Thu, Aug 19, 2021 at 11:46:21AM +0200, Michael Biebl wrote:
> Am 19.08.21 um 08:27 schrieb Michael Biebl:
> > I'll check later today, if i-s-h (init-system-helpers) does properly
> > handle this new path. If so, I'd say the bug should be reassigned to
> > lintian and we should start transitioning the files to
> > /usr/lib/systemd/system.
> 
> I now remember updating i-s-h [1].
> 
> So we should be safe using /usr/lib/systemd/system I'd say.

OK, thanks for confirming this.  What really worried me was this text
in lintian:

N:   Systemd in Debian searches for unit files in /lib/systemd/system/ and
N:   /etc/systemd/system. Notably, it does *not* look in
N:   /usr/lib/systemd/system/ for service files.

This implied that it was *systemd* that didn't like /usr/lib/systemd,
and I didn't understand the subtlety that it was really how
Debian's init-system-helpers worked that was the issue.

So it sounds like it's OK for me to upload a package like e2fsprogs
with a systemd unit file, despite lintian flagging this as an
error.

  - Ted



WARNING: dh_installsystemd is moving unit files to /usr/lib/systemd/system

2021-08-18 Thread Theodore Ts'o
There appears to be a rather major regression in debhelper 1.13.4 and
1.13.4nmu1, which is forcing unit files to go in
/usr/lib/systemd/system, instead of /lib/systemd/system (where systemd
will actually pay attention to them).

On systems with usrmerge, things should still work, thanks to the
compatibility symlink, but this will cause packages with unit files
built since debhelper 1.13.4 was released to unstable, or uploaded as
source builds, to be incorrect, triggering a Lintian error and
breaking on systems that don't have usrmerge installed.

For more details and analysis, please see:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=992469

I just wanted to post a warning that if you were planning on building
or uploading new source-only uploads to unstable now that Bullseye has
been released, and your package contains systemd unit files, you may
want to hold off until this bug gets fixed...

- Ted



Re: A summary of where I think we are on the technical side of the merged /usr discussion

2021-08-17 Thread Theodore Ts'o
Simon,

Thanks so much for your comprehensive answer.  It's a great summary
that I think would be really useful for those of us who are package
maintainers who don't have a strong position one way or another
vis-a-vis usrmerge vs merged-/usr-via-symlink-farms, but just want to
do what is best for our users.

I guess I was thinking that if individual packages could just move all
of the files to /usr/..., then how the symlinks would be handled might
not matter as much.

> If components of your package implement a third-party filesystem "API",
> then you need to check that the consumer is going to look in both the
> rootfs and /usr. For e2fsprogs, I would expect the problem areas to be
> the /sbin/fsck.TYPE and /sbin/mkfs.TYPE interfaces: if you install
> to /usr/sbin/fsck.TYPE and /usr/sbin/mkfs.TYPE, will the fsck and mkfs
> wrappers in util-linux still find them?

So long as PATH includes /sbin and /usr/sbin, the fsck and mkfs
wrappers will find them.  For fsck there is a failsafe in case PATH is
not set, and so it might be a good idea (although probably not
strictly necessary in Debian systems) to make the following change in
util-linux's disk-utils/fsck.c:

-#define FSCK_DEFAULT_PATH "/sbin"
+#define FSCK_DEFAULT_PATH "/sbin:/usr/sbin"

That being said, you do have a good point that there might be scripts
that have "/sbin/fsck." hard-coded in the shell scripts, just as
I've seen /bin/rm, /bin/mv, etc., hard coded in some shell scripts ---
not to mention "#!/bin/sh" or "#!/bin/bash" as the first line in
gazillions of scripts.  So getting rid of all of the compatibility
symlinks, whether done via a symlink tree or top-level symlinks for
/bin, /sbin, /lib, etc., is probably not realistic for decades.

That being said, the number of inodes that we might need for symlink
farms for /bin, /sbin, et al. is *not* something I'm terribly fond of.
It's probably not a show-stopper to add that many symlinks,
but... yelch.  So my personal preference, even if it required making
changes in dpkg so it was aware of directory aliases, and requiring
that dpkg get updated first in the bullseye->bookworm upgrade,
would be to stick with usrmerge.

On that front: is the list of potential problems vis-a-vis dpkg and
usrmerge here[1] comprehensive?

[1] https://wiki.debian.org/Teams/Dpkg/MergedUsr

If so, would it perhaps be helpful to consider what might be solutions
to the issues listed in [1]?  Some of them might not be that hard to
mitigate if minor(?) changes to dpkg were contemplated, and some of
them might not be hard to mitigate via brute force techniques (e.g.,
adding /bin/*sh and /usr/bin/*sh to /etc/shells, etc.)

- Ted



Re: A summary of where I think we are on the technical side of the merged /usr discussion

2021-08-17 Thread Theodore Ts'o
On Tue, Aug 17, 2021 at 05:19:06PM +0100, Simon McVittie wrote:
> On Tue, 17 Aug 2021 at 08:08:15 -0600, Sam Hartman wrote:
> > In order to build packages that work on a non-usrmerge system, you need
> > a build chroot that is not usrmerge.
> 
> Well. That's not 100% true: it's more accurate to say that when *some*
> source packages are built in a merged-/usr chroot, the resulting binary
> packages don't work correctly on a non-merged-/usr system. Most source
> packages are fine either way.
> 
> Such packages are already violating a Policy "should", because they're
> not building reproducibly (and the reproducible-builds infra tests this
> for testing and unstable). Ansgar did a survey of this when we were
> discussing one of the Technical Committee bugs, and reported that around
> 80 packages had a bug of this class at the time, which had apparently
> dropped to 29 by the time the TC resolution was voted on.

Do we have a dashboard for this, so we can see the list of which source
packages result in different binary packages depending on whether they
are built on a usrmerge vs !usrmerge system?  We could look at the
reproducible-builds reports, but not all reproducible build failures
are caused by the usrmerge/!usrmerge dependency, right?

> If we want to make buildd chroots merged-/usr any time soon, then I
> think we need to say this class of bugs is RC for bookworm.

Agreed; I'd go further, and claim we should get all of these bugs
resolved well before the bookworm freeze.


On a separate question, at the moment e2fsprogs is installing some
files in /sbin and /lib, and others in /usr/sbin and /usr/lib, etc.,
since historically the goal was to allow systems to boot, and bring up
networking, etc., without /usr being mounted.  As a result there are
some breakout lintian warnings:

W: libext2fs-dev: breakout-link usr/lib/x86_64-linux-gnu/libe2p.so -> lib/x86_64-linux-gnu/libe2p.so.2
W: libext2fs-dev: breakout-link usr/lib/x86_64-linux-gnu/libext2fs.so -> lib/x86_64-linux-gnu/libext2fs.so.2
W: ss-dev: breakout-link usr/lib/x86_64-linux-gnu/libss.so -> lib/x86_64-linux-gnu/libss.so.2

Suppose I released a new version of e2fsprogs targeting sid and
bookworm which installs everything in /usr/bin, /usr/sbin, /usr/lib,
etc., instead of splitting up files between /... and /usr/...

   * Is this a desirable thing to do now?  (Post-bullseye release)
   * What are the potential risks of doing this now?
   * Bullseye users might still be assuming that they can boot w/o
     /usr mounted, is that correct?  Hence, for bullseye-backports, I
     would need to be able to support building e2fsprogs packages
     which retain some files being installed in /{bin,sbin,lib},
     etc., and some in /usr/{bin,sbin,lib}.

Apologies for asking these potentially stupid questions, but it would
be great if concrete guidelines could be given for
package maintainers, covering not just what is *mandatory* (which
would be in policy), but also what would be considered *desirable* for
a package maintainer who wants to be helpful/proactive and is trying
to move the ball forward.

- Ted



Re: Steam Deck: good news for Linux gaming, bad news for Debian :(

2021-08-11 Thread Theodore Ts'o
On Wed, Aug 11, 2021 at 04:08:13PM +0200, Vincent Bernat wrote:
> I think we have more systemic issues. I am quite impressed how Nix/NixOS
> is able to pull so many packages and modules with so few people. But
> they use only one workflow, one way to package, one init system, etc.
> Looking at Arch, one workflow, one way to package, one init system, etc.
> Looking at Fedora, one workflow, one way to package, one init system.

I wouldn't call it "issues" per se.  It's all about trade-offs.
Having only one way to do things helps velocity, but it also impedes
flexibility, which some users and developers value.

Having a faster release cycle either requires a lot more engineering
resources (volunteers or paid, depending on the distro) and/or it
forces users to continually update to new major releases if they want
to continue getting security updates.

There still *are* enterprise customers who like the longer release
cycles.  Some of them even use Debian and have privately referred to
it as "their secret advantage".  Whether it is a large number or not,
and whether they are contributing back to the Debian community (and
whether that is important to us) are different questions.

Requiring that all packages use the common distro-shipped shared
libraries (or Perl or Python components), as opposed to shipping their
own, is another engineering tradeoff where there may be some
advantages, but also disadvantages: in terms of effort, pain if the
shared libraries or Perl/Python components laugh at the concept of
"stable APIs", and userspace package upstreams that want to work
across a large number of distributions all supporting different
versions of their dependencies, and/or upstreams that want to move
faster than Debian is willing to release.

These are all tradeoffs, and there is no one right answer.  That may
be painful for those who believe that there is, and it is a hidden
assumption in the blithe assertion that Debian should be "The
Universal OS".  Unfortunately, these tradeoffs mean that there can
*be* no single "Universal OS".  There will always be a need for
different horses for different courses.

Debian has taken a strong opinionated stance on many of these
tradeoffs, and that's fine.  It's not necessarily a problem, except
insofar that some people want Debian to be applicable for a particular
use case, such as for example Steam OS.  It might be the answer is
that Debian simply can't be as Universal as we might aspire to be.

Cheers,

- Ted



Re: Thanks and Decision making working group (was Re: General Resolution: Statement regarding Richard Stallman's readmission to the FSF board result)

2021-04-19 Thread Theodore Ts'o
On Mon, Apr 19, 2021 at 02:05:20PM +0100, Jonathan Dowland wrote:
> On Mon, Apr 19, 2021 at 11:30:48AM +0800, Benda Xu wrote:
> > The winning option "Debian will not issue a public statement on this
> > issue" implies that the majority of DDs is not interested in such
> > non-technical affairs.
> 
> The vote in fact shows the opposite.  That interpretation of the result
> would be true if the majority of people voted for that as their first
> preference. They did not: it was the most-agreed upon preference between
> two ideologically opposite factions. The majority of voting DDs
> expressed a strong preference one way or the other.

I agree with all of the above.  I also can't help feeling that the result
was probably the best one that could have been reached for the project
as a whole.  In which case, the voting system arguably did its job.

The division was not caused by our decision making process; it was
caused by the fact that this was naturally a question for which there
was nothing like unanimity amongst the voting members.

It is unclear that any change in our voting procedures could have made
things any better.

Cheers,

   - Ted



Re: freeipa is in trouble for the next release (again)

2021-03-24 Thread Theodore Ts'o
On Wed, Mar 24, 2021 at 12:33:53PM +0100, Harald Dunkel wrote:
> On 3/24/21 11:05 AM, Andrey Rahmatullin wrote:
> > On Wed, Mar 24, 2021 at 10:02:37AM +0100, Harald Dunkel wrote:
> > > For my own part, I run freeipa-server on CentOS 7. I am not affected
> > > by #970880. I would be very happy with freeipa-client in Bullseye, even
> > > if freeipa-server doesn't make it.
> > The deadline for adding new packages to testing was 2021-02-12.
> 
> So what would be your suggestion?

FWIW, my suggestion would be to attempt to work with the Debian
FreeIPA team (there is a release critical bug open since September
2020, so it's unclear how active the team is currently) to get the
package healthy, and after Bullseye releases, work to get it into
testing, and then into Bullseye-backports.

Cheers,

- Ted




Re: Making Debian available

2021-01-15 Thread Theodore Ts'o
On Fri, Jan 15, 2021 at 09:35:01AM -0800, Russ Allbery wrote:
> 
> The point is to make things easier for our users.  Right now, we're doing
> that for you but not for the users who don't care whether firmware is
> non-free.  I think the idea is that we should consider making things
> easier for both groups of users.  There's no reason to make things worse
> for you and others who want the fully free installer in the process.

I wonder if a compromise would be to make an install CD/DVD which
contains the non-free packages, but which gives the user the option to
abstain from using said non-free packages --- it can explain that the
non-free packages may be needed for some hardware, but why people who
are committed to Free Software might prefer loss of functionality to
using non-free software.

We might still need to continue to ship a CD/DVD which completely
omits the non-free software, since some people might object
to having any non-free bits on their install media, regardless of
whether or not they are used.

But having a non-free installer where the use of non-free packages is
optional, perhaps that might be a sufficient compromise that we could
make that installer more easily findable, instead of leaving it in
a "locked filing cabinet stuck in a disused lavatory with a sign on the
door saying ‘Beware of the Leopard.'".

After all, for people who are at the "non-free is evil and must be
avoided at all costs" end of the spectrum, this installer would help
them get their message out --- and after providing the
pro-Free-Software-at-all-costs message to users who might otherwise
not get it (remember, these are people who had previously been using
Windows 10), we trust users to choose how they come down on the
question.

Just a thought

- Ted



Re: systemd services that are not equivalent to LSB init scripts

2019-07-14 Thread Theodore Ts'o
On Sun, Jul 14, 2019 at 07:23:31PM +0100, Simon McVittie wrote:
> micro-httpd appears to be an example of this - I'm a bit surprised
> there aren't more. Perhaps this indicates limitations in the infrastructure
> around inetd services making it hard to implement "use systemd.socket(5)
> under systemd or inetd otherwise"?

I'll note that it's a bit tricky even in the cron vs systemd.timer use
case.  That's what I was referring to when I said we had to go through
some effort just to enable the "use cron" functionality, since we had
to make sure that this was inhibited in the case where both cron and
systemd are enabled on the system.

So requiring support of non-systemd ecosystems is, in general, going to
require extra testing.  In the case of cron/systemd.timers, this means
testing and/or careful code inspection to make sure the following
cases work (a sketch of the guard follows the list):

* systemd && cron
* systemd && !cron
* !systemd && cron
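
To illustrate what the guard for the "systemd && cron" case looks
like, here is a rough sketch (in Python for brevity; the real
e2scrub_all cron glue is a shell script, and may differ in detail).
The detection relies on the documented check for a running systemd,
the presence of /run/systemd/system (see sd_booted(3)):

    import os, sys

    def systemd_is_running():
        # sd_booted(3) documents this directory as the canonical
        # indicator that systemd is the running init system.
        return os.path.isdir("/run/systemd/system")

    if systemd_is_running():
        # The systemd.timer units own the periodic job; the cron
        # entry must not run it a second time.
        sys.exit(0)

    # ... otherwise, fall through and do the periodic work from cron ...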

Support of non-systemd ecosystems is not going to be free, and in some
cases it is not going to be fun, even though many have asserted it
should be something we strive for.  The challenge is how
we develop the consensus to decide whether or not we force
developers to pay this cost.

And if we don't, is it better to just let this rot where we allow
developers to violate current policy with a wink and a nudge until
it's clear that we do have consensus?  Or do we force them to do the
work?  Or do we somehow go through the pain and effort to try to
determine what that consensus actually is?

- Ted



Re: Is it the job of Lintian to push an agenda?

2019-07-14 Thread Theodore Ts'o
On Sun, Jul 14, 2019 at 11:10:36AM -0300, Chris Lamb wrote:
> Theodore Ts'o wrote:
> 
> > P.S.  I'm going to be adding an override in e2fsprogs for
> > package-supports-alternative-init-but-no-init.d-script because it
> > has false positives
> 
> Regardless of the specifics of this particular package if Lintian
> could feasibly not emit this false-positive, would it surely not be
> more sensible to get this fixed there instead?

There is a bug open against Lintian already, but it's not at all clear
it's solvable short of solving the halting problem.  E2fsprogs is
shipping 5 systemd unit files for which the cron.d file is a rough
substitute.  So the only choice is whether you want false positive or
false negative reports for a Lintian "Important" warning.

I'm getting 5 Important Lintian errors, one for each of the systemd unit
files.  Some of them are associated with a systemd.timer setup, and
some are normal system service unit files.  *All* of them in
combination implement the functionality which is also (mostly)
provided by the cron.d entry and the e2scrub_all_cron shell script.

Just suppressing the warning for systemd.timer files would not be
sufficient.  You'd have to suppress *all* Lintian complaints of this
class if there is at least one timer file and at least one cron.d file
in the package.   But that's going to be subject to false negatives.

Or, you know, you could solve the halting problem.  :-)

> That would not only be a cleaner solution than an override (which you
> would likely just have to remove later...) it would be a general
> kindness in that it could potentially save countless other developers
> undergoing the same manual process as you.

I prefer not to either (a) delay a release of e2fsprogs until this
Lintian bug is solved, one way or another (and it's not clear it can
be solved easily), or (b) deal with people complaining and filing bugs
regarding the Lintian Important report.

So an override does seem to be the best approach, especially given how
charged the whole sysvinit vs systemd controversy is, and my lack of
faith that the Lintian bug is going to be resolved any time soon.  I'd
*much* rather avoid any flames directed at me caused by this false
positive.
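
(For anyone curious, the override itself is just a one-line entry
naming the tag, per my understanding of Lintian's override file
format; the path assumes the override ships in the e2fsprogs binary
package:)

    # debian/e2fsprogs.lintian-overrides
    e2fsprogs: package-supports-alternative-init-but-no-init.d-script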

  - Ted



Re: How to adopt a dead package?

2019-07-14 Thread Theodore Ts'o
On Sun, Jul 14, 2019 at 02:34:50PM -0400, Perry E. Metzger wrote:
> On Sun, 14 Jul 2019 23:11:46 +0500 Andrey Rahmatullin
>  wrote:
> > > If I wanted to adopt the package and get it back into Debian, what
> > > would I need to do? I haven't been a package maintainer before. I
> > > presume there's a document somewhere I can read with detailed
> > > instructions?  
> > Generic instructions: https://mentors.debian.net/intro-maintainers
> > Reintroducing packages:
> > https://www.debian.org/doc/manuals/developers-reference/ch05.en.html#reintroducing-pkgs
> 
> Thank you!

Hey Perry,

Long time no chat!

Two things to highlight.  First, bozohttpd has been out of Debian for
about 5 years, so it's worth taking a quick look through the upgrading
checklist in the Debian Policy document for changes since its last
Standards conformance claim, which is against version 3.9.3 (we're now
at version 4.4):

   https://www.debian.org/doc/debian-policy/upgrading-checklist.html


Secondly, one good thing is that the packaging for bozohttpd appears
to be very simple, if obsolete.  And fortunately, while debhelper compat
level 5 is deprecated, it's at least still supported by modern
versions of debhelper (which is version 12; see the COMPATIBILITY
LEVELS section of the debhelper man page for more details).

The New Maintainer's Guide is going to give you examples using the new
recommended "dh" template, where the core of the debian/rules makefile
looks like this:

%:
dh $@

The last version of bozohttpd's debian/rules file runs explicit
debhelper commands, e.g.:

binary-arch: build install
dh_testdir
dh_testroot
dh_installdocs
...

The developer's reference guide recommends using the existing
packaging when reintroducing a package.  And that's probably good
advice, although it may cause some initial confusion when you read the
New Maintainer's Guide, since it assumes you are packaging a new
package, where there is no legacy packaging effort to use as a base.

That being said, it might be simpler to guarantee packaging policy
compliance if you were to start from scratch; the primary files to
which you'd like to pay special attention are the current
debian/{pre,post}{inst,rm} files.

Finally, if you need someone to help be a mentor and sponsor the
upload, or if you have any questions, please feel free to contact me
off-line; I'd be happy to help.

Cheers,

- Ted



Re: libfuse3-dev is a virtual package?

2019-07-14 Thread Theodore Ts'o
On Sun, Jul 14, 2019 at 07:12:59PM +0200, Sven Joachim wrote:
> This can happen if you have assigned a negative Pin-Priority to
> libfuse3-dev.  According to apt_preferences(5), a Priority < 0 "prevents
> the version from being installed", and apparently apt achieves this by
> pretending that the package is not there at all.

Thanks for the hint; you called it exactly.  I had this in my apt
preferences:

Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release o=Debian
Pin-Priority: -10

... and I'm currently still on Buster, having not moved on to Bullseye
yet.
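
(For anyone hitting the same thing: per apt_preferences(5), a priority
between 0 and 100 causes a version to be installed only if no version
of the package is already installed, so assuming the intent was merely
to deprioritize Debian-origin packages rather than forbid them, a
stanza like this avoids the surprise:)

    Package: *
    Pin: release o=Debian
    Pin-Priority: 10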

- Ted



libfuse3-dev is a virtual package?

2019-07-14 Thread Theodore Ts'o
So this is weird.  I can't install libfuse3-dev on my buster system:

# apt install libfuse3-dev
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Package libfuse3-dev is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'libfuse3-dev' has no installation candidate

Apt seems to think it is a virtual package:

# apt show libfuse3-dev
Package: libfuse3-dev
State: not a real package (virtual)
N: Can't select candidate version from package libfuse3-dev as it has no 
candidate
N: There is 1 additional record. Please use the '-a' switch to see it
N: No packages found
# apt show -a libfuse3-dev
Package: libfuse3-dev
Version: 3.4.1-1
Priority: optional
Section: libdevel
Source: fuse3
Maintainer: Laszlo Boszormenyi (GCS) 
Installed-Size: 668 kB
Depends: libfuse3-3 (= 3.4.1-1), libselinux-dev
Suggests: fuse
Homepage: https://github.com/libfuse/libfuse/wiki
Tag: devel::library, role::devel-lib
Download-Size: 128 kB
APT-Sources: https://mirrors.kernel.org/debian buster/main amd64 Packages
Description: Filesystem in Userspace (development) (3.x version)
 Filesystem in Userspace (FUSE) is a simple interface for userspace programs to
 export a virtual filesystem to the Linux kernel. It also aims to provide a
 secure method for non privileged users to create and mount their own filesystem
 implementations.
 .
 This package contains the development files.

But as near as I can tell, it's a real package:

https://packages.debian.org/buster/libfuse3-dev

Help?

- Ted



Re: Is it the job of Lintian to push an agenda?

2019-07-14 Thread Theodore Ts'o
On Sat, Jul 13, 2019 at 02:22:01PM -0700, Russ Allbery wrote:
> Matthias Klumpp  writes:
> 
> > With two Debian stable releases defaulting to systemd now, I think a
> > solid case could be made to at least relax the "must" requirement to a
> > "should" in policy (but that should better go to the respective bug
> > report).
> 
> The Policy process is not equipped to deal with this because that process
> requires fairly consensus, and I don't believe that's possible to reach on
> this topic.
> 
> I don't know what decision-making process the project should use here: a
> big thread on debian-devel (wow, that sounds fun), a bunch of in-person
> conversations at DebConf (probably way more productive but excludes some
> folks), the TC (tried and didn't work very well), a GR, some new mediated
> consensus process, or what.  Or maybe some working group that goes all-in
> on creating a "good enough" automated translation from unit files to
> sysvinit scripts and we support sysvinit that way and thereby dodge the
> problem.

The alternative seems to be a large number of package maintainers
willfully ignoring a particular reading of the Policy, whether or not
that reading of the policy is "correct".  Hopefully we can
avoid bug priority escalation/de-escalation wars over what might or
might not be a policy violation.   Oh joy

- Ted

P.S.  I'm going to be adding an override in e2fsprogs for
package-supports-alternative-init-but-no-init.d-script because it
has false positives, regardless of its claim:

N:Severity: important, Certainty: certain

It most *definitely* is not certain.  We went through quite a bit of
trouble providing alternative functionality via cron, and not via
(only) systemd timers.  I will admit the functionality is slightly
better if you are using systemd, but as the saying goes, "patches
gratefully accepted".  Whining for developers to do extra work via
Debian Policy is, well, not.
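
For the record, the override itself is just a line in the binary
package's lintian-overrides file; something like this in
debian/e2fsprogs.lintian-overrides:

e2fsprogs: package-supports-alternative-init-but-no-init.d-script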

And I say all of this not being a systemd fan.  But the vast majority
of the Linux ecosystem has made a choice, and we should just move *on*.



Using dh causes configure to be run twice?

2019-07-09 Thread Theodore Ts'o
In my attempt to convert e2fsprogs's debian/rules to use dh, I'm
running into yet another frustration with dh, which is that it insists
on running the configure script twice.

The problem is that dh is trying to use build-arch and build-indep:

% dh build --no-act
   debian/rules build-arch
   debian/rules build-indep

And build-arch and build-indep both want to run configure:

% dh build-arch --no-act
   dh_testdir -a
   dh_update_autotools_config -a
   debian/rules override_dh_auto_configure
   debian/rules override_dh_auto_build
   create-stamp debian/debhelper-build-stamp

% dh build-indep --no-act
   dh_testdir -i
   dh_update_autotools_config -i
   debian/rules override_dh_auto_configure
   debian/rules override_dh_auto_build
   create-stamp debian/debhelper-build-stamp

This seems amazingly non-optimal.

I tried to see how other packages work around this misfeature, and I
see that openssh just hacks things to make the second
dh_auto_configure a no-op:

override_dh_auto_configure-arch:
	dh_auto_configure -Bdebian/build-deb -- $(confflags)
ifeq ($(filter noudeb,$(DEB_BUILD_PROFILES)),)
	dh_auto_configure -Bdebian/build-udeb -- $(confflags_udeb)
	# Avoid libnsl linkage. Ugh.
	perl -pi -e 's/ +-lnsl//' debian/build-udeb/config.status
	cd debian/build-udeb && ./config.status
endif

override_dh_auto_configure-indep:

RLY?

That seems like an amazing hack.  Is there no other way to work around
what appears to be massive mis-design in dh?
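
The only workaround I've found so far is the same basic trick: split
the configure override into -arch and -indep variants and leave the
-indep one empty.  An untested sketch, for a source package whose
architecture-independent half needs no configure step (confflags here
is just a stand-in for whatever ./configure arguments the package
actually needs):

%:
	dh $@

override_dh_auto_configure-arch:
	dh_auto_configure -- $(confflags)

# The arch-indep packages need nothing configured, so make the indep
# pass a no-op:
override_dh_auto_configure-indep: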

If the goal is to make moving to dh a mandate, while keeping work on
Debian fun, we desperately need better documentation on how to use dh
for real-world packages, and not just simple, trivial packages   :-(

  - Ted



Re: Could we generate d/control instead of working with "assembly level code" directly (was: Re: The noudeb build profile and dh-only rules files)

2019-07-09 Thread Theodore Ts'o
On Tue, Jul 09, 2019 at 12:24:40PM +0200, Simon Richter wrote:
> Your proposal of generating some of the fields doesn't affect the API
> itself, as long as the fields are populated at the right time. We don't
> have a mechanism for updating the control file at build time, because any
> part of the build process that would be able to do so is after the part
> where the control file is consumed for the first time, so it would give an
> inconsistent view.

I used to handle this back when I had the goal of making sure that
e2fsprogs from the git repository could successfully build as far back
as oldoldstable.  The idea was that sometimes people would want to be
able to get an updated version of e2fsprogs with all of the bug fixes;
and while I'm not willing to manually extract a large number of bug
fixes and backport them to ancient distro versions of Debian and
Ubuntu (our backport process to Debian Obsolete^H^H^H^H^H^H^H^H
Stable is *not* fun for me, as far as I'm concerned), I could at least
make sure that modern versions of e2fsprogs could be trivially
repackaged for ancient versions of Debian/Ubuntu.

The way I did this was to make a default target in debian/rules called
debian-files, which would create (or re-create) debian/control from a
debian/control.in file.  Then to build e2fsprogs on debian, one would
first unpack the e2fsprogs' upstream tarfile distribution, or check it
out from git, and then run:

./debian/rules
dpkg-buildpackage

The Debian source package would have the automagically generated
debian/control file, so it was fully compatible with all of Debian's
package tooling, but it would also have the debian/control.in file,
which as far as *I* was concerned was the preferred form for
modification.
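
In its simplest form the debian-files target is nothing fancy; a
stripped-down sketch (the @VERSION@ substitution is made up for
illustration --- the real e2fsprogs rule did more, including m4
expansion):

debian-files: debian/control

debian/control: debian/control.in
	sed -e "s/@VERSION@/$(VERSION)/g" debian/control.in > debian/control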

Cheers,

- Ted



Re: The noudeb build profile and dh-only rules files

2019-07-08 Thread Theodore Ts'o
On Mon, Jul 08, 2019 at 07:28:50PM +0100, Colin Watson wrote:
> 
> Per my other reply, you may find that it isn't that painful after all
> once you find the right approach.  For instance, while a separate udeb
> build pass does make
> https://salsa.debian.org/ssh-team/openssh/blob/master/debian/rules a
> little bit more complicated, I wouldn't say it's something I find myself
> having to think about very often.

Thanks, that's really helpful.  One of the really frustrating things
I've found about trying to use dh is that there is a real lack of
examples which are more complicated than:

#!/usr/bin/make -f
#
# See?   dh is easy-peasy!

%:
	dh $@

Sure, there are a few examples using one or two override declarations,
but trying to use dh on a non-trivial package is non-obvious,
given the current documentation.  Some more advanced tutorials, or
some links to good exemplars of how to use dh in "real world",
non-trivial packaging setups, would be really good.  It looks like
openssh is a good example, but I'm sure there must be others.


> > P.S.  If anyone thinks that increasing the size of the debian
> > installer by 145k is unacceptable, please let me know now
> 
> This is something you'd need to run past debian-boot@, as there may well
> be particular images for which it still matters (I haven't kept up, but
> it's certainly something they'd want to be informed of).

Ack, I'll do that.  I did try to do some research, and I think I saw a
netinstall image which was 48 MiB, but I didn't see anything smaller.
Of course, I haven't checked all architectures, so checking with
debian-boot would be good.

> Oh, and apologies if this is obvious, but the other reason a separate
> udeb build pass may be necessary is if certain configure options make
> the code actually not work in the context of d-i.  This is the case for
> openssh (for example, it builds an sshd that can be used as part of
> d-i's network-console feature, but PAM wouldn't work in that context).
> I don't know whether it's the case for e2fsprogs.

Yeah, it wasn't the case of e2fsprogs not working (I've always tried
to make e2fsprogs very self-contained with an absolute minimal number
of dependencies, since it has to work before /usr is mounted.)  It was
doing things like disabling gettext/NLS which was responsible for the
bulk of the 145k decrease in size, IIRC.

As I said, back in the days of installation floppies, it was clearly
worth it to do a second pass build just to save bytes.  But these
days?  Realistically, e2fsprogs-udeb has been growing in size as ext4
has become a more featureful file system; for example, between Debian
Jessie and Debian Buster, e2fsprogs-udeb has grown by 142k.  And back
in the ext2/ext3 days of installation floppies, before e2fsprogs
supported 64-bit block numbers, ext4 features like extents, inline
data, bigalloc, metadata checksums, etc., e2fsprogs-udeb was even
smaller.

- Ted




Re: The noudeb build profile and dh-only rules files

2019-07-08 Thread Theodore Ts'o
On Mon, Jul 08, 2019 at 07:36:30PM +0200, Samuel Thibault wrote:
> Hello,
> 
> Theodore Ts'o, le lun. 08 juil. 2019 13:25:32 -0400, a écrit:
> > How important is noudeb, and why is defined in the first place?
> 
> My usage of noudeb is mostly to avoid the two-times-longer build time 
>

It used to be that I built e2fsprogs twice; once for udebs, and once
for the "normal" build.  I'm planning on ripping that out as being
complexity that seems incredibly painful to convert to using dh, and
the cost seems to be growing the installed size of e2fsprogs-udeb by
145k (or roughly 15%).

Back in the days of boot/root installation floppies, saving every last
byte was clearly important.  My plan is to drop it to save developer
maintenance headache, and it also avoids the double-compilation build
time extension.  (I assume that's what you were referring to when you
mentioned "avoid the two-times-longer build time", right?)

   - Ted

P.S.  If anyone thinks that increasing the size of the debian
installer by 145k is unacceptable, please let me know now



The noudeb build profile and dh-only rules files

2019-07-08 Thread Theodore Ts'o
I'm in the middle of an effort to simplify the debian/rules file for
e2fsprogs so that someday, maybe, I'll be able to convert it to use
dh.  One of the things which I noticed while trying to rip things out
of debian/rules to make the dh conversion easier (possible?) was the
support for noudeb.

How important is noudeb, and why is it defined in the first place?

And will debhelper do the right thing in terms of filtering out
udeb packages if noudeb is specified in DEB_BUILD_PROFILES?  If
support of noudeb is free in the brave new dh-only world, that's
great.

If it has to be hacked in manually, how important is it to support
noudeb?

I've tried to do some web searches to answer the above questions, but
I'm afraid my Google-fu has failed me.  So maybe this is something
that can be documented in a few places, including the debhelper man
pages, and perhaps, in the BuildProfileSpec Debian Wiki page?
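
For reference, my reading of the BuildProfileSpec page so far is that
the udeb stanza would be annotated along these lines (untested, so
treat it as an assumption on my part):

Package: e2fsprogs-udeb
Package-Type: udeb
Build-Profiles: <!noudeb>

... with a noudeb build then requested via something like
"DEB_BUILD_PROFILES=noudeb dpkg-buildpackage -us -uc".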

Many thanks!!

- Ted



Re: Survey: git packaging practices / repository format

2019-06-23 Thread Theodore Ts'o
On Fri, Jun 21, 2019 at 05:59:52PM +0100, Ian Jackson wrote:
> > There's a variant of this which is to grab updates from upstream using
> > "git cherry-pick -x COMMIT_ID ; git format-patch -o debian/patches -1 
> > COMMIT_ID".
> > 
> > At the moment I'm updating debian/patches/series by hand, but I really
> > should automate all of the above.
> 
> Thanks for the reply.  I think this approach is novel to me.
> 
> I think in my third column there, "Tools for manipulating delta from
> upstream, building .dsc, etc.", "git merge" is not entirely right to
> describe this approach, and certainly `1.0-with-diff' is wrong.
> 
> How do you update to a new upstream version while preserving your
> delta queue ?  Just git merge with an upstream seems like it might
> work sometimes but at some point the patches will need to be
> refreshed...

Well, I'm cheating a bit, since I *am* upstream for the package in
question.  In that case, or in the case of people who can follow an
"upstream first" policy, when you sync up with upstream, by definition
you can just completely empty debian/patches.

I do something similar with my ext4 kernel patch queue; when I sync
with upstream, I know which patches haven't been sent upstream ---
they are at the end of the patch queue in what I term the "unstable
portion" of the patch queue, so it's easy enough to preserve then when
I rebase from 5.1-rc2 to 5.2-rc2; and most of the time either no patch
updating is necessary, or when there is, I just fix up the patches and
commit them as part of the rebase commit where I delete the patches in
the "stable portion" of the patch queue, which are upstream, and the
update the series files so it looks something like:

# v5.2-rc2        <-- this is the "origin" of the ext4.git tree


# unstable patches


unstable-patch-1
unstable-patch-2

- Ted



Re: Stalls due to insufficient randomness in cloud images

2019-06-03 Thread Theodore Ts'o
On Mon, Jun 03, 2019 at 02:37:48PM +0200, Marco d'Itri wrote:
> On Jun 03, Bastian Blank  wrote:
> 
> > Does anyone know what RHEL8 (which should have this problem as well)
> > does to "fix" this problem?
> RHEL8 enables by default rngd from rng-tools, which looks much better to 
> me than haveged.

rngd is indeed much better than haveged, which as the Arch Wiki has
observed[1]:

   Warning: The quality of the generated entropy is not guaranteed and
   sometimes contested (see LCE: Do not play dice with random numbers[2]
   and Is it appropriate to use haveged as a source of entropy on
   virtual machines[3]?). Use it at your own risk or use it with a
   hardware based random number generator with the rng-tools (see
   #Alternative section)

[1] https://wiki.archlinux.org/index.php/Haveged
[2] https://lwn.net/Articles/525459/
[3] https://security.stackexchange.com/questions/34523/is-it-appropriate-to-use-haveged-as-a-source-of-entropy-on-virtual-machines

Unfortunately, even though rngd is better, it's not going to solve
the problem which the OP articulated.

Which is to say, it will use a hardware random number generator if
present, and it will use RDRAND if present --- both of which are good
if you trust the hardware --- but if the host is misconfigured, it's
not going to help you.

My opinion is we should darned well make sure the host is configured
correctly, instead of playing dice with the guest VM's security.  But
there are people who are comforted by the "Yeah, I know the CPU is a
deterministic system for the most part.  But the CPU's cache
architecture is ***sooo*** complicated that *I* can't figure it
out, so it *MUST* be secure."
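
Configuring the host correctly isn't even hard.  For a QEMU/KVM host,
it's along the lines of the following, which feeds the host's entropy
pool to the guest via virtio-rng:

qemu-system-x86_64 ... \
    -object rng-random,id=rng0,filename=/dev/urandom \
    -device virtio-rng-pci,rng=rng0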

- Ted



Re: ZFS in Buster

2019-06-01 Thread Theodore Ts'o
On Tue, May 28, 2019 at 06:43:55PM +0200, Dan wrote:
> 
> The ZFS developers proposed the Linux developers to rewrite the whole
> ZFS code and use GPL, but surprisingly the linux developers didn't
> accept. See below:
> https://github.com/zfsonlinux/zfs/issues/8314

I've read the thread, and there are a lot of misunderstandings among
many of the key people involved.  There also seems to be a lot of
misunderstanding about where the "hostility" is coming from --- it is
*not* about "sticking it to Oracle because they chose the CDDL".

Also, it's not accurate that "linux developers didn't accept".  Ryan
sent a query to Linus, and Linus didn't respond.  I don't know if he
sent a single message, or whether he retried a couple of times.  A
failure to respond is not the same as a rejection.  There are plenty
of reasons why Linus might not have responded.

That being said, I don't propose to relitigate that whole thread here.
If people really care, feel free to contact me privately.  Or it could
be the case that since Ryan has closed the issue, the ZoL community
has already moved on.  Which is also a fine outcome from the
perspective of most of the upstream Linux developers that I've talked
to; not because they hold any particular animus against ZoL.  It's
just that no one feels particularly interested in giving ZoL any kind
of special treatment --- the hostility around bypassing the
requirements of the GPL is about exactly that; not the identity of the
company or project trying to do those particular things.

As Sam has noted, even in the most permissive interpretation, which is
that the Kernel has chosen to draw the lines around GPL compliance in
a different place as the FSF, does not mean that there are *no* lines.
Indeed, there are lines, and when they are violated, there will be
hostility and a refusal to cooperate, and ZoL is getting no better
*or* no worse treatment in that regard.

Bringing this back to Debian, my perception is that while there is not
unanimity about how the moral and legal requirements of the GPL
should be understood within Debian (just as there is also not
unanimity in the kernel community), the center of gravity within
Debian tends to be weighted towards the less permissive
interpretations of the GPL compared to the Linux Kernel community as a
whole.  Which is to say, if you can't get the Linux Kernel community
folks to agree to a certain flexibility in evading the
CDDL/GPL license compatibility problems using techniques like "GPL
condoms", it is even less likely that the Debian community is going to
be willing to be so accommodating.

I also agree with Sam that the only way to know for sure is to have a
GR.  So you don't have to take our word for it; but please do
understand it's going to take a lot of community resources to make
that determination.  And there might be better uses of that time and
energy.

Regards,

- Ted



Re: Survey: git packaging practices / repository format

2019-06-01 Thread Theodore Ts'o
On Tue, May 28, 2019 at 04:51:10PM +0100, Ian Jackson wrote:
> 
>  Modified              Direct changes       git merge
>   upstream files,      to upstream files    (.dsc: 1.0-with-diff or
>   plus debian/*.                             single-debian-patch)
>  Maybe d/patches, depending.
>  History has direct merges from upstream.

There's a variant of this which is to grab updates from upstream using
"git cherry-pick -x COMMIT_ID ; git format-patch -o debian/patches -1 
COMMIT_ID".

At the moment I'm updating debian/patches/series by hand, but I really
should automate all of the above.
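
Something along these lines would probably do it (untested sketch;
the function name is made up):

backport() {
    git cherry-pick -x "$1" &&
    p=$(git format-patch -o debian/patches -1 "$1") &&
    basename "$p" >> debian/patches/series
}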

- Ted



Re: Buster/XFCE unlock screen is blank

2019-06-01 Thread Theodore Ts'o
On Sat, Jun 01, 2019 at 06:16:58AM +0530, Raj Kiran Grandhi wrote:
> 
> In a fresh install of Buster with XFCE desktop, locking the screen
> blanks the monitor and the monitor enters a power save state. After
> that, neither moving the mouse nor typing on the keyboard would turn
> the monitor back on.
> I could find two ways to get the display back on:
> 
> 1. Typing the password without any visual feedback (while the monitor
> continues to be in the power save state) unlocks the screen and the user
> session is displayed normally.
> 
> 2. Switching to another VT, say vt1 or vt2 turns the monitor back on
> and on switching
> back to the vt of the original session the unlock prompt is displayed
> normally and the screen can be unlocked.

There's another workaround (which is the one I use):

xset s off

This has other consequences as well, of course, but I tend to suspend
my laptop if it's ever going to be left alone, particularly if it's
running on battery.

And once I found a workaround which worked for my laptop, I was too
lazy to find a more "proper" fix.  :-)

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2018-01-07 Thread Theodore Ts'o
On Sat, Jan 06, 2018 at 02:25:55AM +0100, Manuel A. Fernandez Montecelo wrote:
> 2018-01-02 03:10 Theodore Ts'o:
> > My only real concern is whether this might complicate building the
> > latest version of e2fsprogs for stable and old-stable for
> > debian-backports.
> 
> I think that it's fine, they've been supported for a long time and
> stable and old-stable should be covered, not sure about old-old-stable
> without backports.

It's mostly fine.  The one problem I found was that I tried to add the
*-dbg packages to the control file such that they would only be built
if the build profile pkg.e2fsprogs.legacy-dbg was active.
Unfortunately, when I tried doing a source-only upload of e2fsprogs,
the DAK software complained that "e2fsprogs-dbg" et.al. were new
packages that weren't supported with a source-only upload.

That's apparently because it was parsing the control file, found the
package declaration, and assumed that build profiles will only
suppress packages (e.g., fuse2fs or documentation packages).
It didn't assume that a non-standard build profile such as
pkg.e2fsprogs.legacy-dbg would *enable* a package, and that under
normal builds, the *-dbg packages wouldn't be built.  Which is fair
enough, there's no way DAK could determine that without actually
building the source-only upload.

So I had to move the *-dbg package definitions into a
debian/control.legacy-dbg file, and mutate debian/control in order to
both support jessie backports, and keep DAK happy.

Which is sad, but it's a solution which works --- and I understand why
there really isn't any other way for DAK to handle this case.

Cheers,

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2018-01-03 Thread Theodore Ts'o
On Tue, Jan 02, 2018 at 12:38:55AM +0100, Manuel A. Fernandez Montecelo wrote:
> 
> Lately architectures tend to use automatic bootstrapping at least for
> some of the initial dependencies.  Adding support for build profiles
> (would be something like pkg.e2fsprogs.nofuse in this case) can help to
> build just by using env variables when invoking dpkg-buildpackage or
> other build tools.
> 
> Would you accept patches to achieve this in e2fsprogs?  It would
> probably be quite clean, not complicating/obfuscating the packaging
> files too much, usually only 2~10 lines (but I didn't look specifically
> into this package yet).

With some help from Simon McVittie, you should be able to use the
build profile pkg.e2fsprogs.no-fuse2fs in the just-uploaded 1.43.8-2
version of e2fsprogs.  It seems to work for me; please let me know if
it does what you need.
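
That is, something like:

DEB_BUILD_PROFILES=pkg.e2fsprogs.no-fuse2fs dpkg-buildpackage -us -uc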

Cheers,

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2018-01-03 Thread Theodore Ts'o
On Wed, Jan 03, 2018 at 08:16:35PM +, Simon McVittie wrote:
> > Has there been any thought about having the build
> > profiles framework support having the rules file autoselect a
> > build profile based on the build environment?
> 
> I suspect that might be a "considered and rejected" sort of thing,
> because toolchain maintainers want the same command to always do more
> or less the same thing. If you want some automation for enabling special
> build profiles, I'd suggest wrapping it around the outside instead.
> That also means it's allowed to edit debian/control if it needs to.

Actually, after doing some experimentation, I was able to make this
work.  From the debian/rules file:

USE_DBGSYM ?= $(shell if dpkg --compare-versions $(DH_VERSION) ">=" 9.20160114 ; then echo yes ; fi)

ifeq ($(USE_DBGSYM),yes)
dh_strip_args = -p$(1) --dbgsym-migration='$(1)-dbg (<= 1.43-1)'
dh_strip_args2 = -p$(1) --dbgsym-migration='$(2)-dbg (<= 1.43-1)'
else
dh_strip_args = -p$(1) --dbg-package=$(1)-dbg
dh_strip_args2 = -p$(1) --dbg-package=$(2)-dbg
DBG_PACKAGES += -pe2fsprogs-dbg -pe2fslibs-dbg -plibcomerr2-dbg -plibss2-dbg
export DEB_BUILD_PROFILES += pkg.e2fsprogs.legacy-dbg
endif

Which is actually cool, because it means you can do an "apt-get source
e2fsprogs", "schroot -c jessie-amd64", and then run "dpkg-buildpackage
-us -uc --changes-option=-S" and have the right thing happen
automagically.

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2018-01-03 Thread Theodore Ts'o
On Mon, Jan 01, 2018 at 11:43:23PM +, Simon McVittie wrote:
> 
> Perhaps you could convert this into a pkg.e2fsprogs.nofuse build profile?
> 
> This would look something like the attached (untested!) patches.

Thanks, I'll give this a try.  From the BuildProfile web page and
looking at the package versions, support for it appears to be in
Jessie, or at least "preliminary support" is present.  Is that
correct?  Are there any gotchas I should be aware of when backporting
to Jessie?

It looks like one thing my scheme supports that build profiles
do not is that when backporting to Jessie, I can just check out the
tree from git, and then run:

./debian/rules debian-files
dpkg-buildpackage

... and it will autodetect that the system doesn't support *-dbgsym
packages, and create e2fsprogs-dbg, e2fslibs-dbg, et.al packages for
Jessie instead.

Since the autodetection isn't there, I would have to just manually
build with some kind of pkg.e2fsprogs.old-dbg build profile, or some
such, instead.  I guess it's about the same level of inconvenience as
needing to run ./debian/rules in order to generate the control file
from control.in.

My "./debian-rules debian-files" scheme used to do a lot more,
including rewriting several other files in debian/ using m4, back when
we had just migrated libuuid and libblkid from e2fsprogs to
util-linux, and I wanted to support backports to stable, old-stable,
and Ubuntu LTS.  I did like the fact that it could detect which build
option (now, "build profile") to use automatically, so the folks
building backports for various Debian derivatives didn't need to do
anything special.  Has there been any thought about having the build
profiles framework support having the rules file autoselect a
build profile based on the build environment?

Cheers,

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2018-01-01 Thread Theodore Ts'o
On Tue, Jan 02, 2018 at 12:38:55AM +0100, Manuel A. Fernandez Montecelo wrote:
> Lately architectures tend to use automatic bootstrapping at least for
> some of the initial dependencies.  Adding support for build profiles
> (would be something like pkg.e2fsprogs.nofuse in this case) can help to
> build just by using env variables when invoking dpkg-buildpackage or
> other build tools.
> 
> Would you accept patches to achieve this in e2fsprogs?  It would
> probably be quite clean, not complicating/obfuscating the packaging
> files too much, usually only 2~10 lines (but I didn't look specifically
> into this package yet).

Sure.  The debian/rules.custom support that I put in predates the
Debian build profiles spec, but it's basically the same idea.  So it
should be pretty simple to migrate things to use the build profile
concept; it's just a matter of wiring up things slightly differently
in debian/rules.

I haven't looked at the build profiles spec super-closely yet, but since
it looks like there is core infrastructural support for it in dpkg,
debhelpers, and friends, it's no longer necessary to run
debian/control.in through m4 to generate debian/control (which is how
I did things sans dpkg support).

Send me a patch, either sent to linux-e...@vger.kernel.org, or as a
Debian bug, and I'll happily take it.

My only real concern is whether this might complicate building the
latest version of e2fsprogs for stable and old-stable for
debian-backports.

Cheers,

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2018-01-01 Thread Theodore Ts'o
On Mon, Nov 13, 2017 at 08:35:10PM +0100, Helmut Grohne wrote:
> > To be clear, the key metric for your specific goal is the reduction
> > of the _source_ package count since the goal is to reduce the number
> > of packages which have to be built by "hand" (or by script), before
> > you can create a sbuild/pbuild build chroot, correct?
> 
> Correct. Unless I am mistaken, removing e2fsprogs from the build set
> also removes fuse.

Apologies for the thread necromancy, but I was going through old bugs
and old todo items for the e2fsprogs Debian package, and I was rereading
this thread as part of that.

This probably doesn't help much, but for people who are doing things
by hand, you can skip the dependency on fuse by unpacking the
e2fsprogs source packaging, adding the file debian/rules.custom which
contains the single line, "SKIP_FUSE2FS=yes", and building by hand.

It currently doesn't automatically fix up the control file, but I can
set things up so that adding the rules.custom file with
SKIP_FUSE2FS=yes, and running "./debian/rules debian-files" will
automatically rewrite the control file dropping the fuse2fs
dependencies and the "fuse2fs" package from the control.in file.

Which might not matter that much, since when bootstrapping a new
architecture, it's all done manually anyway, so having a properly
updated debian/control file might not matter that much.

(The rules.custom infrastructure in e2fsprogs's debian/rules file was
something I had put in a while ago to support building subsets of
e2fsprogs for certain specialized use cases at $WORK.  It was also
used way back when to support building new versions of e2fsprogs on
extremely ancient old-old-old-old-stable.)

Yeah, it's horribly manual, but when you need to bootstrap a new
architecture, it's all manual *anyway*.  And yes, it's a workaround
compared to dropping e2fsprogs from the essential set (which I
still support), but it's a workaround that works today.  I suppose the
real problem is that a random developer who is trying to bootstrap
Debian on a new architecture won't know about this trick, but in case
it's helpful, I thought I would mention it.  (Waving to the RISC-V
folks.)

Cheers,

- Ted






Re: recommends for apparmor in newest linux-image-4.13

2017-12-10 Thread Theodore Ts'o
On Wed, Dec 06, 2017 at 11:24:45AM +0100, Laurent Bigonville wrote:
> The SELinux policy could be altered to either run everything that we know is
> not ready to be confined in an unconfined domain or put that domain in
> permissive (which would result in a lot of denials being logged), so it's
> possible to behave more or less the same way as AppArmor depending of how
> the policy is designed.

It "could" be altered the same way that anyone "could" modify a
sendmail.cf file.  Someone "could" create a program which plays the
game of Go written raw assembly language.

If it "could" be done, why hasn't been done in the past decade?

- Ted



Re: recommends for apparmor in newest linux-image-4.13

2017-12-04 Thread Theodore Ts'o
On Mon, Dec 04, 2017 at 05:56:45PM +, Ian Jackson wrote:
> Theodore Ts'o writes ("Re: recommends for apparmor in newest 
> linux-image-4.13"):
> > [something about] security-weenies
> 
> IMO this language is completely inappropriate in any formal Debian
> context.

The second definition from http://whatis.techtarget.com/definition/weenie

2) In the context of program development and among the "hackerdom"
that Raymond chronicles, the term weenie can be ascribed
respectfully to someone who is highly knowledgeable, intensely
committed to, or even just employed on a particular endeavor or in
a particular operating system culture. For example, a "UNIX
weenie" may mean someone who is an expert at using or modifying
UNIX.  But, depending on the context, it could also mean a "UNIX
bigot."

> (I have to disclose an interest: I have a PhD in computer security, so
> maybe I am one of these "weenies"?)

Given that I served on the Security Area Directorate of the IETF for
close to ten years, the term could also be used to describe me.  But
as I said, if it's too hard for *me* to figure out how to make SELinux
work on my development laptop, perhaps folks would insist that I turn
in my security weenie union card...

I don't consider it offensive, just as I don't consider the term "hacker"
to be offensive.

- Ted



Re: recommends for apparmor in newest linux-image-4.13

2017-12-03 Thread Theodore Ts'o
On Wed, Nov 29, 2017 at 11:51:55AM -0800, Russ Allbery wrote:
> Michael Stone  writes:
> > On Tue, Nov 28, 2017 at 08:22:50PM -0800, Russ Allbery wrote:
> 
> >> Ubuntu has successfully shipped with AppArmor enabled.
> 
> > For all the packages in debian? Cool! That will save a lot of work.
> 
> Yes?  I mean, most of them don't have rules, so it doesn't do anything,
> but that's how we start.  But indeed, Ubuntu has already done a ton of
> work here, so it *does* save us quite a bit of work.

The fact that AppArmor doesn't do anything if it doesn't have any
rules is why we have a chance of enabling it by default.  The problem
with SELinux is that it's "secure" by the security-weenies' definition
of secure --- that is, if there isn't provision made for a particular
application, with SELinux that application is secure the way a
computer with thermite applied to the hard drive is secure --- it
simply doesn't work.

Every few years, I've tried turning on SELinux on my development
laptop.  After it completely fails, and after trying to make it work
for just the subset of applications that I care about, I give up and
turn it off again.  Having some kind of LSM enabled is, as far as I am
concerned, better than nothing.

(And I speak as someone who chaired the IP Security working group at
the IETF, and was the technical lead for the MIT Kerberos V5 effort.
If admitting that I'm too dumb or don't have enough patience to figure
out how to make SELinux work on my development laptop means that
someone is going to revoke my security-weenies' union card, I'm happy
to turn it in.)

- Ted



Re: ISO download difficult (was: Debian Stretch new user report (vs Linux Mint))

2017-12-03 Thread Theodore Ts'o
On Sat, Dec 02, 2017 at 11:59:08AM +, Sue Spence wrote:
> On 2 December 2017 at 11:49, Holger Levsen  wrote:
> 
> > On Sat, Dec 02, 2017 at 12:32:29PM +0100, Geert Stappers wrote:
> > > URL is https://cdimage.debian.org/cdimage/unofficial/non-free/
> > cd-including-firmware/
> >
> > so who will make nonfree.debian.net and non-free.debian.net
> > http-redirect to that URL? :)
>
> I'll be writing a blog post this weekend which links to it, if only for my
> own sake. I get the joke of course, but Debian is free with or without the
> firmware so I wouldn't set up such a redirect out of my own pedantic
> notions of correctness, never mind everyone else's. :)

How about https://works-on-pcs.debian.org?  :-)

Personally, as a developer, I will say there is one benefit of being
so user-unfriendly that the usable ISO is hidden under the
beware-of-leopard sign, which is that it serves as a "you have to be
this technically aware to install debian" barrier.  As a result, we
don't have the low signal-to-noise bug reports that are all-too-common
on Ubuntu's launchpad.net.

So if we want to reform our "FSF-ly correct freedom is more important
than being friendly to novices" attitude (and it's not clear Debian as
a whole agrees with this sentiment), folks might want to consider that this
probably means we will need to have more people doing bug triage.

Personally, I think prioritizing users who just want a working
PC/Laptop over the FSF is the right choice, since I belong to the
pragmatic wing of the Open Source movement, but I suspect I'm in the
minority in the Debian community.  Which is fine; I'll just continue
to enjoy the high quality of most bug reports in the Debian BTS.  :-)

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2017-11-13 Thread Theodore Ts'o
On Mon, Nov 13, 2017 at 03:28:32PM +0100, Helmut Grohne wrote:
> 
> On Sun, Nov 12, 2017 at 02:18:45PM -0500, Theodore Ts'o wrote:
> > 1)  If people really want to make e2fsprogs non-essential, I'm not
> > going to object seriously.  It's the downgrade of e2fsprogs from
> > Priority: required to Priority: important where things get
> > super-exciting.

By the way, when I said "super-exciting", that was a reference to the
management euphemism "uncomfortably excited", which generally refers to
the excitement one feels when "working without a net while crossing
the Grand Canyon on a tightrope" :-)

But if you really are focused on getting to Essential: no, and not
necessarily changing the priority field, that certainly is a much more
easily achievable goal.

> > 3) Lsattr/chattr et.al depend on the e2fsprogs shared libraries, so
> > moving them into a separate binary package isn't going to save as much
> > space as you would like.  So it's not at all clear the complexity is
> > worth it.
> 
> I'm not enthusiastic about moving lsattr either for precisely the reasons
> you name.

Yeah, I think the bigger question is whether anything in a reduced
minbase needs lsattr/chattr in the first place.

> Reducing the package count lowers the complexity of the bootstrap
> problem. If e2fsprogs (or anything else) can be moved to the native
> phase, that's a win.

To be clear, the key metric for your specific goal is the reduction
of the _source_ package count since the goal is to reduce the number
of packages which have to be built by "hand" (or by script), before
you can create a sbuild/pbuild build chroot, correct?

Cheers,

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2017-11-12 Thread Theodore Ts'o
On Mon, Nov 13, 2017 at 01:14:01AM +0100, Guillem Jover wrote:
> I think that trying to trim down the pseudo-Essential set is an
> extremely worthwhile goal, because it has visible effects on several
> areas, at least:
> 
>  - Possibly making bootstrapping a port way easier.
>  - Making it possible and easier to use Debian on (very) embedded systems.
>  - Reducing footprint for minbase setups, such as VM images, buildds,
>chroots, and similar.

Except for a port, you will need *some* file system, so simply
removing all file system utilities from the minbase set doesn't
necessarily make it *easier* per se.

And most minbase setups aren't necessarily manually removing locale
files today, because debootstrap doesn't support this.  I'm just
pointing out that *just* simply splitting out coreutils into coreutils
and coreutils-l10n will shrink the minbase set by roughly as much as
what is listed at the EssentialOnDiet page.

This is not an argument to not do the other things on the
EssentialOnDiet page.  I'm just pointing out there's quite a lot of
low-hanging fruit that can also be harvested if the primary goal is
reduction of minbase for VM images, chroots, buildds, etc.  And I
don't think it should be neglected.

I will certainly grant that if the goal is to make Debian work on
super-tiny embedded systems we will need to eject a lot of things from
minbase, including bash, tar, perl-base, etc.  And if the super-tiny
embedded system is going to use squashfs, and is not any other on-disk
file system, then sure, that's certainly a case where removing
e2fsprogs makes sense.

But there are *plenty* of use cases where people are using a minbase
created using debootstrap where there is some lower-hanging fruit that
we might want to pick first.

Cheers,

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2017-11-12 Thread Theodore Ts'o
On Sun, Nov 12, 2017 at 09:13:42PM +0100, Mathieu Parent wrote:
> 
> There is another way to trim the locales: Use dpkg's "--path-exclude=".
> 
> This also allows one to keep some locales. This is what we use at work
> [1]. The problem is that debootstrap doesn't handle those options, so
> we need to hack a bit [2].
> 
> [1]: https://github.com/nantesmetropole/docker-debian/blob/master/templates/etc/dpkg/dpkg.cfg.d/01_save-space
> [2]: https://github.com/nantesmetropole/docker-debian/blob/master/templates/post-debootstrap.sh

You can always manually delete binaries afterwards, or exclude them
using --path-exclude, but that has always seemed like a hack to me.  By
that argument there's no point making e2fsprogs Essential: no /
Priority: important, since you could just remove the files you don't
want (mke2fs, e2fsck, etc.) afterwards.  :-)
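
For reference, the dpkg configuration being discussed looks roughly
like this (hypothetical excerpt; see [1] above for the real file):

path-exclude=/usr/share/locale/*
path-include=/usr/share/locale/en*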

- Ted



Re: e2fsprogs as Essential: yes?: Maybe we should be separating l10n files first?

2017-11-12 Thread Theodore Ts'o
On Mon, Oct 02, 2017 at 01:34:47PM +0200, Adam Borowski wrote:
> But, we're discussing changes to e2fsprogs behind its maintainer's back.  I
> believe he reads debian-devel, but, being nowhere like a frequent poster,
> apparently doesn't watch new threads immediately as they appear (and this
> one started as a response to a 2011 post).
> 
> Ted: could you please chime in?  In case you unsubscribed d-devel, it starts
> at https://lists.debian.org/debian-devel/2017/09/msg00449.html

Apologies for not responding sooner.  Between a crazy travel schedule
(kernel summit in Prague, teaching a tutorial at LISA 2017 in San
Francisco, and recovering from a killer case of the flu), I just hadn't
really gotten to this earlier.

1)  If people really want to make e2fsprogs non-essential, I'm not
going to object seriously.  It's the downgrade of e2fsprogs from
Priority: required to Priority: important where things get
super-exciting.

2) At this point I'm not really enthusiastic about moving lsattr out of
e2fsprogs.  We are still adding new features to ext4, some of which
will require new flags, and chattr/lsattr et.al. *were* originally
designed to be only for ext 2/3/4.  Other file systems have decided to
use the same ioctl, which is fine, but I've always considered
chattr/lsattr to be an ext 2/3/4 utility first, and more generic file
system utility second.  Moving lsattr out of e2fsprogs to some other
package (e.g., util-linux) will make my kernel development much more
inconvenient.

3) Lsattr/chattr et.al depend on the e2fsprogs shared libraries, so
moving them into a separate binary package isn't going to save as much
space as you would like.  So it's not at all clear the complexity is
worth it.

4) If the real goal is to reduce the size of minbase, there is a much
more effective thing we can do first, or at least, in parallel.  And
that is to move the l10n files to a separate foo-l10n package.  The
last time I did this analysis was with Debian Jessie, but I don't
think the numbers have changed that much.  Of the 201 MB i386 minbase
chroot, 33MB, or over 16%, can be found in /usr/share/locale.  The
breakdown (using Debian Jessie numbers) are:

Package          Savings (kB)   Cumulative (kB)   Percentage
coreutils                8052              8052        24.91
dpkg                     4620             12672        39.20
bash                     3744             16416        50.78
gnupg                    3424             19840        61.37
e2fsprogs                1776             21616        66.86
tar                      1680             23296        72.06
shadow                   1632             24928        77.11
apt                      1528             26456        81.84
libapt-pkg4.12           1052             27508        85.09
Linux-PAM                 796             28304        87.55
findutils                 756             29060        89.89
grep                      636             29696        91.86
diffutils                 620             30316        93.78
debconf                   596             30912        95.62
adduser                   444             31356        96.99
sed                       428             31784        98.32
libgpg-error              388             32172        99.52
systemd                    84             32256        99.78
acl                        72             32328       100.00

In Debian Stretch, I've already done this separation for e2fsprogs, so
the installed size of e2fsprogs is only 1309kB.  And so I've already
harvested way more than half of the savings of getting rid of
e2fsprogs from the minbase set by the simple expedient of moving the
/usr/share/locale files to e2fsprogs-l10n.

Simply splitting coreutils into coreutils and coreutils-l10n would
reduce minbase by a factor of *six* over getting rid of e2fsprogs from
minbase.

Does this mean trying to get to Essential: no for e2fsprogs is a bad
thing?  No, but if your goal is to reduce the size of minbase for Debian,
I just want to point out that there is **much** lower hanging fruit
that folks might want to consider harvesting first.

Cheers,

- Ted

P.S.  In case it isn't obvious, the reason why it's interesting to
shrink the size of minbase is that it makes Debian much lighter-weight
for Docker --- you don't need e2fsck or mke2fs in most Docker
containers based on Debian; neither do you need the translations into
Turkish, German, Spanish, Chinese, etc., for e2fsprogs, coreutils,
dpkg, etc., for most Docker containers.

When I asked the question *why* is it worth it to spend the effort to
reduce the size of Essential: yes, I was told it was a
prerequisite of reducing the set of packages where Priority is
"required", and *that* was because it allowed for a reduction of the
minbase set, which in turn is interesting for reducing the size of
Docker images.  If there are other reasons why people want to make
e2fsprogs no longer essential: yes, it's good to have those out on the
table, since it may be that there are other things we can do first, or
in parallel, that might be *far* more effective towards the real end
goal that people have.



Re: How does one include the original upstream signature?

2017-08-04 Thread Theodore Ts'o
On Fri, Aug 04, 2017 at 10:28:54AM -0400, Chris Lamb wrote:
> 
>   https://lists.debian.org/debian-devel/2017/07/msg00451.html

Thanks!  Turns out the problem was operator error.  I dropped

e2fsprogs_1.43.4.orig.tar.gz.asc

into the top-level directory, instead of

e2fsprogs_1.43.5.orig.tar.gz.asc

Oops!  Mea culpa.

It might be nice though if the Lintian informational messages had more
explanation about how to address this.  For example, telling the
developer to rerun dpkg-buildpackage with *.orig.tar.*.asc alongside
the original compressed tarfile, perhaps?
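
For anyone else hitting this: assuming you are upstream and signing
your own tarball, the workflow that ends up working is roughly:

gpg --armor --detach-sign e2fsprogs_1.43.5.orig.tar.gz
# leaves e2fsprogs_1.43.5.orig.tar.gz.asc next to the tarball,
# where dpkg-buildpackage can pick it up
dpkg-buildpackage -S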

Cheers,

- Ted



How does one include the original upstream signature?

2017-08-04 Thread Theodore Ts'o
I'm getting the following lintian error message:

E: e2fsprogs changes: orig-tarball-missing-upstream-signature e2fsprogs_1.43.5.orig.tar.gz
N:
N:The packaging includes an upstream signing key but the corresponding
N:.asc signature for one or more source tarballs are not included in your
N:.changes file.
N:
N:Severity: important, Certainty: certain
N:
N:Check: changes-file, Type: changes
N:

... but I can't figure out how to get the changes file to include the
original upstream signature file.  I've tried naming the upstream
signature file e2fsprogs_1.43.5.orig.tar.gz.asc; I've tried naming it
e2fsprogs-1.43.5.tar.gz.asc.  Neither causes dpkg-buildpackage to include
the signature file.

I've checked the man pages for dpkg-source, dpkg-genchanges, and
dpkg-buildpackage; none shed any light on the subject.  I've tried
some google searches, and the closest I've come to documentation is:

https://bugs.debian.org/cgi-bin/bugreport.cgi?att=1;bug=833585;filename=0001-Check-for-the-presence-of-a-signature-if-an-upstream.patch;msg=12

...but I don't understand the internals of Lintian enough to figure
out what filename it is looking for.

Further google searches seem to indicate the only way I can get
Lintian to shut up is to delete debian/upstream/signing-key.asc.
Which I will do if I have to, since debian/watch really isn't all that
interesting when I'm the upstream.  But I can't help thinking that
either some documentation is sorely lacking; the Lintian description
needs more information; or I've missed something basic.  Or perhaps
some combination of all three.

Help me, debian-devel!  You're my only hope!

- Ted



Re: Can we kill net-tools, please?

2016-12-29 Thread Theodore Ts'o
On Wed, Dec 28, 2016 at 01:38:48PM +1000, Russell Stuart wrote:
> I don't know whether "crap" is the right word, but it is certainly
> baggage from a bygone era.  "Baggage" here means that if we are nice to
> our users (ie, Debian sysadmins), we should not force them to know two
> tools.  We only have one complete tool set available: iproute2.  This means 
> at the very least ifconfig can not appear in any conffile, nor can it really 
> appear in documented shell scripts like dhclient-script.

This is really going to be a generational thing.  For those of us who
started programming in the BSD 4.x days (my first kernel programming
experience was with BSD 4.3), ifconfig and netstat are still the tools
that I use every day, and I only use the iproute2 tools in the
*extremely* rare circumstances that I need to do something exotic
which is only supported by the iproute2 tools.

This probably makes me a bad person, but it's how I operate, because I
grew up using ifconfig and netstat.  From my perspective, it's the
height of arrogance to decide that we're being "kind" to Debian system
administrators by forcibly taking away the traditional BSD tools, and
forcing them to learn a new interface, just because we think it's the
right and moral choice for system administrators.

If you want to deprecate ifconfig and netstat, the kindest way to do
that is to (a) remove all of the programmatic dependencies on ifconfig
and netstat output, (b) add hints to ifconfig and netstat which look
at how they were invoked and adds a one-line hint:

   Ifconfig has been deprecated; you should probably use "ip a show
   dev lo" instad of the shorter and more convenient "ifconfig lo"
   because debian is going to arrogantly make "ifconfig" go away in
   the next stable release, because we believe this is in your best
   interests, and Debian always has the priorities of our users at heart.

OK, you can remove the last half, but keep in mind there are plenty of
people who aren't using the exotic features provided by iproute2, and
are very happy using the more convenient and shorter BSD-style
commands.  If you're going to remove it _because_, the least you can do
is to make the transition a bit more gentle.

Regards,

- Ted



Re: Bug#820036: No bug mentioning a Debian KEK and booting use it.

2016-10-26 Thread Theodore Ts'o
On Wed, Oct 26, 2016 at 08:42:07AM +0200, Philipp Kern wrote:
> > To the extent that we could easily support this particular use case,
> > it might be a good thing.  (I doubt Debian is going to want to get
> > into the business of verifying and then resigning firmware blobs.)
> 
> Depends if you are then able to flash it into the addon card you have
> (think VGA BIOS on an NVIDIA graphics card), which requires a) access to
> some flash process and b) depending on that potentially a signature
> trusted by the device to accept the update.

So I guess I was assuming the use case where the firmware is
dynamically loaded into RAM each time the machine boots.  For example,
this is how the Intel Wireless drivers' firmware is handled.  I need
to have the binary blob iwlwifi-8000C-22.ucode available on my system
each time it boots, or the wifi card no talkee to the network.  Since
it is needed each time the system boots, and in certain cases poses a
real security risk (for example, a firewire device which can do
arbitrary, unrestricted, device-initiated DMA requests anywhere in
host memory[1]), it would make sense that the firmware needs to be
signed before it can be loaded.

In the case of firmware which is flashed into non-volatile memory, I
would guess that it probably wouldn't necessarily use the
Microsoft signing key at all.  (For example, for a long time most
printers were not bothering to do any digital signature checking at
all before installing a firmware update.)

> Otherwise you end up with no graphics output on bootup because the
> system is not trusting the blob on your graphics card to run. If you
> screw it up too heavily, you can render your machine unbootable as well.
> (I know a coworker succeeded in doing that when modifying the key set.)
> Nothing a SPI programmer can't fix, but it'd be annoying nonetheless.

I suspect that most firmwares that have to be flashed will need to be
done using vendor-provided software.  For example, on Lenovo systems,
where you have to get the BIOS update on a bootable USB stick which
you then boot.  In that case it's largely orthogonal to Linux and
Debian altogether.

The problem would be more for firmwares which have to be loaded each
time you boot Linux.

Cheers,

- Ted




Re: Bug#820036: No bug mentioning a Debian KEK and booting use it.

2016-10-24 Thread Theodore Ts'o
On Tue, Oct 18, 2016 at 07:52:13PM +0800, Paul Wise wrote:
> 
> It was posted to bug #820036, which is tracking Debian support for
> secure boot. Peter was advocating quite correctly that as well as
> having our copy of shim (the first-stage bootloader on secure boot
> systems) signed by Microsoft, we should also have a copy signed by a
> Debian signing authority, so that users can theoretically choose to
> distrust the Microsoft key. IIRC, unfortunately in practice that is
> unlikely to be possible since various firmware blobs are only
> Microsoft-signed.

It's probably not possible for Debian to deal with this, but I could
imagine a user (perhaps someone who is using Debian for their entire
organization, etc.) who is willing to download firmware blobs from a
trusted source (e.g., directly from the vendor), and then verify the
Microsoft signature as a double check, and then resign it with their
own signing authority key.

To the extent that we could easily support this particular use case,
it might be a good thing.  (I doubt Debian is going to want to get
into the business of verifying and then resigning firmware blobs.)

Cheers,

- Ted



Re: GPL debate on kernel mailing list

2016-09-06 Thread Theodore Ts'o
On Tue, Sep 06, 2016 at 02:29:56AM +0200, Zlatan Todorić wrote:
> You're just fueling myths you stand behind for some reason. You take
> data from one year (did you even verify it on your own?) and you don't
> look at historical development of situation.

The data was compiled primarily by Jon Corbet and Greg K-H, who are
both kernel developers who are considered pretty strong members of the
kernel development community.  And they have data going back seven
years (this is an annual report), and it's all mostly the same.

The methodology of this report includes having a very detailed mapping
of e-mail addresses to company.  That's important because some
engineers who work for companies use a kernel.org or a .edu e-mail
address.  So it's something which the kernel community members
consider highly authoritative.

> While I can pull out data
> that will easily throw out of door your point I will just go a bit
> through development. Companies didn't care for Linux...

so that's a religious statement

> and only wanted
> profit from it. GNU and Linux where spearheaded by volunteers, by fun
> and most of companies didn't look at it. They started looking when
> volunteers made it very competitive, they started employing some of them
> to continue such work but mostly not. 

In actual practice the demand for kernel engineers vastly
outstrips the supply.  So pretty much anyone who is at all vaguely
competent gets hired --- at which point their output increases
significantly because they are able to work full time, instead of in
their own time.  So that's why the bulk of the work is done by people
who are paid by companies.  When I was working at MIT as the tech lead
for the Kerberos development effort, I could only work on Linux in the
evenings and weekends.  When I started getting paid to work full time,
I could work on Linux a much greater percentage of my time.

The people who weren't hired by companies were largely either (a)
really incompetent (and in practice a number of them were hired by a
company, and once they were discovered to be incompetent, they were
let go), or (b) _chose_ not to work on Linux full time.

> Most company contributions happen
> because someone who came from Free software background pushed this
> inside company and yet to date we don't have a major Free software
> company (RedHat could be called a major open source company).

If you take a look at the top contributors (by number of commits
authored by engineers from a particular company), most of those
companies do have significant open source compliance/program offices.
So it's a lot more than just "someone from a free software background
pushed this inside the company".  There are no doubt plenty of examples
of that, but if you look at where the bulk of the contributions are
coming from, they are coming from companies that have a dozen or more
engineers working on Linux.  (And in many cases, 50 or more.)  That's
the only way you can get enough contributions to be high up on the top
contributions by company list.

I am sure there are a large number of companies who have one or two
commits attributed to engineers working at that company.  And in those
cases, the dynamic may be as you have described.  But that's not where
the bulk of the contributions to Linux are coming from.  They are
coming from a relatively small number of companies who are
contributing a *large* amount of work.

> Microsoft had attitude of calling Linux "cancer and communism". Do you
> think they nowdays contribute because to open source because they really
> like it. No, they were loosing edge, and most contributions from
> companies to open source happen because they are loosing edge. And even
> today they show a lot of hostile approach when they can - by suddenly
> not releasing documentation, by introducing non-free firmware. Creating
> enterprise editions with nonfree code etc.

More philosophy/religion

> There must be awareness that even if they today contribute most of code
> (it would be interesting to pull out entire data or data for few first
> years where probably volunteers made 80%-90% and then just throw such
> statistic at you and talk about distortion of reality) it is not because
> they are good community citizens that understand the philosophy. And I
> am fairly sure that most of their dormant projects where only good
> because community gave a lot of love and care after it was killed
> mainstream. So even if they produce most of the code today, they are
> still hostile to GPL and entire philosophy.

Sure, if you look at the first few years, Linux was done all by hobbyists.
But by today's standards the Linux kernel of that era was a toy.  It didn't scale
at all and compared to the SunOS or Ultrix or AIX kernel of the day,
it lacked features, and it couldn't handle big systems.  It took major
contributions by companies like IBM, who hired hundreds of people
working at the Linux Technology Center, pushing efforts such as the
Linux Scalability 

Re: GPL debate on kernel mailing list

2016-09-05 Thread Theodore Ts'o
On Tue, Aug 30, 2016 at 12:09:35PM +0200, Zlatan Todorić wrote:
> For years and years companies are using community hard work and creating
> their "great" products without turning back
>
> People all over the world created Free software for decades and just
> small number of those people got employed to work on Free software for
> living...

This is one of those myths that gets repeated over and over again, but
it's a bit of a distortion of reality.  If you look at the actual data
on who contributes to the Linux kernel[1], engineers employed by
companies contribute over 80% of the changes.  Consultants are 2.6%,
and hobbyists are somewhere between 7.7% and 14.5% (6.8% of the
commits are authored by people where it's not clear whether their work
is supported by a company or not).

[1] Linux Kernel Development: How Fast It is Going, Who is Doing It,
What They are Doing, and Who is Sponsoring It, 2016.   http://goo.gl/QKbJ5Q

I suspect that if you take a look at the commits that go into gcc or
LLVM, you will see a similar dynamic.


So the debate is really about whether framing things as the companies
versus "the community" is an accurate, or for that matter healthy, way
of looking at things.

It's far more accurate to say that the companies are *part* of the
community, and we need to encourage all members of the community,
whether they are individuals or corporations, to live up to the
community norms.  (In some cases, that means teaching a student at a
two-year college in Toronto that taking credit for other people's work
and sending patches that haven't been tested, and in some cases don't
even compile, to users who are asking for help on a bug tracker isn't
cool.  In other cases, it might be convincing companies and
individuals who ship VM images that they need to include source.)

> I don't know if it is time for a GPLv4 which will explain to all
> corporations that THIS LICENSE means you must participate with the
> community...  ...and not act as if the only way to achieve things is
> by lies, manipulation, abuse, FUD, secrets.

In my opinion, this kind of Manichean attitude is not an accurate
description of reality, and it's really not helpful.

- Ted



Re: support for noatime

2016-08-26 Thread Theodore Ts'o
On Fri, Aug 26, 2016 at 01:50:51PM +0200, Adam Borowski wrote:
> 
> For mbox files (and possibly similar cases), there's only a handful of
> interested readers, thus they can be patched to touch atime by hand instead
> of relying on a system-wide mount option.

Something that could be done is to set the A flag on the root file
system before installing any packages, via "chattr +A /".  This will
cause the no-atime flag to be inherited by all newly created
subdirectories and files.  The file owner can then remove the 'A' flag
for those mbox files where the atime field should be updated.

The advantage of doing this is that it doesn't require any kernel changes.
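
For example, a quick sketch (the mbox path here is illustrative, and
this assumes a filesystem such as ext4 where chattr attributes are
supported):

   chattr +A /                  # newly created files inherit "no atime"
   chattr -A /var/mail/tytso    # re-enable atime updates for one mbox file
   lsattr -d / /var/mail/tytso  # check which files carry the 'A' flag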

   - Ted



Re: synaptics vs libinput and GNOME 3.20 no longer supporting synaptics

2016-07-16 Thread Theodore Ts'o
On Thu, Jul 14, 2016 at 01:57:13PM +1000, Peter Hutterer wrote:
> libinput is a lot smarter than synaptics when it comes to palm
> detection.

A question about libinput: the main reason I'm using synclient is that
I have a Thinkpad T540p, which doesn't have hard buttons for the
"mouse buttons".  It does have a TrackPoint, which I infinitely prefer
to the !@#?! horrendous Trackpad on the T540p.  So I do the following,
so that the Trackpad is used only for buttons:

# carve the right-hand strip of the pad into a right "button" zone
synclient RightButtonAreaTop=0
synclient RightButtonAreaRight=4858
synclient RightButtonAreaBottom=5000
synclient RightButtonAreaLeft=3500

# ...and the strip next to it into a middle "button" zone
synclient MiddleButtonAreaTop=0
synclient MiddleButtonAreaRight=3499
synclient MiddleButtonAreaBottom=5000
synclient MiddleButtonAreaLeft=2800

# damp coasting so that stray contact doesn't keep scrolling
synclient CoastingFriction=50
synclient CoastingSpeed=15

# push the active touch area off the bottom of the pad, and disable
# edge scrolling, so the pad generates no motion or scroll events
synclient AreaTopEdge=6000
synclient AreaLeftEdge=0
synclient VertEdgeScroll=0
synclient HorizEdgeScroll=0

Basically, I don't want to use the Trackpad for mouse events, not
*ever*.  Even if the keyboard and trackpoint are quiescent, I don't
want a random palm swipe to be registered as a mouse or button event
--- only a physical press of the pad should count.

What's the equivalent way of doing the same thing with the libinput
driver?  (Note: I'm still using the X server, not Wayland, and I'm
using XFCE.)

Thanks,

- Ted



Re: DEB_BUILD_MAINT_OPTIONS=hardening=+pie breaks shared library builds

2016-05-21 Thread Theodore Ts'o
On Sat, May 21, 2016 at 09:21:55PM +0200, Christian Seiler wrote:
>
> [...]
> 
> Hope that helps.

Yes, that was incredibly helpful.   Thanks!!!

- Ted



DEB_BUILD_MAINT_OPTIONS=hardening=+pie breaks shared library builds

2016-05-21 Thread Theodore Ts'o
If the pie hardening option is enabled, then dpkg-buildflags --get
LDFLAGS emits:

-fPIE -pie -Wl,-z,relro

According to the dpkg-buildflags man page:

   LDFLAGS
  Options passed to  the  compiler  when  linking  executables  or
  shared objects

Unfortunately, the linker will blow up if these flags are passed when
linking a shared object:

(cd elfshared; gcc --shared -o libcom_err.so.2.1 \
-L../../../lib -fPIE -pie -Wl,-z,relro \
-Wl,-soname,libcom_err.so.2 error_message.o et_name.o init_et.o 
com_err.o com_right.o -lpthread)
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/Scrt1.o: In function 
`_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status

Should I file a bug against dpkg-buildflags?  Or the
hardening-includes package?  What is the suggested workaround if you
have a package that has both executables and shared libraries, and you
want to enable pie hardening for the executables?
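
One workaround sketch that occurs to me (untested, and the sed-based
stripping is an assumption about the exact flags dpkg-buildflags
emits) is to filter the PIE flags out of LDFLAGS for the shared-object
link only:

   SHLIB_LDFLAGS=$(dpkg-buildflags --get LDFLAGS | sed -e 's/-fPIE//' -e 's/-pie//')
   (cd elfshared; gcc -shared -o libcom_err.so.2.1 -L../../../lib \
       $SHLIB_LDFLAGS -Wl,-soname,libcom_err.so.2 \
       error_message.o et_name.o init_et.o com_err.o com_right.o -lpthread)

But it would be nicer if the tooling made this distinction itself.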

Thanks,

- Ted



Re: Empty Contents and Packages files in http://deb.debian.org/debian-debug?

2016-05-21 Thread Theodore Ts'o
One other thought.  Since someone might be trying to debug a core file
for an executable belonging to a package which has since been
superseded by a newer version in unstable or in testing, it would be
useful if there were a Redis (or some other NoSQL) database where you
could look up a Build-ID and get a package name and version number, so
the appropriate dbgsym package could be downloaded from
snapshot.debian.org.

I suppose the Build-Id index could be implemented by
snapshot.debian.org's Postgresql database, but I suspect storing a
build-id -> dbgsym package mapping for every single executable
released going forward might be somewhat large for a single-server
database.  :-/
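
To make that concrete, here is the sort of lookup I have in mind,
sketched with redis-cli (the key layout, Build-ID, and values are all
invented for illustration):

   redis-cli HMSET build-id:2f1234abcd package e2fsprogs-dbgsym version 1.43.1-1
   redis-cli HGETALL build-id:2f1234abcd

i.e., one small hash per Build-ID, mapping it to a package name and
version.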

Is anyone working on anything like this?

- Ted



Re: Empty Contents and Packages files in http://deb.debian.org/debian-debug?

2016-05-21 Thread Theodore Ts'o
On Sat, May 21, 2016 at 04:34:19AM +, Niels Thykier wrote:
> > Also, does anyone know if someone is working on a FUSE client that
> > could be mounted on top of /usr/lib/debug/.build-id so that the
> > debuginfo files could be automatically made available as needed when
> > gdb tries to access them?
> > 
> 
> No, but if you find one, I would very much like to hear about it.

I'm not sure when I'm going to find the time, but I was thinking about
using this as an excuse to learn Go (using Han-Wen Nienhuys's FUSE
bindings for Go[1]), and I wanted to make sure I wouldn't be
reinventing the wheel --- since it seems like a rather obvious thing
to do.

[1] https://github.com/hanwen/go-fuse

But if someone beats me to it (or has already done it), I won't
complain.  :-)

- Ted



Empty Contents and Packages files in http://deb.debian.org/debian-debug?

2016-05-20 Thread Theodore Ts'o
Hi, is it intended that the Contents and Packages files in the dbgsym
archive are empty?

I was hoping to be able to add
http://debug.mirrors.debian.org/debian-debug/ to my apt sources list
so I could easily download the dbgsym packages.
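
That is, something like the following line in sources.list (the
"testing-debug" suite name is my assumption here):

   deb http://debug.mirrors.debian.org/debian-debug/ testing-debug main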

Also, does anyone know if someone is working on a FUSE client that
could be mounted on top of /usr/lib/debug/.build-id so that the
debuginfo files could be automatically made available as needed when
gdb tries to access them?

   - Ted



Re: Bug#823465: dpkg: Won't run at all on i586 Pentium MMX due to illegal instruction

2016-05-14 Thread Theodore Ts'o
On Thu, May 12, 2016 at 02:10:17AM +0100, Ben Hutchings wrote:
> > If you discover that the package hoses your system then to rollback,
> > shutdown the system to single-user mode, and remount the file system
> > to be read-only, and then use the command lvconvert --merge to restore
> > your file system back to the state of the snapshot.  This will consume
> > the snapshot, and leave the file system (presumably ext3 or ext4) in a
> > potentially confused state, which is why you need to do this with the
> > file system remounted read-only.   Then reboot, and you're all set.
> 
> What could possibly go wrong?

The sysadmin will complain that she doesn't have enough excitement in
her life?  :-)

Seriously, I can imagine scenarios where you're rolling back a glibc
or bash upgrade where this might not work, but if you're in single
user mode, the most likely failure mode is that the system wedges up
and you have to hit the big red button to reboot.

But yes, doing this in the initramfs makes a lot more sense.
Presumably some file such as /lvm-rollback would get left in the root
file system, and after the initramfs mounts the root file system, if
it sees the presence of that file, it would unmount the file system,
do the lvconvert --merge, and then remount the root file system.
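
Something along these lines, say as an initramfs-tools local-bottom
script (a sketch only, with the usual PREREQ boilerplate omitted; the
/lvm-rollback flag file and the LV names are invented):

   #!/bin/sh
   # local-bottom scripts run after the root fs is mounted at ${rootmnt}
   if [ -e "${rootmnt}/lvm-rollback" ]; then
       umount "${rootmnt}"
       lvconvert --merge vg0/root-snap
       mount /dev/vg0/root "${rootmnt}"
   fi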


Cheers,

- Ted



Re: Bug#823465: dpkg: Won't run at all on i586 Pentium MMX due to illegal instruction

2016-05-11 Thread Theodore Ts'o
On Mon, May 09, 2016 at 10:21:21AM -0700, Nikolaus Rath wrote:
> > Another way is to use btrfs (or zfs or perhaps LVM snapshots): whenever
> > something goes south in a way that's not trivial to recover, you can
> > restore with a couple commands and reboot.  And if unbootable because,
> > for example, someone removed support for your CPU, you boot with
> > subvol=backups/sys-2016-05-07.
> 
> I'd advise against using LVM snapshots. The time for initial activation
> seems to go up exponentially with the amount of data in snapshot
> volumes. I think they are only intended for short-term use
> (e.g. to take a backup).

If what you want to do is a rollback operation after a package
installation goes badly, LVM snapshots are sufficient.  They aren't as
convenient as btrfs, but they do work.  So what you'd do is (a) create
the snapshot, and (b) install the package.  If the package looks good,
then delete the snapshot.

If you discover that the package hoses your system then to rollback,
shutdown the system to single-user mode, and remount the file system
to be read-only, and then use the command lvconvert --merge to restore
your file system back to the state of the snapshot.  This will consume
the snapshot, and leave the file system (presumably ext3 or ext4) in a
potentially confused state, which is why you need to do this with the
file system remounted read-only.   Then reboot, and you're all set.
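
In concrete terms, the workflow would look something like this
(untested sketch; the volume group, LV names, snapshot size, and
package are all made up):

   lvcreate -s -L 5G -n root-snap vg0/root   # (a) snapshot the root LV
   apt-get install frobnicator               # (b) install the package
   lvremove vg0/root-snap                    # looks good?  drop the snapshot

   # ...or, to roll back from single-user mode:
   mount -o remount,ro /
   lvconvert --merge vg0/root-snap
   reboot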

I believe there is a yum plugin for Fedora where you reboot, and the
lvconvert --merge is done as part of the reboot (either as the system
is shutting down, or in the initramfs before the file system is
mounted).  That's a much more convenient and user-friendly way to do
the rollback; creating such a convenience setup is left as an exercise
for the reader.  :-)

Cheers,

- Ted



Re: The state of cross building

2016-02-02 Thread Theodore Ts'o
On Sat, Jan 30, 2016 at 08:08:09PM +0100, Helmut Grohne wrote:
> 
> We have had cross compilers and crossbuild-essential-* packages in
> unstable for quite a while now.  (Thanks to Matthias Klose.)

I see these haven't entered testing because:

* 183 days old (needed 5 days)
* crossbuild-essential-arm64/amd64 unsatisfiable Depends: libc6-dev:arm64
* crossbuild-essential-armel/amd64 unsatisfiable Depends: libc6-dev:armel
* crossbuild-essential-armhf/amd64 unsatisfiable Depends: libc6-dev:armhf
   
* Invalidated by dependency
* Not considered
* Depends: build-essential dpkg-cross

Am I right in thinking this is because of how the testing scripts
work; is this something that is likely to be fixed in the future, or
is crossbuild-essential-* something that is only intended for unstable
and never for testing/stable?

Thanks,

- Ted



Re: Being part of a community and behaving

2014-11-17 Thread Theodore Ts'o
On Mon, Nov 17, 2014 at 10:21:13AM +0100, Marco d'Itri wrote:
> On Nov 17, Steve Langasek <vor...@debian.org> wrote:
>
> > This is what many still (rhetorically) wonder about: we the systemd
> > maintainers did not reject that change,
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?msg=15;bug=746578
>
> Please try to be less selective in your quoting: the issue was still
> being discussed.

May I gently suggest that tagging a bug "wontfix" has the unfortunate
tendency to perpetuate the perception that the systemd proponents
don't really care about any fallout that systemd might cause on the
rest of Debian --- ESPECIALLY if the issue is still open for
discussion?

Especially without any discussion or explanation by any other systemd
maintainer?

It may not be accurate, but right now, given the feeling of hurt on
all sides of the issue, a bit more communication, instead of a blunt
"tags + wontfix" without any word of explanation, might have
contributed to a more productive discussion.

Best regards,

- Ted





Re: Being part of a community and behaving

2014-11-16 Thread Theodore Ts'o
On Sun, Nov 16, 2014 at 09:02:12AM -0500, The Wanderer wrote:
>
> I would, for example, have classified the discussions / arguments in the
> systemd-sysv | systemd-shim bug which appears to have recently been
> resolved by TC decision as being an example of what I thought was being
> referred to by the original "bitter rearguard action" reference:
> fighting over the implementation details in an attempt to maintain as
> much ground for non-systemd as possible.

I was really confused that this needed to go to the TC; from what I
could tell, it had no downside for systems using systemd, and it made
things better on non-systemd systems.  What was the downside of making
the change, and why did it have to go to the TC instead of the
maintainer simply accepting the patch?

If this is an example of "bitter rearguard action", my sympathies
would be with those who are trying to keep things working on
non-systemd systems.

Am I missing something?

 - Ted





Re: Being part of a community and behaving

2014-11-13 Thread Theodore Ts'o
On Thu, Nov 13, 2014 at 08:25:57AM -0800, Russ Allbery wrote:
> What do you think we should have done instead?  debian-devel was becoming
> the standing debian-canonical-is-evil vs. debian-systemd-sucks flamewar.
> (I think people are already forgetting the whole "Canonical is evil"
> flamewar that was happening at the same time, with the same degree of
> vitriol that is now being levelled at systemd.)

That doesn't match my perception of the history; but part of this may
have been that the vitriol level escalated significantly once the TC
announced it was going to involve itself in the debate, and it doesn't
look like things have gotten any better since.

That being said, I am sure that the TC got involved with the best
intentions, and most of the DD's involved in the discussions were all
united in their passion for wanting the best for Debian (even if they
agreed on very little else, at least on the systemd mail threads :-).

If only everyone could really internalize this belief; I think it
would make these discussions much less painful.

> I think people have an idealistic notion here that consensus will always
> emerge eventually, and it's easy at this point in the process to
> sugar-coat the past and forget how bad it was.  Please, make a concerted
> effort to put yourself into the mindset the project was in during the fall
> of 2013.  It's always easy to see, in hindsight, the cost of the option
> that was taken; it's harder to see the cost of the option that was not
> taken.
>
> Personally, I strongly suspect that we could have waited until 2020 and
> there still wouldn't be any consensus.  And that has its own risk.

I have a different belief about the future, but (a) there was no way
to know whether things would have gotten worse back in Fall 2013, and
(b) there's no way any of us can know for sure what the future will
bring, or what would have happened if we had taken an alternate path.
All we can do is to go forward, as best as we can.

Because regardless of how this GR is settled, it doesn't really answer
the question about the use of all of the other pieces of systemd; or
at least, I don't think that any of the options are the equivalent of
a blank check adoption of systemd-*, whether it be systemd-networkd,
systemd-resolved, systemd-consoled, etc.  And it sure would be nice if
we didn't have the same amount of pain as each of these components
gets proposed.  (My personal hope is that if they remain optional, as
opposed to being made mandatory because GNOME, network-manager,
upower, etc. stop working if you don't use the latest systemd-*, it
won't be that bad going forward.)

Regards,

- Ted





Re: A concerned user -- debian Guidelines

2014-11-10 Thread Theodore Ts'o
On Mon, Nov 10, 2014 at 02:34:33PM +0100, Matthias Urlichs wrote:
> The Wanderer:
> > Unfortunately, as far as I can tell, no one seems to be remotely
> > interested in trying to address or discuss that disagreement directly...
>
> The problem is that, apparently, any 'support' short of "remove systemd
> from Debian NOW" will not shut up the most vocal detractors.

There will always be some vocal detractors, and yes, there will be
absolutely no way to make the most radical people shut up.

Part of the problem is that there are people who are working on making
things less painful for those who don't want to use systemd, and even
for people like myself who have resigned themselves to it (or at least
am willing to use systemd on my laptop for now), but who under no
circumstances are willing to use GNOME[1].  However, these efforts are
on a best-effort basis, and no one is willing to make any public
commitment about what will and won't work in Jessie or post-Jessie ---
which is fair enough, because this is a volunteer project, and it's
not as if we could really make any such promise anyway --- and if the
GNOME folks yoke themselves even more firmly to some new systemd
extensions (for example, perhaps a future version of network manager
will blow up unless you use the systemd replacement for cron or
syslog), that's an upstream change, and we can't rewrite all of
upstream.

However, at this point, given that Jessie is frozen, I think it will
soon be possible to make some statements about what will and won't
work in Jessie vis-a-vis using either systemd or an alternative init
system, and even to give instructions for someone who wants to install
Jessie and then switch to an alternative init system.  And, I suspect
even more importantly for many people, statements about which
alternative desktops will work with systemd, and how to work around
various breakages that the switch to systemd might have engendered.
If we can tell people that it's OK, Jessie isn't going to force you to
switch to GNOME 3, and if you want your text log files, you can keep
your text log files, etc., I think there will be people (not the most
vocal detractors, admittedly) who will be reassured and less fearful
about what the New Systemd World Order will bring.

It may be that the release notes would be a very fine place for some
of this information.  It would be useful for dispelling many of the
myths among people who are not using testing; those of us who are know
that, while things did get rocky for a bit, XFCE and other alternative
desktops now work very well, thank you very much, and hopefully they
will feel much more reassured.

At that point, I suspect the remaining fears will be about what may
break post-Jessie, as systemd starts taking over even more low-level
system components, and perhaps all we can do there is for some
maintainers to make declarations about what they are and aren't
willing to do with their volunteer time.  The future is always
uncertain, but I think if we assume that people are fundamentally
trying to do the right thing, and that there will be people working to
make most use cases work at least as well --- and hopefully even
better --- again, that will hopefully reassure many people that Debian
is really striving to be a Universal OS, and not just a GNOME/Core OS,
and that while some things may break for a while, as long as there are
volunteers interested in fixing things --- and if not at Debian, where
else? --- in the long run All Will Be Well.

Cheers,

- Ted


[1] Well, I'd be willing to invest time to try GNOME again when 2-D
workspaces are supported as a first-class feature (i.e., something
where developers will try to avoid randomly breaking the feature on
every new GNOME release --- and indeed, the extensions which provided
a 2-D workspace broke *again* with the most recent GNOME release, and
last I checked were still not fixed).  That's actually the primary
reason why I'm sticking with XFCE, BTW.  If I were reasonably assured
that GNOME wouldn't break my workflow on every release, I'd certainly
consider switching back.





Re: bash exorcism experiment ('bug' 762923 763012)

2014-10-11 Thread Theodore Ts'o
On Sat, Oct 11, 2014 at 10:37:26AM -0700, Russ Allbery wrote:
> > You have convinced me that in this case it's going to have to be that
> > way, my prejudices notwithstanding.  I've rationalised the pain away
> > by deciding it's not so bad, as any competent programmer could see that
> > it is only tested to 190 regardless of what the standards say.
>
> Yeah, I do get that discomfort.  I would love for Policy to be more
> accurate about what's actually happening in the archive.  I just don't
> have much (any) time at the moment to try to push the wording in that
> direction.

I assume that posh meets the strict definition of Policy 10.4.  And so
without actually changing policy, someone _could_ try setting /bin/sh
to be /bin/posh, and then start filing RC bugs against packages that
have scripts that break.  Yes?
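
For example, one could smoke-test individual maintainer scripts first
(the script path is illustrative, and I'm assuming posh accepts -n for
a parse-only check, as most POSIX shells do):

   checkbashisms --posix /var/lib/dpkg/info/e2fsprogs.postinst
   posh -n /var/lib/dpkg/info/e2fsprogs.postinst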

Given that the freeze is almost upon us, I can see how this might be
considered unfriendly, but if someone wanted to start filing bugs (at
some priority, perhaps RC, perhaps not) after Jessie ships, we could
in theory try to (slowly) move Debian to the point where enough
scripts in Debian worked under /bin/posh that it might be possible to
set it as a release goal for some future release.  Yes?

Now, this might be considered not the best use of Debian Developers'
resources, which is why it might be considered bad manners to do mass
bug filings, particularly mass RC bug filings, at this stage of the
development/release cycle.

But if individual Debian developers were to fix their own packages, or
suggest patches as non-RC bugs, there wouldn't be any real harm, and
possibly some good (especially for those people who are very much into
pedantry, and don't mind a slightly slower system --- but if a user
wants to use /bin/posh, that's an individual user's choice :-)

Cheers,

- Ted




