Bug#1068823: Stepwise Debian upgrade to enable systems with little free storage space to upgrade without breaks due to "No space left on device"

2024-04-13 Thread David Kalnischkies
On Thu, Apr 11, 2024 at 04:46:03PM +, mYnDstrEAm wrote:
> With the two commands above one can already split it up into two steps but 
> especially the second command still requires a lot of disk space.

I am going to assume that your "a lot of disk space" stems from the
*.deb files that are downloaded. If so, you can e.g. attach a USB disk/
drive and mount it e.g. under /media/apt-archives.

Tell apt to use that directory instead of /var/cache/apt/archives, e.g.:
apt upgrade -o dir::cache::archives=/media/apt-archives

(for some more free MBs you could 'apt clean' and then move dir::cache
 elsewhere, but for that you need to create some directories in the
 target location and the binary caches are not THAT large for it to be
 really worthwhile in practice. Similar for other files like
 /var/lib/apt aka dir::state::lists)
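
(A rough sketch of that cache relocation, reusing the example mount point
 from above – untested, so double-check which directories apt asks for:
   apt clean
   mkdir -p /media/apt-archives/cache/archives/partial
   apt upgrade -o dir::cache=/media/apt-archives/cache
 )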


Instead of a USB drive you could do the same with e.g. an SD card, drop
the files into RAM (if your device has surprisingly more RAM than disk) or
even use a network share (NFS, sshfs, … you name it). The filesystem is
not usually a concern (as in: even fat32 should work given we encode
away the : in epochs).

Note that whoever has write access to the files on the storage (or, in
case of unencrypted transfer, also everyone who can meddle with the
transfer over the network) could use that to attack you, as apt (well,
apt will casually check them first, but after that it is dpkg who
actually interacts with them the most) will assume that the files in
/var/cache/apt/archives (or wherever else you stored them and told apt
to use them) are valid & trusted.


Note also that apt uses statvfs(3) f_bavail for its space check, as in,
depending on how you configured your disk, the filesystem should have a couple of
additional free blocks in reserve (typically 5%, see tune2fs(8) -m).
If you know what you are doing, you could decrease that value.
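
(For an ext2/3/4 filesystem that would be something along the lines of
   tune2fs -m 1 /dev/sdXN
 to lower the reserve to 1% – the device name is just a placeholder and
 whether lowering it is a good idea depends on your setup.)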


Note that the value apt displays is only an estimate, powered by what
the individual packages claim (via dpkg), which is itself an estimate. Also, if
you happen to have a 2GB package installed, the upgrade will roughly take an
additional 2GB as dpkg would first extract the new files alongside the old
ones and then replace them in one swoop – so for a bit, you have that
package installed two times. Multiply this by group size, divide by
unchanged files and sprinkle some salt over it for flavour.
Predictions are hard, especially about the future.


I would in general not recommend trying approaches like upgrading
individual packages as that easily leads unsuspecting users into
situations that nobody else has encountered before: aka bugs in
packages that nobody else will encounter as they are either hidden
by the involved set usually being upgraded together as intended™ or
– which tends to be even worse – the breakage is known but ignored
on purpose as the solution is far worse than the problem (at least for
everyone doing upgrades the normal way – example: usrmerge). Also, but
that is just an aside, people grossly overestimate how easy it is for
packages to be upgraded individually (compare: t64 testing migration).


Best regards

David Kalnischkies




Re: /usr-move: Do we support upgrades without apt?

2023-12-21 Thread David Kalnischkies
On Thu, Dec 21, 2023 at 02:42:56PM +, Matthew Vernon wrote:
> On 21/12/2023 09:41, Helmut Grohne wrote:
> > Is it ok to call upgrade scenarios failures that cannot be reproduced
> > using apt unsupported until we no longer deal with aliasing?
> 
> I incline towards "no"; if an upgrade has failed part-way (as does happen),
> people may then reasonably use dpkg directly to try and un-wedge the upgrade
> (e.g. to try and configure some part-installed packages, or try installing
> some already-downloaded packages).

You can configure half-installed packages, no problem; this is about
unpacking (which is the first step in an install, where only Conflicts
and Pre-Depends matter, if you are not deep into dpkg vocabulary).
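
(For example, something like
   dpkg --configure --pending
 or 'dpkg --configure <package>' for a specific one is the usual way to
 finish configuring packages an interrupted upgrade left unconfigured.)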


The "try installing" part is less straight forward. In general, you
are running into dpkg "features" (e.g. not handling pre-depends) or
into dpkg bugs (e.g. #832972, #844300): In the best case your system
state becomes a bit worse and hence harder to "un-wedge". In the worst
case a maintainer script has run amok as nobody tested this.
But yeah, most of the time you will indeed be lucky and hence come to
the unreasonable conclusion that it's reasonable to call dpkg directly.


Anyway, if your upgrade failed part-way, you are probably in luck given
that it's more likely the upgrade failed in unpack/configure than in
removal – so if you aren't too eager to install more packages by hand
but limit yourself to e.g. (re)installing the ones that failed, you are
fine as apt will have removed the conflictors already for you (or
upgraded them, if that can resolve the conflict).


But let's assume you are not:
As you are driving dpkg by hand you also have the time to read what it
prints, which in the problematic case is (as exemplified by #1058937):
| dpkg: considering removing libnfsidmap-regex:amd64 in favour of libnfsidmap1:amd64 ...
| dpkg: yes, will remove libnfsidmap-regex:amd64 in favour of libnfsidmap1:amd64
(and the same for libnfsidmap2:amd64 as well. If your terminal supports
 it, parts of these messages will be in bold)

Note that the similar "dpkg: considering deconfiguration of …", which is
the result of Breaks relations, is not a problematic case.

(Also note that this exact situation is indeed another reason why
 interacting with dpkg by hand is somewhat dangerous as you might not
 expect packages to be removed from your system while you just told
 dpkg to unpack something… just remember that the next time you happen
 to "dpkg -i" some random deb file onto your system.)

That is of course no hint that a file might have been lost due to
aliasing if you don't know that this could be the case, but on the
upside it is not entirely a silent file loss situation either. We could
write something in the release notes if someone happens to read them AND
also encounters this message.


Query your memory: Did you encounter this message before? Nothing in
the /usr merge plan makes it particularly more likely to be encountered
by a user and not all of the encounters will actually exhibit the file
loss. So if you haven't – and I would argue that most people haven't –
there is a pretty good chance you won't have a problem in the future
either…


So, in summary: Yes, there are theoretical, relatively easy ways to trigger
it with dpkg directly. That isn't the question. The question is whether a real
person who isn't actively trying to trigger it is likely to run into it
by accident (and/or if such a person can even reasonably exist) so that
we have to help them by generating work for many people and potentially
new upgrade problems for everyone – or if we declare them, existing or
not, a non-issue at least for the upgrade to trixie.


And on a sidenote: I would advise to reconsider interacting with dpkg
too casually – but luck is probably on your side in any case.


Best regards

David Kalnischkies




Re: /usr-move: Do we support upgrades without apt?

2023-12-21 Thread David Kalnischkies
On Thu, Dec 21, 2023 at 03:31:55PM +0100, Marc Haber wrote:
> On Thu, Dec 21, 2023 at 11:19:48AM -0300, Antonio Terceiro wrote:
> > On Thu, Dec 21, 2023 at 10:41:57AM +0100, Helmut Grohne wrote:
> > > Is it ok to call upgrade scenarios failures that cannot be reproduced
> > > using apt unsupported until we no longer deal with aliasing?
> > 
> > I think so, yes. I don't think it's likely that there are people doing
> > upgrades on running systems not using apt.
> 
> Do those GUI frontends that work via packagekit or other frameworks
> count as "using apt"?

I explained that in full detail in my mail to the pause-thread:
https://lists.debian.org/debian-devel/2023/12/msg00039.html

In short: Helmut's "apt" (my "APT") includes everything that uses libapt.
That is apt, apt-get, python-apt, aptitude, synaptic, everything based
on packagekit, …

I know of only cupt and dselect which don't count, but I have some
suspicion that they would work anyhow – IF you don't run into other
problems with them, like them not implementing Multi-Arch.


So this thread is really about:
How much are people REALLY fiddling with dpkg directly in an upgrade
and can we just say it's unsupported – because, at least that is my view,
in practice nobody does it and it's therefore also completely untested.

Case in point: We have this thread not because someone found it while
working with dpkg directly even though they had potentially years, but
because Helmut ended up triggering an edge case in which apt interacts
with dpkg in this way, and only after that did people look for how to
trigger it with dpkg because triggering it with apt is hard (= as Helmut
checked, no package (pair) in current unstable is known to exhibit the
required setup).

(I will write another mail in another subthread about the finer details
 of what interacting with dpkg in an upgrade means and what might be
 problematic if you aren't careful – in general, not just with aliasing)


Best regards

David Kalnischkies




Re: Pause /usr-merge moves

2023-12-04 Thread David Kalnischkies
On Mon, Dec 04, 2023 at 01:13:43PM +0100, Helmut Grohne wrote:
> David Kalnischkies made me aware […]

Oh, did he? I think he wanted to tell you something else… 
As IRC seems to be really bad at transporting complicated things (who
would have guessed?) and I need to sort my thoughts anyhow, let me recount the
last few days in a lengthy mail… perhaps that helps me and whoever else
might be interested.

Disclaimer: /me is an independent APT developer for ~14 years aka
super biased if the topic is upgrades and APT.
Content warning: Gory details of APT and dpkg internals are described
 in text. Reader discretion advised.


First off, I care only about APT (= meaning everything using libapt, be
it apt, aptitude, unattended-upgrades or some python-apt script), not
about someone who believes they could perform an upgrade from bookworm
to trixie by hand with dpkg¹. I mean, APT can calculate it, so there
obviously is a way, but it is so tedious for a human being that nobody
does that. Sure, there are people who believe "dpkg -i *.deb" would
work, but a) that wouldn't be affected by this problem to begin with and
b) it doesn't work as you have to spell out pre-depends, the order in
which the debs are given is important and, last but not least, you have to
work around a few things in dpkg (e.g. #832972, #844300). So whoever
believes they can do it without APT is probably lying to themselves
or at the very least wasting a lot of time such experts could use far
better to improve APT and/or dpkg for the benefit of all…
Source: I stumbled over many bugs while trying to simplify APT down to
that dpkg call more than seven years ago, so that is not even a new
thing and so not even worthy of grabbing your pitchforks because
I supposedly impose new things you haven't adapted to yet…
The release notes actually say you have to use 'apt', so I am
affording a significant amount of leeway here talking about APT.
(bootstrapping and such stuff which gets away with not using APT isn't
 upgrading anything, so this problem with Conflicts is of no concern)


So, after clearing that one up, let's focus on the issue at hand:
APT tells dpkg about all removes it will eventually schedule ahead of
time, so technically all (well… some?) Conflicts your yaml includes could
be made to exhibit the problem, but APT usually always makes the removal
of a package explicit as well and in that case you/we/usr-merge is fine.

There is one exception we have to talk about: If a package is scheduled
to be removed in one dpkg call and in the next one unpacked. I figured
out today that I implemented that sorta by accident… actually, I was
working on crossgrades (= a package changing architecture in an upgrade,
which contrary to common usage of the word is at least for APT also the
case for all <-> any and as such can happen even on a single architecture
system) and accidentally it also catches temporary removals which
don't change architecture, but are also removing a package before
unpacking it (It's Debian's fault for trusting me with this stuff…).

While the former might be interesting to look at as another source of
esoteric problems, let's focus on the temporary removals here:

Unversioned conflicts [usually] do not cause temporary removal. They
cause "permanent" removal as the packages aren't co-installable (yes,
I mentioned you could conflict on a provides from bookworm which is
removed in trixie. I don't think that actually happens in reality).

A versioned conflict is discouraged by policy and by lintian. Okay, we
need it here, so: It can cause it, but only if the conflict is mutual
but the packages still co-installable on trixie AND they were also
co-installable in bookworm (and for it to actually happen, both have
to be installed from bookworm on the user's system of course).

If "pkga conflicts pkgb (<< 2)" and "pkgb depends pkga (>= 2)" (like in
a package rename perhaps) APT can just "unpack pkgb pkga" and all is
fine. The problem you came with was instead "pkga conflicts pkgb (<< 2)"
and "pkgb conflicts pkga (<< 2)" (its the same if ONE of them is
a breaks, but not if both are) as APT can't just unpack them it has to
remove one of them before unpack both. That is what I called a temporary
removal as the package is only removed for a very short time.
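
(Illustrated with made-up control stanzas, the problematic pattern is:
   Package: pkga
   Version: 2
   Conflicts: pkgb (<< 2)

   Package: pkgb
   Version: 2
   Conflicts: pkga (<< 2)
 with version 1 of both installed from bookworm.)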

But that short amount of time is already too long if the packages
involved are essential, which I suppose is the reason for §6.6: if APT
(as it currently does by happy accident, as described above) just tells
dpkg that it is allowed to deinstall one of them and unpacks both, dpkg
can do its §6.6 dance to avoid the actual removal from disk. Yeah!
I implemented a generic improvement by accident instead of a bug!

Sadly, for usr-merge we need the removals to happen explicitly as dpkg
can't untangle the aliasing. The barrier idea achieves this by the way
of pre-depends as APT ha

Re: [APT repo] Provide a binary-all/Packages only, no primary arch list

2023-12-03 Thread David Kalnischkies
Hi,

(I think this isn't a good mailing list for apt vs. third-party repo
 config… users@ probably could have answered that already, deity@ if
 you wanted full APT background which I will provide here now…
 reordered quotes slightly to tell a better story)

On Sat, Dec 02, 2023 at 06:40:33PM +0100, MichaIng wrote:
> we recognised that APT falls back to/uses "binary-all/Packages"

APT doesn't fall back to it; using them if available and supported is
its default behaviour (← that word becomes important later on).

Debian repos actually opt out of it for Packages files:
https://wiki.debian.org/DebianRepository/Format#No-Support-for-Architecture-all


> while checking how to best enable riscv64 support for Webmin's own APT
> repository

And what did you do to the repository to enable riscv64 support?


> but still complains with a warning if no "binary-riscv64/Packages"
> is present and declared in the Release file of the APT repository:
> ---
> W: Skipping acquire of configured file 'contrib/binary-riscv64/Packages' as
> repository 'https://download.webmin.com/download/repository sarge InRelease'
> does not seem to provide it (sources.list entry misspelt?)
> ---

So you configured apt on the user systems to support riscv64,
but didn't change anything in the repository?


> Is this expected behaviour, i.e. is every repository expected to provide
> dedicated package lists for every architecture, or is there a way to provide
> an "all" architectures list only, without causing clients to throw warnings?

Yes, i.e. no and yes there is. The previously mentioned wiki says this
about the Architectures field: "The field identifies which architectures
are supported by this repository." So your repository doesn't support
this architecture and doesn't even ship the data the user has configured
apt to get. Something is fishy, better warn about it.


So, add riscv64 to Architectures in Release and be done, except that
you should read the entire stanza as it will explain how a client will
behave with that information. It also explains the specialness of 'all'.
https://wiki.debian.org/DebianRepository/Format#Architectures
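
(In other words, the Release file would gain the new architecture in that
 field, roughly along the lines of
   Architectures: all amd64 arm64 riscv64 …
 adjusted to whatever your repository actually serves.)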


Why? So glad you asked! Nobody tested the repository with this arch.
If you e.g. have a Multi-Arch system, bad things can happen if a library
package is not available for all configured architectures in the same
version. Or that arch:all package ships little-endian data, but your
system is big-endian (or vice versa), or it's actually using linux-only
binaries in maintainer scripts, but you are running hurd-i386 …


> In case of Webmin, all packages are perl scripts, have "Architecture: all"
> declared and naturally support all architectures. So it seems unnecessary to

Well, arch:all packages do not "naturally" support all architectures
in edge cases and we love edge cases. It is also why an arch:all package
is not "naturally" also "Multi-Arch: foreign".


> provide clones of a single package list for every architecture explicitly,
> and having to do so whenever a new one appears.


So yeah, if you want you can ship only an -all/Packages file and add the
others if you ever ship some; as long as you tell apt (and your users)
that you support an Architecture, they will manage.


Best regards

David Kalnischkies, who happens to have implemented most of this




Re: How do you cause a re-run of autopkgtests?

2023-07-21 Thread David Kalnischkies
On Fri, Jul 21, 2023 at 05:57:23AM -0500, G. Branden Robinson wrote:
> But I see no mechanism for interacting with autopkgtests to force them
> to re-run due to the remedy of a defect in the test harness itself.
> 
> How is this to be done?  Should some automated mechanism for achieving
> this be added, and if so, where?

You already found the retry button from previous replies, but you
don't have to click it to get what you want…


See how migrating groff/1.23.0-2 to testing was tested (and failed)
every day without you lifting a finger?
https://ci.debian.net/packages/d/dgit/testing/amd64/

Triggering a rerun now would be pointless as (a fixed) dgit needs
to migrate first before groff can… which should be around ~tomorrow.
After all, groff can only migrate if importing it into testing isn't
causing a regression, and it does cause one as long as a bad dgit is in testing.

I guess the day after ~tomorrow groff will migrate as well – assuming
dgit was really your only problem.


Best regards

David Kalnischkies




Re: rejection of binary package based on file timestamp

2023-07-20 Thread David Kalnischkies
Hi,

On Thu, Jul 20, 2023 at 10:01:54AM +0200, PICCA Frederic-Emmanuel wrote:
> I am working on two packages pyfai[4] and python-fabio[3], I have got a 
> rejection based on the file timestamp which seems too old.

Looking at the dak (= Debian Archive Kit; aka the tool(s) handling
the archive) source [0] shows us that this is an explicit check
(BinaryTimestampCheck) against time travel as that "can cause errors
on extraction" (says the source, dating from 2012).

This check flat out refuses files from before 1975. For the future it
is a lot more restrictive… no more than 24 hours in the future.
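
(If you want to see which files in a built .deb carry such timestamps,
 something like
   dpkg-deb --contents ../python3-fabio_*.deb
 prints the tar-style listing including the dates – the filename pattern
 is just an example.)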

I wonder a bit why this is not applied to sources as well, but I suppose
it could be legit to have unchanged source since then… not that I think
you will encounter a lot of trouble on extraction, but it's likely so
untested that something will struggle with it like it does with e.g. the
years 2038 or year 0 (compare also the time_t 32 vs 64bit discussion).

[0] https://salsa.debian.org/ftp-team/dak/-/blob/master/daklib/checks.py#L461 ff


> If you lool at python-fabio status page, it seems that they all failed [5], 
> but if you only look at the build log the package build on most buildd.[6].

The build was successful on the buildds, so the binary packages were
uploaded to the archive – but dak refused to import them. That is
also why they were successfully imported into some ports architectures
as ports is currently not dealt with by dak but by another tool
(dubbed mini-dak) for now (note for time travelers: This might change
 in the future).


> So during the build it seems that sphinx keep these timestamp and use it for 
> all the generated documentation.

Taking the timestamp of the source file is not the worst idea as that is
fixed and fixed is useful e.g. for reproducible-builds. I somewhat doubt
sphinx is doing this as the output usually depends on various input
files, but if that is what you see…

An alternative is using the value stored in SOURCE_DATE_EPOCH (if it
exists).

> My question is what should I do now...

If it is just about a few files each, it might be easiest to `touch`
the binary file in your debian/rules.

Somewhere near the top, place for good measure:
SOURCE_DATE_EPOCH ?= $(shell dpkg-parsechangelog -S Timestamp)

and a bit later (as I think it's the upstream changelogs):
execute_after_dh_installchangelogs:
	touch -d"@$(SOURCE_DATE_EPOCH)" path/to/binary/file


I haven't actually tried this, so please don't rely on me typing it
correctly into the blue. Test it! Especially look at the timestamps
in the produced deb file.


It is a bit iffy to set the timestamps to that of the changelog (which
changes with every revision), but close enough. At least it is more realistic
than pretending this software wasn't changed since the start of the unix
epoch… So please drop this again if it's no longer needed.


Best regards

David Kalnischkies

P.S.: d-devel@ isn't entirely wrong as this is sufficiently esoteric,
 but next time start perhaps on d-mentors@.




Re: New deprecation warning in CMake 3.27

2023-06-17 Thread David Kalnischkies
On Fri, Jun 16, 2023 at 01:08:08AM +0200, Timo Röhling wrote:
> Attached is a list of most likely affected packages, which
> I generated with a code search for
> 
> (?i)cmake_minimum_required\s*\(\s*version\s*(?:3\.[01234]|2)(?:[.)]|\s)

fwiw apt would trigger this in its autopkgtest as one of them (the
main run-tests) builds a sub-directory of helpers with cmake via the
main "upstream" CMakeList.txt file. That test is allowed to have stderr
output through, so no problem on that front. I just report this back
as I think its a bit optimistic to assume everything building something
in tests would do so from within debian/tests. I would actually hope
most would build some part of upstream like apt instead… just saying.

(I doubt there is any reason apt uses that particular version, but my
 cmake knowledge is on a pure edit-semi-randomly-until-it-seems-to-
 work-as-wanted basis)


Can you recommend a relatively safe & old version to use instead of
< 3.5 which doesn't need bumping next month but is also available in
most semi-current releases of all major distributions (as that is what
most upstreams will care about if they don't have special needs)?


Best regards

David Kalnischkies




Re: DEP 17: Improve support for directory aliasing in dpkg

2023-05-03 Thread David Kalnischkies
On Wed, May 03, 2023 at 10:31:14AM +0200, Raphael Hertzog wrote:
> On Tue, 02 May 2023, Helmut Grohne wrote:
> > I think there is a caveat (whose severity I am unsure about): In order
> > to rely on this (and on DEP 17), we will likely have versioned
> > Pre-Depends on dpkg. Can we reasonably rule out the case where and old
> > dpkg is running, unpacking a fixed dpkg, configuring the fixed dpkg and
> > then unpacking an affected package still running the unfixed dpkg
> > process?

APT instructs dpkg to --unpack and to --configure in different calls;
you can't mix and match those in the same call and apt never does the
(combining) --install (not that it would really matter here).
Also, dpkg is essential and as such has to work while unpacked, so
unpacking a fixed dpkg means that this fixed dpkg will (later) configure
itself.
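
(Schematically, and with placeholder package names, an APT run thus boils
 down to separate invocations along the lines of
   dpkg --unpack …/foo_2.0_amd64.deb …/bar_1.0_amd64.deb
   dpkg --configure foo bar
 rather than a single combined 'dpkg --install'.)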

Now, given dpkg is essential, it also means it gets the essential
treatment from APT (by default) which means it will try to unpack it as
soon as possible while trying to keep the time it remains unconfigured
at a minimum. Give it a try: you usually see essential packages being
interacted with first and in their own calls if you look closely enough.
That isn't an accident, the idea is that some random 'optional' package
failing to install in some way should not leave you in a situation where
essentials are in a state of limbo.

If you increase the complexity of (pre-)requirements though, APT will
end up being forced to hand dpkg multiple packages in one go. Just pull up
the last time you upgraded libc6: You will see a bunch of -dev packages
and MultiArch siblings being unpacked alongside libc6 and libc-bin. You
will only see those two being configured right after though. The
dependencies demand it… so we might have to be a bit careful about the
dependencies dpkg carries if such a route is taken.


That said, there is always the 'stretch' horror story of APT installing
all of KDE before touching dpkg because of the install-info transition…
Although that was avoided before the release by removing from dpkg the
Breaks leading us into this dark alley… (just to be sure: APT wasn't
wrong, the dependencies weren't – but the idea to manually upgrade dpkg
first to avoid some pitfalls was suggested which turned out to be wrong).

Also, I wonder if we run into Pre-Depends loops and similar nasties
given that the essential set is somewhat likely to pre-depend on
things which use(d) to be in /lib which would in turn Pre-Depend on
dpkg.

(I haven't tried and my memory is sketchy about those finer, more
 complicated matters, but dpkg certainly can produce working orders
 for loops by inspecting which maintainer scripts exist or not, so
 upgrades involving those might or might not work. All bets are off
 which version of dpkg would be dealing with those though)


> I don't know APT well enough to answer that question but from my point of
> view it's perfectly acceptable to document in the release notes that you
> need to upgrade dpkg first.

Those never work in practice though. Nobody logs in on their buildd
chroots and upgrades them "properly", we all just hope for the best.

Even on systems we care more about people are regularity caught red
handed by bothering support with questions whose answers are spelled
out in detail in the release notes. Case in point: "Changed security
archive layout" last time or "Non-free firmware moved to its own
component in the archive" this time around…

And those are easy to diagnose and fix. 'You "might" have some "random"
files not present on disk. So your system might not even boot or spawn
interdimensional portals. You better reinstall…' is not the type of
thing you want to hear from support.


Best regards

David Kalnischkies




Re: Please, minimize your build chroots

2022-12-19 Thread David Kalnischkies
On Sun, Dec 18, 2022 at 06:08:57PM +0100, Johannes Schauer Marin Rodrigues wrote:
> Quoting David Kalnischkies (2022-12-18 17:18:28)
> > On Fri, Dec 16, 2022 at 03:38:17PM +0100, Santiago Vila wrote:
> > > Then there is "e2fsprogs", which apt seems to treat as if it were
> > > an essential package:
> > > 
> > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=826587
> > 
> > As Julian explained, it is considered "essential" because the maintainer
> > said so. If you don't think e2fsprogs should be considered "essential"
> > for a system it is installed on please talk to the maintainer.
> > 
> > Sure, the package is not (anymore) really "Essential:yes", but 'apt'
> > isn't either and will print the same message anyhow. I don't think it
> > would be a net-benefit for a user to invent words for different types of
> > essentialness in apt because in the end you either know what you are
> > doing while removing a somewhat essential package and continue or you don't
> > know what you are doing and (hopefully) get the hell out.
> 
> would it be so difficult to cater to both kind of users? For those who do not
> know the terminology, using the word "essential" is probably fine. But for
> those who do it's confusing. Why can apt not say something like:
> 
> WARNING: The following packages will be removed. Apt considers them essential
> because they are marked as Priority:required. This should NOT be done unless
> you know exactly what you are doing!

This is objectively wronger™ though: prio:required packages are not
considered essential by apt. Most are for other reasons, but priority
has nothing to do with it. The same "you are about to remove an
essential package" (paraphrased) message is shown for:
- packages marked as Essential:yes in ANY [native] version known to apt
  (if you don't modify that behaviour with pkgCacheGen::Essential)
- packages marked as Important:yes/Protected:yes in ANY [native] version
  (surprisingly Julian has not added an option here…)
- binary packages listed via the pkgCacheGen::ForceEssential option,
  (the list can NOT be empty, it will default to "apt")
- binary packages listed via the pkgCacheGen::ForceImportant option
  (empty list by default)
- packages that are (pre-)dependencies of the other points if that
  package is removed, too.

(Note that the mentioned options do work only if you generate a cache
 and also 'taint' that cache meaning that a reused cache later without
 those options will still behave as if they were given.
 You have been warned.)

The latter ensures that you can e.g. change awk providers, but be smacked
with a huge clue bat if you remove the last provider, even if that
happens to be the prio:optional gawk which as a package itself doesn't
look like it would be essential in any way without going into a lot of
details completely lost on most apt users (for good reason, after all,
if you wanted to know all that, you would probably do dpkg by hand or
at least maintain apt… and nobody wants to do THAT, am I right…)

Also: "marked as …" – by whom? If you say it like that, a user might
think they did; like they marked some package to be held back for
example and that marking can (should?) be removed.


The problem in showing something different for Essential:yes (derived)
and Protected:yes (derived) essential packages is that the difference
between the two is marginal from apt's POV: Essential:yes has to work in
unpacked state, but that is a dpkg-level thing to worry about and hardly
a real concern for the general public. Just like the reduced install
order requirements in general.

Okay, things don't need to depend on Essential:yes packages if they use
them, but that tends to be the case for Protected:yes as well, as not
that much really "uses" an init system for example. Other distros slap
Protected:yes on high-level meta packages like 'gnome'. Nobody depends
on that either.

All the two really do in terms of apt (front ends – the message is apt
specific, but the fields aren't so it would be kinda nice if terminology
could be reused by other front ends if they so choose) is making it
a pain to remove them, but being too upfront about that has its problems
as well, as it naturally leads to the question "why?", which apt preempts
with the ultimate hammer: It's essential for the system, as the individual
reason for each package might even be distro-specific. Users usually
don't question that.

It's a lie. Heck, it might even be deception. But the truth hurts more:
"Heh, you are a great user, you really are, but you know, no offense,
but I am a computer program on a device you (think you) own and should
probably be able to do whatever you want to do with it, but there are
other people who are not you who think you might be an id

Re: Please, minimize your build chroots

2022-12-18 Thread David Kalnischkies
On Fri, Dec 16, 2022 at 03:38:17PM +0100, Santiago Vila wrote:
> Then there is "e2fsprogs", which apt seems to treat as if it were
> an essential package:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=826587

As Julian explained, it is considered "essential" because the maintainer
said so. If you don't think e2fsprogs should be considered "essential"
for a system it is installed on please talk to the maintainer.

Sure, the package is not (anymore) really "Essential:yes", but 'apt'
isn't either and will print the same message anyhow. I don't think it
would be a net-benefit for a user to invent words for different types of
essentialness in apt because in the end you either know what you are
doing while removing a somewhat essential package and continue or
you don't know what you are doing and (hopefully) get the hell out.


> This sort-of breaks sbuild when using an ordinary chroot (not overlayfs),
> because after building a package needing e2fsprogs, it may not be removed
> and it's kept in the chroot.

"It may". Well, certainly apt won't autoremove it. Like a lot of other
packages it won't, even though they aren't essential or protected or
whatever… ("just" prio:required is enough for example). They are not
irremovable though. It might just be a little harder to remove them
than to install them. Heck, some people believe it's far easier to start
vim than to exit it.


> I think apt authors did not think that apt is used by sbuild to

I think (the few) apt authors deal with way too many users with way too
many (sometimes mutually exclusive) ideas of how it should behave:


> build packages. Here we would need some interface like SUDO_FORCE_REMOVE=yes,
> or maybe just stop doing anything at all with the Important:yes flag.

Ironically, one of the selling points for Protected:yes (that is how the
field ended up being named, which dpkg supports by now) was that it allows
shrinking the essential set (e2fsprogs even being an example) as there
is a non-empty set of people who believe users do incredibly dumb things
if you give them the option to.

I mean, we got practically bullied into replacing the "Do as I say!"
prompt with a semi-hidden command-line flag (--allow-remove-essential)
because some wannabe YT star yolo'ed the prompt ending in misery and
somehow that was framed as our fault by everyone as we didn't show the
appropriate meme-gif (rendered with caca) to make them understand
without reading [sorry, not sorry. I am not even exaggerating that much].

Due to that, you are now presented with:
| E: Removing essential system-critical packages is not permitted. This might break the system.

See? "essential" again and even "system-critical" at that.
It is all a lie of course. Nobody really needs an init system, much less
some silly metapackage for it, as long as there is /bin/sh and a keyboard.
I should make a video about it to – essentially – become famous & rich…


Btw, apt also has behaviour specifically for sbuild: 'apt-cache show
mail-transport-agent' has a zero exit code even though that makes no
sense at all, apart from not making (some?) sbuild versions explode.
You are welcome. I hate it.


So, long story short, apt features and behaviour are very seldom done
because we are bored and had nothing better to do. It is far more common
that it was heavily requested to be that way for $REASONS. Sometimes it's
even the same $REASONS you have for disliking it. Users are the worst,
I said it here first. But no problem, there usually is an option to
change anything in apt. If not, we can usually add it.

Just don't assume that the behaviour you prefer will be the default.
We have a strong tendency to make everyone unhappy.
(I should know, I never get what I want.)


Best regards

David Kalnischkies

P.S.: You thought we are surprised by sbuild using apt? Sorry, but you
are up against ISO building needing 'apt-cache depends' output previously
unknown even to the CD team itself (https://bugs.debian.org/218995#54)
(yes, it is a decade old. It's still my favorite). Try harder.




Re: Firmware GR result - what happens next?

2022-10-14 Thread David Kalnischkies
should be very light on
(rev-)dependencies and demands upon them.

We are potentially burning one component name forever, but I suppose
we can live with that.


The question is if we should do that though, as it would be libapt-
specific magic. I suppose as we are only talking about sources.list
we can do whatever as, after all, that file is libapt-specific
as well, but:

Alice as an upgrader has non-free in her sources.list and gets
non-free-firmware implicitly as a service from apt.

Bob installs Debian on a fresh system and gets non-free-firmware
in his sources.list as a service from d-i.

Cecile is an upgrader, too, but she has non-free in her sources.list
only for the firmware, so she would happily switch out non-free
for non-free-firmware, if only she knew.


All of them go online, read stuff potentially decades old about Debian,
firmware and non-free. Their software center GUIs talk about four
components now (do they?) they can enable (or not) in these funny little
dialogs…

As much as I would like apt's sources.list to be apt-specific, I fear
a bunch of things read and even write it, which potentially need
to implement apt's magic as well to make sense to the user. Alice and
Cecile will be confused if e.g. the GUI says they don't have
non-free-firmware enabled, but they are getting packages from it…

That is not to say no, I just want to highlight that other places would
need some work as well and it could be confusing if we miss some –
but then, what change isn't confusing.


(That apt does things this way is due to historic growth. I entertain some
 changes which, if they existed, would make similar split-offs work
 better, but those I would classify as requiring enormous patches.
 Absolutely not going to happen soon, if at all)


> 1. Document it in the release notes and let users handle it. This means
> lots of users won't get security updates for firmware (which are mostly
> only issued for x86 CPU microcode), since lots of folks won't read the
> release notes. This also means lots of support requests when users
> can't find the firmware package they wanted.

'apt update' still has the code which detected Debian sources accessed
via ftp, which told users that ftp will be shut down and points to
a press release about it. [0]

I didn't implement something similar for the security change as
I somehow got the memo way too late and it would have been harder given
in that case I would have no data to go by, but that is water under the
bridge by now.


We could do something similar to ftp here, detecting non-free but not
non-free-firmware for a Debian source and pointing users to a press release
explaining what this is all about (not the release notes, as e.g. sid
users would somewhat rightly not expect to need to look there for
information). That is somewhat trivial to do, we might even be able to
convince the stable managers to allow backporting this, so a bullseye user
running 'apt update' while upgrading to bookworm would see that message
already or otherwise be bugged about it IF they later interact with apt
(which isn't a guarantee. So ideally other front ends would talk about
this, too).

That would be entirely Debian specific and hard-coded in apt (and in
other front ends) though. I am not an enormous fan of producing an index
of all repositories wanting to opt-in within apt source code. So moving
that to an external hook might be better from a backport sense (as
I suppose in the lifetime of stable, repositories will adapt. Not all
prepare for stable while we do but against the finished product).


I don't really want to rank either of the mentioned options as either
could work, they all have their benefits and drawbacks and most
importantly: while I am happy to impose work upon myself, I don't want
to decide what others should work on. I also have a bad track-record of
judging what is acceptable to bother users with…

If I completely ignore the work aspect, for me personally I would favour
3 as it has the hint of introducing the concept of a hierarchy in
components which might come in handy later if we want to split off other
sections in other components as well. But as said, either works and
I would be willing to support them from the apt side of things at least.

With one exception:
I rank any option even remotely considering a postinst failure well
below NOTA as that is a horrible user experience. isa-support is a hack,
not a role model. It is barely acceptable only because it affects only
a tiny fraction of our user base so far. And even for those, I would
like apt to help not installing broken packages (but that is another
topic).


So, who is gonna take the blame for deciding this for everyone?


Best regards

David Kalnischkies

[0] 
https://salsa.debian.org/apt-team/apt/-/blob/main/apt-private/private-update.cc#L88-106




Re: Bug#903158: Multi-Arch: foreign and -dbgsym: too weak dependency

2022-10-10 Thread David Kalnischkies
On Mon, Oct 10, 2022 at 08:50:49AM +0800, Paul Wise wrote:
> On Sun, 2022-10-09 at 18:54 +0200, David Kalnischkies wrote:
> > I suppose we could use 'foo-dbgsym Enhances foo:arch (= version)'.
> 
> That sounds interesting and would be nice generally, however...
> 
> > On a sidenote: What the Depends ensures which the Enhances doesn't is
> > that they are upgraded in lockstep. As in, if for some reason foo (or
> > foo-dbgsym) have their version appear at different points in the archive
> > apt would hold back on a Depends while with Enhances this dependency
> > would be broken and hence auto-remove kicks in.
> 
> For the rolling Debian suites, the main and dbgsym archives are often
> out of sync, the dbgsym packages updates sometimes appear first and
> sometimes second. Keeping foo/foo-dbgsym in sync is strongly needed

Oh, are they? I thought they would be better in sync. Never noticed,
but I tend to have extreme luck avoiding any kind of apt problem…


Anyway, that is solvable. An 'upgrade' e.g. keeps back an upgrade if
that would break a Recommends. Seems reasonable to keep it back also
if it would break a previously satisfied Enhances as losing the
features of a plugin due to an automatic upgrade seems super-bad.

For full-upgrade we could go with a rule specifically targeted at
packages from the 'debug' section with such Enhances dependencies.
If you have multiple architectures of an M-A:same package installed
they keep each other in check as well as long as the "old" version
is still downloadable. So that shouldn't be too hard™…

The downside is that both are heuristics which are solver dependent; as
such aptitude likely won't support that and external solvers surely won't
(without implementing similar solution optimisation logic).

That said, this isn't really different from "misusing" Depends to have
it be held back, as that is not working with every solver in every
situation either. For apt I am actually somewhat surprised if it does in
the general case as the -dbgsym should have close to no power (as
nothing depends on it), while the thing it has debug symbols for probably
has things depending on it, so if it comes to upgrading foo or keeping
it back it should favour upgrading foo (and hence removing foo-dbgsym)
in most cases currently (full-upgrade that is, upgrade of course not).


Anyway, if that is an acceptable/desirable option we should probably
move any apt machinery discussion into its own bugreport and away from
d-d@ and debhelper. For this thread I would say it's enough to decide if
using Enhances in this way is acceptable for everyone.

If and how apt (and/or other tools) then make use of the data is up to
them in the end.


Best regards

David Kalnischkies




Re: Bug#903158: Multi-Arch: foreign and -dbgsym: too weak dependency

2022-10-09 Thread David Kalnischkies
On Sat, Oct 08, 2022 at 03:42:59PM +0100, Simon McVittie wrote:
> I was under the impression that the Debian archive does not allow
> dependencies with an explicit architecture like this, only the :any
> qualifier for M-A: allowed packages (like python3:any).

"allow" is a strong word especially if you don't find it in policy
(but then, policy documents existing usage, so some usage pre-dates
 policy by definition I guess).

It is also not really related to MultiArch ipso facto as the initial
spec explicitly mentions it as a dropped discussion point. [0]
It was later added with MultiArchCross [1] with the previously mentioned
caveats still in place as cross-building is not a thing in the archive
(as we build everything natively – ignoring special cases like win32).

That said, we have some packages declare cross-architecture dependencies
in the archive (even in stable), but not as a hard-dependency as indeed
various archive tools can not deal with such dependencies and it's
unclear if they even should as MultiArch is not a default configuration.
(which I want to highlight explicitly here as it is frequently compared
 to things we ship or enable by default for everyone)

Examples are:
- crossbuild-essential-i386 depends on gcc-i686-linux-gnu | gcc:i386
- gamemode recommends libgamemode0:i386
- libc6-i386 conflicts with libc6-x32:i386

(and that is just looking at :i386 in amd64 as that is a somewhat common
 usage to bring in i386 packages on amd64.)

The trick previously was to depend on a package only available in the
other architecture, which equally doesn't work with tools that only have
a single arch available and is less obvious for the casual onlooker.


apt (and libapt-based friends) and dpkg agree mostly on what things mean
and if they don't it tends to be way beyond the pay grade of the average
DD (an example would be #770345 that I have left hanging for 8 years
now) – no disrespect intended! It is just that I would certainly not
want to reason about any of that stuff if that wouldn't come hand in
hand with being an apt dev… bringing all those nitty-gritty details to
policy would certainly be an interesting endeavour, but whether it would
really be a service to human kind^W^Wcontributors I am not so sure, even
if it's frequently used as an argument against MultiArch and related
projects.


Best regards

David Kalnischkies

[0] 
https://wiki.ubuntu.com/MultiarchSpec#Allow_official_packages_to_have_cross-architecture_deps
[1] https://wiki.ubuntu.com/MultiarchCross#Cross-architecture_dependencies




Re: Bug#903158: Multi-Arch: foreign and -dbgsym: too weak dependency

2022-10-09 Thread David Kalnischkies
 idea for
various reasons (hence why apt likes to go on a remove spree if that is
deemed more beneficial), but that would lead us too far off-topic here…


So, could that be an acceptable Option c) ?


Best regards

David Kalnischkies

¹ this example was explicitly chosen as it's possible that you
  want to use them independently. I don't see a lot of reasons for
  independent usage of e.g. asc and asc-music even if it's of
  course possible.




Re: Automatic trimming of changelogs in binary packages

2022-08-19 Thread David Kalnischkies
On Fri, Aug 19, 2022 at 09:01:22AM +0800, Paul Wise wrote:
> Before we consider enabling this by default, first we need a way for
> `apt changelog` to download the full changelog rather than loading the
> changelog from /usr/share/doc in the currently installed package.

You can tell apt to ignore the local changelogs completely with:
Acquire::Changelogs::AlwaysOnline "true";

This is set by default on ubuntu-vendor apts, not sure why as for
a repository identifying as "Origin: Ubuntu" apt knows that it should
grab the online changelog, regardless of the vendor apt is built for
(Acquire::Changelogs::AlwaysOnline::Origin::Ubuntu).


That is an all or nothing setting though, as Ubuntu does trimming
unconditionally for all their packages. I don't see a logic¹ that would
be able to detect "builds with dh >= 14 (and hasn't opted out)" given
only binary package metadata, but I guess for most the needless downloads
are not much of a concern.

¹ I guess we could implement looking at the free-form text in trimmed
  changelogs, but that feels a bit brittle. If the last line of the
  changelog doesn't start with " -- " … except that this isn't always
  the case as e.g. shown by dpkg.



As previously said, another problem is that not all repositories have
online changelogs – and most tools building repositories have no option
for it – but a dh change either affects them all or we get into the
problem of wanting to know, while building, if a package will end up in a
repository which has them or not (build-profile?).


Also note that e.g. d-security has no online changelogs as far as apt
is concerned as they are not to be found on metadata.f-m.d.o (#490848).

The tracker uses a different URI which seemingly has them (and other
files for download apt doesn't know/offer), but I have no idea who
maintains that, if it should be used by others and the URI scheme is
slightly different (it doesn't contain the component the package
belongs to) so apt can't be told to use it anyway.
(And IF apt should use it, it should be told via the Release files,
 which only stable does currently; stable-updates and stable-security
 rely on apt's built-in fallback, which is sad)
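
(For reference, the Release field in question looks roughly like
   Changelogs: https://metadata.ftp-master.debian.org/changelogs/@CHANGEPATH@_changelog
 on current Debian stable – quoted from memory, so check an actual
 Release file before relying on it.)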


Best regards

David Kalnischkies

P.S.: As I was quoted already, as a side note: I am neither against nor in
favour of trimming; I am just pointing out potential problems and
sometimes even their solutions as far as apt is concerned.




Re: Project Improvements

2022-05-26 Thread David Kalnischkies
On Thu, May 26, 2022 at 08:50:21AM +0200, Marc Haber wrote:
> On Wed, 25 May 2022 20:21:03 +0200, David Kalnischkies wrote:
> >apt actually marks dependencies
> >of packages in section metapackages as manually installed if the
> >metapackage is removed due to the removal of one of its dependencies
> >– but doesn't if you decide to remove the metapackage explicitly.
> 
> That sounds nice and it's probably good to avoid accidental mass
> removals, but it makes the "manual" mark kind of a misnomer.

You may be right, but it is how it is due to backwards-compat and
countless complaints after accidents. The config option in control
of this is APT::Never-MarkAuto-Sections which previously did similar
things on install…

The current logic tries to preserve user choice more until it has no
other chance than to act out its configuration. On the upside, that
means you can disable this behaviour now "retroactively" and chains of
metapackages are easier to remove as they haven't marked themselves
manual on install.


I hate it. Especially if it turns into a huge media outcry like last
year, but sadly, we are sometimes forced to ignore what the user says
in the default configuration even though they literally typed "Do as
I say!" into a confirmation prompt…


Best regards

David Kalnischkies




Re: Project Improvements

2022-05-26 Thread David Kalnischkies
On Thu, May 26, 2022 at 03:44:29PM +0500, Andrey Rahmatullin wrote:
> On Thu, May 26, 2022 at 03:28:21PM +0500, Andrey Rahmatullin wrote:
> > > > I support many people with Debian, what I often see is that they remove 
> > > > a
> > > > package, and then also the meta-package is removed. And later all
> > > > dependencies of the meta-package are removed by accident.
> > > Not to rain on your parade, but those people should consider upgrading
> > > their Debian installations as since at least apt version 1.1 shipped
> > > before current old-old-stable (that is, they run at best Debian 8 jessie
> > > which is covered only by Extended LTS) apt actually marks dependencies
> > > of packages in section metapackages as manually installed if the
 
> > > metapackage is removed due to the removal of one of its dependencies
> > > – but doesn't if you decide to remove the metapackage explicitly.
> > Then I guess there are some other reasons for this to happen not
> > explainable by "these peoiple just run jessie".
> OK, this was really easy.
[…]
> # apt update && apt install task-kde-desktop && apt remove konqueror

task-kde-desktop has Section: tasks (as do all the other task- packages,
as they are built from the same source package).


We could add "*/tasks" to the list of APT::Never-MarkAuto-Sections
in apt or reconsider having tasks be in their own Section; personally
I would prefer the latter.
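
(Expressed as a local apt.conf override – the filename is arbitrary and
 whether the "*/tasks" pattern is accepted as-is would need testing, this
 is just to show where such a list entry would live:
   // /etc/apt/apt.conf.d/99tasks-as-metapackages
   APT::Never-MarkAuto-Sections:: "*/tasks";
 the trailing '::' appends to the existing list instead of replacing it.)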

There are many other packages which feel like metapackages, but aren't
for apt as they are in the 'wrong' section – which is what I meant later
on in the mail, but that was arguably very well hidden.


Best regards

David Kalnischkies




Re: Project Improvements

2022-05-25 Thread David Kalnischkies
On Wed, May 25, 2022 at 10:33:22AM +0200, Paul van der Vlis wrote:
> I support many people with Debian, what I often see is that they remove a
> package, and then also the meta-package is removed. And later all
> dependencies of the meta-package are removed by accident.

Not to rain on your parade, but those people should consider upgrading
their Debian installations as since at least apt version 1.1 shipped
before current old-old-stable (that is, they run at best Debian 8 jessie
which is covered only by Extended LTS) apt actually marks dependencies
of packages in section metapackages as manually installed if the
metapackage is removed due to the removal of one of its dependencies
– but doesn't if you decide to remove the metapackage explicitly.

So, given:

Package: mydesktop
Depends: texteditor, browser
Section: metapackages

And mydesktop manual, the rest auto-installed:
$ apt autoremove => nothing to be done

$ apt autoremove mydesktop => removes also texteditor & browser

$ apt autoremove texteditor => removes also mydesktop,
   but marks browser as manual

(This isn't specific to the autoremove command, it does happen for them
 all, even in full-upgrade. It is just easier to see this way.)


Something similar happens for packages which are put in Section: oldlibs
in that they move their manual marking (if they have it) to the
package(s) they depend on and mark themselves auto on upgrade to the
version moving to oldlibs.


As usual, neither is really specific to apt; both are implemented in
libapt, so aptitude and co should behave similarly as long as the conditions are
met.

Disclaimer: I implemented both a long time ago (somewhat improving on
similar existing behaviour… so even jessie is likely not affected, but
I am too lazy to check and it doesn't really matter that much anyhow)


That said, it is up to the maintainer to decide which section a package
belongs to and more importantly if a package is really that central to
the user experience of the metapackage that it must be a depends rather
than recommends.

(And yes, apt has installed new recommends in upgrades for literal decades,
 so that is absolutely not a reason to use depends…)


Best regards

David Kalnischkies




Re: Bug#1008644: ITP: nala -- commandline frontend for the apt package manager

2022-03-30 Thread David Kalnischkies
Hi,

Disclaimer: As I am an APT developer, I am feeling obligated to note
that the following comment is just that, not an endorsement nor a review.
I am also not indicating interest or what not. It is just a comment.


On Tue, Mar 29, 2022 at 09:35:27PM -0400, Blake Lee wrote:
> This package is useful because it improves the UX of managing packages
> through the command line with python3-apt. Additionally provides some

(improves… tastes are very different I guess, but that is fine.
 It reminds me of an unfinished branch though… ah well, one day.)


> extra quality of life features such as a transaction history you can

The README describes it as using /var/lib/nala/history.json, libapt
has /var/log/apt/history.log with I suppose roughly the same content,
although we don't have IDs in there and removing entries would be
strange. We have no interface for it so far though as we are as usual
chronically understaffed.

Anyway, 'undo' in relation to Upgrades triggers my spider-senses as
downgrades are in general not supported. The screenshots avoid that
problem supposedly by being only about installing a bunch of new
packages and eventually removing these packages again.


> […] Nala improves upon the hardwork of the apt […]

You don't mention it here, but the README features it first (after the
UI thing): Parallel Downloads.

My personal opinion on opening multiple connections to the same server
for parallel downloads aside, the bigger improvement seems to be that
you can use multiple different mirrors… except that all libapt clients
can do that assuming you configure it: apt-transport-mirror(1).
(or the packages come from different sources to begin with).
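
(A rough sketch of that configuration – file name and mirrors are just
 examples: point a source at a list file,

   deb mirror+file:/etc/apt/mirrorlist.txt unstable main

 with /etc/apt/mirrorlist.txt listing one mirror URI per line, e.g.

   http://deb.debian.org/debian/
   http://ftp.de.debian.org/debian/

 and libapt will pick from – and fall back between – those mirrors.)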

As your entire downloading and verification process is written by you
rather than using libapt I would prefer a note here mentioning this.
I am of course totally biased, but I have seen enough "apt-fast"
variants doing this completely wrong while unsuspecting users were under
the impression that it's just some shiny frontend on top of the good
old battle tested libapt implementations.

(Again, see Disclaimer. This is not a security review. I also don't want
 to imply that you have security bugs. Heck, perhaps libapt has more.
 My point is entirely on: Please be upfront on rolling your own)


> Nala is still in active development, but it is very usable. I've had
> many people ask me about getting this into the Official Debian repos so
> this is my request for that.
> 
> I assume that I would be in need of a sponser considering I've never
> uploaded anything into a Debian repository. But I did try my best to
> make the debian files proper, and I personally use sbuild for building
> the software.

That is two different things. A request to get it into Debian is
a Request For Packaging (RFP) – any user can ask and if the stars align
perhaps someone finds it useful enough to also want it in Debian
with the additional motivation to maintain the package within Debian
and wants to claim the work for themselves.

That is what an Intent to Package (ITP) is for. Writing debian/ once
is easy enough, the hard part is maintaining it over time. I (well,
Julian I guess, as I don't speak Python) will e.g. pester the maintainer
for this package in transitions to adapt to our newer APIs. So will the
Python teams. That might or might not align with upstream work. In
the mean time you as the maintainer (if upstream hasn't) are supposed to
interact with the security team. Your 'critical' bugfix in v0.6.0 e.g.
is a bug worthy of a CVE and would need to be backported into older
versions for stable and every other release supported by Debian (ideally
with coordination with the other distros with embargos and such).
If upstream's solution to that problem was so far to "just upgrade to the
newest version", at least one of you is in for some work (I know you are
both, it's just easier to realize that these are two different jobs if we
pretend you are not).

And last but not least: If you decide you want to be a maintainer, head
over to debian-mentors and read about the Request For Sponsorship (RFS)
process, which helps you get the ITP package you prepared into Debian
while you are still learning the ropes and hence do not yet have the
rights to upload unsupervised into Debian yourself.

(As this is Python, the Python team might be interested in helping
 maintain it if you apply to them. While I would be happy if you
 would try to interact with us from the apt team, I don't think we
 have the resources to help you with packaging, though.)


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: merged-/usr transition: debconf or not?

2021-11-19 Thread David Kalnischkies
On Fri, Nov 19, 2021 at 10:06:13AM +0100, Ansgar wrote:
> Given packages already did such moves in the last years and you claim
> this happens in a non-negligible number of cases, could you please
> point to some examples where this already happens in practice?

You need a / → /usr¹ and a pkg-a → pkg-b move in the same release.
Also, you need to have (the new) pkg-b be unpacked before pkg-a.

An example³ would be coreutils/util-linux/… moving everything from /bin
to /usr/bin and in the same Debian release splitting out one (or more)
of their tools into their own package (as they usually do).
As those are essential they will Pre-Depend² on the split out package
which guarantees that pkg-b will be unpacked before pkg-a.

The result is that the split out tool will be gone from /usr-merged
systems – which at that point should be all systems.


Another example would be the systemd files debhelper has been moving for
some time already, while the package does a foo and foo-data split in the
same Debian release. You just need to "solve" the unpack order now, but
I will leave that as an exercise for the reader.



The move and the reorganisation are both forbidden by the CTTE for
Debian 12 in "Moratorium on moving files' logical locations into /usr",
which even describes this problem as one of the reasons for it, but hopes
to have it resolved by 13 (without mentioning how).

Are you suggesting that Debian will use 13 to move each and every
file in / to its /usr counter-path while forbidding packages that
include such moves from being reorganised before 14?

Good thing 'which' isn't in /bin I guess. (SCNR)

Disclaimer: I am as usual not arguing for switching into full speed
reverse mode. I would "just" prefer that we acknowledge the problems
exist and have to be dealt with. It's gonna be hard enough to actually
resolve them given all bridges have been burned down years ago by
pretending it's not a problem that dpkg has no idea what is done behind
its back to the files it's supposed to manage.

(The problem itself isn't unique⁴ to /usr-merge, so ideally it would be
 resolved regardless, but /usr-merge undoubtedly makes heavy use of it
 so in an ideal world those interested in it would not only acknowledge
 the problems but actually work together to resolve them.
 Sadly, that isn't the world we seem to be stuck in at all.)


Best regards

David Kalnischkies

¹ You could of course also move the other way around.
² You can achieve the same with other dependency types, it is just
  harder to trigger in simple testcases as apt & dpkg try to avoid
  the auto-deconfiguration of pkg-a if there is an easy way out.
³ For fun I wrote an apt testcase with coreutils splitting out ln⁴,
  that might be a bit unrealistic, but you get the idea (attached).
⁴ as, you know, /usr-merge being the last symlink we will ever need
#!/bin/sh
set -e

TESTDIR="$(readlink -f "$(dirname "$0")")"
. "$TESTDIR/framework"

setupenvironment
configarchitecture 'native'

#mkdir -p rootdir/bin
ln -s usr/bin rootdir/bin

touch ln

mkdir -p tree/bin
cp -a ln tree/bin
buildsimplenativepackage 'coreutils' 'native' '1' 'stable' '' '' '' '' 'tree/bin'
rm -rf tree

buildsimplenativepackage 'coreutils' 'native' '2' 'unstable' 'Pre-Depends: unneeded-ln'

mkdir -p tree/usr/bin
cp -a ln tree/usr/bin
buildsimplenativepackage 'unneeded-ln' 'native' '2' 'unstable' 'Breaks: coreutils (<< 2)
Replaces: coreutils (<< 2)' '' '' '' 'tree/usr'
rm -rf tree

setupaptarchive

testfailure test -e rootdir/bin/ln -o -e rootdir/usr/bin/ln
testsuccess apt install coreutils=1 -y
testsuccess test -e rootdir/bin/ln -o -e rootdir/usr/bin/ln

testsuccess apt full-upgrade -y
testsuccess test -e rootdir/bin/ln -o -e rootdir/usr/bin/ln


signature.asc
Description: PGP signature


Re: merged-/usr transition: debconf or not?

2021-11-11 Thread David Kalnischkies
On Wed, Nov 10, 2021 at 01:48:07AM +0200, Uoti Urpala wrote:
> David Kalnischkies wrote:
> > As the transition hasn't started everyone not already merged is currently
> > deferring it. That is true for those who upgrade daily as well as for
> > those people who seemingly only upgrade their sid systems once in a blue
> > moon. So, at which point have all those systems stopped deferring?
> 
> I think the logical answer is that you're "deferring" in this sense if
> you are using the suggested flag file or whatever other mechanism to
> prevent the merge. Until you do an upgrade which would perform the
> merge without use of such a mechanism, your system is just out of date,
> not deferring.

A distribution upgrade is not atomic. Between an unpack of package foo
and the configure of foo a million other packages can pass through
various stages. Ideally, that window will be pretty small for usrmerge
the package (or whatever the transition mechanism will be in the end),
but that depends on various factors and easily balloons out of hand.
In a previous thread I mentioned how not too long ago the entire KDE
desktop environment had to be at least unpacked before dpkg could be
upgraded due to one tiny Conflicts (which was correct). If you didn't
have KDE installed, dpkg was one of the first things upgraded even without
users going out of their way to explicitly request it (as it should
be, as it's essential and apt does special dances for those).

So the easiest way to check if an upgrade on a "quantum state merge"
system is going to work is to keep it at unmerged for the entire time
and manually trigger the merge at the end as that is what could
theoretically happen, but is likely not for most testers.
Whether it works when merged is already checked by the already-merged systems.


> So presumably it is valid for packages to gain dependencies which force
> merge or "deferring" state on installation.

Valid perhaps, but I would hope that it isn't lightheartedly plastered
all around just in case as the guarantees it provides for the package
depending on the transition mechanism package are slim (as in, the
system might or might not be merged, regardless of deferred or not¹,
while the depending package itself passes through various stages) to
non-existent² depending on the specific implementation of the transition
while it puts potentially enormous problems on the shoulders of dpkg and
apt to produce an acceptable ordering:

The package usrmerge is e.g. currently implemented in perl (the big one,
not -base) and so any other package implemented in perl is effectively
forbidden from forming dependencies on usrmerge as we otherwise run into
loops of the form app -> usrmerge -> perl -> app which might or might
not be breakable based on the dependency type (and version) of each ->.
Oh, and if you happen to have a dependency on something written in perl,
congrats, you are part of this elusive group as well as everything else
depending on you…

It will be hard enough to have one essential package trigger the
mechanism without running into issues; the last thing we need is a couple
of other packages inserting themselves needlessly into the loop just
because "it is valid".


Best regards

David Kalnischkies

¹ Spoiler alert: Even a Pre-Depends technically only makes guarantees
  for the preinst scripts, not for the unpack itself, but that is fine
  print usually only encountered in the deeper horrors of loops… you
  need explicit approval for Pre-Depends anyhow.

² Spoiler alert: You can e.g. Pre-Depend all you want on dpkg, but that
  doesn't mean that the version you are pre-depending on is actually
  used to work on your package instead of just lying around on disk.
  That is true for a few other packages, the most obvious perhaps apt
  and the kernel.


signature.asc
Description: PGP signature


Re: merged-/usr transition: debconf or not?

2021-11-09 Thread David Kalnischkies
On Tue, Nov 09, 2021 at 08:44:52PM +, Simon McVittie wrote:
> On Tue, 09 Nov 2021 at 19:01:18 +0100, David Kalnischkies wrote:
> > On Tue, Nov 09, 2021 at 03:21:25PM +, Simon McVittie wrote:
> > (Minus that for 12 it is technically still supported as long as it
> >  remains 12
> 
> No, it doesn't have to be supported, and the TC resolution explicitly
> said that it doesn't have to be supported.
> 
> What *does* need to be supported is the upgrade path from 11 to 12,
> or from current testing (11-and-a-bit) to 12, with any ordering of apt
> transactions that doesn't violate the packages' dependency conditions -
> and the TC's reasoning was that the simplest, most conservative, most
> robust way to make sure that continues to work was to mandate that all
> Debian 12 packages, individually, are installable onto unmerged-/usr
> Debian 11 (assuming that "installing a package" implies installing its
> dependencies, in any order that apt/dpkg consider to be valid and not
> breaking any dependency relationships).

Yes, any Debian 12.x package (even the very last security fix build for
it) needs to be installable on a Debian 11.y system as it could be part
of the upgrade from 11 to 12, and as it has no idea if it's the first or
last package in that upgrade (within reason) it has to work on 11 as
well as on very-very-close-to-12.

As such, if you promise 11.x to 12.y upgrades I would expect 12.x to
12.y to work just as well as 12.x is very-very-close-to-12(.y).

If you say 12.x to 12.y isn't supported on unmerged it means effectively
that all "cattle" have to be constantly recreated as you can't have
a single package be considered an 'upgrade', they all need to be 'new
install' while e.g. installing build dependencies (as ironically a fully
upgraded 12 system is indistinguishable from an upgrade-in-progress-from-
11 system which just happens to install a bunch of new packages in the
end).

It's also quite a disaster for all systems already technically bookworm
like testing and sid as any upgrade, including to the release 12.0, will
be unsupported in your logic. Unsupported machines (aka our buildds)
building supported packages seems sad and I thought we had talked about
this before.


> > (Perhaps it comes with the job as apt maintainer, but I don't "discard
> >  and redo" systems or even chroots… upgrades until hardware failure…
> >  my current build chroots are from 2013. So I can totally see me opt-out
> >  first and later in… although probably more with apt_preferences)
> 
> For full systems that are managed as full systems ("pets" in the cattle
> vs. pets terminology), sure, do that; the Debian installation I'm typing
> this into has been copied from several older machines. However, deferring
> or avoiding the merged-/usr transition on these systems is not intended
> to be something that is considered valid for bookworm.

As the transition hasn't started everyone not already merged is currently
deferring it. That is true for those who upgrade daily as well as for
those people who seemingly only upgrade their sid systems once in a blue
moon. So, at which point have all those systems stopped deferring?

I would say that the first time you can say with absolute certainty that
a given system is no longer deferring the transition is the moment an
unpack of a trixie pkg is attempted as skipping releases is not
supported. All unpacks before that could have happened on an unmerged
system as that system might very well be in the process of upgrading
from 11 to 12 at the moment.

(and btw, what I meant with me opting out for a while was delaying the
 upgrade of my sid "beasts" to a more exciting problem space than the
 first possible moment as that wouldn't be much of a test for apt, /usr
 merge and all those other packages installed around here. If I am
 asking for an upgrade path it's only fair to not take the easiest road
 of transitioning to merged before anything could implicitly require
 it and hence fail for less lucky people not equipped to deal with it.
 My "eldritch horrors" are fine and behave, thanks for asking. )


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: merged-/usr transition: debconf or not?

2021-11-09 Thread David Kalnischkies
On Tue, Nov 09, 2021 at 03:21:25PM +, Simon McVittie wrote:
> > As I see it the CTTE decision effectively allows the transition to be
> > deferred until the moment you want to upgrade to 13.
> 
> I think you mean: until the moment you want to upgrade to testing after
> Debian 12 release day. That's not Debian 13 *yet*, although you could

Yes, I meant that indeed… should have used codenames after all.


> > So, wouldn't it make sense to go with an (extreme) low priority debconf
> > question defaulting to 'yes, convert now' which [I think] non-experts
> > aren't bothered with and users/systems wanting to opt-out for the moment
> > (like buildds) have a standard way of preseeding rather than inventing a
> > homegrown flag-file and associated machinery?
> 
> Speaking only for myself and not for the TC, I think a debconf question
> would be OK as an implementation of this, but the debconf question should
> indicate that the result of opting out is an unsupported system.

Sure.

(Minus that for 12 it is technically still supported as long as it
 remains 12, but those who have to know will know that and everyone else
 is better off following the default anyhow)


> I had intended this to be for the class of systems that you would expect
> to discard and re-bootstrap rather than upgrading (chroots, lxc/Docker
> containers, virtual machines, etc. used for autopkgtest, piuparts,
> reproducible-builds, etc.), where a way to undo the opt-out isn't really
> necessary because the system is treated as disposable.

That is likely what happens to most of them, but as we support running
the merge somewhere between a few years ago and first unpack of a trixie
package anyhow I don't see the harm of having an official opt-out of the
opt-out as long as it happens in time.

(Perhaps it comes with the job as apt maintainer, but I don't "discard
 and redo" systems or even chroots… upgrades until hardware failure…
 my current build chroots are from 2013. So I can totally see me opt-out
 first and later in… although probably more with apt_preferences)


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: merged-/usr transition: debconf or not?

2021-11-09 Thread David Kalnischkies
On Mon, Nov 08, 2021 at 12:56:49PM +0100, Marco d'Itri wrote:
> On Nov 08, Simon Richter  wrote:
> > Right now, it is sufficient to preseed debconf to disallow the usrmerge
> > package messing with the filesystem tree outside dpkg. Managed installations
> > usually have a ready-made method to do so.
> This is not really relevant, since the conversion is mandatory.
> The CTTE stated that the only exception is "Testing and QA systems 
> should be able to avoid this transition, but if they do, they cannot be 
> upgraded beyond Debian 12", and my plan is to arrange for this with 
> a flag file.

As I see it the CTTE decision effectively allows the transition to be
deferred until the moment you want to upgrade to 13. Ideally the
transition is performed already in the 11→12 upgrade automatically for
you, but you could prevent that automatism and do it manually someday
while you have 12 installed (as no 12 package can depend on merged /usr
as it would not be installable on upgrade from 11 and/or executable on
buildds/testing/qa systems at the least).


So, wouldn't it make sense to go with an (extreme) low priority debconf
question defaulting to 'yes, convert now' which [I think] non-experts
aren't bothered with and users/systems wanting to opt-out for the moment
(like buildds) have a standard way of preseeding rather than inventing a
homegrown flag-file and associated machinery?
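
(Sketched with a made-up template name, the preseeding side of that would
 be the usual one-liner run before the upgrade:

   echo 'usrmerge usrmerge/autoconvert boolean false' | debconf-set-selections

 – whatever name and default the question actually ends up with.)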

As a bonus, if I had previously decided to forgo the automatic
transition for whatever reason (let's say to test-build packages on that
box) I also have a standard way of triggering the conversion by calling
dpkg-reconfigure on usrmerge at leisure before the 13 upgrade.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: deb822 sources by default for bookworm

2021-11-05 Thread David Kalnischkies
On Fri, Nov 05, 2021 at 06:16:10PM +0100, Julian Andres Klode wrote:
> On Thu, Nov 04, 2021 at 12:13:48AM +0800, Shengjing Zhu wrote:
> > On Wed, Nov 3, 2021 at 11:45 PM Julian Andres Klode  wrote:
> > >
> > > Hi all,
> > >
> > > I'd like us to move from
> > >
> > > /etc/apt/sources.list
> > >
> > > to
> > > /etc/apt/sources.list.d/debian.sources
> > >
> > 
> > While it's really a nice feature for the third-party repository, I
> > don't see the benefits to change the default one, especially the path.
> > I had to admit that I have countless scripts which run `sed
> > /etc/apt/souces.list`, to change the default mirror, as well as in the
> > Dockerfile.

(fwiw you could use a 'mirror+file' entry (see man apt-transport-mirror)
 specifying your preferred mirror of the day. As a bonus that will reuse
 the files of the old mirror instead of discarding them blindly)
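
(In the proposed debian.sources that could roughly look like – file name
 invented, pick whatever you like:

   Types: deb
   URIs: mirror+file:/etc/apt/mirror.lst
   Suites: bookworm
   Components: main

 with /etc/apt/mirror.lst containing just the mirror URI of the day, so
 your scripts sed a tiny single-purpose file rather than the sources.)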


> There's a technical limitation in that we get the format from the file
> extension. It's a bit annoying.

For a short while .list files could contain both formats, but
I opted to change that as of course countless things poking into those
files broke left and right.

cups used to detect if it ran on a Debian-like system by checking the
sources.list file for deb entries… I doubt it does nowadays as there are
countless better options now, I just mention it as a trivial example of
the type of unexpected breakage which is to be expected…


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: deb822 sources by default for bookworm

2021-11-05 Thread David Kalnischkies
On Wed, Nov 03, 2021 at 08:53:15PM +0100, Paul Gevers wrote:
> On 03-11-2021 16:45, Julian Andres Klode wrote:
> > There is some software "parsing" sources.list on its own, most of that
> > is better served by `apt-get indextargets` (and for downloading stuff
> > based on the urls, `apt-helper download-file`, such that it respects
> > proxies and supports all transports users may use in sources.list)
> 
> Like autopkgtest. When I was working on it to support Debian's migration
> testing, I looked at python-apt and because that didn't support it,
> stopped thinking. With indextargets and download-file I guess we could
> work on it again. When were those introduced? Ubuntu needs it on old
> releases so before autopkgtest can change it, we'd need support for a while.

`apt-get indextargets` is from 2015 and a part of the acquire-additional-
files feature used mainly by apt-file and appstream to have apt download
files it isn't using itself, so those tools don't have to implement it.

The job of indextargets is mostly to give access to metadata (and
crucially filenames) for those previously configured and hopefully now
downloaded files. apt-file e.g. asks for the Contents files in this way
to avoid exposing file naming logic and location to other tools.

So, for the filenames of all (downloaded) Packages files:
apt-get indextargets --format '$(FILENAME)' 'Identifier: Packages'
(the default output is deb822 stanzas you could grep with more powerful
 tools than the simple inbuilt line-based filter)

Note that you either have to implement opening compressed files yourself
or use `/usr/lib/apt/apt-helper cat-file`.
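
Roughly, to dump the first of those Packages files however it happens
to be compressed on disk:

  /usr/lib/apt/apt-helper cat-file \
    "$(apt-get indextargets --format '$(FILENAME)' \
       'Identifier: Packages' | head -n1)"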


That was historically the most common reason to fiddle with sources.list
parsing, hence Julian referring to it, but this seems not what autpkgtest
is aiming for. On a casual look (well, grep) I see only:

* lib/adt_testbed.py apt-pocket codepath seems to want to construct new
  sources.list entries based on existing ones. That should be possible
  with some indextargets busywork in general, but I am not completely
  sure what is going on here and the gymnastics should be similar to…

* setup-commands/setup-testbed tries to find the mirror (and release)
  used for target distribution based on your current system. I am
  a bit surprised that works actually…

Anyway, the latter could perhaps be implemented with:
apt-get indextargets --format '$(ORIGIN)|$(REPO_URI)' | sort -u | \
   grep -i -e "^$(. /etc/os-release; echo "$ID")" | cut -d'|' -f 2

(That is one line only for posterity – as you see, I am trying to fix
 the too general search by checking against Origin as defined by the
 Release file of a repository, but that would still need work to
 eliminate same-origin-but-different-repo cases)


Parsing of the sources files is not really indextargets' job though, so
it might not always work for that task: It e.g. doesn't work if the data
files are not on disk which might or might not be okay for you (there is
'guess' mode, but that of course has no metadata extracted from the
Release files – the Origin I was using above).

The apt family doesn't really have a publicly exposed way of reasoning
about sources.list (or .sources) files and I am not quite sure it really
can as subtle differences between repositories make it hard to give them
all a common interface which makes sense. (I will probably be proven
wrong by Julian though.)


Like, for example, if stable is in the sources, make sure there
is also updates and security there and/or add them. What for Debian are
three distinct repositories might for others very well be components.

Assuming you even know which line refers to Debian: I was using Origin
above for this task as we can't really guess based on the URI. And even
then… that logic above finds the tor+mirror+file source I am using,
that won't work for autopkgtest, but I am special and this is just
a default fallback, so I might be thinking way too much about it…


Anyway, if you have specific needs/questions feel free to ask on deity@
or #debian-apt. I am sure we will work something out even if in this
case it might very well be new code nobody really uses for years (as is
common in apt land – backward compat be damned ).

Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: users not reading manpages annoyance [was: apt annoyance]

2021-10-30 Thread David Kalnischkies
On Sat, Oct 30, 2021 at 10:14:15AM +0100, Tim Woodall wrote:
> When doing apt-get download -o RootDir=. apt
> once it's downloaded the package it effectively tries to move it to
> ./$( pwd )/
> 
> (the prefix is whatever RootDir points to) instead of moving to
>  $( pwd )/
> 
> This causes it to fail unless you do a
> mkdir -p ./$( readlink -f $( pwd ) )
> 
> Is this a bug or a feature?

Working as intended. 'download' wants to store the package in the
current directory, so it gets the absolute path name to that as "current
directory" isn't a very stable property.

With RootDir (as the manpage explains) you say: Whatever the path, stick
this in front of it – so you get what you asked for…
RootDir has some uses if you deal with chroots from the outside, but
fiddling even with absolute paths is usually not what you want – the
manpage mentions Dir which affects only "relative" paths (most paths in
apt like where it finds its config files are relative to Dir – which by
default is '/' making it an absolute path in the process).

As you fiddle with directories you are likely to need APT_CONFIG as that
is parsed before the configuration files (and so can affect where those
are) and long before the command line is looked at (all at length
explained in the apt.conf manpage).
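
Rough sketch of the difference for the chroot-from-outside case, with
/srv/mychroot standing in for whatever you actually operate on:

  apt-get -o RootDir=/srv/mychroot update
    # every path, absolute ones included, gets /srv/mychroot prefixed
  apt-get -o Dir=/srv/mychroot/ update
    # only the relative default paths move; absolute ones stay put
    # (and the config files were already looked up before -o applies,
    #  hence the APT_CONFIG remark above)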


There is no option to set 'download's target directory from current
directory to another place at the moment. Shouldn't be incredibly hard
to implement if someone wanted to try (apt-private/private-download.cc
→ DoDownload() → implement an option who sets Dir::Cache::Archives to
something else than the absolute CWD – absolute? I already mentioned
what would happen otherwise, so connecting the dots is left as an
exercise for the reader).


So, what are you actually trying to do?


And next time, try to pick a slightly more sensible title and perhaps
even a more fitting place to ask first… I am ever so slightly annoyed
if I am kicked into high gear expecting a world ending disaster as it
escalated to an apt-thread on debian-devel…
(so now, where were I before this 'emergency call' came in… mhhh)


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Bug#992692: general: Use https for {deb,security}.debian.org by default

2021-09-13 Thread David Kalnischkies
On Sun, Sep 12, 2021 at 03:10:27AM +, Paul Wise wrote:
> On Fri, Sep 10, 2021 at 6:03 PM David Kalnischkies wrote:
> > Because this thread started with the idea to switch the default of d-i
> > and co to another URI. If you target only apt then you still need
> > a solution for d-i and a way to convert whatever d-i had into what apt
> > gets in the end (of the installation).
> 
> ISTR the future of creating new Debian installations is to move from
> debootstrap to dpkg/apt. As an interim step, debootstrap could move
> from doing its own downloads to passing the appropriate
> APT_CONFIG/DPKG_ROOT/etc to `apt download`.
> 
> https://wiki.debian.org/Teams/Dpkg/Spec/InstallBootstrap

The spec deals with the installation of the essential set.
APT isn't essential – it is 'only' one of the first packages installed
after the bootstrap is done, usually at least.

Moving {,c}debootstrap to use apt means you increase the system
requirements from "can execute debootstrap" all the way up to "is
a fully bootstrapped Debian-based system". At which point you could
just use mmdebstrap instead of debootstrap and be done.

I am not involved enough with d-i to know if they would plan such a move,
but I have at least never heard of it and it seems outside the linked spec.
You might have confused this with the pipe-dream of obsoleting
mmdebstrap at some far-away point in the future by folding it into apt
directly. The spec is one (of the many) pre-requirements for that.


Even if we do, that would move the goal post only slightly as you still
have the problem that the conf used to create the system might very well
not be the conf that can be used by the created system (as a trivial
example some old apt versions do not support https). That doesn't really
change regardless of using anna, debootstrap, apt or whatever else.


Best regards

David Kalnischkies

P.S.: Having apt be involved in its own bootstrap reminds me of that
time when I saved myself from drowning in a swamp by pulling on my hair…
https://en.wikipedia.org/wiki/Baron_Munchausen#Fictional_character


signature.asc
Description: PGP signature


Re: Bug#992692: general: Use https for {deb,security}.debian.org by default

2021-09-10 Thread David Kalnischkies
On Fri, Sep 10, 2021 at 11:08:38AM -0400, Michael Stone wrote:
> On Fri, Sep 10, 2021 at 04:33:42PM +0200, David Kalnischkies wrote:
> > On Thu, Sep 09, 2021 at 08:53:21AM -0400, Michael Stone wrote:
> > > The only thing I could see that would be a net gain would be to 
> > > generalizes
> > > sources.list more. Instead of having a user select a specific protocol and
> > > path, allow the user to just select high-level objects. Make this a new
> > > pseudo-protocol for backward compatibility, and introduce something like
> > >   deb apt:// suite component[s]
> > > so the default could be something like
> > >   deb apt:// bullseye main
> > >   deb apt:// bullseye/updates main
> > > then the actual protocols, servers, and paths could be managed by various
> > > plugins and overridden by configuration directives in apt.conf.d. For
> > 
> > In this scheme the Debian bullseye main repo has the same 'URI' as the
> > Darts bullseye main repo. So, you would need to at least include an
> > additional unique identifier of the likes of Debian and Darts, but
> > who is to assign those URNs?
> > (Currently we are piggybacking on the domain name system for this)
> 
> I have no idea what darts is, so I don't have an answer. :)

"Darts" was just a play on "bullseye". It is not hard to imagine
a repository which has the same suite and component(s) but is not
Debian itself. As a pseudo-random [= it's in another topic here] real
example you can take Wine (https://dl.winehq.org/wine-builds/debian/).
So to what is "deb apt:// bullseye main" referring? Debian or Wine?

And to pre-empt the most common response: As an apt dev I can assure you
that we won't accept a solution involving "I am on Debian, so it means
Debian" as that is impossible to correctly guess programmatically (for
example on derivatives using a small overlay repo).


> > Also, but just as an aside, whatever clever system you think of apt
> > could be using, you still need a rather simple system for the likes of
> > tools which come before apt like the installers/bootstrappers as they
> > are not (all) using apt, especially not in the very early stages, and
> > a mapping between them.
> 
> I'm not sure why you think I need that? The goal of my musings is to

Because this thread started with the idea to switch the default of d-i
and co to another URI. If you target only apt then you still need
a solution for d-i and a way to convert whatever d-i had into what apt
gets in the end (of the installation).

The configuration option which only works with apt tools already
exists in the form of the mirror method…


> > > their thing, and a plugin like auto-apt-proxy can override defaults to do
> > > something more complicated, using more policy-friendly .d configurations
> > > rather than figuring out a way to rewrite some other package's 
> > > configuration
> > > file.
> > 
> > JFTR: auto-apt-proxy has nothing to do with sources. It is true that
> > apt-cacher-ng (and perhaps others) have a mode of operation in which you
> > prefix the URI of your (in that case caching) proxy to the URI of the
> > actual repo, but that isn't how a proxy usually operates and/or is
> > configured.
> 
> I have no idea what you're saying here.

And I have no idea if you know what you are talking about.

auto-apt-proxy already uses an interface apt provides to configure the
proxy at runtime. It isn't in the business of modifying sources.list nor
has it any interest in that. So you using it as an example for a plugin
that could use your proposed scheme to modify the sources at runtime
makes no sense.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Bug#992692: general: Use https for {deb,security}.debian.org by default

2021-09-10 Thread David Kalnischkies
On Thu, Sep 09, 2021 at 08:53:21AM -0400, Michael Stone wrote:
> The only thing I could see that would be a net gain would be to generalizes
> sources.list more. Instead of having a user select a specific protocol and
> path, allow the user to just select high-level objects. Make this a new
> pseudo-protocol for backward compatibility, and introduce something like
>   deb apt:// suite component[s]
> so the default could be something like
>   deb apt:// bullseye main
>   deb apt:// bullseye/updates main
> then the actual protocols, servers, and paths could be managed by various
> plugins and overridden by configuration directives in apt.conf.d. For

In this scheme the Debian bullseye main repo has the same 'URI' as the
Darts bullseye main repo. So, you would need to at least include an
additional unique identifier of the likes of Debian and Darts, but
who is to assign those URNs?
(Currently we are piggybacking on the domain name system for this)

Also, but just as an aside, whatever clever system you think of apt
could be using, you still need a rather simple system for the likes of
tools which come before apt like the installers/bootstrappers as they
are not (all) using apt, especially not in the very early stages, and
a mapping between them.


> If someone wants to use tor by default but fall back to https if it's
> unreachable, they can do that. If someone wants to use a local proxy via
> http but https if they're not on their local network, they can do that. New
> installations could default to https, existing installations can keep doing

You can do most of the fallback part with the mirror method backed by
a local file. It is of no concern to apt how that file comes to be, so
you could create it out of a massive amount of options or simply by
hand. I do think if the creation is tool-based it shouldn't be apt
as I envision far too many unique snowflakes for a one-size-fits-all
approach.

(The Tor to https fallback can be done already if we talk onion services
 to others. You can't fall out of Tor – or redirect into it – though, as
 that would allow bad actors to discover who you are/that you have an
 operational tor client installed. Proxy configuration you can already
 change programmatically on the fly – a job auto-apt-proxy implements –,
 changing the mirror file with a hook from your network manager would
 be equally easy.)


> their thing, and a plugin like auto-apt-proxy can override defaults to do
> something more complicated, using more policy-friendly .d configurations
> rather than figuring out a way to rewrite some other package's configuration
> file.

JFTR: auto-apt-proxy has nothing to do with sources. It is true that
apt-cacher-ng (and perhaps others) have a mode of operation in which you
prefix the URI of your (in that case caching) proxy to the URI of the
actual repo, but that isn't how a proxy usually operates and/or is
configured.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Bug#992692: general: Use https for {deb,security}.debian.org by default

2021-09-05 Thread David Kalnischkies
On Fri, Sep 03, 2021 at 02:42:29AM +, Paul Wise wrote:
> httpredir.d.o was an alternative CDN-like thing that was based on HTTP
> redirects to the mirror network. It had lots of problems, but now that
> we have a mirror checker and zzz-dists, perhaps it could work better.
> The mirror:// method in apt has probably improved since then too.

If you wanted to bring back a httpredir-like¹ service you would be better
off combining both approaches, as in: have apt request a list of mirrors
to use via mirror(+https) and have the server generate that list based on
the requester. That gives you the "regional" mirrors as httpredir did
while solving the major gripe it had, by having a list of mirrors to use
rather than one potentially non-working, slightly outdated partial mirror
(and the httpredir service is contacted by each client once rather than
for each individual file to then be redirected elsewhere).

Obviously, that approach is only workable if you are actually using
libapt tools. Most debootstrap implementations couldn't really use that
which might or might not be a problem for a given use case. Such
a service would also have a hard time 'redirecting' you to a local
mirror in your network (compared to an 'official' regional one).


So that isn't really what seems to be the main worry here:
https prevents MitM attacks including the friendly MitM ones like the
local network at home/at DebConf telling my laptop that there is an
on-site mirror, or not telling at all and just transparently proxying the
entire network.

The latter seems done for in an https world, but the former might be
somewhat salvageable: We will have to get the Release² file(s) from the
repo defined in the sources, but the index files and debs after that are
fairer game to get from elsewhere as they are either identical to
what the defined source would have provided or a hard error.
That still violates the privacy guarantees https has (assuming it does),
so that would still need to be opt-in/out, but that is a one time choice
per machine and could be similar in style to auto-apt-proxy.

Anyway, implementation wise apt could tell $MAGIC which repo it is
interested in (by Origin & Label) and would in return get a list of
mirrors as apt-transport-mirror would. apt would then add the original
source as least priority fallback and proceed with that list for this
source.
I say $MAGIC as I don't want apt to hard code the specifics of how to
get the list, similar to how it is agnostic to how a proxy is currently
picked up, as I could envision different implementations depending on
use cases.

That is different from just using apt-transport-mirror directly in the
sources insofar as the provider of the list remains untrusted (besides
the fact that nobody is constantly editing their sources to adapt to the
local network the machine currently resides in).


Relatively quickly thought up, probably full of holes and not implemented
at all in apt so far, but if someone thinks that might work feel free to
report as a feature request against apt and I will see what I can do
from the apt side. It at least seems slightly more workable than hoping
to prevent https – which might have just as dubious a chance to succeed
as https has to factually improve security in terms of apt. 


> Maybe http redirects to local mirrors could be feasible again, but it
> would take a lot of work.

fwiw: apt does not allow https to http redirects (some https repos
ran into this in the past like those hosted on sourceforge until they
fixed their https 'everywhere' configuration). In this regard apt is
stricter than a normal webbrowser (a mirror list acquired via https can
redirect to http mirrors though, but see the man pages for details).


Best regards

David Kalnischkies


¹ which deb.d.o sort of is, just that it is nowadays done via SRV instead
  of an explicit HTTP redirect – and that only one mirror is in the list
  rather than the multiple ones httpredir picked from to redirect to.

² The main security benefit of https for apt is that you can't fiddle
  with the Release file, mostly in terms of sending an older one (in
  the limits of Valid-Until if used). It is also minor in size compared
  to the indexes and especially the debs, so caching them is not much of
  a concern (if a cacher was even doing it, it probably shouldn't).


signature.asc
Description: PGP signature


Re: Bug#969631: can base-passwd provide the user _apt?

2021-08-30 Thread David Kalnischkies
On Mon, Aug 30, 2021 at 11:53:59AM +0100, Colin Watson wrote:
> On Mon, Aug 30, 2021 at 12:30:49PM +0200, David Kalnischkies wrote:
> > So, while for some/most usecases something akin to DynamicUser would be
> > enough, for others a more stable user would be preferred and then there
> > are also cases were it would be beneficial if the user had the same
> > UID across all systems.
> 
> And that's exactly the bit that seems tricky to achieve here.  If we
> only had deal with the bits that are internal to apt (as opposed to set
> up manually by sysadmins) then it wouldn't be so bad.

Personally, I don't think it is too bad as there shouldn't be too many
actually affected and those who are we could try to catch. We could e.g.
go static for new installs in bookworm and recommend the transition in
NEWS (and co), have apt warn if it deals with files owned by _apt while
not being UID 42, and have trixie actually perform the transition on
upgrades, so that new installs and upgrades end up the same.


For copy:/ and file:/ apt already checks if _apt can access them and if
not falls back to not using it (with a warning). We don't warn on
unreadable https certificates explicitly currently, but it wouldn't be a
bad idea to be a bit more friendly anyhow (well, ideally we wouldn't
need to, like we managed for auth.conf, but I am not sure we can massage
gnutls enough for that).


> > > But I guess there's no way to do something like that
> > > outside of systemd, much less on systems that don't run systemd at all.
> > 
> > The problem with systemd in this context is that apt kinda needs to be
> > its own systemd --user instance as apt is not a system service, but
> > a service manager of its own. I doubt the systemd ecosystem offers that
> > functionality, especially considering that these parts would need to be
> > platform agnostic and reasonably light given they would be involved in
> > (cross)bootstrap and all the other situations apt operates in.
> 
> To be clear, I wasn't literally proposing that apt should use systemd; I
> don't think that would make sense.  It was just an analogy.

To be clear, I said that only to preempt the peanut gallery. ☺


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Bug#969631: can base-passwd provide the user _apt?

2021-08-30 Thread David Kalnischkies
On Sun, Aug 29, 2021 at 11:30:41PM +0100, Colin Watson wrote:
> case) it seems mostly like the sort of user that could be anonymous
> outside of the lifetime of an apt process, analogous to systemd's
> DynamicUser.

The _apt user started as 'nobody', but quickly people complained that
they didn't want to punch holes in their firewalls for nobody.

As Julian notes, most cases in which _apt creates/owns files are things
to fix eventually, some of which indeed have been already, but that is
gonna be hard work and probably not achievable in the short term,
especially while other lower-hanging fruit is still in reach. We have
been labouring on _apt for more than seven years now after all.

So, while for some/most usecases something akin to DynamicUser would be
enough, for others a more stable user would be preferred and then there
are also cases were it would be beneficial if the user had the same
UID across all systems.


> But I guess there's no way to do something like that
> outside of systemd, much less on systems that don't run systemd at all.

The problem with systemd in this context is that apt kinda needs to be
its own systemd --user instance as apt is not a system service, but
a service manager of its own. I doubt the systemd ecosystem offers that
functionality, especially considering that these parts would need to be
platform agnostic and reasonably light given they would be involved in
(cross)bootstrap and all the other situations apt operates in.

I would be happy to be wrong though as it isn't exactly my dream to
make apt a decent service manager, even though apt starts a lot
of processes, so a lot of management could and should be done here…


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Q: Use https for {deb,security}.debian.org by default

2021-08-22 Thread David Kalnischkies
On Sat, Aug 21, 2021 at 11:05:23PM +, Stephan Verbücheln wrote:
> What about HTTP 304 Not Modified?

What about them? Care to give details?


Note that APT nowadays hardly makes requests which can legally be
replied to with 304 as it knows which index files changed (or not)
based on comparing the old and new Release files.

That leaves the Release file itself, which, even if the server replied
304, undergoes the signature and other consistency checks again
– including Valid-Until. Not only to detect serious attacks, but also to
detect if a mirror is no longer synced as the most common form of 'man
in the middle' "attack" https has no chance of preventing or detecting.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: merged /usr vs. symlink farms

2021-08-22 Thread David Kalnischkies
On Sat, Aug 21, 2021 at 12:47:51PM -0400, Theodore Ts'o wrote:
> Personally, I *don't* have a problem about telling people to manually
> update dpkg, apt, and/or apt-get before they do the next major stable
> release (maybe it's because this is something I do as a matter of
> course; it's not that much extra effort, and I'm a paranoid s.o.b.,

So, when did you last log into your build chroot to upgrade dpkg and
apt first? And while at that, did you follow the release notes – from
the future, as they have yet to be written for the release you are
arguably upgrading to already?

But okay, let's assume you actually do: apt and dpkg tend not to be
statically linked, so they end up having dependencies. Some of them even
surprising. In bullseye you e.g. have to upgrade cryptsetup first before
apt can be upgraded (apt → libgcc-s1 → cryptsetup-initramfs → …).
And that is just the first and most obvious chain.
But it would never happen that you would e.g. need to upgrade all of the
KDE desktop environment before upgrading dpkg, right? Well… in Debian
squeeze that nearly happened, but we had to break that chain because it
was actually forming a 500+ loop apt/lenny had trouble dealing with.
Those chains are only investigated if they lead to major problems:
I e.g. never looked at the C++ v5 (copy on write) transition which
likely entangled apt with half of the universe in the general case.

But okay, let's assume we form a team that actually looks into all these.
It is easy, right? No. Dependencies and their version constraints (can)
differ by architecture and mostly propagate by negative dependencies,
meaning the individual system state is hugely important. The result is
that hardly any upgrade is the same even if we subsume it all under
"upgrade to bullseye". And regardless of how hard we try, there will
always be other packages which have to go before it is dpkg's turn, so
at least all these have to make do with what was released previously
anyhow.


> and I know that's the most tested path given how Debian testing
> works).

Not really, as you can see e.g. with libgcc-s1: most upgrade problems
aren't due to dpkg or apt in practice, but arise because the intermediate
steps of a package that make testing upgrades smooth sailing aren't
visible to the stable upgrade. If you want a better tested path, I suggest
stable updates by jumping from week to week via snapshot.d.o. You may
want to skip weeks with actual bugs though.



In the end, it is just simpler to assume that every release+1 package is
installed by dpkg/release (and of course apt/release) than trying to
reason if and how we can make it happen that dpkg is handled before some
other package (without forming loops) or even can be sanely upgraded
ahead of the upgrade entirely. Or are you perhaps volunteering?


Don't get me wrong, as an apt dev I would love it if we could do that. It
is kinda annoying to work around issues you fixed years ago but whose
fixes aren't available in (soon) oldstable. We would need a more aggressive
stable-updates strategy but in reality we tend to be held to even higher
standards than all other packages because we are native key packages…
(just look when we froze dpkg… apt at least has a tiny loophole for now
 by not being build-essential in a strict sense)

Not that it really matters. It would just add more moving parts to the
upgrade process – a process, if this entire thread is any indication,
which is hardly understood by anyone in Debian and considered entirely
optional by many. That alone makes me very sad on many levels.


(I wrote this before reading Guillem's replies which end on a similar
 note even though he comes from the opposite end – dpkg worried about
 the finer file-level details and apt about the general package-level
 picture meeting halfway as usual… kinda funny)


Best regards

David Kalnischkies

P.S.: As someone will ask: Ubuntu splits the user base in two: those who
run their release upgrader, which runs outside of the packaging system and
can largely do whatever (including bringing in a standalone apt/dpkg just
dealing with the upgrade – they usually resort to much simpler things
though), and those who don't, like for example chroots and containers,
which effectively use whatever upgrade path 'apt dist-upgrade' gives you.
Which also explains why Ubuntu hasn't fully /usr-merged yet, more or less
waiting for Debian to figure that one out. Or, well, they spearhead even
here now as it is apparently too much to ask for an upgrade path in
Debian nowadays.


signature.asc
Description: PGP signature


Re: Q: Use https for {deb,security}.debian.org by default

2021-08-21 Thread David Kalnischkies
On Sat, Aug 21, 2021 at 09:45:54AM +0200, Tomas Pospisek wrote:
> On 21.08.21 09:14, Philipp Kern wrote:
> > defense in depth if we wanted to, but maybe the world just agreed that
> > you need to get your clock roughly correct. ;-)
> 
> I remember seeing apt-get refusing to update packages or the index because
> of them "having timestamps in the future" or in other words system time
> being out of sync in direction of the past.

APT has required the time to be more or less correct since ever¹ by virtue
of e.g. gpg keys (or signatures) expiring – and expired keys are bad.

In recent years we became more reliant on the time to ensure
repositories are somewhat current, refusing repos from too far in the
past as well as from the future. At least these checks can be worked
around with -o Acquire::Check-Date=false.
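
(So the full incantation would be something along the lines of

   apt-get -o Acquire::Check-Date=false update

 if you really can't fix the clock instead.)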

For gpg you will need another workaround I can't remember off the top of
my hat. There are likely more problems as it is easier to just set the
clock approximately correct than to remember all the workarounds in
"the time of need"…


Best regards

David Kalnischkies

¹ okay, ~15 years of apt-secure are not exactly ever, but close enough.


signature.asc
Description: PGP signature


Re: Q: Use https for {deb,security}.debian.org by default

2021-08-21 Thread David Kalnischkies
On Sat, Aug 21, 2021 at 12:04:32PM +0100, Phil Morrell wrote:
> On Sat, Aug 21, 2021 at 10:40:32AM +0200, Wouter Verhelst wrote:
> > On Fri, Aug 20, 2021 at 07:20:22PM +, Jeremy Stanley wrote:
> > > Yes transparent proxies or overridden DNS lookups could be used to
> > > direct deb.debian.org and security.debian.org to your alternative
> > > location,
> > 
> > I've been thinking for a while that we should bake a feature in apt
> > whereby a network administrator can indicate somehow that there is a
> > local apt mirror and that apt should use that one in preference to
> > deb.debian.org.
> 
> This already exists in the form of an avahi service announcement for
> _apt_proxy._tcp, issued by both squid-deb-proxy and apt-cacher-ng.
> Literally the only thing needed client-side is installation of
> squid-deb-proxy-client […]

That will instruct apt to use the proxy to connect to the internet, but
this is quite literal in meaning: apt will perform a CONNECT request,
establishing a tunnel between itself and the remote server via the
proxy, effectively by-passing any functionality the proxy could provide
if we weren't connecting to the remote with https (with http apt would
just issue GET requests to the proxy, which it could then interact with).
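
(In config terms – proxy host made up – both cases end up as the same
 plain setting,

   Acquire::http::Proxy "http://proxy.example:8000";
   Acquire::https::Proxy "http://proxy.example:8000";

 it is the scheme of the source that decides whether the proxy sees
 cacheable GET requests or only an opaque CONNECT tunnel.)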


apt can't just downgrade https to http if it knows about a proxy,
especially if that knowledge is provided by external potentially
untrusted sources. To do that we would need to at least ask the user
interactively if it's okay to send the requests unencrypted to the proxy.

There is precedent with cdrom asking the user interactively to change
CDs if needed, so it isn't an entirely new concept, but libapt has no
generic question-asking code and cdrom is a cakewalk compared to the
monster that is our http(s) implementation, so that is still a non-
trivial amount of code someone would need to write – also in the libapt
front ends, as you still need at least a bit of UI to actually expose the
question to the user.


Depending on how much control you have over the clients it might be
a lot easier to work with the mirror method. It can be (ab)used for
a lot more than most people give it credit for (Disclaimer: As I wrote
the current incarnation, I might be a *tiny bit* biased). That isn't
helping of course if you have no control at all over the clients as you
need some form of opt in at least. So far, that opt in was using http.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Changing how you do things: Was Re: merged /usr

2021-08-18 Thread David Kalnischkies
On Wed, Aug 18, 2021 at 08:33:01AM +0100, Tim Woodall wrote:
> […] and time taken updating this
> script to apt "because the documentation says I should" is time I cannot
> spend on more interesting stuff (from my PoV)

For the record: The apt documentation says the opposite. /usr/bin/apt
is even annoyingly insistent on not being run inside a script or even
having its output redirected to a file.


I was talking (in half-jest) about interactive usage – aka your fingers
typing all commands manually into a terminal one by one. That is also
what the release notes are primarily concerned with.


We know perfectly well that apt-get is used all over the place in the
strangest of usecases by scripts potentially nobody knows how to – or has
the time to – maintain. As a result apt-get (and the rest of the apt-*
family) aren't changing their behaviour much if at all from release to
release, which results in some behaviour being suboptimal if not
downright bad, but remaining the default nonetheless as it would be too
costly to change (a fun example of a minor change having unexpected
consequences is breaking Debian CD building with apt-cache show #712435).

apt on the other hand can be changed "at will" as we expect a being on
the other side of the terminal who is able to react to changes like
a new question asking if this potentially security-relevant change is
okay & expected. Scripts can't usually react; with them we can only
communicate by failing the execution if we deem the risk high enough,
to hopefully summon a being capable of reasoning to look into the
failure of the script.


btw: `apt-config dump Binary::apt` will tell you (most) of the config
options apt changes compared to the 'old' defaults in apt-get and co.
There aren't that many (so far).


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: merged /usr

2021-08-17 Thread David Kalnischkies
On Mon, Aug 16, 2021 at 03:13:48PM +0200, Marco d'Itri wrote:
> On Aug 16, David Kalnischkies  wrote:
> > Is perhaps pure existence not enough, do I need to provide an upgrade
> > path as simple as possible as well?
> If you have specific ideas about how the upgrade path could be improved 
> then I am interested in hearing them.
> I think that it is hard to beat "apt install usrmerge", but it could 

I see… we have a drastically different opinion on what a simple upgrade
path is then; but never mind me labeling it "couldn't be much worse" as
long as we agree it could …

> still be improved by having some essential package depend on
> "usrmerged | usrmerge" (with usrmerged being an empty transitional 
> package which ensures that the system has a merged-/usr).

I was discussing this here with Simon already as this needs either:
a) a guarantee that packages built on merged systems work on unmerged OR
b) supporting unmerged in bookworm so buildds and co can be run unmerged

Besides, both need the promise that all packages in bookworm support
running on merged and unmerged systems, as you can't really guarantee at
which point the conversion happens – but that at least is easy, as it
should be the status quo (I know there are people who disagree on that
already in other branches of the thread, but I am not here to shave that yak).


a) couldn't be promised so far leading to chroots being unmerged and
b) is at odds with the CTTE decision and a bit awkward as it requires
manual intervention to keep build machines and co unmerged, but that is
at least a much smaller "manual intervention required" set than doing
nothing at all by default.


[Of course, the or-group itself would need to be reversed, but I guess
 that was a typo; and ideally usrmerge would be lighter – but that is
 already discussed in a bug – as it is pseudo-essential and installed
 for everyone]


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: merged /usr

2021-08-16 Thread David Kalnischkies
On Mon, Aug 16, 2021 at 12:59:31AM +0200, Marco d'Itri wrote:
> BTW: the usrmerge package has been in the archive for 6 years now.

/usr/bin/apt has existed for 8 years now and the release notes advise
using it in every section. So, how come people are still typing apt-get
interactively to upgrade?

Is perhaps pure existence not enough, do I need to provide an upgrade
path as simple as possible as well?

At least apt is installed on every system in existence automatically,
you don't have to go out of your way to install it manually, so that
transition seems painless and even removes 4 keystrokes in comparison!


What is your transition plan from unmerged to merged?


That is the simple question of this sub-thread and so far Simon told me
how he plans it. As you have worked on yours for years now I would be
happy if you could point to/tell me yours as I could only find "you have
to do it manually" so far. Surely you came up with something a lot
better after all those years.


Best regards

David Kalnischkies

P.S.: For the avoidance of doubt: apt-get is of course going nowhere,
but that cuts both ways: It isn't changing as your fingers hate change –
so e.g. no new interactive questions fingers aren't trained to answer…


signature.asc
Description: PGP signature


Re: merged /usr

2021-08-16 Thread David Kalnischkies
On Sun, Aug 15, 2021 at 05:52:06PM +0100, Simon McVittie wrote:
> On Sun, 15 Aug 2021 at 11:52:21 +0200, David Kalnischkies wrote:
> One way out of this would be to say that it is a RC bug for packages
> in bookworm to have different contents when built in equivalent
> merged-/usr and unmerged-/usr chroots/containers (a higher severity
> than is currently applied, which I think would be a "normal" or "minor"
> bug for violating the Policy "should" rule that packages should be
> reproducible).

> > So, your reasoning is that tooling will help us ensure that packages
> > built on merged systems work on non-merged systems? Good!
> 
> This is basically another phrasing of the first option I described above,
> I think.

Yes.
And for the avoidance of doubt: If that is part of the plan I am
happy as it is one step closer to upgrade sanity.


> > No flag day
> > required then, we can just naturally upgrade all systems as they
> > encounter the $magic and have new buildd chroots bootstrapped now
> > merged instead of enforcing them being unmerged still
> > (modulo whatever the implementation itself might be of course).
> 
> If we are going to reach a state where package maintainers can
> assume/require merged-/usr (for example being able to drop code paths that
> only needed to exist because unmerged-/usr is supported), then we need
> some point in the release/upgrade process where that requirement becomes
> official - and IMO that point in time might as well be a particular Debian
> release, because that would be consistent with the rules we normally
> use to drop other code that was historically required but is no longer
> relevant, like Breaks/Replaces or workarounds in maintainer scripts.

Sure, such a flag day for stuff affecting bookworm+1 is fine; my concern
was with the effect the flag day has on bookworm itself, by saying
unmerged is not supported in bookworm while potentially still requiring
such systems to continue to exist [= not option 1] and/or having no
facility to upgrade them automatically to stay supported. If it doesn't
have that effect you can do whatever as far as I am concerned, but that
isn't what was said until now (and what you repeat in the next paragraph…).

So, yes, to your "strictly speaking":
> Strictly speaking, the cutoff in the timeline I proposed isn't bookworm r0,
> it's the first time you update from testing/unstable *after* bookworm r0.


> > ¹ e.g. Marga is saying in #978636 msg#153 that migration from unmerged
> >   is not required to be implemented for bookworm [and therefore
> >   effectively at all] for unmerged to be unsupported in bookworm.
> 
> Well, we have the usrmerge package, so an implementation exists. It isn't
> perfect, and I hope that between now and the bookworm freeze, we can get a
> better migration path than `apt install usrmerge` as currently implemented
> (either in a new revision of the usrmerge package, or elsewhere); but it
> mostly works in practice.

Yeah, well, that is exactly how I have read and understood the discussion
so far which means everyone has to run it manually to upgrade every
system, container, chroot, … not recently freshly installed … *urgh*

That isn't an upgrade path for me and I can't stand others claiming it
would be, which triggered this sub-thread to begin with if you remember…


> Doing what usrmerge does from a maintainer script is pretty scary from a
> robustness/interruptability point of view. Without my Technical Committee

"Upgrades are like sausages, it is better not to see them being made."
 -- Otto von Bismarck (except he said Laws of course)

I know it is a major discussion point in this megathread, but that isn't
my field of interest, so I feel not qualified to comment on technical
details of the implementation of any plan of /usr-merge itself and
happily leave that to experts.

All I am asking for is a way that ensures that 99% of systems currently
supported as buster/bullseye can be upgraded to bookworm and later to
bookworm+1 and remain supported without manual intervention.
That isn't too much to ask, is it? :P


Heck, if we figured out that this isn't possible with dependencies and/or
maintainer scripts we could perhaps even implement something in apt to
ensure invariants like "to be able to upgrade to X, you have to install
Y first (here, let me do that for you automatically)". Maybe we should
ask an apt maintainer…¹  But that pre-requires that a plan is made by
someone who actually knows how it's supposed to work…

Julian e.g. proposed a silly one[0] in freeze, but it is of course
not workable to print warnings if huge parts of the systems apt runs on
(e.g. buildd chroots) have to ignore that warning for years (and it can't
be automated due to this either), and it is as usual 2 years too late.
(for at least 6 years now as Marco pointed out. I will

Re: merged /usr

2021-08-15 Thread David Kalnischkies
On Sun, Aug 15, 2021 at 12:16:39AM +0100, Simon McVittie wrote:
> On Sat, 14 Aug 2021 at 16:59:24 +0200, David Kalnischkies wrote:
> > Wouldn't it be kinda strange to have the chroots building the packages
> > for the first bookworm release using a layout which isn't supported by
> > bookworm itself…
> 
> Yes, it's a little strange, but that's what happens when we don't want
> a mid-release-cycle flag day: we have to sequence things somehow. For best
> robustness for users of non-merged-/usr, build chroots should likely
> be one of the last things to become merged-/usr, and build chroots for
> suites like buster and bullseye that support non-merged-/usr should stay
> non-merged-/usr until those suites are completely discontinued.

You snipped both times the [for me] logical consequence that all
bookworm build chroots are kept in a [then unsupported] unmerged state
as "one of the last things", aka until bookworm is discontinued,
so that they are building the packages which will encounter unmerged
systems in the upgrade, as a user can perfectly well upgrade from bullseye
to the ninth point-release of bookworm months after the initial release
of bookworm.


> The failure mode we have sometimes seen is packages that were built in
> a merged-/usr chroot not working on a non-merged-/usr system, although
> that's detected by the reproducible-builds infrastructure and is already
> considered to be a bug since buster (AIUI it's considered a non-RC bug
> in buster and bullseye, as a result of things like #914897 mitigating it).

So, your reasoning is that tooling will help us ensure that packages
built on merged systems work on non-merged systems? Good! No flag day
required then, we can just naturally upgrade all systems as they
encounter the $magic and have new buildd chroots bootstrapped now
merged instead of enforcing them being unmerged still
(modulo whatever the implementation itself might be of course).
I am happy as that wasn't clearly said before and current practice
and previous discussions suggested the opposite¹ (at least for me).
Thanks & good luck!


If on the other hand you do still anticipate problems with packages
built on merged systems for non-merged systems requiring a flag day
I don't understand why it makes sense to have that flag day be
bookworm release day² as that brings the anticipated problems to the
bullseye→bookworm upgrades with the first point release (with the
first package with a stable or security update to be more exact³).


Best regards

David Kalnischkies

¹ e.g. Marga is saying in #978636 msg#153 that migration from unmerged
  is not required to be implemented for bookworm [and therefore
  effectively at all] for unmerged to be unsupported in bookworm.

² Leaving aside how we would even technically implement a flag day so
  that unstable (building bookworm packages until release day) stays
  unmerged until magically merging on release day while testing
  merges on install (before that) for… testing?

³ I understand that only a subset actually breaks for non-merged if
  built on merged, but I prefer to assume that the first one is such
  a package to be prepared rather than pray to deity (pun intended) and
  hope for the best.


signature.asc
Description: PGP signature


Re: merged /usr

2021-08-14 Thread David Kalnischkies
On Sat, Aug 14, 2021 at 02:26:29PM +0100, Simon McVittie wrote:
> On Sat, 14 Aug 2021 at 14:33:44 +0200, David Kalnischkies wrote:
> > the current 'transition' plan is to have the
> > release notes nudge all people who upgrade instead of reinstall their
> > systems, chroots and what not to please do it for all of them by hand
> > at a to be specified flag day someday between now and bookworm
> > freeze
>
> I think the earliest flag day that would be possible (for requiring merged
> /usr, or for completely undoing merged /usr, or for any similarly "big"
> transitional path) is the bookworm release date. We specifically don't

That would be nice, but it isn't what the CTTE ruled, as the implementation
of the resolution (= no longer supporting the non-merged-usr layout) is
delayed until after the release of bullseye. That is also what the
bullseye release notes say.

Wouldn't it be kinda strange to have the chroots building the packages
for the first bookworm release using a layout which isn't supported by
bookworm itself… and wouldn't it be even worse if we change from the
quasi-bookworm unmerged unstable chroots to the bookworm merged chroots
[as unmerged isn't supported for them] for building the packages of the
first point release?

That is why I said freeze as I kinda doubt the release team would like
to have a big change for bullseye after the freeze…


> 2. bookworm release: systems must transition at or before the upgrade to
>bookworm (bullseye systems are not required to transition until/unless
>they are upgraded)

The "at" in this sentence means that all bookworm packages must support
unmerged as you can't guarantee that the transition happens before¹ and
forces bookworm chroots to be unmerged as well as the packages built in
them will be used to upgrade from bullseye as we don't do →.0 → .1 → …
upgrades. That is of course in direct contraction to not supporting
it anymore.


Best regards

David Kalnischkies

¹ well, you could by having $magic implemented with the essential set
and shipped in something like base-files (= installed everywhere), which
is pre-depended on by quite literally every package in existence.
[it shouldn't be a new package as the release notes traditionally advise
 running an upgrade without installing new packages first] Kinda doubt
that would work in practice…


signature.asc
Description: PGP signature


Re: merged /usr

2021-08-14 Thread David Kalnischkies
On Sat, Aug 14, 2021 at 02:08:33PM +0100, Luca Boccassi wrote:
> Were upgrades impossible in Ubuntu when it switched and were manual
> reinstallation mandatory for the entire user base, chroots, whatnot?
> No. Then why should it be the case for Debian if we do the exact same
> thing with the exact same tools?

Oh, I didn't know we had a release upgrade tool orchestrating the
upgrade like they do. You should tell the release team, I think they
are still looking for a bulletproof solution to upgrade ssh early among
other things.

And I am pretty sure the unmerged chroots are an open question for them
still as nobody is running the upgrader in there of course.
See also https://bugs.debian.org/985957.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: merged /usr

2021-08-14 Thread David Kalnischkies
On Fri, Aug 13, 2021 at 10:16:57AM +0100, Luca Boccassi wrote:
>  Unless the intention is to deprecate allowing to change 
> /etc/apt/sources.list and mandating that only hard-coded official Debian 
> repositories can be used on Debian installations, of course, which would be, 
> uh, interesting to see?

That is ironic given the current 'transition' plan is to have the
release notes nudge all people who upgrade instead of reinstall their
systems, chroots and whatnot to please do it for all of them by hand
at a to-be-specified flag day someday between now and the bookworm
freeze. While a buster user might be able to do it now by hand,
a repository admin has no such luxury as their packages have to continue
to work until the flag day, so no dice embedding /usr/bin/grep – but
after the flag day all dependencies could embed it, breaking the unmerged
build chroot still only having /bin/grep. So I hope we are picking a day
on which every repository owner has some free time to do the flip… or
well, deprecate them all as you suggest. As an APT dev I would approve.

Not sure why you are singling out Debian as fine though, given we have
the very same problem for our fleet of buildds and porterboxes, some DSA
owned and some not. Thank goodness binary uploads by maintainers are
a thing of the past never to be seen or even required for… oh, right…

But yeah, upgrades. Minor problem.
Nothing which can't be fixed with a good reinstall.


To be clear: I couldn't care less about the if and how of /usr-merge.
I do appreciate that some plans have a better upgrade experience though,
not only as a user, but as a dev, as failed upgrades tend to be attributed
to apt – and I am a bit shocked we are fine with flag days nowadays.
In the good old MultiArch days (that is a decade ago already!) a flag
day wasn't even seriously considered an option despite the costs. How
times change… so it is okay now if I finally axe aptitude, right? :P
(I am joking, I still think doing it this way was the right move – and
 the bigger cost is arch:all not being M-A:foreign by default anyhow)


Best regards

David Kalnischkies

P.S.: I picked out only this line as I think most of the rest is more
or less discussed to death already in other sub-threads and at times
actually objectively wrong – like the amount of packages shipping
something in /bin and co – so I don't feel like rehashing those. Not
that I feel like wanting to discuss this point either, I just find it
hideous to use a "what about upgrades?!?" hyperbole in this situation.


signature.asc
Description: PGP signature


Re: Debian package manager privilege escalation attack

2021-08-12 Thread David Kalnischkies
On Thu, Aug 12, 2021 at 08:32:14AM +0200, Vincent Bernat wrote:
>  ❦ 12 August 2021 10:39 +05, Andrey Rahmatullin:
> >> I just ran across this article
> >> https://blog.ikuamike.io/posts/2021/package_managers_privesc/ I tested
> >> the attacks on Debian 11 and they work successfully giving me a root
> >> shell prompt.
> > I don't think calling this "privilege escalation" or "attack" is correct.
> > The premise of the post is "the user should not be a root/admin user but
> > has been assigned sudo permissions to run the package manager" and one
> > doesn't really need a long article to prove that it's not secure.
> 
> I think the article is interesting nonetheless. Some people may think
> that granting sudo on apt is OK. In the past, I think "apt install
> ./something.deb" was not possible.

It wasn't that easy, but if you can feed config options into apt you can
basically do whatever you want (like setting a sources.list pointing at
your own local repo containing your bad deb). Besides the command line
options -o and -c you can also use the environment variable APT_CONFIG.

APT (and dpkg, …) just never was designed to be used in a restricted way
or we wouldn't have hundreds upon hundreds of options to do all sorts of
(sometimes) crazy things like using apt for bootstrap…

I would say dd-schroot-cmd is a good example of what you would need,
although I am relatively sure someone truly hostile can find a way if
enough energy is invested (and there is always the risk of the APT
team adding yet another innocent option derailing the plan, like the
ability to install deb files directly did back in 2014).


> Maybe it would be worth to also set LESSSECURE (less is not the default
> pager on minimal installs but I think it is the most common, more cannot
> be secured this way).

External solvers (--solver/--planner) are run as a (configurable)
different user, currently defaulting to _apt. That is nice, as it isn't
root, but _apt is also used by the download methods, which means it can
have permissions on files it shouldn't have. Ideally, we would need
an extra user for that. Except that different solvers probably shouldn't
be able to access each other, so multiple users I guess. It can't really
be nobody (or a temporary user) as the solvers might very well have their
own config and cache; I could even envision some asking an online oracle
for input (reproducible, open bugs, …) and firewall rules for nobody are
bad ……… sorry, my head hurts, where was I?

Right, pagers. Ideally I would like to not run them as root as well,
but they are a lot more user facing, so if your usual config (hello
lesspipe) disappears it is sad. Fun would be to run the pager as the
user who sudoed initially… :P


We could set this environment variable I guess, but dpkg doesn't set it
either and a quick codesearch in Debian suggests that while the variable
seems sufficiently ancient (console-log changelog mentions it in 2000)
I don't see a whole lot of adoption – and golang-github-sean--pager
surprises me with setting it only if the called pager is named less.
Not sure I like systemds envvar to override an envvar either
(and they of course all use different LESS flags to begin with).

So, before I am rushing off to do whatever I like, could we perhaps
agree on a "sensible-restricted-pager" (I dare not to name it secure…)
sort-of implementation first?
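
Just to have something concrete to argue about, a minimal sketch under
the assumption that such a wrapper does little more than enable the
pager's own "secure" mode – the name and everything else here is
hypothetical, only LESSSECURE itself is real:

| #!/bin/sh
| # hypothetical sensible-restricted-pager:
| # tell less(1) to disable shell escapes, pipes, editing, … and
| # hand everything over to the usual pager selection logic.
| LESSSECURE=1
| export LESSSECURE
| exec sensible-pager "$@"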


Oh and, btw, there is no point¹ in running 'apt changelog' with root
permissions – it is beside the point here, but I feel obligated to
mention it.


Best regards

David Kalnischkies

¹ well, there is a teeny weeny one: an outdated binary cache is updated
and stored on disk rather than built in memory and discarded afterwards,
but ideally your cache isn't outdated – it usually isn't if you aren't
doing things with envvars, options, …


signature.asc
Description: PGP signature


Re: Issue with installing manually created Debian archive with APT

2021-08-01 Thread David Kalnischkies
On Sat, Jul 31, 2021 at 04:20:25PM -0500, Hunter Wittenborn wrote:
> When I try installing the packages my program builds (with 'apt install 
> ./debname.deb'), it works fine. But if I then try to reinstall the package 
> (with 'apt reinstall ./debname.deb'), this error keeps popping up for some 
> reason (the variables surrounded by '{}' represent values in a control file):
> 
> Repository is broken: {Package}:{Architecture} (= {Version}) has no Size 
> information
> 
> Is there anything I'm doing wrong when creating the .deb package […]

Maybe as dpkg does a bit more than just tar and stuff while building
packages, but no, this message is due to a bug of sorts in apt:
https://salsa.debian.org/apt-team/apt/-/merge_requests/177

The changes are already merged to main and should be in experimental,
which was also the only release pocket affected in Debian (well,
technically the misconception as such is a few years old, but the
code equally hiding and bringing it to light is not).

Not sure about the state in Ubuntu, but I guess it's only their
development release and only until the next sync/upload.


You can btw use -o Acquire::AllowUnsizedPackages=1 to disable this check.
Or, simply don't try to reinstall packages for (probably) no reason…
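
Assembled from the pieces above, that would be for the reported case:

| apt reinstall ./debname.deb -o Acquire::AllowUnsizedPackages=1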


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Bug#990521: I wonder whether bug #990521 "apt-secure points to apt-key which is deprecated" should get a higher severity

2021-07-01 Thread David Kalnischkies
(Disclaimer: It was me who implemented Signed-By, and also most of what
 the current monster apt-key is, trusted.gpg.d, … I might be a *tiny bit*
 biased when it comes to apt and these topics as a result.)


On Thu, Jul 01, 2021 at 02:40:31PM +, Jeremy Stanley wrote:
> maybe add some further explanation to it indicating the real-world
> threats this recommendation mitigates. Security policy should be

Let's not throw the baby out with the bathwater, shall we?


Signed-By (with a filename) has enormous benefits from an implementation
stand-point as if you dare to open the shell script apt-key, you will
notice that quite a bit of its 800+ lines deals with massaging files
into one keyring gpgv will understand and hopefully support forever
(gpg has a limit of 40 keyrings per invocation, which was eventually
 triggered by people and their prolific use of third party repos and at
 some point upstream said they would drop that down to one keyring;
 never put into action though and seems not to happen) – and that isn't
magically disappearing just because we deprecated apt-key, that is just
reducing the public interface of this nightmare which still serves as
our internal wrapper to shield us against eldritch horrors within…


Many users add countless repos over time, but hardly ever remove them.
They eventually fail or are commented out, but the keys once added usually
stay there until the end of times. So e.g. old RSA1024 keys for repos
you no longer have enabled participate in verification – sad if said key
was compromised in the mean time and is now used to MITM a repo you
still have enabled like Debian security.
(most 40 limit-hitters had like 3 repos enabled if I remember right)

There are also "fun" things you can do if the repository has sufficient
similar metadata – at some point you could e.g. reply to a request
for the Debian security repo with the metadata for the main Debian repo
(or any repo really which happened to use the same layout) and nothing
will tell the user that something is wrong. The repository data of the
two will change with bullseye so that this won't work anymore (That
works "even better" in older releases as the metadata did not need to
match at all, but that change got me a lot of backlash as somehow people
wanted to have their repo suite change at basically random – also known
as release day – still).

Other scenarios exist, which are all individually not very strong, but
Julians point is that this shouldn't be regarded as THE security feature
which it really isn't. I mean, what problem is sudo actually solving
compared to a root account and still, there are people who believe that
adding sudo makes not only sandwiches great. Pointing that out doesn't
mean it's entirely pointless security theater either though.


So, long story …: If you use "apt-key add" you are doing it wrong.
If you use "apt-key adv --recv-key" or similar you are doing it triple
wrong. And your deity [because deity@ doesn't] may have mercy on your
poor soul if you happen to do "apt-key adv --refresh-keys" in a cronjob
(yes, I have seen that in the wild and in case you didn't know, both are
MAJOR security holes if you are not on a recentish gpg2 and even then…).

… short: It's fine to drop files into trusted.gpg.d, but you earn brownie
points with me for not doing it and using signed-by as that is a tiny
bit more secure. I did implement it mostly in preparation for other
changes, not because I believed everyone has to instantly switch to it.
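
For illustration, a Signed-By based entry for a (hypothetical)
third-party repository can look like this in either sources format:

| # /etc/apt/sources.list.d/example.list (one-line style)
| deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://repo.example.org/debian stable main

| # /etc/apt/sources.list.d/example.sources (deb822 style)
| Types: deb
| URIs: https://repo.example.org/debian
| Suites: stable
| Components: main
| Signed-By: /usr/share/keyrings/example-archive-keyring.gpg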


There are ideas to pull the key itself into the .sources file, to match
keys to sources based on metadata automatically and/or to use
non-gpg-based signatures and what not, but it's a bit early (or a bit
late, depending on how you look at it from a release standpoint) to
really think and discuss it, even if HN and reddit seem to have a blast
discussing the doodles on our proverbial toilet wall for days now.


Oh and btw, back when I implemented trusted.gpg.d more than a decade ago
it was actually the plan to eventually replace apt-key with it.
That worked oh so well that it instead exploded in complexity (but at
least I managed to remove the gnupg requirement, so that is a plus…).

"I shall commit myself to achieve the goal, before this second decade
is out, of landing a patch series to shoot apt-key safely to the moon,
never to return to the main branch again, […] because that challenge
is one that we are willing to accept, one we are unwilling to postpone,
and one we intend to win".

Let's see how that will go.
Cavendish surprised most experts, too.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Planning for libidn shared library version transition

2021-06-01 Thread David Kalnischkies
On Fri, May 28, 2021 at 05:12:01PM +0100, Simon McVittie wrote:
> On Thu, 27 May 2021 at 16:53:45 +0200, David Kalnischkies wrote:
> > dpkg has the notion of "disappearing packages" (packages which have no
> > files left on a system) which could solve this cleanup compulsion, but
> > it is currently not supported (as in forbidden in practice) in Debian.
> 
> Am I correct to think that the reason this is forbidden in practice is
> the requirement that every package contains either its changelog and
> copyright file in /usr/share/doc/${package}/{changelog.Debian.gz,copyright},
> or a symlink /usr/share/doc/${package} -> /usr/share/doc/${other}, either
> of which will prevent the package from fully disappearing because the
> replacement package isn't going to contain those files?

Well, the idea is that the newpkg provides+conflicts+replaces the
oldpkg. oldpkg contains just a symlink from /usr/share/doc/oldpkg
to newpkg and Depends on newpkg. The newpkg replaces this symlink by
containing it as well, which leaves oldpkg with no remaining files, which
in turn leads dpkg to make oldpkg disappear.
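
Restated as a debian/control sketch (package names hypothetical, and
mind the caveats below before copying it anywhere):

| Package: newpkg
| Provides: oldpkg
| Conflicts: oldpkg
| Replaces: oldpkg
| # ships the real files plus the /usr/share/doc/oldpkg -> newpkg symlink
|
| Package: oldpkg
| Depends: newpkg
| # ships nothing but the /usr/share/doc/oldpkg -> newpkg symlink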

dpkg can only do this if no package on the system currently depends on
oldpkg though. And that is where the tricky problems begin as dpkg could
have made oldpkg disappear before another package is installed which
happens to depend on oldpkg (I guess versioned provides come to our
rescue here actually, now that I think about it).

It also means that this can only be used for the most straightforward of
transitions: a simple package rename – as soon as you want the
transition package to actually contain meaningful content (like a node
→ nodejs symlink for compat) it's not applicable anymore, so it's rather
special interest.


I seem to have forgotten what the big problem preventing "widespread"
use was last time. I thought it would be forbidden by policy, but it
isn't as you point out. At this point it might be just "nobody used it
in production for a decade, there might be bugs". Guillem as the dpkg
maintainer probably knows more about it than I do if there is interest
in pursuing this further.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Planning for libidn shared library version transition

2021-05-27 Thread David Kalnischkies
re looks in practice more like the
tricky mess which results if you put three cables neatly separated
in a box and you look away for a split second…


> If it's feasible to solve this, then I suspect the only packages that
> would need code changes would be apt and cupt (and maybe aptitude).

I guess you would also need to change external solvers like aspcud, but
those usually do not concern themselves too much with upgrades, as they
are mostly used in build-chroot constructions, so you might get away without
it (see also the optimisation criterion "less removals" from above).

As the number increases we would also need an "upgrade" command which
allows those removes but not the others as our current "upgrade" becomes
less and less useful.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Help required to determine why some packages are being installed

2021-05-27 Thread David Kalnischkies
Hi,

On Sun, May 16, 2021 at 10:00:29PM +0300, Dmitry Shachnev wrote:
> We determined that the reason for users getting libqt5gui5-gles installed
> is the qt5-default package. We removed it in October 2020 because it is no
> longer needed. But the latest version of that package that ever existed had
> this dependency:
> 
>   Depends: qtbase5-dev (= 5.14.2+dfsg-5) | qtbase5-gles-dev (>= 5.14.2+dfsg)
> 
> The latest available version of qtbase5-dev cannot satisfy that dependency,
> but the latest version of qtbase5-gles-dev can!

The self-inflicted joy of avoiding a transitional package (see also the
other thread about transitional packages on d-d@ I should comment) and
of too strict dependencies (just because it's the same source package
doesn't mean = is a good choice; >= would have avoided that mess). ☺


> So for people who had qt5-default installed, apt tries to replace the normal
> Qt stack with -gles one to keep that dependency satisfied. It does so even
> if it's going to remove qt5-default anyway!

Yeah, the greedy solver at its best. The problem here is mostly that an
earlier part of the solver has figured out (since a bit more than a year
now) that the dependencies of qt5-default can't be satisfied (as it
tries to remove itself), but a) this knowledge isn't used much in the
caller yet and b) a later part, which is usually responsible for contested
remove decisions, isn't as good at figuring it out yet, so it decides to
try the bad gles subtree – and as greedy algorithms are bad at reconsidering
their decisions it leads down to madness.

That specific case might actually be solvable on the apt side with some
more heavy work I intend to do eventually (just like I did for the
mentioned earlier part), but as you might expect, this isn't appropriate
to even think about while in freeze so don't hold your breath – and less
clear instances of this will probably remain out of reach still, so let's
look at our options:


> As an attempt to solve this problem, I added "Breaks: qt5-default" to both
> qtbase5-gles-dev and libqt5gui5-gles packages yesterday [2]. I thought this
> would convince apt to not consider these packages as a way to satisfy
> qt5-default dependency. But that did not work :(

Add the Breaks also to qtbase5-dev. Also, make that breaks versioned so
that you can reintroduce qt5-default if that turns out to be needed
(Lintian probably complains about it, too).
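
A sketch of what that could look like – the version is a made-up
placeholder and should be the first version after qt5-default was
dropped:

| Package: qtbase5-dev
| Breaks: qt5-default (<< 5.15~)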

Usually I would recommend reintroducing qt5-default, but as you
described it as a sid-user only problem it seems more beneficial to keep
it removed for everyone rather than to re-add it for sid only, confusing
users who want to look up whether a removal of qt5-default is okay or not.


As qtbase5-dev is installed and depended on while nobody likes
qt5-default anymore, the ProblemResolver will have an easy time figuring
out that removing qt5-default is better than holding back qtbase5-dev
would be, and hence as a second step won't try to save qt5-default by
installing qtbase5-gles-dev (just to then figure out that gles-dev
breaks qt5-default, so it has to remove the latter anyhow).


Thanks for looking into this: That seems like a simpler and less
controversial example than the one I used for the last big round of
resolver changes … and sorry it took me a while to reply.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Tips for debugging/testing debian/control Depends/Breaks etc changes?

2021-03-31 Thread David Kalnischkies
On Mon, Mar 29, 2021 at 07:12:18AM -0700, Otto Kekäläinen wrote:
> apt install --with-source ./Packages -s mariadb-server mariadb-client
> libmariadbclient18

I just want to add here, that --with-source also works with other apt
commands like "upgrade". Depending on what you want to test, it might
be more realistic to use these rather than making complicated explicit
requests no 'normal' user will ever perform.


> The debugging options produce a ton of output which I did not yet
> learn to ready, but I'll paste it here below for reference for others
> to see what the resolver debug output looks like:

On the surface the problem you are facing is:

> Broken mariadb-server-10.5:amd64 Conflicts on mysql-server:amd64 < 5.7.30-0ubuntu0.18.04.1 -> 5.7.33-0ubuntu0.18.04.1 @ii umU > (< 1:10.5.10+maria~bionic)
>   Considering mysql-server:amd64 0 as a solution to mariadb-server-10.5:amd64 0
>   MarkKeep mariadb-server-10.5:amd64 < none -> 1:10.5.10+maria~bionic @un uN Ib > FU=0
>   Holding Back mariadb-server-10.5:amd64 rather than change mysql-server:amd64

That is one of the first decisions the problem resolver (second half)
makes as these two packages want to be installed, but no sufficient
reason is found to remove what works currently on the system
(mysql-server) by bringing in something else (mariadb-server-10.5).
The second line has them both listed as having 0 points – packages earn
points by being depended on by other installed packages, lose some for
conflicts on them, and a bunch of other reasons in both categories.

Users hate when packages are removed, so apt tends to hate that, too.
apt and friends even have entire commands which allow you to upgrade as
far as you can go without removes as upgrades usually give you new
features (and bugs) to play with while removes do the opposite.



For me, this whole situation seems wrong though. Why do you have
versioned package names (mariadb-server-*) when they are all mutually
exclusive with one another due to all shipping the same binary?

Either embrace versioned names like e.g. gcc/clang do or drop the
pretense and ship an unversioned mariadb-server. Most packages aren't
packaged versioned after all and that is (mostly) fine (same for client
and co which only makes this more complicated and worse).

Mixing the two causes your users to experience the worst of both worlds:
The packages can not be co-installed forcing them through the change in
one sitting and they are an upgrade nightmare as there will always be
one more situation in which apt (or another resolver, or even a human)
decides that (part of) an upgrade is not worth the perceived cost.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Tips for debugging/testing debian/control Depends/Breaks etc changes?

2021-03-25 Thread David Kalnischkies
On Wed, Mar 24, 2021 at 12:37:46PM -0400, Otto Kekäläinen wrote:
> As an example of 1, sometimes I see this:
> 
> apt install mariadb-client
>  The following packages have unmet dependencies:
>  mariadb-client : Depends: mariadb-client-10.5 (>= 1:10.5.10) but it is not going to be installed
> 
> apt install mariadb-client-10.5
>  Installing.. Done!
> 
> When this happens I have no idea why apt did not resolve the
> dependency by itself automatically, as there was no real conflict in
> installing it.

Nitpicking, but those are quite different requests. The message shown to
users can also be very unhelpful as decisions are reverted in many cases
to allow alternatives to be tried, even if those alternatives do not
really exist higher up the tree… so you end up with a failure on the
first level when the problem is in reality twenty levels down… it's a long
standing wishlist item to improve it – but as you might guess it's not as
simple as it sounds and the team is tiny, so the improvements are
usually small steps and not giant landslides.


Anyway, you can tell apt to give you more details on different aspects.
The README.md has some info on a few aspects, but usually you can get
out the big guns for simplicity:

-o Debug::pkgDepCache::Marker=1 -o Debug::pkgDepCache::AutoInstall=1
 -o Debug::pkgProblemResolver=1


The first two are for the first step of the default resolver – which
basically tries to follow dependencies – while the latter shows the
second step dealing with, as the name implies, problems arising from
packages which cannot coexist [both oversimplified].

It can be quite a bit of output, it changes slightly between versions
and might be a bit overwhelming at first, but you get used to it with
some experience – and if you have questions feel free to drop by in
#debian-apt or deity@l.d.o.


> For the problem 2, I hate to rebuild all of the packages (and
> binaries) just because there was a change in debian/control and go
> through the hassle of updating a test repo etc.

You might know that "apt install ./file.deb" works. What most people
don't know is that "apt install ./file.changes" works, too. And
basically nobody realizes that this is a short-hand for "apt install
--with-source ./file.deb pkgname". --with-source also accepts, besides
deb and changes (and dsc, '.' and debian/control for source packages),
a file with the syntax of a Packages (or Sources) file. So if you are
able to write the latter by hand you don't have to build packages
(I would advise running apt in simulation mode, though, as you will
 make it very sad otherwise. And of course, not as root).

So, e.g.:
| apt-ftparchive packages . > ./Packages
| apt full-upgrade --with-source ./Packages -s
| # edit Packages file and repeat last call until satisfied
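
Such a hand-written stanza can be quite minimal – something like the
following (field values entirely made up, just enough for the resolver
to chew on in simulation mode):

| Package: mariadb-client
| Version: 1:10.5.99-1
| Architecture: amd64
| Depends: mariadb-client-10.5 (>= 1:10.5.99)
| Description: hand-edited dummy entry for dependency testing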

The command line option is backed by a config option, and that one will
work for basically every libapt-based client, so if you feel like
fiddling with aptitude instead… less handy but possible (the resolvers
are entirely different though, so knowledge doesn't transfer well).


It is "only" available starting with Debian stretch (current oldstable)
and I might have blogged about it three years ago – in other words: We
are well on track as it usually takes a decade before people start using
things. It usually takes another decade before people stop complaining
that an implemented feature is not implemented (case in point:
manual vs. automatic installed packages which some users still believe
only exists in aptitude – more than 16y later…).

So don't worry that you haven't heard about it yet: You are not the last
one to know, you are in fact well ahead of the curve. 


As said, if you have questions (or ideas), feel free to praise the cow
^W^W^W join #debian-apt on IRC, mail us at deity@d.l.o or even report
a bug against apt.


Best regards

David Kalnischkies

P.S.: Disclaimer: If that wasn't clear already: This mail is shameless
advertisement for apt by an APT developer; aka: I "might" be biased.


signature.asc
Description: PGP signature


Re: `Build-Depends` parsing problem for node-* packages

2021-03-17 Thread David Kalnischkies
Hi,

On Wed, Mar 17, 2021 at 04:52:38PM +0100, Arnaud Ferraris wrote:
> I've been confronted with an issue affecting a number of node-* packages
> (and maybe others): apt is unable to parse the Build-Depends field,
> making `apt build-dep` effectively unusable with those packages.
> 
> One good example is node-ramda[1]: as the first non-space character apt
> encounters when parsing Build-Depends is a comma, it considers this a
> syntax error and errors out.
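
For readers who have not seen it, the style in question looks roughly
like this (an illustrative field, not the exact node-ramda one):

| Build-Depends:
|  , debhelper-compat (= 13)
|  , dh-sequence-nodejs
|  , mocha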

While that particular style is odd for my taste¹, apt is not the arbiter
of the allowed style here. You would need to make a case for dpkg to
refuse this style and that seems even more odd [bad pun intended].

Also, being reminded of #875363 I think the ship has sailed on
"that is odd, lets not support this in apt" even if we would like to be
the arbiters (we don't).


> My personal opinion is that apt's current behavior is sane and the 2nd

Thanks for giving apt so much credit, but this is just another instance
of "apt developers are lazy and unimaginative" as they reused the parser
which was once written for well-defined machine-written and trusted data
and let it loose on all sorts of crazy handcrafted files with comments,
spaces, newlines and empty fields all over the place. It's a mystery how
that isn't crashing all the time even though it isn't written in Rust.
(SCNR)


So please move this thread to a (wishlist) bug against apt and I will see
how I can mangle the input string we pass to the good old parser to accept
commas ,all, over ,the, , place, too (currently it accepts only a single
trailing comma, optionally surrounded by white space) ,

As `apt build-dep .` is (to my knowledge) not used in any critical infra
but only by some humans this isn't fixable for bullseye though as such
a change would likely not get a freeze exception.


Best regards

David Kalnischkies


¹ ; char const * const name = "World"
  ; printf("Hello %s!", name)

(that style reminds me of some other language I have seen but can't
 quite remember at the moment; I just haven't seen it in a control file
 so far. As said later, some apt developers are so unimaginative…)


signature.asc
Description: PGP signature


Re: Help required to determine why some packages are being installed

2021-01-30 Thread David Kalnischkies
On Wed, Jan 27, 2021 at 04:53:50PM +0100, Johannes Schauer Marin Rodrigues wrote:
> Quoting Jonas Smedegaard (2021-01-27 16:15:17)
> > I suspect that's not really the case - that instead apt tools might pick at
> > random.
> 
> no, apt does not pick at random. The apt solver prefers the first alternative.

To refine this: apt¹ picks the first alternative which is either:
1. already installed and satisfying as is
2. already installed, but needs an upgrade to satisfy
3. not installed, but can (may) be and would satisfy

(in reality you have to factor in explicit provides and implicit ones
 (like M-A:foreign) for all of them individually as well – and while apt
 has a partial order for those, you don't want to hear the details…
 just pretend it's "clever" "random" between different providers of a
 given alternative).

(And for completeness "A|B, B|C" will have you end up with A and B
 installed except if e.g. A depends on C, then it will be A and C.
 So it's depth first for now. Whether you like this behaviour is a question
 of how preferable A is over B and how co-installable they are)

None of this is an explicit requirement by Debian policy (which is silent
on many other things apt is required to do by tradition as well), it
only sort of follows out of "§7.1 Syntax" saying that any will do while
"§7.5 Virtual Packages" says that the default should be the first.


Note that if apt¹ can figure out that an alternative will not work it
will pick another alternative which may or may not be what you meant
with "is not available" as that (especially in unstable) includes if
a dependency of this alternative is not satisfied. Sounds innocent and
simple, right?

Consider that this includes all types of dependencies, not only the
positive depends on a not-yet built other package but also e.g. conflicts
with another package which must be part of the solution (= a direct
chain without other solutions from initial to this package exists) or
breaks you thought are there to force an upgrade of a package, but as
that version isn't built yet effectively means that the package has to
be removed instead.


So, without giving this a deep inspection my guess is simply that in
these cases apt managed to figure out that libqt5gui5 was not a good
alternative at the moment the user made the install/upgrade and went on
and choose the other. That's the point of alternatives after all.


In a way, that might be a regression caused by me working on those
conundrums last year to have apt figure out these things better, to avoid
breaking recommends (in which these or-group and provides problems exist
as well) and to let apt find solutions which would previously be errors
(as it picked an unworkable alternative at first and wouldn't be able to
recover from it later on). So some of these users (not all – apt always
tried hard, it is just a bit more clever about it now) might previously
have been greeted by apt error messages until the unstable archive
stabilised, reporting bugs against apt as a solution "clearly" exists –
while apt will now find that solution you would prefer it not to find,
it seems.


> I really would like to prevent this from happening for other users,
> so any suggestions would be welcome.

I don't think there is a solution to your problem – assuming that is
indeed this problem as I am only guessing – as at the very end it's a
problem of the user accepting a solution they shouldn't, but to know
that you need to have specific domain knowledge. The hidden joys of
Debian unstable I guess… (doesn't help that it sort of sounds like
a package rename when libqt5gui5 is removed and -gles installed).


That said, I find it a bit odd that only libqt5gui5-gles conflicts with
libqt5gui5. I doubt it will help apt, but it seems more honest to also
have the reverse. Fun fact: having it only on one side actually gives
the one having it a scoring advantage in apt's conflict resolution, so
for apt it reads in fact like -gles is the preferred package of the
two making it less likely that apt holds back libqt5gui5. In practice
other score points should level the playing field for libqt5gui5 though.
(At least on my system more things depend on it than -gles provides).


Best regards

David Kalnischkies


¹ When I say apt here I mean the default resolver parts implemented in
libapt and used wholesale by pretty much everything using that library
including apt, apt-get, synaptic, various software centers, …
aptitude being a notable exception with its own resolver reusing only
some parts and reimplementing others which may or may not include the
parts responsible here (it is not an exaggeration when I say we have no
active Debian contributor who could answer that question as the only
person knowing how the aptitude resolver works went MIA years ago. So
the main reason it still (FSVO) works is that supercow is benevolent).
The other notable exception would be (c)debootstrap, but y

Re: Recommending packages via virtual package

2020-04-14 Thread David Kalnischkies
On Tue, Apr 14, 2020 at 04:45:55PM +0200, Markus Frosch wrote:
> > I would hence ask you to explain a bit better why you think APT is wrong
> > and provide an example which actually shows these characteristics.
> > Otherwise I will apparently be a tiny bit annoyed by this thread.
> 
> Why so passive aggressive here? I asked for help and tried to explain in a
> short and simple way.

Frankly, I was just honest as this thread-style annoys the heck out of me,
but I guess I could have worded that a bit differently to make it more obvious
what I mean. So, let me explain where my anger comes from:

You started with:
| not sure if this has been discussed elsewhere, but I recently noticed
| a change in APTs lookup for Recommends. Maybe also for other dependencies.

So, you "asked for help", but not in debugging APT or related, but in finding
where this change/bug in APT is discussed, providing your opinion on why the
change should be fixed/reverted ("policy", "wide spread usage") and asking
others to join in ("What are your thoughts on that?" [that = the change]).

Or in other words: You were asking for help in forming a mob to force the bad
apt devs into behaving (slightly exaggerated for effect).

That your example is both not showing the described problem and easy to
reason about showing that the bug you have postulated doesn't exist is
"just" icing on the "It is obviously APTs fault" cake.


It isn't what you meant to say of course, but you would be surprised how often
that style is used rather than the intended "I have no idea why APT does that
here, could someone please explain it?".


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Recommending packages via virtual package

2020-04-14 Thread David Kalnischkies
On Tue, Apr 14, 2020 at 10:26:54AM +0200, Markus Frosch wrote:
> Apparently this no longer works. When I install a package like nginx
> and then a package recommending a web server, APT will still try to
> install apache2.
> 
> > apt install -y nginx
> > apt install wordpress

Let's ignore for the moment that wordpress actually doesn't have the
recommends line which is the TOPIC OF THE THREAD, but has instead a
"Depends: apache2 | httpd" among other things (← hint hint):

$ apt install wordpress nginx -so Debug::pkgDepCache::Marker=1
[…]
  MarkInstall nginx:amd64 < none -> 1.16.1-3 @un puN Ib > FU=1
[…]
  MarkInstall wordpress:amd64 < none -> 5.4+dfsg1-1 @un puN Ib > FU=1
MarkInstall libapache2-mod-php:amd64 < none -> 2:7.4+75 @un uN Ib > FU=0
  MarkInstall libapache2-mod-php7.4:amd64 < none -> 7.4.3-4 @un uN Ib > FU=0
[…]
MarkInstall apache2:amd64 < none -> 2.4.43-1 @un uN Ib > FU=0
[…]

So, the reason you get apache2 installed is the second(!) dependency:
"libapache2-mod-php | libapache2-mod-php5 | php | php5" which isn't THAT
surprising or very hard to find, is it?


I would hence ask you to explain a bit better why you think APT is wrong
and provide an example which actually shows these characteristics.
Otherwise I will apparently be a tiny bit annoyed by this thread.


Best regards

David Kalnischkies

P.S.: For these cases -o Debug::pkgDepCache::AutoInstall=1 shows pretty much
the same with less scary details. I just picked Marker as it is literally the
first thing I try and as I implemented the display of these "@un puN Ib" flags
they are a little less scary for me.


signature.asc
Description: PGP signature


Re: trimming changelogs

2020-03-20 Thread David Kalnischkies
On Fri, Mar 20, 2020 at 12:50:29AM +0100, Adam Borowski wrote:
> In the rush for cutting away small bits of minbase, it looks like we forgot
> a big pile of junk: /usr/share/doc/

Honestly, on space-constrained systems, isn't the whole /usr/share/doc
directory "junk"? Probably not the solution for everyone or as
a default, but I want to highlight that dpkg supports excluding files
and entire paths from being unpacked:

$ cat /etc/dpkg/dpkg.cfg.d/01_exclude_paths
| path-exclude /usr/share/doc/*
| path-include /usr/share/doc/*/copyright
|
| path-exclude /usr/share/locale/*
| path-include /usr/share/locale/en*
| path-exclude /usr/share/man/*
| …

Sure, all these files are handy to have on a "normal" system, but that
is the point: If I want to look at them, I want to do that 99,9% of the
time on a normal system, not on a single-purpose minbase(based) one –
where I don't even have a sane editor available (SCNR).


> Ubuntu keep only 10 last entries, for _all_ packages.

The benefit of treating all packages the same is that tools working with
changelogs can handle the grunt work: "apt{,-get} changelog pkg" prefers
the changelog on disk if available – except for repositories which
identify as "Ubuntu" for which it will always download the online
changelog for display.

Assuming the repository supports it. I have yet to encounter
a third-party which does, so if Debian would trim e.g. in debhelper by
default some care might need to be applied so that this happens only to
packages which end up in Debian's repositories… which could complicate
reproducibility as it's clear for a buildd, but my local sbuild…


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Is running dpkg-buildpackage manually from the command line forbidden?

2020-01-20 Thread David Kalnischkies
(Disclaimer: This is a xkcd:386-like response to this subthread)

> Here's the current list of these packages on my system:
> 
>   $ aptitude -F '%p' search '~prequired !~E'

The list omits 'apt' as libapt internally flags it as essential to
grant it the utmost protection by all clients along with its (due to
that) pseudo-essential dependencies both in terms of user actions as
well as (re)solver and installation ordering algorithms.

So, to see the "real" list, you need something like:
$ aptitude -F '%p' search '~prequired !~E' -o pkgCacheGen::ForceEssential=',' 
-o Dir::Cache=/dev/null

(The second -o is needed to prevent libapt from using its binary caches
and forces it to reparse everything in memory; the first -o is the knob
defaulting to 'apt' if unset. And yes, it is really ',' and you probably
don't want to know why and just accept it as meaning empty list)


Now, suggesting that apt is not an integral part of a Debian system and
could henceforth be removed is of course heresy! The only thing saving
you vile heretics is apt's heavy involvement in the creation of these
chroots.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Accepted vim-youcompleteme 0+20191218+git9e2ab00+ds-1 (source) into unstable

2019-12-29 Thread David Kalnischkies
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Format: 1.8
Date: Sun, 29 Dec 2019 16:40:19 +0100
Source: vim-youcompleteme
Architecture: source
Version: 0+20191218+git9e2ab00+ds-1
Distribution: unstable
Urgency: medium
Maintainer: David Kalnischkies 
Changed-By: David Kalnischkies 
Changes:
 vim-youcompleteme (0+20191218+git9e2ab00+ds-1) unstable; urgency=medium
 .
   * Set myself as maintainer via one-year hijack (see #912690)
   * New upstream version 0+20191218+git9e2ab00
 - Advertise debian repack from git HEAD
 - Update d/copyright for 2019
 - Update patches
   * Bump Standards-Version (no change required)
   * Bump debhelper compat to 12
   * Use ycm-core namespace instead of valloric as upstream
   * Build-Depend on recent ycmd for core version 42
   * Remove pointless ancient x-python-version >= 3.5
Checksums-Sha1:
 c177a0ec5cb87b806797efdcf455dcb7fc883341 2261 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1.dsc
 66d86b34090f9f27efe537f3d25fd86d28403da7 209104 
vim-youcompleteme_0+20191218+git9e2ab00+ds.orig.tar.xz
 cd41adc70d1c2b7292a57c7f93a06b1e363f8c6e 8420 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1.debian.tar.xz
 8eb9a12c3f77f8cf63be155bf3bb6e7a702606ce 7272 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1_amd64.buildinfo
Checksums-Sha256:
 a158cf0cd5bc676ccbbbebe80cdce0293082020b2a2b624bf2b9889f58a68a56 2261 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1.dsc
 5832bb67026a5fa13ac00883e942e418cb59b944c8d39fd3c76c6ea638468791 209104 
vim-youcompleteme_0+20191218+git9e2ab00+ds.orig.tar.xz
 7c76194b485551481910b5922469262f7b67573d49afe2ab1709b92583fe0dd5 8420 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1.debian.tar.xz
 0d819a440c8591ecf91c4f83c2c560a2c654d927b1b186a329bea93f6e10b3ef 7272 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1_amd64.buildinfo
Files:
 8bda6476ef6cd661d2dd1ea467a7489b 2261 editors optional 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1.dsc
 44e36c868ce384a21e01f74d94763f17 209104 editors optional 
vim-youcompleteme_0+20191218+git9e2ab00+ds.orig.tar.xz
 dc006dac01c69a21efc8b5b4c9c00a72 8420 editors optional 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1.debian.tar.xz
 0a52abcae9270662cff375ebd473e0f5 7272 editors optional 
vim-youcompleteme_0+20191218+git9e2ab00+ds-1_amd64.buildinfo

-BEGIN PGP SIGNATURE-

iQJHBAEBCgAxFiEE5sn+Q4uCja/tn0GrMRvlz3HQeIMFAl4I0V8THGRvbmt1bHRA
ZGViaWFuLm9yZwAKCRAxG+XPcdB4g87jD/4sF7sNcNbXBR3cH5tjYlLI74p669Vk
S0gSGYseeA9E1g8Nlp1XjCXbYkASBncGieexYiHK/8S0RzS+tiNj+tWYus6ia9Z6
yQzLewOVh/nk3YU7+bhoGPjqFpIOkzlQ59t3K/edtUkmRNYnoo5zUoIUYNJDjfm1
M152LLO8aQkz44ZbA4tQ0OzRzNz4J2E9kVV/G9NRXwFqrWiyUuVrTJNP6M6wAwI8
2UsBpTRA3l3C57ncaGw51aUlxqCI1F6Chq/Gmtp2ho9XX9lNTQih37P2QEcJeihb
L/b+VonWnmyxVEcLE8+0H7VAvFX/Tl6Dw25jasO0rIii7S9++452/oe9JawyPLjk
x/t8jP1yvKlhs9tRCeV00uu4CSP4pXhE53WcXCjFbBM7J2OHyaXWVQTe5M7yXFS0
luKAKdH8jDQUuUBNLh9A49jQSxnhWyqZEsonLGitn5frovpLpW5410ZY1GuuZHeB
CZl+K/129JQYE8P1vC58UtOKMN4RyAaH59cSyzoa3aNwNGUcBT/HNYoO8xBD1ln9
9xf5h9Jl+8xyBxjIxmz3yAog8BF4Uu9bAXpp80lFYH0k7HJO+AjgRVyDecqQZaKg
J1JIcrqIeaWpwjP3fPAl4UaAOWC1Vw+V3qDC/8JH3i832d/jAyCWByQiWzHUVjQo
x5ZrP6ZkkVjNpg==
=yTqj
-END PGP SIGNATURE-



Accepted ycmd 0+20191222+git2771f6f+ds-1 (source) into unstable

2019-12-29 Thread David Kalnischkies
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Format: 1.8
Date: Sun, 29 Dec 2019 15:13:44 +0100
Source: ycmd
Architecture: source
Version: 0+20191222+git2771f6f+ds-1
Distribution: unstable
Urgency: medium
Maintainer: David Kalnischkies 
Changed-By: David Kalnischkies 
Closes: 912690 929987 947439
Changes:
 ycmd (0+20191222+git2771f6f+ds-1) unstable; urgency=medium
 .
   * Set myself as maintainer via one-year hijack (Closes: #912690)
   * New upstream version 0+20191222+git2771f6f+ds
 - Advertise debian repack from git HEAD with +ds
 - Add clangd testdata locations to CUDA copyright entry
 - Update d/copyright for 2019
 - Update d/patches, fixing tsserver support (Closes: #929987)
 - Update README.Debian with completer support state
   * Use local gopls installation if available for golang support
   * Use system-provided pybind11 now that it is new enough
   * Fix finding clang library based on provided llvm path (Closes: #947439)
   * Depend on clang-9 explicitly for now (see #947439)
   * Use ycm-core namespace instead of valloric as upstream
   * Set R³: no in debian/control
   * Bump Standards-Version (no change required)
   * Bump debhelper compat to 12
Checksums-Sha1:
 b48143b5ec9c5b3baf287ad8a71c48b828e779f2 2114 
ycmd_0+20191222+git2771f6f+ds-1.dsc
 014459cee669269ff0820946ba6205a88f161bd9 1973943 
ycmd_0+20191222+git2771f6f+ds.orig.tar.gz
 35392dc7e0b9e6667c422a05f4ed28163789a3f0 14176 
ycmd_0+20191222+git2771f6f+ds-1.debian.tar.xz
 189571b0d02113878bb0acff58ff2d68c20b4a42 8629 
ycmd_0+20191222+git2771f6f+ds-1_amd64.buildinfo
Checksums-Sha256:
 9cd3e1625324507810464300f8722dac82a3197f5757cf69e58de56e520ad28b 2114 
ycmd_0+20191222+git2771f6f+ds-1.dsc
 994e694053af27214678e6e516dbc53c91d0048b93ebd714ed44ca238a55 1973943 
ycmd_0+20191222+git2771f6f+ds.orig.tar.gz
 2232c03b5a1ae424903f8770672cd68cbd93a17ca1ecd00efe306e9cfbeb59aa 14176 
ycmd_0+20191222+git2771f6f+ds-1.debian.tar.xz
 a07b9c6432428a7c153080630b370adff1e64d2658c1a57b53c196cb8a72786d 8629 
ycmd_0+20191222+git2771f6f+ds-1_amd64.buildinfo
Files:
 97bf62a23cca11e0761e0c1d2712c83d 2114 devel optional 
ycmd_0+20191222+git2771f6f+ds-1.dsc
 83477d2cd6518a36f3a0bbb94b16675e 1973943 devel optional 
ycmd_0+20191222+git2771f6f+ds.orig.tar.gz
 83792837aa13b85b24b45204d01d7058 14176 devel optional 
ycmd_0+20191222+git2771f6f+ds-1.debian.tar.xz
 099fcfee15df0d21f9df7311c6c8ff5a 8629 devel optional 
ycmd_0+20191222+git2771f6f+ds-1_amd64.buildinfo

-BEGIN PGP SIGNATURE-

iQJHBAEBCgAxFiEE5sn+Q4uCja/tn0GrMRvlz3HQeIMFAl4IvUYTHGRvbmt1bHRA
ZGViaWFuLm9yZwAKCRAxG+XPcdB4g3hiEACaPSFMSV3F4lOv2i1y2S9WO/80GWNk
ZcnFtxk261GOcF6Krghoinxb4/vumLD2sUx1eVEtGVDDoCWx6OU60VF6tYY/MTMU
5akfiWDRtWZyHfO0QeCeOwJ+bTdU4bsmqX2HHh1oN9OsxKwQ6Pmv5r++nkH0E51m
oNHEubDyJ30ml6UQMGFXY/2rQlDUwkeXftC4s48T7S4hyrT0dzw3P7O5LUYHcQPn
rTL5Pkc1nQREm4O4mgpKWErP3moEQbXKZJNxuD4dxJyhdszTIGsHSmX0T3KJX/hi
VNRmeOZRyV+x0yei27q5i9yLpFzqa5chRuq9Jd5MjZQmY/gprho4iQJIgB4kyRr+
9fhkjn/ZuKI5ZlcVqs141IlAN7+D/LFvCx/UdU6qUgUYbPIASDjgr9OacT9RwoNT
fEKVVcuqAOQPYxrZQKXQmAu8JRAdRwosNYUGv4CgVQtcO5HXCYJ2yWWNhkB+dqhU
j24tRoluSZjgRWPatxZBW8FHFFZXx5mJusBh0wtF0YC29ksNXGdaQH075oqO7x9s
YdxJ6aWS92rKgWP/63bykJ+Rb1KUnlcb586BbDp7xpPehQ4ubbqbKBPkIfe9OW6L
0efXkS3fHexIK6AWJz8mfYXn9zsBXDdp3KL2garovQh0CV4FPNioa6SsHegK5uq9
PcCZFiGKcUZqaQ==
=MG3H
-END PGP SIGNATURE-



Re: unsigned repositories

2019-08-05 Thread David Kalnischkies
On Mon, Jul 29, 2019 at 08:01:47AM +0100, Simon McVittie wrote:
> sbuild also uses aptitude instead of apt (for its more-backports-friendly
> resolver) in some configurations, and that doesn't have --with-source.

JFTR: aptitude (and all other libapt-based frontends) can make use of
that feature via the config option APT::Sources::With; the commandline
flag is just syntactic sugar.

So, as aptitude has -o (I think), you could e.g. say
   -o APT::Sources::With::=/path/to/file.deb
or if all else fails a config file of course.
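
(An untested sketch of such a config file entry – the path is made up:
   APT::Sources::With { "/path/to/file.deb"; };
dropped into e.g. /etc/apt/apt.conf.d/99with-source.)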


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: unsigned repositories

2019-08-05 Thread David Kalnischkies
On Mon, Jul 29, 2019 at 10:53:45AM +0200, Johannes Schauer wrote:
> squeeze ended, we finally were able to remove a few hundred lines of code from

Julian is hoping that removing support for unsigned repositories would
do the same for us with the added benefit that for apt these lines are
security related … 


So far all usecases mentioned here seem to be local repositories
though. Nobody seems to be pulling unsigned repositories over the
network [for good reasons]. So perhaps we can agree on dropping support
for unsigned repositories for everything except copy/file sources?

The other thing is repositories without a Release file, which seems to
be something used (legally) by the same class of repositories only, too.
That is in my opinion the more useful drop as the logic to decide if
a file can be acquired with(out) hashes or not is very annoying and
would probably benefit a lot from an "if not-local: return must-hashes"


These should at least help with the security aspect even if I am not sure
yet how that could be refactored to work [but that code area needs lots of
love anyhow, as in the last years I was just busy adding jetpacks and
nitro-injection to this horse-drawn vehicle to keep it afloat, would be
nice if we could retire at least the horses eventually.].


> > Both sbuild and autopkgtest are designed to target multiple Debian releases
> > including the oldest release that still attracts uploads (currently jessie,
> > for LTS), so relying on "apt-get install --with-source" is undesirable.
> > sbuild also uses aptitude instead of apt (for its more-backports-friendly
> > resolver) in some configurations, and that doesn't have --with-source.

Well, we are now building the tools we will be using in ten years in
this really old and clunky bullseye LTS release rushing for a time
machine so that we will would have had done this or that. Let's pretend
for a minute we could avoid that (or: … will be could have had? …).

What is it what you need? Sure, a local repository works, but that
sounds painful and clunky to setup and like a workaround already, so in
effect you don't like it and we don't like it either, it just happens to
work so-so for both of us for the time being.


> Yes. In sbuild we also cannot use other apt features like "apt-get build-dep"
> because sbuild allows one to mangle the build dependencies, so it works with
> dummy packages. So sbuild will have to keep creating its own repository.

Julian did "apt satisfy" recently and build-dep supports dsc files as
input, so naively speaking, could sbuild just write a dsc file the same
way it is now writing a Sources file? Also, --with-source actually
allows adding Packages/Sources files as well; I use them for
simulations only, but in theory they should work "for real", too.
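
(Rough and untested sketch of what I mean – file names made up:
   apt-get build-dep ./foo_1.0-1.dsc
   apt-get install --with-source=./Packages bar
The first resolves build dependencies straight from a dsc file, the
second injects a Packages file as an additional source for that single
call – which, as said, I have only used for simulations so far.)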


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Apt-secure and upgrading to bullseye

2019-07-11 Thread David Kalnischkies

As I have noted in my previous reply there are VARIOUS bugreports dealing
with different aspects of this, so rehashing it all lumped together on
d-d@ is not very productive and I would like to advise anyone seriously
interested in this to contribute to the relevant one instead.

And the rest can be happy as they were asking for "testing" and they got
to test something and the gathered test results are now being worked on…


On Wed, Jul 10, 2019 at 11:47:28PM -0400, The Wanderer wrote:
> For myself, no, a shorter/simplified version of the release notes
> probably wouldn't have made me more likely to read them.

Clients producing these errors can optionally also print a pointer to
the release notes btw, just in case that would nudge anyone to give them
a read; it was just not used for buster for now.

N: More information about this can be found online in the Release notes at: 
https://example.org/future


> it using apt-get - since that's my preferred client, and the idea of
> switching clients just for a single task like this strikes me as
> intuitively wrong somehow. In fact, it's possible that I *did* do that;

JFTR: apt and apt-get use the very same code for "update" via libapt.
In fact all package managers in Debian do, be it aptitude, synaptic or
your preferred software center [okay, there are exceptions, but if you
happen to use one you will know that].

As such you can mix and match apt clients as much as you like. The
difference is in the presentation: "apt" tries to be a little friendlier
in interactive usage while "apt-get" sticks to 'what it always did' as
much as it can without negative effects [= big bugs and security tend to
be the only reason for it changing drastically]. As it is usual for apt
clients there is an option for basically everything though. Setting the
options listed by the following command for apt-get as well will make it
behave as if it were apt: apt-config dump --no-empty Binary::apt

Binary::apt::APT::Get::Update::InteractiveReleaseInfoChanges "1"; is btw
the one responsible for the interactive question in update. APT is really
not as much magic as people believe… (but I might be biased)
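
(So, as an untested sketch, setting it directly for apt-get should give
you the same interactive question apt asks:
   apt-get update -o APT::Get::Update::InteractiveReleaseInfoChanges=true
and the other options from that dump can be set the same way.)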


> different clients, earlier in this thread. IMO, if the release notes
> need to document any of them, they should document all - or, if it's

As an example, the current plan is to make the switch over for Suite
changes automatic – if some preconditions are satisfied. The discussion
about that isn't hard to find, but here you go: #931566. You are welcome
to add any good ideas not already present (that hopefully shows also
that this is a tiny bit more complex than it looks at first).


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Apt-secure and upgrading to bullseye

2019-07-10 Thread David Kalnischkies
Hi,

On Wed, Jul 10, 2019 at 12:31:51PM +0100, Julian Gilbey wrote:
> Hooray, buster's released!  Congrats to all!

Indeed! ☺

> E: Repository 'http://ftp.uk.debian.org/debian testing InRelease' changed its 
> 'Codename' value from 'buster' to 'bullseye'
> N: This must be accepted explicitly before updates for this repository can be 
> applied. See apt-secure(8) manpage for details.

There are various reports about that against apt/aptitude, so I am not
feeling like adding lots of duplicated content now, but the gist is:

Either use "apt update" (it will ask an interactive question) or
"apt-get update --allow-releaseinfo-change" (see apt-get manpage) or
[least preferred option] set the config option
Acquire::AllowReleaseInfoChange for basically any apt-based client.
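
(As a concrete sketch for the last one – file name made up, untested:
   echo 'Acquire::AllowReleaseInfoChange "true";' > /etc/apt/apt.conf.d/99releaseinfochange
but as said, the interactive "apt update" or the explicit flag are the
nicer choices.)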


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: buster backports question/status

2019-07-10 Thread David Kalnischkies
On Wed, Jul 10, 2019 at 11:42:40AM +0200, Julien Cristau wrote:
> buster-backports exists.  AIUI this is an apt bug when dealing with
> empty repos.  (Although why are you setting default-release to
> buster-backports?)

JFTR: Yes, it is an apt "bug" in that empty repositories do not create
the structures apt checks later on to decide if a given target-release is
sensible – that has been a feature since 0.8.15.3 (2011) btw → #407511.

I think we will end up creating the structures again for other reasons
so that error will disappear for this edge case – but I have to second
the question as that seems wrong (which is why I put "bug" in quotes).
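
(For reference, an untested sketch: instead of a global
   APT::Default-Release "buster-backports";
the more common approach is to leave the default alone and pull
individual packages in via
   apt install -t buster-backports foo
with 'foo' being a made-up package name of course.)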


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Accepted vim-youcompleteme 0+20190211+gitcbaf813-0.1 (source) into unstable

2019-02-14 Thread David Kalnischkies
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Format: 1.8
Date: Thu, 14 Feb 2019 11:38:37 +0100
Source: vim-youcompleteme
Architecture: source
Version: 0+20190211+gitcbaf813-0.1
Distribution: unstable
Urgency: medium
Maintainer: Onur Aslan 
Changed-By: David Kalnischkies 
Changes:
 vim-youcompleteme (0+20190211+gitcbaf813-0.1) unstable; urgency=medium
 .
   * Non-maintainer upload.
   * New upstream version 0+20190211+gitcbaf813
 - Refresh patches
 - Drop (Build-)Depends on python3-frozendict
 - Update sign place regex pattern for vim >= 8.1.0614
   * Bump Standards-Version to 4.3.0 (no changes)
Checksums-Sha1:
 a38ab2335e3f4e7cf42a03d03bb9053018ee0efb 2315 
vim-youcompleteme_0+20190211+gitcbaf813-0.1.dsc
 bb3315921f6db7cf1b96023e9fbbc170da1a4833 187816 
vim-youcompleteme_0+20190211+gitcbaf813.orig.tar.xz
 de7476b3bbdd222fba3cb6f008837de5de6eb44b 8124 
vim-youcompleteme_0+20190211+gitcbaf813-0.1.debian.tar.xz
 e75d530b45272aa39622fa189f062810d6410ccb 7069 
vim-youcompleteme_0+20190211+gitcbaf813-0.1_amd64.buildinfo
Checksums-Sha256:
 06c76a3999e39215863cdf2cd27d614d4bca34df686423cb908b69e93d32dfe3 2315 
vim-youcompleteme_0+20190211+gitcbaf813-0.1.dsc
 6b89a0d627d791c94dbc2844d667935afaf5697994aa2db4c66537daa9e8728d 187816 
vim-youcompleteme_0+20190211+gitcbaf813.orig.tar.xz
 263217af050e289c6cf0a7ad168f0e3c1b30fe8cd7b7f558a924e8aea06732a4 8124 
vim-youcompleteme_0+20190211+gitcbaf813-0.1.debian.tar.xz
 ab9d7ff5b65a5bb1dd252119ac051c46785b004abb71a19f9efa4173182d6280 7069 
vim-youcompleteme_0+20190211+gitcbaf813-0.1_amd64.buildinfo
Files:
 5f157808f791b4c915c422f9e200b075 2315 editors optional 
vim-youcompleteme_0+20190211+gitcbaf813-0.1.dsc
 c8eab9fcf9e85efe8e38386e1e1fcfcf 187816 editors optional 
vim-youcompleteme_0+20190211+gitcbaf813.orig.tar.xz
 013435f398a6694da299f0928d3c461d 8124 editors optional 
vim-youcompleteme_0+20190211+gitcbaf813-0.1.debian.tar.xz
 dcc0bb99ec0d4880641eaa8742d8287a 7069 editors optional 
vim-youcompleteme_0+20190211+gitcbaf813-0.1_amd64.buildinfo

-BEGIN PGP SIGNATURE-

iQJHBAEBCgAxFiEE5sn+Q4uCja/tn0GrMRvlz3HQeIMFAlxlVRgTHGRvbmt1bHRA
ZGViaWFuLm9yZwAKCRAxG+XPcdB4g0bJD/9l1gJYv1fvkRwV4NLUO1EwrnmIYPQ6
GuwocftDR6QeRDZRTInO13nSmMbxoUJd8wfWq7dGd0L0/7Iqr4w0+gYzqNDCcmWE
yok/15XTcteZsfxCLinC/teljBB+6lONI0oXg2cOTmhalFEPBGDbRYxecvtHbGgL
EpdWTOk2BtrzAnOzXE/FiSOz1NZVYlt5HtMY1RTvQos/9WmNSOeJVVucQL4zdOYT
g1lmoIPUhbGyDs+nmuUHkk1VHaaDXDRWkfM5ddn5Gzr7lOF2CIhleDcfTfjQiNOR
j5W4/3L0YJ+HQwWVq2Bt6ZD7dQ/XfjRWC0umotGAnydfOGNbVxXYy4e1FyzWVVZT
cuGeD9iNzh0zoDp98T1hOqaIUOY0mrBuraCaliDSbBK/KqLGXlY6dUY/nnPAMBEE
gIWTzR2goDnBkefkAYG2a7JHZfCmTliM51d8cudb3hKr3pb6eu9zu3xML2GzIrOt
akBMJBXrLuQs7/kqS0S+PUez07lN+72fMkY10GZMTCI4b22tc5hyPI+++8RUl0bY
wMkxFCjdll3wAjOiV00gdCwBWQB+woZYypQr7nLcipSPQdPd8sQ48TMcNImopNm9
9Sh17Wie9xe6kQzCQy/fo2oajRrQ2BKMXHyrrfNWkyMOWS+uHA37MlxIXVc4ETUw
aU44cbvYprIS+Q==
=DMcU
-END PGP SIGNATURE-



Accepted pyhamcrest 1.8.0-1.1 (source) into unstable

2019-01-09 Thread David Kalnischkies
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Format: 1.8
Date: Mon, 07 Jan 2019 18:49:27 +0100
Source: pyhamcrest
Binary: python-hamcrest python3-hamcrest
Architecture: source
Version: 1.8.0-1.1
Distribution: unstable
Urgency: medium
Maintainer: David Villa Alises 
Changed-By: David Kalnischkies 
Description:
 python-hamcrest - Hamcrest framework for matcher objects (Python 2)
 python3-hamcrest - Hamcrest framework for matcher objects (Python 3)
Closes: 917660
Changes:
 pyhamcrest (1.8.0-1.1) unstable; urgency=medium
 .
   [ David Kalnischkies ]
   * No-change non-maintainer upload to have python3-hamcrest rebuild without
 the use of deprecated collections ABI usage causing FTBFS in at least
 src:vim-youcompleteme (Closes: #917660)
 .
   [ Ondřej Nový ]
   * Fixed VCS URL (https)
   * d/control: Set Vcs-* to salsa.debian.org
   * Convert git repository from git-dpm to gbp layout
 .
   [ Piotr Ożarowski ]
   * Add dh-python to Build-Depends
Checksums-Sha1:
 60697fced5404a343153302d4472b53c44a865a5 2169 pyhamcrest_1.8.0-1.1.dsc
 39a830bc957a78865b662192299842d0c6ab5f88 3248 
pyhamcrest_1.8.0-1.1.debian.tar.xz
 800e9a940f92dc0f3608006c6743b1f653400f56 6771 
pyhamcrest_1.8.0-1.1_amd64.buildinfo
Checksums-Sha256:
 55cc06ad6911d5c5b6487761a1f95ec09d9ced585ad464c3fafb35dee91941ea 2169 
pyhamcrest_1.8.0-1.1.dsc
 6425574c1dd73ef9991826675dd379ea5f15b8b0058c3273af314ecb3946943b 3248 
pyhamcrest_1.8.0-1.1.debian.tar.xz
 07e170d32f39ff4e352b8e6ab81321542a7b3a757d4147db5c7260dc4e62c1b4 6771 
pyhamcrest_1.8.0-1.1_amd64.buildinfo
Files:
 56780deae18723c6c6b4767533eee681 2169 python optional pyhamcrest_1.8.0-1.1.dsc
 350d3a08b614f457c3b1ef1124460c46 3248 python optional 
pyhamcrest_1.8.0-1.1.debian.tar.xz
 39205d44b296b298445444a024e9dc11 6771 python optional 
pyhamcrest_1.8.0-1.1_amd64.buildinfo

-BEGIN PGP SIGNATURE-

iQJHBAEBCgAxFiEE5sn+Q4uCja/tn0GrMRvlz3HQeIMFAlwzuE8THGRvbmt1bHRA
ZGViaWFuLm9yZwAKCRAxG+XPcdB4g9IQD/sHmpsn77ZXvBNoNYQF7OuNthVII3x0
ua1wiCJZ+U1BjDF5mc0P3rzLSNyy+F+bB0RA+eeZDN9gnZ5OF59L0MvBU48yOge9
dnVcGrh4LOxUwfXDrTdg7hVNXNl+RbjLugwrxdcDVCUchernyuNc8/2qsm6LPr2w
kOcm18RU4I/bMefAOysxiph9rDk5EWwU4Hw2fdz5wihnGDV0x8VzofZb8DhyIp9B
xvRg42ZheP1QtfW2rGHe66HlpaAiEgB52EjRy19EtUPSWujqsg+tqvuOf6LZJ5j8
PwYE4XAoSg2HdieAUAgeHB3lLC8uo7J28tv1GOxUstAcFFD76RJgTtgoRMXY191n
HjS+3dknGPEqYz/hQGgy3TQ06pz9tFsG1ZtUxyYimgi9AIwZlswhiDLE+kHrbmXI
AaWJ1fSi7seB+v4FwpY2LcjYIe7qJQSM8tjZDbhZeKgzUAVWW442/YwSuu0L1yJr
5TiRMAAEjbhBBPViHmb+2arpBX6o/3UhUr0abs18DSCTDjo4UWoybut6m05iIHlw
2vckag1BMF4vpUZjVGCkV1aLeSKasz0PLe81QDv23vCFx7sIk0sx6MbxZUMUOtV1
k0x57CHmi2bUyvsd/zoRu9pn+oHLdGYxqBY1DHDx/wyd01pC3tVfqM63iyg8WGvT
00v48FsMZdfHJA==
=XJzV
-END PGP SIGNATURE-



Accepted vim-youcompleteme 0+20181101+gitfaa019a-0.1 (source) into unstable

2018-11-02 Thread David Kalnischkies
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Format: 1.8
Date: Sat, 03 Nov 2018 01:35:40 +0100
Source: vim-youcompleteme
Binary: vim-youcompleteme
Architecture: source
Version: 0+20181101+gitfaa019a-0.1
Distribution: unstable
Urgency: medium
Maintainer: Onur Aslan 
Changed-By: David Kalnischkies 
Description:
 vim-youcompleteme - fast, as-you-type, fuzzy-search code completion engine for 
Vim
Closes: 912030
Changes:
 vim-youcompleteme (0+20181101+gitfaa019a-0.1) unstable; urgency=medium
 .
   * Non-maintainer upload.
 .
   [ Ondřej Nový ]
   * d/copyright: Use https protocol in Format field
   * d/control: Set Vcs-* to salsa.debian.org
 .
   [ Sylvestre Ledru ]
   * New upstream release
 - Refresh path-to-server-script.patch
 .
   [ David Kalnischkies ]
   * New upstream version 0+20181101+gitfaa019a
 - Use uscan to generate tarball from upstream git HEAD
 - Refresh path-to-server-script.patch
   * Depend on ycmd via ycmd-core-version provides (Closes: #912030)
   * Don't use dpkg-parsechangelog directly in debian/rules
   * Switch from debhelper 9 to 11
   * Bump Standards-Version to 4.2.1 (no changes)
   * Run nose-based tests at buildtime
   * Use https for upstream homepage
   * Set R³: no in debian/control
Checksums-Sha1:
 56512d15f545d02a8cf84038d5c6586502d2fca7 2335 
vim-youcompleteme_0+20181101+gitfaa019a-0.1.dsc
 273930d5c0b708bf53bb77df75c65e13ddc0cd6c 187420 
vim-youcompleteme_0+20181101+gitfaa019a.orig.tar.xz
 895ccfb0f533e6bd0531c08d104b123bc5d90a01 7824 
vim-youcompleteme_0+20181101+gitfaa019a-0.1.debian.tar.xz
 1ce94008e643fcc36725462a0228ab25ef2f9979 7167 
vim-youcompleteme_0+20181101+gitfaa019a-0.1_amd64.buildinfo
Checksums-Sha256:
 298c3136d5b8d7568f54a9d56e0b75889033d8f22e22d61c42d93813cb248c56 2335 
vim-youcompleteme_0+20181101+gitfaa019a-0.1.dsc
 048613aef55db9bcc3bccf7c6db38f3d09a76e068328b1d84b8a8f4e02d0ac39 187420 
vim-youcompleteme_0+20181101+gitfaa019a.orig.tar.xz
 183db1ef9ff5a5e99a696beb69205df6810e860907a8e14317c2e027e9418188 7824 
vim-youcompleteme_0+20181101+gitfaa019a-0.1.debian.tar.xz
 333e4a36a28a7605e19cf4ee258a36c16433a96a89519a8b4fd6d7ae54408c27 7167 
vim-youcompleteme_0+20181101+gitfaa019a-0.1_amd64.buildinfo
Files:
 85ddea5f781683ec9202058c52295dad 2335 editors optional 
vim-youcompleteme_0+20181101+gitfaa019a-0.1.dsc
 25155d1b63bb2053877701c9aa36dbba 187420 editors optional 
vim-youcompleteme_0+20181101+gitfaa019a.orig.tar.xz
 b439bc8fc3023b5068d4b4db3225f99e 7824 editors optional 
vim-youcompleteme_0+20181101+gitfaa019a-0.1.debian.tar.xz
 4e39643477b88f168b00c71dbacdca6b 7167 editors optional 
vim-youcompleteme_0+20181101+gitfaa019a-0.1_amd64.buildinfo

-BEGIN PGP SIGNATURE-

iQJHBAEBCgAxFiEE5sn+Q4uCja/tn0GrMRvlz3HQeIMFAlvc8IITHGRvbmt1bHRA
ZGViaWFuLm9yZwAKCRAxG+XPcdB4g41UD/927H4IXrJ7i4bPUKPufe374r5mHPpf
nm2OwdOGz4b+XsTnjGZ78So/If+g9EtFYiP8w1ZSWN18GSYBIAGiQcD+0B2oJ4+A
NIPFo2os68cDMFmQ0wjy9+NZ2uie4DXhfygcaVsCIUHesol6Grg3yEa9GcSG8gh6
T6ju9a12hUuLn2entfR4OYN0Ye4hrAK2S4v7bLtGv1GPWtpeY58cwkgrMjJ+ukiT
39drmszR25kNLuA3KqqyeKJnZxTA0wT67Wil1d+oIW1EvLfWGQR8+dFXVyhiCTvE
S6OqnRJOM4oAJvBA8LmQEXCr2WXDMkGD47hFQ3OTN7weuOkXtQPP3PRCVB6X4B5/
9x3aOXgmqAs70/4HmLT4z2M/YKkbZSQE/29dgKNGK8RLeVoxG0myeK02JfxayJUb
PeqXUHcn6P67sM+XBXLkNPu2yMPIh7koj64CVqxa/x8ge2e4FpCx9TgS/WuTQMOT
NfsFz7yq7UqdKju1IAJ0NeBYL9w26JX9ZIcJFzj8stWC3r5Ot28pSckSCSyp39nQ
9oTtRLoNmIKeZfxcZc4lnVpWOEaUJyzPFLpriwwAj7Wd5aoS2Umys8Gku4/EmV2N
cFZBiooszlgeCp4SwP19Rj/udYlz9Hwat/mVOOmPYx1to8ZhtUyYnppU75UmB7PU
4EeVFnurMChfiQ==
=NaOR
-END PGP SIGNATURE-



Accepted ycmd 0+20181101+git600f54d-0.1 (source) into unstable

2018-11-02 Thread David Kalnischkies
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Format: 1.8
Date: Fri, 02 Nov 2018 19:57:14 +0100
Source: ycmd
Binary: ycmd
Architecture: source
Version: 0+20181101+git600f54d-0.1
Distribution: unstable
Urgency: medium
Maintainer: Onur Aslan 
Changed-By: David Kalnischkies 
Description:
 ycmd   - code-completion & comprehension server
Closes: 849239
Changes:
 ycmd (0+20181101+git600f54d-0.1) unstable; urgency=medium
 .
   * Non-maintainer upload.
   * New upstream version 0+20181101+git600f54d
 - The release intended to be uploaded previously instead of
   a renamed tarball of a previous version
 - Update debian/copyright file
 - Use uscan + debian/watch for git HEAD for source tarball
 - Don't include prebuilt api doc in tarball
 - Don't exclude the examples/ folder from the tarball
 - Run without non-existent third_party directory
   * Switch from debhelper 9 to 11
   * Drop references to unused JediHTTP
   * Remove unused libboost-python-dev build-dependency
   * Don't use dpkg-parsechangelog directly in debian/rules
   * Drop ancient X-Python-Version field
   * Run GTest-based tests at build-time
   * Drop runtime-deps from build-deps for now
   * Use always the default clang version for building
   * Provide ycmd-core-version for more relaxed dependencies
   * Build against python3 (Closes: #849239)
Checksums-Sha1:
 ec3ab69ae9f391b017ae6aa769b2698a95d397ff 2144 
ycmd_0+20181101+git600f54d-0.1.dsc
 40cf6d385dfab0f80242380ca2acc6144ddaaaca 1266373 
ycmd_0+20181101+git600f54d.orig.tar.gz
 0badc6d88323cd30e4382b098e6a79cf236112b8 13168 
ycmd_0+20181101+git600f54d-0.1.debian.tar.xz
 eb5bad271e2292817905ac91feb9b11ccd67b84d 8922 
ycmd_0+20181101+git600f54d-0.1_amd64.buildinfo
Checksums-Sha256:
 63c075b99131a04ea4d43637d422da4bd980e6162350c5af2b88e95b50b69b37 2144 
ycmd_0+20181101+git600f54d-0.1.dsc
 f7e5898aa4d289b132ad82dfffd55cbaee97805632b0a775d63020af68a73fe9 1266373 
ycmd_0+20181101+git600f54d.orig.tar.gz
 402b40a13ab390a6367c88eb2504cab6df98d435ff073871f29928c78772d716 13168 
ycmd_0+20181101+git600f54d-0.1.debian.tar.xz
 4dd474876fe1adcd5077a9bbd2ec5d0395f44844a23e630fd7d7d820f82fe98f 8922 
ycmd_0+20181101+git600f54d-0.1_amd64.buildinfo
Files:
 8a0a19d81bf3de55a0f4c96071faec21 2144 devel optional 
ycmd_0+20181101+git600f54d-0.1.dsc
 7d722032b29288640eea00c3b7422046 1266373 devel optional 
ycmd_0+20181101+git600f54d.orig.tar.gz
 c054cda7b1f2cabb8b9defc6bab46e8d 13168 devel optional 
ycmd_0+20181101+git600f54d-0.1.debian.tar.xz
 6834e20c531c8176fa5aec26b239c8fb 8922 devel optional 
ycmd_0+20181101+git600f54d-0.1_amd64.buildinfo

-BEGIN PGP SIGNATURE-

iQJHBAEBCgAxFiEE5sn+Q4uCja/tn0GrMRvlz3HQeIMFAlvcqrwTHGRvbmt1bHRA
ZGViaWFuLm9yZwAKCRAxG+XPcdB4g4JDD/wJnABfYbwCeD8urYJnIQj+DXkpKkOe
SKdqW6nfbzv58DEswmfn9p3Rkd/lNtMqeHHgQRYXODFvHeaYBJ/XNyxDCHhxj3LI
DHXpdcz+4fnXx+sf3EzvDfQPMzFOMOfy0Rr4HPHTVB2XfR/HTtdD0p6X4iOaks5q
vQBo6dp7bVShj1BMsgaIo/cikN/eQW7FBlZ3LAnjC/RkGA4UQwFHF2B0hqbzDR/w
kb709vlgZ4nHaEC4Gnzc2o2Sci0Ma2TTfElcTXBzFYik2bABfOj+Y3EwYPloGKst
o6r6GsMMUPhAOs/kHVXBXK0WnhT9PB8p7H91CLyfwkdcPo+cOYv0EzMzTQwrOVrr
AoZMVuk3xMpNG99Qy5U8eLq5hJ9olXJLoHC+9f+fXsQumC/Rqg72/r7dFpJ6TH5T
gLywskVgZ2Aup4ePYmT0FTrUgnGtNM8zSlBWVGX76qFqfaI/AKgQ79juRx/k5ghU
OMAcCgaZrXKUw9j913MPRNVh7F6DpETVboyQiIHZnHGYKKCvxdQBDv3yXFaTqz6i
KuNrZp+JGcXHgJ6ItLGI7l73cV5n+OQNZeJqDnzV/0t37hFm5oep2f+qKFIHLm0s
gnXt77NPxJN/4IYaiyfu039zUEZxpyjokP/M6a9aVcysUSjhw+gFjZFKKa8RKoDT
3L/j++tc3Suw4g==
=DdGM
-END PGP SIGNATURE-



Re: Reducing the attack surface caused by Berkeley DB...

2018-01-27 Thread David Kalnischkies
On Fri, Jan 26, 2018 at 12:24:26PM +0100, Miriam Ruiz wrote:
> 2018-01-26 12:02 GMT+01:00 Colin Watson <cjwat...@debian.org>:
> >> Finding someone performing the daunting task of actually switching code,
> >> documentation and existing databases over on the other hand… I at least
> >> don't see me enthusiastically raising my arm crying "let me, let me, …".
> >
> > I don't blame you!
> 
> Might that be a candidate project for GSOC?

I debated with myself if I should add a comment about gsoc/outreach;
the "don't mention it" faction won due to length, so let me give the
opposition a chance to comment now:

I don't think so. The size might be alright, but the task itself…
The tasks should usually be something the mentors could and would do
themselves (in less time), but propose them as interesting projects
instead to trap unsuspecting students into not only completing their
project, but hopefully sticking around now that they know the drill.

This task on the other hand… the potential mentor isn't terribly excited
about it: Big warning sign. The bigger problem though is that it is
a dead end: As a student you will learn stuff about the now obsolete
libdb, you are working on apt-ftparchive which is on life support
(personally, I only touch it as testing apt is just easier if it comes
with its own archive tool; for the same reason we have a libapt-based
webserver… it tends to be hard to convince other projects to implement
broken behaviour so you can test against it) and after the project is
done your knowledge isn't applicable to any other apt part…

The visibility of your task isn't that great either: I did MultiArch in
APT years ago and people are still complaining about it! That project on
the other hand… not a lot of users – and the few you have will either
never notice that you did something or stumble over a bug and complain
that you did something – usually with a ~2 years delay as basically
nobody is running a big archive on a Debian unstable box (no idea why…).
For newbie motivation reasons you want the exact opposite.

So that task feels more like: Nobody wants to do it, so let's convince
Google/our sponsors to pay a GSoC/Outreach student to do it. (S)he won't
like it, but we got the job done – other orgs do this, but I don't want
Debian to do it, even if it has short-term benefits (for me/us). If
someone has money to burn we can probably find someone to do the job,
we don't have to waste our perhaps once in a lifetime chance to make
a student a longtime open source contributor with this task.



I guess you can kill both birds with one stone if you go for a "write
libdb-api-compatibility layer for your favorite other db", but that
wouldn't really be a Debian task anymore. Without even thinking a split-
second about the feasibility of this, that might be the more realistic
way of deprecating libdb as I would imagine that most tools still using
it aren't using it because it's so great, but because the code exists and
nobody feels like changing it.


To finish the viewpoint of apt-ftparchive: I guess by the time the
libdb removal is imminent we will just remove the database support and be
done. apt-ftparchive is hardly the only tool capable of producing an
archive and most of these tools have a focused upstream… the apt client
needed a server to start rolling, but nowadays this server side hustle
is more a brake than an accelerator.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Reducing the attack surface caused by Berkeley DB...

2018-01-27 Thread David Kalnischkies
On Fri, Jan 26, 2018 at 11:49:41PM +0100, Lionel Debroux wrote:
> > Anyway, the only util in apt-utils making use of libdb is
> > apt-ftparchive which a) isn't used much in Debian – but by some
> > derivatives¹ and b) can operate without the backing of a db, but you
> > don't want to run a large archive without it.
>
> Could that program conceivably be split to another package ?

Not really. apt-utils includes three tools: apt-extracttemplates,
apt-ftparchive and apt-sortpkgs. The latter two should be together and
the first one shouldn't even exist… it exists only temporarily as
a stopgap as long as there is no dpkg tool (which would be the more
natural place for extracting files from a deb file)[0] for this task.
In other words: We realized only later that its existence is permanent,
like with all good temporary solutions.

Splitting packages now means that the split will take effect in bullseye
at the earliest… (buster needs at least a recommends for upgraders, likely
depends as there are tools like local-apt-repository depending on
apt-utils to get apt-ftparchive) that might be a bit too far off for
your case, especially as we haven't really gained anything by it. We
just (literally) moved the problem.

(The other aspects I will hopefully answer with another mail in the
gsoc/outreach subthread)


Best regards

David Kalnischkies

[0] https://wiki.debian.org/Teams/Dpkg/RoadMap


signature.asc
Description: PGP signature


Re: Reducing the attack surface caused by Berkeley DB...

2018-01-26 Thread David Kalnischkies
On Thu, Jan 25, 2018 at 11:59:06PM +0100, Lionel Debroux wrote:
> In practice, Berkeley DB is a core component of most *nix distros.
> Debian popcon indicates that libdb5.3 is installed on ~80% of the
> computers which report to popcon.

I wonder how much of this ~80% is only due to having installed apt-utils
(99.83%) for apt-extracttemplates (which is responsible for many of the
debconf questions asked before the installation process starts).

Anyway, the only util in apt-utils making use of libdb is apt-ftparchive
which a) isn't used much in Debian – but by some derivatives¹ and b) can
operate without the backing of a db, but you don't want to run a large
archive without it.

Famous last words, but I doubt there is anything libdb does for
ftparchive which couldn't be done by any other database, so switching
shouldn't be too hard database-wise…

Finding someone performing the daunting task of actually switching code,
documentation and existing databases over on the other hand… I at least
don't see me enthusiastically raising my arm crying "let me, let me, …".


Best regards

David Kalnischkies

¹ The Census has a field for "Archive tool", but that isn't filled by
everyone in the census. The biggest fish might be launchpad/Ubuntu.


signature.asc
Description: PGP signature


Accepted apt-transport-tor 0.4 (source) into unstable

2018-01-22 Thread David Kalnischkies
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Format: 1.8
Date: Mon, 22 Jan 2018 17:36:38 +0100
Source: apt-transport-tor
Binary: apt-transport-tor
Architecture: source
Version: 0.4
Distribution: unstable
Urgency: medium
Maintainer: APT Development Team <de...@lists.debian.org>
Changed-By: David Kalnischkies <donk...@debian.org>
Description:
 apt-transport-tor - APT transport for anonymous package downloads via Tor
Changes:
 apt-transport-tor (0.4) unstable; urgency=medium
 .
   * fix typo in Vcs-{Git,Browser} URI
   * document that a-t-tor is not a The Tor Project product
   * use deb.d.o instead of httpredir.d.o as example mirror
   * drop mozilla.d.n from example onion list
   * add a paragraph about changelog download via Tor
   * add paragraph about leaking locale via Translation files
   * add support for combination with apt-transport-mirror
   * move upstream git from alioth to salsa
   * mark package as Rules-Requires-Root: no
   * bump to debhelper compat level 11
   * drop Recommends apt-transport-https for apt >= 1.6
   * no-change bump of Standards-Version to 4.1.3
Checksums-Sha1:
 1478d3f5ab6c7515e5184713f2812e89627fc31b 1695 apt-transport-tor_0.4.dsc
 328005cd57a3010e18f9ff13e06959a24d13a6c8 11152 apt-transport-tor_0.4.tar.xz
 ba1982526775be372794ab62c8c8163269679c38 5482 
apt-transport-tor_0.4_amd64.buildinfo
Checksums-Sha256:
 bd4c841449406dd8dad62b587f0bd2176d3c4021e221aa98c3d12b383a15c9ab 1695 
apt-transport-tor_0.4.dsc
 225e1050aea19d3999f4bdfbaa8c752444c3c8c382c91db03a9514df17643600 11152 
apt-transport-tor_0.4.tar.xz
 c94f91abfd141f6924a48733369b3eab18dabefecdcfd94a3c245486a61b4d67 5482 
apt-transport-tor_0.4_amd64.buildinfo
Files:
 4f25103208a12e34b8a238b861909a23 1695 admin optional apt-transport-tor_0.4.dsc
 308f0baebf07e7f5b9f108ebba48fa2a 11152 admin optional 
apt-transport-tor_0.4.tar.xz
 69c187131fffebff589ece9c8c0e0406 5482 admin optional 
apt-transport-tor_0.4_amd64.buildinfo

-BEGIN PGP SIGNATURE-

iQJHBAEBCgAxFiEE5sn+Q4uCja/tn0GrMRvlz3HQeIMFAlpmItcTHGRvbmt1bHRA
ZGViaWFuLm9yZwAKCRAxG+XPcdB4g26VD/47pYsUVPJk7bwt/rtjGJSrDg87pJaY
fvl+2JZRdWY2H63weMIl3VdILSEdh79AxyVIja9+X6Cjn4vjeBZHkTx+IZtMZYbs
qHCocudOFRl+B0y2+DDUNOjyKEeOD0i8b7qjw/9xmbcOWwBJ3n8v8/6pI3yeMXT9
efvDH0+p3Zr9tgLo6XfD+2ZGhCPPhvsdNapQ1ldpliPKVIfWrP/KZ3xtCZZfcuI5
BcaoQ2RrRaTAYeIcIHqXo+q0EvUS7ClPiNNm9nclCNQHRgNcaBTlsXM9TxCpcXgp
v+3PditqpcRFrAonhEBTlVb+usBRFkqoHkduZ7fH2l81WjP3VHBM+k+IXlJzdNyd
yP90idKwRhyysZDtqS2TbJbB2oFGCwHTu1m0kPKDPejgDkVlefYPqoGSKejnV15R
7ST4pjZkDeMCKbhTYRma+oTILvWiqIu+wG3UO6FriRSUvFLmV2npVZzqp1j48jJ5
rWJRPYaRvPqFEVrG/Nlmr8ERbjlFIXBF6ZqpGIxYK2EnwFcqfJlGl2a8OLlI+Ntj
1et5udOI6Wy/VBUjG15wkpl+BkoRo576w0gCytbtjXhWIRIMzkfhWpvuTAms7qlr
AkOO6gSXlfNeDVR5cQuj9937t06PHiMPHXKxVTZeWsk7HJDKAf457HxHeIP81/p0
fMNMUQgWIgKGPA==
=eD2V
-END PGP SIGNATURE-



Re: Bug#882445: Proposed change of offensive packages to -offensive

2017-11-23 Thread David Kalnischkies
On Wed, Nov 22, 2017 at 05:18:37PM -0700, Sean Whitton wrote:
> >   "cowsay-offensive".  In this situation the "-offensive" package can
> >   be Suggested by the core package(s), but should not be Recommended
> >   or Depended on, so that it is not installed by default.
  ^^

While it seems to be a reasonable explanation for why it should be at most
a suggests, this half-sentence is hardcoding behaviour of a specific
package manager in its current default configuration into policy.

"Installed by default" is something policy is speaking of in the context of
priorities only. In the context of dependency relations it is speaking
only about how reasonable it is for the average user of a package to not
install this other package [which can, but doesn't need to be the same].

Personally, I would vote for just dropping the half sentence as the use of
Suggests follows directly from its definition – as the whole point of a
maintainer introducing an -offensive package is very likely that it is
"perfectly reasonable" to not install it: Why introducing it otherwise?

There might be a point in mentioning "Enhances" here, but that could
come off as offensive if it is suggested explicitly that offenses can
enhance a package… which might even be an argument to drop the entire
dependency relation regulation sentence [as even in the best case it is
a repeat of §7.2 while in many others a prime example of potentially
offensive out-of-context quotes waiting to happen].
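
(In hypothetical debian/control terms the regulated relation boils down
to something like:
   Package: cowsay
   Suggests: cowsay-offensive

   Package: cowsay-offensive
   Enhances: cowsay
with the Enhances being the optional musing from above.)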

(I have no opinion on the topic of -off vs -offensive itself; as a non-
native I was always kept off by those packages due to being put off by
-off – but I will not be pissed off if -offensive takes off.  SCNR)


> I second this patch.  I suggest we add it as section 3.1.1, i.e., as a
> subsection to 3.1 "The package name".

[As this is the first subsection I wonder if there will soon be many
more "rip-off" naming conventions added like python-*, *-perl, … and if
for style reasons it's a good idea to have -offensive be the first]


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: apt cron autoclean not enabled by default

2017-10-08 Thread David Kalnischkies
On Sun, Oct 08, 2017 at 03:20:04PM +0100, Ian Jackson wrote:
> You're all going to laugh at me now, but: some years ago I gradually
> stopped using dselect.  Multiarch was the last nail in the coffin.
> 
> Now I discover while debugging something that each one of my machines
> has been accumulating a museum of obsolete .debs in /var/cache/apt.
> 
> Is this supposed to be cleaned up by default by something ?

Now that you are in the habit of switching tools (SCNR), try "apt"
instead of "apt-get" which defaults to deleting .deb files after it has
used them for installation.

The NEWS entry for 1.2~exp1 gives some details on how to disable this
for apt and/or how to enable it for other libapt-based tools.

The short version is:
echo 'APT::Keep-Downloaded-Packages "false";' > /etc/apt/apt.conf.d/01clean-debs


> None of my installations seem to have the necessary machinery enabled.
> This is bizarre because all the relevant moving parts (principally
> /etc/cron.daily/apt) seem fully functional when I turn them on.

The cronjob does pretty much nothing by default; you can teach it to run
autoclean or clean or some age-based mixture through its config options
(I don't use it, so I know next to nothing about it and stay out of
touching it).

Details can be found e.g. at the top of /usr/lib/apt/apt.systemd.daily
(to whom it may concern: yes, I know, the filename includes systemd.
No, the file isn't systemd specific. There is compat-machinery to call
it from cron if systemd-timers are not available)
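
(A minimal untested sketch – the exact knobs are documented at the top
of that file:
   echo 'APT::Periodic::AutocleanInterval "7";' > /etc/apt/apt.conf.d/02periodic
should make the job run an autoclean roughly weekly.)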


> I realise we've been arguing for years about turning on _updates_ by
> default, but I hadn't realised that we doubted whether people would
> want to delete decades-old .deb files...

People are different, so, much like you will find people complaining
about automatic upgrades, you will also find people complaining that you
deleted files they wanted to share with other machines via sneaker nets,
needed for that downgrade to unbreak their system, wanted to hold on to
a bit longer because they paid a heavy fee/lots of time downloading them
via mobile or third-world internet, or just liked to keep for their
private partial copy of snapshot.debian.org.

I guess we (= Debian) will invent something after automatic upgrades are
done as there is then a "need" for it (assuming those tools don't delete
already, which I am not sure), much like we will eventually figure out
how to phase out old kernels (at the moment they are stuck waiting for
an autoremove to be run, but that can't really be run without user
confirmation).


btw: I don't want to sound like "that guy", but reading the
release notes helps prevent "decades-old" as they include the
suggestion to run 'clean' before the upgrade and Debian releases a
few times within a decade… (heck, apt hasn't reached the "decades"
milestone yet… so technically… but just a few more months).


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Automatic way to install dbgsym packages for a process?

2017-08-09 Thread David Kalnischkies
On Wed, Aug 09, 2017 at 06:07:23AM +0900, Mike Hommey wrote:
> One would argue this should be a feature of apt. In Fedora land, you use

And apt developers would argue back that apt could indeed do it if
someone would write a patch – which isn't a new idea, the ddeb
granddaddy of what we have now already suggested apt changes [0] in
2006…

I haven't seen the perl script (and I don't speak perl, so even if
I had), but from the description of the functionality it doesn't sound
too hard and like a natural fit. Personally I would just prefer if
someone writes it who knows how it should work and would use it – not me
who doesn't even have the debug archive in my sources (as libc6-dbg is
not a -dbgsym yet) nor deals with crash.dumps all too often…
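
(For reference, hedging as I don't use it myself: the sources.list entry
for the automatic -dbgsym archive should look roughly like
   deb http://deb.debian.org/debian-debug/ unstable-debug main
with the matching *-debug suite for stable/testing respectively.)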


Long story short: I am happy to help via IRC/deity@ & Julian is at
DebConf in case someone wants to talk about apt in person.


Best regards

David Kalnischkies

[0] https://wiki.ubuntu.com/AptElfDebugSymbols


signature.asc
Description: PGP signature


Re: Bug#863361: dgit-user(7): replace apt-get build-deps with mk-build-deps

2017-05-28 Thread David Kalnischkies

On Fri, May 26, 2017 at 03:33:17PM +0100, Ian Jackson wrote:
> Emilio Pozuelo Monfort writes ("Re: A proposal for a tool to build local 
> testing debs"):
> > Or you can just do
> > 
> > $ sudo apt-get build-dep ./
[…]
> Probably we should recommend --no-install-recommends.

I would recommend not to recommend it because apt follows the general
recommendation of not recommending the installation of recommendations
of build-dependencies by default for all recommended Debian releases.

Recommended summary: Already the default since 2011.

Recommending everyone to have a wonderful day,

David Kalnischkies


signature.asc
Description: PGP signature


Re: apt-get upgrade removing ifupdown on jessie→stretch upgrade

2017-02-22 Thread David Kalnischkies
On Wed, Feb 22, 2017 at 09:04:16PM +0100, Luca Capello wrote:
> On Wed, 22 Feb 2017 13:16:27 +0100, David Kalnischkies wrote:
> > On Wed, Feb 22, 2017 at 01:06:24PM +1300, martin f krafft wrote:
> > > What am I not understanding right here? Shouldn't "apt-get upgrade"
> > > NEVER EVER EVER EVER remove something?
> [...]
> > Fun fact: We have a few reports which request "upgrade" to remove
> > packages. You know, automatically installed packages, obsolete ones or
> > obviously clear upgrades like exim4 to postfix (or the other way around,
> > depending on who reports it). I tend to be against that, but in case of
> > need we could still consider that a feature and close bugs… win-win :P
> 
> Please do not change the current behavior because...

JFTR: That wasn't really meant to be serious… as said I tend to be
against it for all sorts of reasons, but even if not it would be hidden
behind config options and if enabled by default only for 'apt' as we did
e.g. with --with-new-pkgs. And yes, we haven't willingly implemented
that & I still can't really believe that it actually happened without
a LOT more details.

The fun fact is more a comment on what people assume the current
behaviour to be based on either having formed an opinion of what
"upgrade" means by popular opinion (e.g. on a mailinglist) or learned by
experience (or documentation) that certain specific rules apply – one of
them being that no package is supposed to be removed in this mode.


But I don't have a good day today in terms of writing working
patches/mails so I can see how I failed here, too.
Too much carnival around me I guess.


> > Oh, and of course the standard reply: You know, apt does print
> > a proposal not an EULA – so you don't have to press 'yes' without
> > reading.
> 
> ...it will break existing practices, e.g.:
> 
>  DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
> 
> FYI, I would call it a regression.

That specific invocation can "fail" for all sorts of interesting reasons
like dpkg config files or apt hooks. "fail" as in apt (and debconf) does
what it was told to do, but that doesn't tell dpkg what it is supposed to
do. Or apt-list{changes,bugs} or …

Ignoring that, reading the apt output even in such invocations isn't
a bad idea as it will e.g. tell you which packages it can't upgrade
– I kinda hope you aren't performing a release upgrade unattended…


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: apt-get upgrade removing ifupdown on jessie→stretch upgrade

2017-02-22 Thread David Kalnischkies
On Wed, Feb 22, 2017 at 01:06:24PM +1300, martin f krafft wrote:
>   root@cymbaline:/etc/apt/sources.list.d# apt-get upgrade
[…]
>   The following packages will be REMOVED:
> ifupdown libasprintf0c2 libperl4-corelibs-perl libuuid-perl python-bson 
> python-pymongo
>
> and indeed, it then went on to remove ifupdown.

Outrageous! apt was always slow to adapt, so the new way of saying one
thing and doing the other isn't fully implemented yet. I am sorry. SCNR


> What am I not understanding right here? Shouldn't "apt-get upgrade"
> NEVER EVER EVER EVER remove something?

I am not opposed to the possibility of bugs in apt in general, but the
amount of "upgrade with removal"-bugs which all turned out to be either
scrollback-confusion, aliases or wrapper scripts is astonishing, so
triple-double-check this first.

Fun fact: We have a few reports which request "upgrade" to remove
packages. You know, automatically installed packages, obsolete ones or
obviously clear upgrades like exim4 to postfix (or the other way around,
depending on who reports it). I tend to be against that, but in case of
need we could still consider that a feature and close bugs… win-win :P


> Can I find out in hindsight (can't reproduce this) what might have
> happened?

/var/log/apt/history.log* should be able to tell you which commands you
have run and which solutions were applied due to it. That also includes
dates, so you might be able to fish a /var/lib/dpkg/status file from
before the "bad" interaction in /var/backups/dpkg.status.*. Pick the
apt.extended_states* file from around the same date for good measure.
A good idea might also be to write down the result of "grep ^Date
/var/lib/apt/lists/*Release" somewhere to have an easier time of getting
the same mirror state out of snapshot if we need that. Armed with that
you can try debugging on your own as detailed in apt's README (in the
source) and/or I would suggest reporting a bug with all the details you
collected [and all those the bugscript wants to collect] as it's hard to
reproduce otherwise and in general: native tools are offtopic (by thread
popularity) on d-d@ …
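
(Untested sketch of the kind of digging I mean:
   grep -B4 '^Remove:' /var/log/apt/history.log
   ls /var/backups/dpkg.status.*
   grep ^Date /var/lib/apt/lists/*Release
The first shows which command caused a removal and when; the rest
collects the state files mentioned above.)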

… but let me help you to get the thread some replies: I don't have
ifupdown installed anymore. systemd-networkd + wpa_supplicant FTW.
(also: RC bugs for all node packages failing a cat-picture test!)


Oh, and of course the standard reply: You know, apt does print
a proposal not an EULA – so you don't have to press 'yes' without
reading.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: IPv6 problem for one debian mirror

2017-02-08 Thread David Kalnischkies
Hi Vincent (and anyone else reading this),

On Wed, Feb 08, 2017 at 01:05:51AM +0100, Vincent Danjean wrote:
>   As a side note, it is really sad that APT imposes a 1min timeout
> for such problem. I proposed a patch in #668948 but never got any
> feedback :-(

You are like most people under the impression the APT team has a healthy
size and could stay on top of the incoming bugreports and even triage
older ones… but unfortunately we are not. The report isn't tagged as
including a patch so another "good" reason to get lost in our pile…
apt has earned and needs to defend its second place[0] after all… :(


It does sound similar to another bugreport I replied to "recently"
(#636871) talking about "Happy Eyeballs" (RFC6555) which seems like the
formalisation of your implementation – and that newcomer offer is still
up for grabs (as all others I have ever made, just saying…).

The patch itself looks a bit strange on first look regarding general
style, using environment variables instead of config options and I am
not a gigantic fan of 90% code copies of functions in the same file
(with a name differing only in naming-style…) – and it of course doesn't
apply cleanly anymore, but if you are still interested in working on
this we can surely work this all out. Just ping the bug with an updated
patch and I will work on a more detailed review.


Thanks in any case for the report and initial patch and sorry for not
getting feedback in years from the fellowship of the cow!


Best regards

David Kalnischkies

[0] https://qa.debian.org/cgi-bin/bugs-by-source


signature.asc
Description: PGP signature


Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-10 Thread David Kalnischkies
On Thu, Nov 10, 2016 at 12:39:40PM -0200, Henrique de Moraes Holschuh wrote:
> I'd prefer if we enhanced apt transports to run a lot more protected
> (preferably under seccomp strict) before any such push for enabling
> https transports in apt.  It would reduce the security impact a great
> deal.

I am helplessly optimistic, so I will say it again even if the past
tells me it is pointless: Anyone in any way interested in improvements is
more than welcome to join deity@l.d.o / #debian-apt.

Very few things get done by just talking on d-d@ about nice to haves.


> Mind you, at fist look it seems like apt transports will *run as root*
> in Debian jessie.  HOWEVER I didn't do a deep check, just "ps aux" while
> apt was running.  And I didn't check in unstable.  So, I (hopefully)
> could be wrong about this.

For jessie you are right. The few of us took an awful lot of time to
basically reimplement many parts of the acquire subsystem in the last
few years. You can watch Michael talk about it at DC14, me at DC15 and
Julian at DC16 if you like, but the very basic summary is that from
stretch onwards all apt methods run effectively as _apt:nogroups (and
with no-new-privs) & apt itself requires repositories to be signed and
expects more than just SHA1 or MD5 (as usual, that applies to everything
related to apt like aptitude, synaptic, packagekit, apt-file, …).

There is still much we wanna do, but for now we are actually happy that
we seem to have managed to satisfy all the people who responded to those
changes: The army of complainers that it breaks their firewalls, strange
so called sneaker net abominations or other interesting workflows[0] …


> Can you imagine trying to contain an exploit in the wild that will take
> advantage of people trying to update their systems against said exploit
> to spread even further?  Well, this is exactly what would happen.  We

Let the code with no bugs cast the first stone – you could just as well
say that any http bug is less critical if wrapped in https. libcurl
depends on a crapload of stuff we don't actually need because we use it
just for https and not for ftp, samba, …. And then most TLS exploits
tend to be in tricking it to consider a cert valid while it isn't, which
is a big problem for most things, but for apt those kind of bugs are
a lot less critical as we don't trust the transport layer anyhow (if we
treat it more as a MITM annoyance instead of one-and-only security).
As such, completely non-empirical of course, but I think it would be
a net-benefit to have https sources available for use by default even if
its overrated in this context – but you will only die very tired if you
try to explain why https-everywhere is a requirement for your browser
and even most (language specific) package managers to add a tiny layer
of security to them, but our beloved apt doesn't strictly need it for
security (but happily accepts the tiny layer as addition).


As already said, we are open to consider replacing libcurl with
a suitable alternative like e.g. using libgnutls directly – but see
optimistic paragraph above, I still hope that a volunteer will show up…
(as the biggest TLS exploit is usually the implementor who hasn't worked
with the API before and I haven't).

And I would still like to have some for a-t-tor, too. The package is
even way smaller than even the smallest node packages [SCNR] nowadays
and someone with an eye for detail, integration and documentation could
do wonders… but I start to digress.


Best regards

David Kalnischkies

[0] https://xkcd.com/1172/


signature.asc
Description: PGP signature


Re: Multi-Arch: allowed

2016-11-01 Thread David Kalnischkies
On Tue, Nov 01, 2016 at 09:24:10PM +, Simon McVittie wrote:
> On Tue, 01 Nov 2016 at 18:11:27 +0100, Thibaut Paumard wrote:
> > The -dbg package is Multi-Arch same. It Depends on the packages for
> > which it provides debugging symbols, some of which are Multi-Arch:
> > allowed. Lintian complains when I don't specify an architecture for
> > those packages:
> > 
> > W: gyoto source: dependency-is-not-multi-archified gyoto-dbg depends
> > on gyoto-bin (multi-arch: allowed)
> > N:
> > N:The package is Multi-Arch "same", but it depends on a package
> > that is
> > N:neither Multi-Arch "same" nor "foreign".
> 
> It is not useful for gyoto-dbg to be Multi-Arch: same as long as it
> Depends on gyoto-bin.
> 
> Imagine you want to be able to debug gyoto i386 and amd64 libraries,
> or some other pair of architectures, at the same time (which is the
> reason why Multi-Arch: same debug symbols are useful). You install
> libgyoto0:amd64 and libgyoto0:i386 (or whatever the SONAME is); fine.
> Next you install gyoto-dbg:amd64, which pulls in gyoto-bin:amd64; still
> fine so far. Finally, you try to install gyoto-dbg:i386, but it Depends
> on gyoto-bin:i386, which is not co-installable with gyoto-bin:amd64,
> so you can't.
> 
> You can either:
> 
> * stop generating gyoto-dbg, and get the automatic debug packages
>   (but they won't be made available in jessie-backports)
> 
> * remove the Multi-Arch field from gyoto-dbg
> 
> * weaken its Depends on gyoto-bin to a Recommends or something

I would add:

* Check if gyoto-bin really needs to be M-A:allowed. Name, Description
and the list of filenames included in the package suggest to me that the
package can and should be M-A:foreign – or in other words: Why is it
allowed?

* otherwise: Check if gyoto-bin can't be split up into a package which
can be marked M-A:foreign and one which can be marked M-A:same.


Rule of thumb: Don't make any package M-A:allowed as long as you haven't
got a bugreport telling you it would be nice from some cross-folks (be
it grader, builder, bootstrapper, …). Reason is that M-A:same/foreign is
instantly usable/useful, but M-A:allowed is useless if nothing ends up
depending on it with :any.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Multi-Arch: allowed

2016-11-01 Thread David Kalnischkies
On Tue, Nov 01, 2016 at 02:43:21PM +0100, Thibaut Paumard wrote:
> How do you actually use Multi-Arch: allowed? Should a dependent
> package then specify either :same or :foreign? Looks

Neither is valid syntax.
What you do with these is depending on a package with the literal
architecture "same" (or "foreign"). Thats not going to work…


> I was able to find documentation about what allowed is supposed to do,
> but not on how to depend on such a package.
> https://wiki.debian.org/Multiarch/HOWTO

The spec [0] linked from that page says how, but in summary:
If a package (let's say perl) is marked as Multi-Arch: allowed your
package foo can depend on perl:any and a perl package from any (foreign)
architecture will satisfy this dependency, while a 'simple' perl would
have just accepted a perl from the architecture your package foo was
built for (with arch:all mapped to arch:native).

DO NOT use ":any" on a package which is NOT marked as Multi-Arch:
allowed [1]. This isn't satisfiable by ANYTHING, not even your native
package.

If it helps: Instead of "perl having Multi-Arch: allowed" envision it to
have a "perl provides perl:any" and you are depending on this virtual
package – which also explains why such a missing provides causes
perl:any to be unresolvable.
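
(Sketched out in hypothetical debian/control terms – only perl is real
here, the rest is made up:
   Package: perl
   Multi-Arch: allowed
and in the depending package:
   Package: foo
   Architecture: any
   Depends: perl:any
Without the allowed marking on perl that :any would be unresolvable, as
said above.)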


That said, the usecase for 'allowed' is small – mostly interpreters
– and you are trying to use it on… a -dbg package? I haven't looked
closely, but that smells wrong… What are you trying to express here?
(and have you heard that automatic debug packages are a thing nowadays?)


Best regards

David Kalnischkies

[0] https://wiki.ubuntu.com/MultiarchSpec
[1] There are ways around it. See the "If it helps" remark for a hint.


signature.asc
Description: PGP signature


Re: When should we https our mirrors?

2016-10-28 Thread David Kalnischkies
On Thu, Oct 27, 2016 at 10:05:56PM +0200, Tollef Fog Heen wrote:
> ]] David Kalnischkies 
> > I would kinda like to avoid encoding the entire answer and sending that
> > in for display because it would be a lot of noise (and bugreporters will
> > truncate it if it is too long trying to be helpful), so if people who
> > actually know what they would need to deal with issues (I don't) would
> > decide upon a set and report a bug against apt to implement it, we will
> > see what we can do.
> 
> It would be great if we could have all of it (dropped in a log file
> would probably be ok, we can then ask for it from the user).

I feared you would say that, but I would very much like to avoid it as
it complicates everything: the acquire system has roughly three use cases:
downloading indexes, downloading packages and downloading "random" files
(sources, changelogs, …). The latter two can be done as root and as
a 'normal' user. We couldn't let the individual transports log on their
own as they have no idea where to log to. So they would have to tell the
acquire system what they want logged, but that system doesn't really know
what it is doing either (especially when it switches jobs at runtime in
interactive tools), so the individual callers (apt, aptitude,
packagekit, …) would have to decide where to log to. Add to that multiple
acquire runs happening at the same time all over the place which shouldn't
battle for logfiles – there are even situations in which an acquire system
runs as part of another one… – and that pretty much rules out stretch
having it. And given that the world adapts very slowly to apt changes
I wouldn't hold my breath for buster either. We would also need a rotation
setup, as the first thing someone does on an 'update' failure is to run it
again (or a background process like packagekit does, without the user
realizing that the manual invocation was superseded), …

[Having "stuff" (not limited to acquire) logged to a file might be
a good idea anyhow eventually, but that project is at least as boring as
it is big, so…]


Showing a few more lines is trivial by comparison, affects everyone
instantly and is hence oh so much more appealing from an apt POV… à la:

| Transport-Debug:
|  Connected-To: http.fastly.debian.example.org:8080 (0.0.0.0)
|  Via: 1.1 varnish, 1.1 varnish
|  Fastly-Debug-Digest: 
b6ea737814cc1feed0f9205c8ee1338025c8d316c1029a16c6f4365c6a7c6cdd
|  X-Served-By: cache-ams4141-AMS, cache-fra1222-FRA
|  X-Cache: MISS, MISS
|  X-Timer: S1477471085.983025,VS0,VE27

and

| Transport-Debug:
|  Connected-To: http.cloudfront.debian.example.org:8080 (0.0.0.0)
|  X-Cache: Miss from cloudfront
|  Via: 1.1 b74a7a3f7ddfd685212e870d027c332d.cloudfront.net (CloudFront)
|  X-Amz-Cf-Id: OWWfvAJ_et1_QVyPiP07-bodyCenkWtGTz8OeRW041eyeRDuvmGgCA==

(assuming these headers would tell you anything, they just look good to me)


> > P.S.: Fastlys Via response header seems to be important, given that it
> > is sent twice, but apart from that…
> 
> Not really, it's just that it passes through multiple caches on the way.

[Ill-fated attempt at humor – to the untrained eye it just looks like Via
should contain what X-Served-By contains. Especially merged as above it
looks like a mistake, but so be it.]


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: When should we https our mirrors?

2016-10-26 Thread David Kalnischkies
On Wed, Oct 26, 2016 at 08:38:33AM +0200, Philipp Kern wrote:
> On 10/24/2016 09:19 AM, Tollef Fog Heen wrote:
> > ]] Philipp Kern 
> >> It's also a little awkward that apt does not tell you which of the SRV
> >> records it picked. (The "and why" is clear: round robin.) I had a short
> >> read earlier today and I had no idea how to even report it without that
> >> information. (Of course I know how to turn on debugging but then it
> >> picked a different one and succeeded.)
> > 
> > Even getting the SRV record won't help much, you want to know what IP it
> > resolved to and what headers you got from the backend to uniquely
> > identify problems with a single POP or machine in a POP.
> 
> Fair enough. I never saw the current hash sum mismatch output before. I
> suppose it'd be helpful if apt could print more details about the
> machine it fetched it from in there -- if it still has the information,
> which is probably the more tricky part given pluggable transports.

It is tricky, but in the end a transport can send arbitrary deb822 fields
up to the apt process and apt can do whatever it likes with them, so it
should be doable if we know what we have to send up the chain:
SRV hostname + IP we ended up connecting to, okay, but what else?
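
For anyone unfamiliar with that interface: transports answer apt with
deb822-style messages. A rough sketch from memory of what the answer for
a fetched item looks like – the Connected-To field at the end is invented
here purely to show where such extra information could go, it is not part
of any existing message:

| 201 URI Done
| URI: http://deb.debian.org/debian/dists/sid/Release.gpg
| Filename: /var/lib/apt/lists/partial/…
| Size: 1554
| Last-Modified: Wed, 26 Oct 2016 03:30:41 GMT
| Connected-To: 0.0.0.0:80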

I had a look at the HTTP responses we get from both CDNs, but while
there are perhaps a few interesting fields there, they are different per
CDN…

Answer for: http://deb.debian.org/debian/dists/sid/Release.gpg
| HTTP/1.1 200 OK
| Server: Apache
| Last-Modified: Wed, 26 Oct 2016 03:30:41 GMT
| ETag: "612-53fbc3fde0a18"
| X-Clacks-Overhead: GNU Terry Pratchett
| Cache-Control: public, max-age=120
| Via: 1.1 varnish
| Fastly-Debug-Digest: 
b6ea737814cc1feed0f9205c8ee1338025c8d316c1029a16c6f4365c6a7c6cdd
| Content-Length: 1554
| Accept-Ranges: bytes
| Date: Wed, 26 Oct 2016 08:58:06 GMT
| Via: 1.1 varnish
| Age: 0
| Connection: keep-alive
| X-Served-By: cache-ams4141-AMS, cache-fra1222-FRA
| X-Cache: MISS, MISS
| X-Cache-Hits: 0, 0
| X-Timer: S1477471085.983025,VS0,VE27

Answer for: http://deb.debian.org/debian/dists/sid/Release.gpg
| HTTP/1.1 200 OK
| Content-Length: 1554
| Connection: keep-alive
| Date: Wed, 26 Oct 2016 08:59:15 GMT
| Server: Apache
| Last-Modified: Wed, 26 Oct 2016 03:30:41 GMT
| ETag: "612-53fbc3fde0a18"
| Accept-Ranges: bytes
| X-Clacks-Overhead: GNU Terry Pratchett
| Cache-Control: public, max-age=120
| X-Cache: Miss from cloudfront
| Via: 1.1 b74a7a3f7ddfd685212e870d027c332d.cloudfront.net (CloudFront)
| X-Amz-Cf-Id: OWWfvAJ_et1_QVyPiP07-bodyCenkWtGTz8OeRW041eyeRDuvmGgCA==


I would kinda like to avoid encoding the entire answer and sending that
in for display because it would be a lot of noise (and bugreporters will
truncate it if it is too long trying to be helpful), so if people who
actually know what they would need to deal with issues (I don't) would
decide upon a set and report a bug against apt to implement it, we will
see what we can do.


Best regards

David Kalnischkies

P.S.: Fastly's Via response header seems to be important, given that it
is sent twice, but apart from that…


signature.asc
Description: PGP signature


Re: When should we https our mirrors?

2016-10-24 Thread David Kalnischkies
On Mon, Oct 24, 2016 at 07:26:37PM +0200, Tollef Fog Heen wrote:
> ]] Paul Tagliamonte 
> > On Mon, Oct 24, 2016 at 04:00:39PM +0100, Ian Jackson wrote:
> > > It is also evident that there are some challenges for deploying TLS on
> > > a mirror network and/or CDN.  I don't think anyone is suggesting
> > > tearing down our existing mirror network.
> > 
> > https://deb.debian.org/ is now set up (thanks, folks!), so my attention
> > is now shifted away from the push to https all the things (not everyone
> > will, so I just want a stable well-used domain that could be a sensable
> > default, and let those who don't want to move forward stay in the past)
> > and on to considering the apt https transport and thoughts on how this
> > could become part of the base install.
> 
> Note that the performance of HTTPS there is worse than for HTTP due to a
> lack of SRV support in apt-transport-https, though, which means it falls
> back to doing HTTP redirects.

(as apt-transport-https is mentioned I have to comment again…)

It should be only one redirect with apt >= stretch for indexes. For the
*.debs it's a redirect each. APT doesn't store redirects between runs as
it's hard to keep that current and authoritative (obvious for http,
slightly less obvious for https perhaps).

The SRV support needs to be implemented in (lib)curl as I wouldn't feel
too comfortable working around this [0]. Or, well, someone could implement
TLS in apt directly, but I already mentioned how likely I think that is
– still, if anyone wants to try…

And as already mentioned, pipeline support in a-t-https would be nice if
someone feels like implementing it via curl's multi interface.

If someone is exceptionally bored we could implement opportunistic use of
https: if a-t-https is installed, the http method could ask for
"_https._tcp." SRV records first and, given a favorable response,
internally redirect to https.
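
Such SRV records can be inspected by hand – assuming the host publishes
any – e.g. with:

dig +short _http._tcp.deb.debian.org SRV
dig +short _https._tcp.deb.debian.org SRV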

So, you see, as usual, there isn't a shortage of ideas. If someone
wants to work on any of them, feel free to join deity@ and/or #debian-apt.


Best regards

David Kalnischkies

[0] which isn't exactly easy and could only be considered a hack: you
can't take over name resolution for curl, you can only misuse a facility
meant for DNS caching to feed it an IP. But then you first need to
establish that this IP and port can be connected to, disconnect, and hope
that a "reconnect" in curl will be equally successful later on – and
making sense of the error codes isn't exactly easy either…


signature.asc
Description: PGP signature


Re: client-side signature checking of Debian archives

2016-10-24 Thread David Kalnischkies
(Disclaimer: I am a maintainer of apt-transport-tor… but also of
-https and apt itself, so I am biased beyond hope in this matter)

On Sun, Oct 23, 2016 at 07:20:35PM -0700, Russ Allbery wrote:
> Paul Wise <p...@debian.org> writes:
> > On Mon, Oct 24, 2016 at 7:21 AM, Kristian Erik Hermansen wrote:
> >> The point is to improve privacy.
> 
> > Better privacy than https can be had using Tor:
> 
> > https://onion.debian.org/
> 
> Yeah, but this is *way* harder than just using TLS.  You get much of the
> benefit by using TLS, and Tor comes with a variety of mildly problematic

TLS doesn't give you a lot of privacy in the context of Debian mirrors.
The traffic analysis Russ has hinted at is one thing, but the biggest
privacy issue is actually that you are a Debian user – and that is
communicated in the clear regardless of using HTTPS or not, e.g. if you
connect to security.debian.org. Keeping track of when you connect to
figure out how long it takes you to react to DSAs isn't exactly hard
either. Would it be interesting to know which packages you install?
Maybe if I am really interested in you, as it takes ages to get to know
all your packages (if you don't happen to do an upgrade to a new major
release), but as the average evildoer I know more than enough already:
your IP and that you are likely vulnerable to recent exploits for at least
a few minutes still. That should be enough to add you to my botnet… (or
let's imagine something "less scary": the bar you are in offering
a special two-for-one beer for Debian users "out of nowhere"…).


> side effects (speed issues,

Maybe it's just me being lucky, but speed doesn't seem to be an issue for
me for apt via Tor. Okay, the initial connect takes slightly longer, but
after that is done apt's (tor+)http method with its support for pipelining
is actually perfectly capable of maxing out my connection (regardless of
onion or "normal" mirrors I am connecting to) in most cases.


> rather more complicated to set up and keep
> going for the average person,

No. For the average user it's a matter of installing apt-transport-tor
and changing sources.list [if you have ideas/patches to enhance this
further feel free to contact us]. You have to do the same for https.
You don't have to go all Tor for everything at once…
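
A minimal sketch of that – mirror and suite here are just examples, adjust
to whatever you use:

apt install apt-transport-tor

and then in sources.list change e.g.

| deb http://deb.debian.org/debian stable main

into

| deb tor+http://deb.debian.org/debian stable main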

(okay, it gets tricky perhaps if your network is blocking connections to
known Tor nodes, at which point you need bridges, but the same network
could forbid [non-MITM] HTTPS, so that argument isn't super strong)

Operating an onion service is a different matter of course, but your
average person isn't very likely to set up a good http (or https) mirror
either and you don't absolutely need an onion service. Your usual http
will do. Sure, all-knowing traffic analysis will perhaps be capable of
figuring out what you do in that case, but that chance is a lot lower
the more traffic is routed through the Tor network, and the information
that you are a Debian user isn't clearly written on your connection…
(You are trading it in for "Tor user", which might or might not be
a better label to have at the moment, but given that we are talking
about people out there trying to get you they probably don't need
additional incentive…)


That said, sure, having https would be cool against the casual MITM like
these pesky login-before-you-can-use-our-free-internet portals, but we
already know that. We don't need yet another person coming here and
trying to convince us that HTTPS is the magic bullet we have all been
waiting for, because it isn't. Various people from various teams have
already said which technical challenges need to be solved before we can
seriously think about rolling out https on a broad scale, and as usual
the problems aren't going to fix themselves just because we talk long
enough about them…


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: When should we https our mirrors?

2016-10-19 Thread David Kalnischkies
On Tue, Oct 18, 2016 at 01:58:10PM -0400, Robert Edmonds wrote:
> Since the Debian project controls the mirror client (in particular the

No. Debian "controls" 'a' client, not 'the' client. APT isn't used in
bootstrapping for example. Also proxy-setups are (potentially) not going
to work anymore leaving a lot of people stranded. I would also not feel
particular good inventing and maintaining https-debian-style://.  More
or less locking ourselves into a Debian-specific (security) protocol
sounds like a recipe for disaster.

(I know what you are thinking: apt-secure is a Debian-specific protocol,
but it uses standard things like checksums and keys. We haven't invented
our own checksum, nor do we use DSA¹ for keys. The Debian-specific part is
that we have tools which do the security automatically for us – you could
easily perform it "by hand" anywhere: compare bootstrapping)
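
To illustrate the "by hand" part – a rough sketch, assuming the keyring
from the debian-archive-keyring package and a Release/Release.gpg pair
plus a Packages file downloaded to the current directory:

gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg Release.gpg Release
sha256sum main/binary-amd64/Packages.xz
grep main/binary-amd64/Packages.xz Release   # compare against the hash printed above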


> code responsible for performing certificate validation), surely there is

No, as apt-transport-https is using libcurl, so that code is the
responsibility of whoever maintains curl and its upstream. Or gpgv for
that matter. Given the amount of security-relevant bugs they (and
anything else trying to do security) have, I bet the security team would
be overjoyed if all clients talking to a mirror embedded such code…

Best regards

David Kalnischkies

¹ overloaded term, here it means: "Debian Signature Algorithm" – SCNR


signature.asc
Description: PGP signature


Re: When should we https our mirrors?

2016-10-19 Thread David Kalnischkies
On Mon, Oct 17, 2016 at 08:48:57PM +0200, Cyril Brulebois wrote:
> should revisit this setup when I find more time. There's also Pipeline-
> Depth option's being advertised as not supported for https, too.

Yes. apt-transport-https is a pretty low priority maintainership-wise,
which combined with a chronically understaffed APT team isn't the best
setup for major development work, as the scarce resources could be better
invested in transports which are actually used…

The missing pipelining is due to our use of the simple API. We would
need to switch to the multi API with its added complexity. Note that the
apt acquire system itself deals with most of what "multi" does (as
in parallel connections to different hosts and such), so I guess
a non-trivial amount of code would be dealing with 'reverting' the
multiness of libcurl so that it isn't interfering with apt's – or perhaps
we already have that (as we take over redirect handling for example)
– someone would need to investigate…

Pipelining isn't the only missing feature though. curl does not support
SRV records (like many other web clients), which would be quite handy for
deb.d.o and the mirror network insofar as you can declare fallbacks
a lot more easily this way (as in you can have them always set instead of
reacting to emergency calls).

In case someone wonders why curl, or why the -gnutls variant: APT code is
GPL2+ and the current implementation is heavily intertwined with many apt
parts. At least one past contributor declined to add an OpenSSL exception
back when that was the 'default' curl variant (and some more never
replied to the private inquiries), so don't even think about relicensing
to gain access to other variants/libraries.

Someone could of course do a clean-room implementation (even in whatever
language you want) as apt uses a text protocol to talk to the transport
processes. Some day I might implement a stunnel<->http-based https just
for the lulz. How realistic it is that a clean-room implementation would
solve both the "bloat" and the "maintenance" problem is left as an
exercise for the reader.
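
If anyone wants to toy with that stunnel idea, a very rough and completely
untested sketch – hostnames and ports picked at random, and note that
virtual hosting on the mirror side will likely get in the way as the Host
header apt sends would be the local one:

| # /etc/stunnel/apt.conf – client mode: plain http in, TLS out
| client = yes
| [apt-mirror]
| accept = 127.0.0.1:8443
| connect = deb.debian.org:443

plus a matching sources.list line along the lines of
deb http://127.0.0.1:8443/debian stable main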


Best regards

David Kalnischkies

P.S.: There are various arguments why -https is, in terms of mirrors – as
their content is static and public knowledge – more an obfuscation
than a 'real' security/privacy enhancement. If you want more/better,
have a look at apt-transport-tor and onion mirrors. The setup is as
painless as -https: install the package and change sources.list.


signature.asc
Description: PGP signature


Accepted apt-transport-tor 0.3 (source) into unstable

2016-10-01 Thread David Kalnischkies
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 1.8
Date: Sat, 01 Oct 2016 17:36:39 +0200
Source: apt-transport-tor
Binary: apt-transport-tor
Architecture: source
Version: 0.3
Distribution: unstable
Urgency: medium
Maintainer: APT Development Team <de...@lists.debian.org>
Changed-By: David Kalnischkies <donk...@debian.org>
Description:
 apt-transport-tor - APT transport for anonymous package downloads via Tor
Closes: 755675 812490 835128
Changes:
 apt-transport-tor (0.3) unstable; urgency=medium
 .
   [ David Kalnischkies ]
   * use apt 1.3 directly instead of an embedded copy (Closes: #835128)
 - consistent configuration option fallback
 - run with less privileges as _apt instead of as root
 - tor and non-tor users have the same User-Agent by default
 - new circuit per host connected to
   * set APT team as maintainer & add me as uploader
   * change package from arch:any to arch:all
   * change to source package format 3.0 (native)
   * move tor from Depends to Recommends (Closes: #812490)
   * bump Standards-Version to 3.9.8 (no changes needed)
   * install README.md as documentation (Closes: #755675)
 - add pointers to repository onion services
 - mention how to disable non-tor sources in apt
 .
   [ Sotirios Vrachas ]
   * README: use httpredir.debian.org as example
Checksums-Sha1:
 4d1a10c69df9d290ad58c0e1a59582bbb1b3ebe0 1662 apt-transport-tor_0.3.dsc
 abc883cd8407c0721013bd7384710f3cc00a8e3d 9816 apt-transport-tor_0.3.tar.xz
Checksums-Sha256:
 208d5469df98fc0bc4e6981c07424517650cea46bc1bca0bc2fa5668ac897da5 1662 
apt-transport-tor_0.3.dsc
 44c76fa4619aad0fbd8a8fdb86963aad3a0a11e7f3ec49da9466449024e70fdb 9816 
apt-transport-tor_0.3.tar.xz
Files:
 334243a4d935651af00a5900bc23e0b9 1662 admin optional apt-transport-tor_0.3.dsc
 f9001490febdcbfff97c0a605856cc23 9816 admin optional 
apt-transport-tor_0.3.tar.xz

-----BEGIN PGP SIGNATURE-----

iQIwBAEBCgAaBQJX7+R0Exxkb25rdWx0QGRlYmlhbi5vcmcACgkQMRvlz3HQeIO5
DBAAlIpgfQdXK8C1/A6RJa/bvPv/9McIsiq1Y0zs6bj4yWS+aQ6tdv96+pKgqJZo
WCS86mZiRmO2p36mDuL/LOXUyEeALUlz12bHGEQp4TIwT9gk/fc1dIF1verWkrbT
76bDOjFfOIAUejsRg30W7NEqfSP2mqZsoWXSn8goVnZDkJWKFD9q4oKkphK3c4Vk
KN2TbgXez8/yq0CVt29gxFyOLX9TQpUF3qnDd6gVhEkBnoF0G5mk/SWyO3v8nGn9
oA/PfJC+rN8JFF0rSk/UjIY7UpKVHsPA80xXAifK6FlRD6IO48w/c4dk48n9JdMv
HVyRtjfyQhUueLDmupcjmmu6DR49f8S2K0rjU1c/RMRw5qZaJsohbhn/PZDQuGq7
YEFlijaid0N1/lCgq0U6AFBV3at4pfILsYn8SRulNsw0XIGHClyxYwfcQxSI0gJB
ZdgN9E6f3ItslZMswAIxPfoHwWee4RtymW2DaD9O7s+of04CtxaCW+hr4pSQaxM/
THCa1MnANL2ndzd6At8rC12jl/Q9QLSASj7ywJxt7GBK0jqwP5xljLfNE5RkOiVj
1u1xJQ2wgQm04SmG8lM80oe64s/4zpa+Ou9RyhnZhp72YemJ9pEM1KMkIGj/EnCL
J9YLzzOe3x7E9YSdNomPb/DvBxEM2jSAYMbBMk1Q3s7IJQw=
=4x5S
-----END PGP SIGNATURE-----



Re: Repeated Upgrade of Package

2016-08-23 Thread David Kalnischkies
On Mon, Aug 22, 2016 at 03:10:54PM -0700, pdbo...@cernu.us wrote:
> On top my problem- The situation we're encountering is that apt-get (and 
> aptitude, fwiw) continuously want to upgrade an installed package that we 
> maintain, even when the version, hash, etc., is not changed. This output 
> demonstrates the issue:

Your package doesn't match the metadata provided for it, so libapt
thinks it's a rebuild (without a version change) and downloads the "new"
version for installation.

In your case at least the Installed-Size: field is missing in the
Packages file (and is hence assumed to be zero), while the control
file inside the package has such a field with a non-zero size. Other
fields which must match to the letter are e.g. Depends and Pre-Depends
(a "common" mistake is having versions there which some tools rewrite,
like a 0-epoch).
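
One way to spot such a mismatch yourself – a rough sketch, package name
and filename obviously placeholders:

# what apt believes, taken from the repository's Packages file:
apt-cache show yourpackage | grep -E '^(Installed-Size|Depends|Pre-Depends):'
# what the .deb itself declares in its control file:
dpkg-deb --field yourpackage_1.0_amd64.deb Installed-Size Depends Pre-Depends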


Not sure what tool you are using to generate an apt repository, but it
seems rather non-standard given that it also includes the deb file in
the Release file… using one of the many already existing solutions might
be better…


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Handling Multi-Arch packages that must be installed for every enabled architecture?

2016-06-26 Thread David Kalnischkies
On Sat, Jun 25, 2016 at 12:04:58PM -0700, Josh Triplett wrote:
> > See https://wiki.debian.org/HelmutGrohne/MultiarchSpecChanges (and
> > links) for more examples, potential solutions and problems/shortcomings
> > from each of them.
> 
> The "Install-Same-As" header on that page looks ideal for this case.

The devil is in the details, e.g.: "It is only considered installed if
it is installed for all architectures that the listed package is
installed for" translates roughly to "implement it in ALL tools,
wait for these tools to be released as Debian stable, use it in
stable+1", as "considered installed" is a dpkg thing. It also forbids the
use of "magic", as that doesn't play nice with hard rules about package
installation states.


> Do you know if anyone has looked into what it would take to implement
> Install-Same-As in apt?

I haven't and so I somehow doubt anyone else has, but who knows…


The problem with a possible implementation is that it is in effect
a "conditional auto-install enhances", and conditions are hard to
implement. Theoretically it is possible to implement that type of
condition with a bunch of virtual packages as a workaround, but in
practice apt's dependency resolver will not like it, for roughly the same
reasons it isn't as good as aptitude at solving sudoku puzzles (not that
aptitude will invite you with open arms either, it will just not slam
the door in your face). So, an implementation affecting all libapt
users instantly via a workaround I would consider highly unlikely. If we
talk about implementing conditions for real, we are in the
flying-pigs-for-human-transport department.

That would leave us with adding it as "magic" somewhere between
apt(-get) and libapt, which depending on where exactly it is placed will
affect more or fewer other libapt users, which is very roughly
proportional to the complexity of implementing the magic in that place.
And in all the other places elsewhere to catch the rest of the fish.


Disclaimer: This is just an educated guess. I could be entirely wrong.
It would be more predictable if more than a few rough ideas existed.
Corner cases like upgrading existing systems, opt-in/out configuration
and syntax, if multiple packages are mentioned – assuming that is legal
– is it an OR or an AND, and if the latter, does OR exist as | then?, is
it a versioned relation, keeping API and/or ABI, what if v2 of a package
adds/modifies/removes the field, interaction with autoremove………


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Handling Multi-Arch packages that must be installed for every enabled architecture?

2016-06-25 Thread David Kalnischkies
On Sat, Jun 25, 2016 at 02:01:27AM -0700, Josh Triplett wrote:
> Sven Joachim wrote:
> > On 2016-06-24 23:01 -0700, Josh Triplett wrote:
> > > Some packages, if installed on any architecture, must be installed for
> > > every enabled architecture.  Most notably, an NSS or PAM module package,
> > > if enabled in /etc/nsswitch.conf or /etc/pam.d respectively, must exist
> > > for every enabled architecture to avoid breaking programs for that
> > > architecture.

See https://wiki.debian.org/HelmutGrohne/MultiarchSpecChanges (and
links) for more examples, potential solutions and problems/shortcomings
from each of them.

So, as usual, what looks like a simple and easy thing is actually
a complexity monster eating little kids^W DDs for breakfast.

(That isn't to say stop all discussion… this discussion died down and in
case we still need a solution it should 'just' be resumed from the last
good state rather than from zero)

> > > As one possible solution for this problem (but not an ideal one, just a
> > > thought experiment), dpkg could support a new value for "Multi-Arch",
> > > "Multi-Arch: every".  This value would imply "Multi-Arch: same", but if
> > > installed, would additionally cause dpkg to act the same way it does for
> > > Essential packages: install the package when enabling the architecture.
> > 
> > This is not at all what dpkg does, the Essential flag only means that
> > dpkg will not remove the package in question, unless given the
> > --force-remove-essential switch.
> 
> It's been a while since I'd set up "dpkg --add-architecture i386" on a new
> system, so I'd misremembered.  I had thought that doing so (or subsequently
> installing an i386 package) would force the installation of Essential packages
> for i386 for any package that was "Multi-Arch: same".  Apparently not.

Nitpick, but there are no essential M-A:same packages (first, because
libraries are forbidden from being essential and M-A:same applies mostly
only to them, and second, a package tends to be essential because it
ships an architecture-independent command-line interface – hence most of
them are M-A:foreign – so important that no one can be bothered to depend
on it).


> > > (And when installing the package, dpkg would need to require
> > > it for every supported architecture; dpkg could refuse to configure the
> > > package if any enabled architecture doesn't have it unpacked.)
> > 
> > One problem here is that dpkg does not even know which packages are
> > available. […]

Another nitpick, but dpkg does know (to a certain extent). That isn't
all too important in the suggested case here: if you model it as
a hard requirement as suggested, with configure-refusal or later
with an automatic deconfiguration, the package would need to exist for
all architectures anyhow – because if you allow it to be unavailable for
one, it isn't a hard requirement anymore, but a magical recommends
depending on which sources a user has currently configured (and happened
to be successful in downloading).

Such a requirement also prevents packages from adding/removing
architectures (probably very, very uncommon) and makes the interaction
all around pretty strange: I add a new architecture via dpkg and
instantly every package manager screams at me that my system is broken.
But that is already written in the wiki…


> > I think such problems are better solved in apt: apt-get dist-upgrade
> > already reinstalls every Essential package, the same way it could ensure
> 
> That sounds quite reasonable to me.  The question then becomes how apt

It does to you and me perhaps, but apt's handling of essentials (which
aptitude has copied and extended to prio:required in recent versions) is
the source of constant complaints.

With every bit of magic implemented in apt it should also be considered
what that means for all non-src:apt package managers, be they
libapt-based or not, as for better or worse apt(-get) is by far not the
only thing dealing with packages.


For Multi-Arch itself I managed to hide away most of it behind implicit
dependency relations, versioned provides and 'strange' virtual packages
for the libapt-based ones which made that transition quite "easy" all
things considered, but we can't pretend it will always be that "easy"…


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Problem with Google APT repo

2016-06-17 Thread David Kalnischkies
On Fri, Jun 17, 2016 at 03:02:56PM +0100, Greg Stark wrote:
> But as far as I can see the file I get at that URL from my browser
> does in fact match the md5sum and sha1 in the package description. As
> far as I can tell this either means there's a bug in APT or there's a

It's a bug in APT insofar as it isn't saying what the problem actually
is: you might have noticed that this repository generated[0]
warnings/errors in 'apt update' before, talking about the usage of SHA1
as the algorithm guarding the Release file signature.

The APT team is pushing for the removal of SHA1 from our trust chain[1]
as it's simply too weak going forward. Browsers do the same for SSL
certificates btw. If you want to know more about this, I suggest
listening to Julian's talk about it (and other apt stuff) at DebConf.

So, the error shouldn't say hashsum mismatch but something more like
"too weak hash" – but an error is an error either way, so you may want to
talk to the repository maintainers (there is more than just this
repository with such an issue), and I should write a patch to produce
a better message, as we have been talking about it in the APT team for
a while now…
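
If you want to check a repository for this yourself – a rough sketch, the
URL is just a placeholder for whatever repository you are looking at:

wget -q http://repo.example.com/dists/stable/Release.gpg
gpg --list-packets Release.gpg | grep 'digest algo'
# digest algo 2 is SHA1; 8, 9 and 10 are SHA256, SHA384 and SHA512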


Best regards

David Kalnischkies

[0] It did in the past, but was recently updated, so I give it the
benefit of the doubt as I don't feel like checking…

[1] https://wiki.debian.org/Teams/Apt/Sha1Removal


signature.asc
Description: PGP signature


Re: Debian i386 architecture now requires a 686-class processor

2016-05-12 Thread David Kalnischkies
On Wed, May 11, 2016 at 09:26:18PM +0200, Adam Borowski wrote:
> Too bad, there has been a misguided change to apt-listchanges recently: if

Somehow I doubt you will convince anyone to follow you into the light
if you keep that "I know how it should be, submit to my will, you
misguided maintainer idiots" attitude – especially as there are enough
options to change the default behavior in either direction & it is
documented in the NEWS file.

You asked the (new) maintainer and he explained his reasons. You asked
the apt maintainers and I said we aren't going to overrule him (and
can't do it even if we would like to). The remaining option is CTTE,
not flinging poop on public mailinglists every chance you got.


At least sometimes it can be beneficial to tune down one's own heart's
arrogance in being the only one to know how things should be, and to
trust others to at least not fuck it up completely, as that helps
keep everyone on board and well fed. Stirring up strife all the time
just helps in making sure that people will leave the boat and you alone
will be too busy keeping all the pieces together to eat anything, but if
weight loss is your objective…


If you wonder about the wording of the last paragraph:
> --
> How to exploit the Bible for weight loss:
> Pr28:25: he that putteth his trust in the ʟᴏʀᴅ shall be made fat.

That proverb (I guess quoted from the King James Translation,
so I am going to use the same) actually starts with:
"He that is of a proud heart stirreth up strife: but …".


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: How shall I report a bug in the .deb packaging itself?

2015-12-24 Thread David Kalnischkies
On Thu, Dec 24, 2015 at 01:50:45AM +0100, Alberto Salvia Novella wrote:
> Luis Felipe Tabera Alonso:
> David Kalnischkies:
> > It is not a good idea to perform autoremoves unattended for situations
> > in which you have installed A (gui) depends B (console) depends
> > C (data), but later decide that you don't like A.
> 
> What if:
> - "remove" removes a package and all the unused depends, recommends and
> suggests.

The problem is the word "unused" here. You obviously have a different
definition (which isn't implementable) than apt has, otherwise "apt
autoremove $pkg" would already do what you describe – your complaint was,
after all, that apt declares some packages you consider unused as used…

> - but before removing asks the user to exclude packages from the removal,
> and from that moment marks those as installed by the user.

apt shows a list of packages which could be autoremoved as well as
a list of packages which will be removed after pressing yes. Preventing
a package from being considered for autoremoval is as simple as
installing it… Asking the user to confirm each and every removal would
end up being pretty annoying very fast, and if there is something you
don't want to train users to do, it is brainlessly pressing yes a hundred
times, as the important question no. 101 will surely be answered just as
quickly with yes as well – potentially followed a split second later by
an "N!", but by then it's too late.
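
For completeness, a small sketch of how to take a specific package out of
autoremove consideration by hand (package name hypothetical):

apt-mark showauto somepackage   # prints the name if it is marked as automatically installed
apt-mark manual somepackage     # mark it as manually installed; autoremove leaves it alone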

Also note that we talked about "unattended" here. Who should answer all
those questions if the point of the exercise was to make autoremove do
its thing without asking any questions…


(And if you can't imagine that many removals, dist-upgrades can easily
include that many, and thanks to the gcc5 transition this time around it
will be an especially big list)


> - "autoremove" only removes depends.

You will have to give slightly more details on this one as, by itself,
that sentence makes no sense to me.


> David Kalnischkies:
> > After 6 years I think I have enough 'battle' experience to say that
> > even I have still ideas which look good on paper only... and its good
> > that others put a stop to such ideas before those ideas have a chance
> > to hurt me (and I can assure you, I implemented ideas which never
> > should have been and now taunt me by their mere existence).
> 
> Imagine that we have the perfect way to innovate. That we have decided
> slowly along with other people, the change is small, we have put it on test
> as prototype, and the outcome seems to be very positive.
> 
> Would that make us get rid of complains?

If you add free ice cream and a cherry topping… maybe… but that wasn't at
all what I was trying to say, though, as implementing good ideas is
(comparatively) easy in free software. The hard part is figuring out the
good ideas, as people tend to feel very attached to their own ideas and
can take it the wrong way if others tell them the idea isn't as good as
they believe – even if they provide examples and hard facts.


Imagine that you drive on the highway and a wrong-way driver crosses
your path. Then a second. And a third. And yet another…

How many wrong-way drivers does it take before you realize that maybe it
is you who is driving in the wrong direction? Now read all the replies
to your idea again and compare numbers.


Enough from me for this year & thread now, though, so:
Happy "package management" days and best regards,

David Kalnischkies


signature.asc
Description: PGP signature


Re: How shall I report a bug in the .deb packaging itself?

2015-12-22 Thread David Kalnischkies
On Tue, Dec 22, 2015 at 12:35:25AM +, Robie Basak wrote:
> On Mon, Dec 21, 2015 at 03:08:51PM +0100, Julian Andres Klode wrote:
> > I'll repeat this one last time for you: If A suggests B, and you
> > install B in some way, you may have come to rely on the fact that A is
> > extended by B on your system. Automatically removing B could thus
> > cause an unexpected loss of functionality.
> 
> I understand your logic here. But doesn't the same logic apply to
> Depends? If B depends on A and you install B in some way, then you may
> have come to reply on the fact that A is extended by B on your system,
> etc.

What? A isn't extending B – B needs A to function, that is all. [What
you describe is maybe "Enhances", which is a sort-of reverse Suggests
(except that there is no option to install them all by default… I wonder
what the point would be of installing all iceweasel extensions)].


If you installed B, either A was already installed or A was installed by
your request for B. Either way A will not be autoremoved (even if it was
at some point automatically installed to satisfy a dependency relation
of C on A) as long as B is there (and/or C).

A package can only be autoremoved if it is auto-installed and isn't
a possible satisfier for a (Pre-)Depends/Recommends/Suggests relation
(or-group) of another package which isn't autoremovable.


> I had always assumed that this is the risk you take by using autoremove
> and thus you need to pay attention to what you autoremove, which is for
> example why unattended-upgrades is sensible by not doing it by default.

It is not a good idea to perform autoremoves unattended for situations
in which you have installed A (gui) depends B (console) depends
C (data), but later decide that you don't like A (gui) anymore as you
prefer using the console interface (B) directly. apt doesn't know that
you ended up using B directly – it still believes it installed
B just for A, so after you removed A it will offer to autoremove B and C.

Not the end of the world of course: reinstalling B is easy if it got
removed, and as long as you don't purge it, it will be as it was before.
You are in danger of surprising the user though (what the hell
happened?!? Where is B?) and it is possible it will occur to the user
that B is missing at a very inconvenient time (no internet, or B simply
uninstallable at the moment). It's easy to dismiss this as no real
problem, but if you ever experience this firsthand your opinion might
change. The alternatives might be worse though.


> What makes Recommends and Suggests special?

They aren't special, that is the point. The only differences between
these relations are whether they will be installed by default (Depends,
Recommends), whether apt allows you to remove the target without removing
the package which has such a dependency relation on it (Recommends,
Suggests) and whether apt is allowed to break such a relation via
autoremove (none of them).

There are options to change property one (you can't change it for
Depends, of course) and property three (ditto), and if I remember right
e.g. aptitude warns if you do two [something I want to implement for apt
some day].
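
For reference, these knobs live in apt's configuration; the option names
below are written down from memory, so double-check against apt.conf(5) –
shown with what I believe are the current defaults:

| APT::Install-Recommends "true";
| APT::Install-Suggests "false";
| APT::AutoRemove::RecommendsImportant "true";
| APT::AutoRemove::SuggestsImportant "true";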


apt tends to be *very* conservative with removals, which is a common
complaint – this thread is an example, and the "usual" upgrade problems
when a maintainer decides that a transitional package is probably no
longer needed are another. "Interestingly", if apt eventually decides to
remove a package, that tends to cause people to complain as well…

We are open to ideas to improve apt, but apt is used by many people with
very different expectations, so an idea which looks like an obvious
no-brainer in your head might not survive contact with reality.
After 6 years I think I have enough 'battle' experience to say that even
I still have ideas which look good on paper only… and it's good that
others put a stop to such ideas before those ideas have a chance to hurt
me (and I can assure you, I implemented ideas which never should have
been and now taunt me by their mere existence).


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Re: Upcoming version of apt-file - using apt-acquire and incompatibilities

2015-12-08 Thread David Kalnischkies
On Sun, Dec 06, 2015 at 08:50:55AM -0500, The Wanderer wrote:
> On 2015-12-06 at 07:01, David Kalnischkies wrote:
> > On Sat, Dec 05, 2015 at 07:58:07AM -0500, The Wanderer wrote:
> > (as I am sightly lying, it is actually possible – just not very
> > accessible for a user and it would have issues so I am not going to
> > say how here)
> 
> In public, where it can be discovered later by people who won't know or
> be in a position to even judge (much less handle) those issues, you
> mean?

I am not going to mention it because that makes people end up using it
because "it's faster" or other nonsense. They will eventually find it
anyway in a posting by some 'experts' on the interwebs, like it is with
other topics involving "downloading stuff", but if all the security bugs
eventually catch up with them, at least I can honestly say I had nothing
to do with it… Anyway: it isn't too hard to figure out for someone who
really wants to, but if that someone isn't motivated enough to figure it
out, it is unlikely to be a good idea:

The problem is in various shortcuts apt uses to avoid costly
hashsum calculations: if it has an 'old' Release file on disk and is
asked to download the Contents file for amd64, it opens the old Release
file and compares it with the already downloaded new Release file. If the
hashsums match, apt doesn't bother asking the server for the Contents
file, as even in the best case it would just get a "you already have the
newest file" response from the server (and some servers do not support
this, so they send us the entire content again, which we have to deal
with and in the end discard – a total waste of time and bandwidth).
The same happens for all the other files, like Packages, Sources,
Translations, …

If you disable one of these indexes temporarily (and disable list
cleaning) [and yes, that is the "solution" blueprint], the next run in
which the index is enabled again will believe that the index file is as
up-to-date as the Release file is – which isn't correct, so after an
update you don't have the latest of all files but some strange
mixture until the files change again. That could take a while and
confuse the heck out of pdiff – and if this not-really-up-to-date state
happens to affect security updates you are unprotected for longer than
necessary…


Besides the obvious fix (calculating hashes all the time, which kills
performance) we could invent other ways of dealing with this, but that
is a bunch of design and code work – which, personally, I would like to
avoid if there isn't a good reason for it, as that time could be invested
in other, more pressing bugs/features.



The rest, others have hopefully answered to your satisfaction.
I just have to add that while conceptually similar, it isn't done via
hooks; rather, src:apt >= 1.1~ allows other packages to declare that they
want libapt-based front-ends to acquire files for them. apt-file is just
the first example. DEP11 software is likely to follow 'shortly'.
Technical details can be found here:
https://anonscm.debian.org/cgit/apt/apt.git/tree/doc/acquire-additional-files.txt
(please, before anyone comments on this, read the doc first – it's very
likely that the problem you see with it is not only covered in the docs
but also doesn't exist in reality due to practical limitations of what
apt can be told to acquire).
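
The gist of the mechanism, heavily abbreviated and from memory – the
linked document is authoritative: a package ships a snippet in
/etc/apt/apt.conf.d/ declaring an additional index target, roughly along
the lines of

| Acquire::IndexTargets::deb::Contents-deb {
|     MetaKey "$(COMPONENT)/Contents-$(ARCHITECTURE)";
|     ShortDescription "Contents-$(ARCHITECTURE)";
|     Description "$(RELEASE)/$(COMPONENT) $(ARCHITECTURE) Contents (deb)";
| };

and front-ends can afterwards locate the files apt downloaded for them
via 'apt-get indextargets'.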

The point is that apt-file (and all other tools requiring Debian
metadata) can drop all of the code responsible for getting files from
a Debian repository over potentially very hostile channels, which isn't
an easy task – even if you ignore the security aspects as apt-file did –
and instead outsource it to libapt, which needs to do that anyway and has
decades of features and experience with it already.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature

