Bug#1058937: /usr-move: Do we support upgrades without apt?

2023-12-21 Thread David Kalnischkies
On Thu, Dec 21, 2023 at 02:42:56PM +, Matthew Vernon wrote:
> On 21/12/2023 09:41, Helmut Grohne wrote:
> > Is it ok to call upgrade scenarios failures that cannot be reproduced
> > using apt unsupported until we no longer deal with aliasing?
> 
> I incline towards "no"; if an upgrade has failed part-way (as does happen),
> people may then reasonably use dpkg directly to try and un-wedge the upgrade
> (e.g. to try and configure some part-installed packages, or try installing
> some already-downloaded packages).

You can configure half-installed packages, no problem; this is about
unpacking (which is the first step of an install, where only Conflicts
and Pre-Depends matter, if you are not too deep into dpkg vocabulary).


The "try installing" part is less straightforward. In general, you
are running into dpkg "features" (e.g. not handling pre-depends) or
into dpkg bugs (e.g. #832972, #844300): In the best case your system
state becomes a bit worse and hence harder to "un-wedge". In the worst
case a maintainer script has run amok as nobody tested this.
But yeah, most of the time you will indeed be lucky and hence come to
the unreasonable conclusion that it's reasonable to call dpkg directly.


Anyway, if your upgrade failed part-way, you are probably in luck given
that it's more likely the upgrade failed in unpack/configure than in
removal – so if you aren't too eager to install more packages by hand
but limit yourself to e.g. (re)installing the ones that failed, you are
fine, as apt will have removed the conflictors already for you (or
upgraded them, if that can resolve the conflict).


But let's assume you are not:
As you are driving dpkg by hand you also have the time to read what it
prints, which in the problematic case is (as exemplified by #1058937):
| dpkg: considering removing libnfsidmap-regex:amd64 in favour of libnfsidmap1:amd64 ...
| dpkg: yes, will remove libnfsidmap-regex:amd64 in favour of libnfsidmap1:amd64
(and the same for libnfsidmap2:amd64 as well. If your terminal supports
 it, parts of these messages will be in bold.)

Note that the similar "dpkg: considering deconfiguration of …" which is
the result of Breaks relations is not a problematic case.

(Also note that this exact situation is indeed another reason why
 interacting with dpkg by hand is somewhat dangerous as you might not
 expect packages to be removed from your system while you just told
 dpkg to unpack something… just remember that the next time you happen
 to "dpkg -i" some random deb file onto your system.)

That message is of course no hint that a file might have been lost due
to aliasing if you don't know that this could be the case, but on the
upside it is not an entirely silent file-loss situation either. We could
write something in the release notes if someone happens to read them AND
also encounters this message.


Query your memory: Did you encounter this message before? Nothing in
the /usr-merge plan makes it particularly more likely for a user to
encounter, and not all of the encounters will actually exhibit the file
loss. So if you haven't – and I would argue that most people haven't –
there is a pretty good chance you won't have a problem in the future
either…


So, in summary: Yes, there are theoretical, relatively easy ways to
trigger it with dpkg directly. That isn't the question. The question is
whether a real person who isn't actively trying to trigger it is likely
to run into it by accident (and/or whether such a person can even
reasonably exist), so that we have to help them by generating work for
many people and potentially new upgrade problems for everyone – or
whether we declare them, existing or not, a non-issue at least for the
upgrade to trixie.


And on a side note: I would advise reconsidering interacting with dpkg
too casually – but luck is probably on your side in any case.


Best regards

David Kalnischkies


signature.asc
Description: PGP signature


Bug#1058937: /usr-move: Do we support upgrades without apt?

2023-12-21 Thread David Kalnischkies
On Thu, Dec 21, 2023 at 03:31:55PM +0100, Marc Haber wrote:
> On Thu, Dec 21, 2023 at 11:19:48AM -0300, Antonio Terceiro wrote:
> > On Thu, Dec 21, 2023 at 10:41:57AM +0100, Helmut Grohne wrote:
> > > Is it ok to call upgrade scenarios failures that cannot be reproduced
> > > using apt unsupported until we no longer deal with aliasing?
> > 
> > I think so, yes. I don't think it's likely that there are people doing
> > upgrades on running systems not using apt.
> 
> Do those GUI frontends that work via packagekit or other frameworks
> count as "using apt"?

I explained that in full detail in my mail to the pause-thread:
https://lists.debian.org/debian-devel/2023/12/msg00039.html

In short: Helmut's "apt" (my "APT") includes everything that uses libapt.
That is apt, apt-get, python-apt, aptitude, synaptic, everything based
on packagekit, …

I know of only cupt and dselect which don't count, but I have some
suspicion that they would work anyhow – IF you don't run into other
problems with them, such as them not implementing Multi-Arch.


So this thread is really about:
How much are people REALLY fiddling with dpkg directly in an upgrade,
and can we just say it's unsupported – because, at least that is my view,
in practice nobody does it and it's therefore also completely untested.

Case in point: We have this thread not because someone found it while
working with dpkg directly even though they had potentially years, but
because Helmut ended up triggering an edge case in which apt interacts
with dpkg in this way, and only after that did people look for how to
trigger it with dpkg, because triggering it with apt is hard (= as Helmut
checked, no package (pair) in current unstable is known to exhibit the
required setup).

(I will write another mail in another subthread about the finer details
 of what interacting with dpkg in an upgrade means and what might be
 problematic if you aren't careful – in general, not just with aliasing)


Best regards

David Kalnischkies




Bug#1052804: ycmd: FTBFS: make[1]: *** [debian/rules:28: override_dh_auto_test] Error 1

2023-11-03 Thread David Kalnischkies
Control: forwarded -1 https://github.com/ycm-core/ycmd/issues/1718

Hi,

the problem is that the upstream code doesn't support Unicode 15.1 yet,
which introduced a new word-break rule. They embed the code (for
Unicode 13), but for Debian I opted to drop the embed and use whatever
unicode-data ships to rebuild the files, which bites us in the rear end
here – as it kinda should.

I reported it upstream and they already have a PR implementing the
needed support: https://github.com/ycm-core/ycmd/pull/1719
As said there, it works fine for me in local tests, so this issue
should be resolvable in the near future.


vim-youcompleteme (which is an rdepends of ycmd) is currently affected
by a regression in vim though (https://bugs.debian.org/1055287), which
makes updating ycmd with the unmerged upstream patch not that useful for
now (as it would never migrate – or, well, battle with vim for a spot).

So, I am currently waiting for either vim or upstream to act first while
dealing with other housekeeping things (clang-17 support) in the
meantime; so much as a status report in case anyone wonders.


Best regards

David Kalnischkies




Bug#1052058: apt: refuses to downgrade itself to a version that works on the system

2023-09-19 Thread David Kalnischkies
On Mon, Sep 18, 2023 at 07:56:09PM -0400, Philippe Grégoire wrote:
> As such, I can no longer install or remove packages since my system is 
> partitioned. I'd like to point out that the above link does not specifically 
> mention disk partitioning, but only how files are placed on disk.
> 
> Obviously, re-partitioning the system is something I'd like to avoid at the 
> moment.
> 
> Thinking about it, in the long term, due to the merge and how packagers are 
> expected to be able to address files (e.g. /bin/sh vs /usr/bin/sh), I don't 
> see any other way than re-partitioning. Re-partitioning will be done by a 
> future me.

It is sort-of the point of /usr-merge that /usr can be another partition,
instead of having packaged content split over multiple subdirectories
of / which could all be individual partitions but only really work if
you mount them all anyhow… (yes, /etc, /var and all that jazz. People
have opinions on that, too. Let's focus on the problem we already have
now instead of piling additional ones on top).


What should be the case is that /usr is a directory and e.g. /bin is
a symlink to /usr/bin. That is what the apt code is trying to check in
a somewhat roundabout way with inode as both /usr and /usr/bin should
point to the same real directory occupying the same inode.


That should be the case even if you have /usr on a different partition.
Are you sure your system is properly merged – as in you haven't unmerged
it with e.g. dpkg-fsys-usrunmess or prevented the merge from being
executed automatically by the installation of usrmerge?

In either case, it is probably better to contact a user support list
to resolve your issue.


> P.S. I'm uncertain why /lib isn't also merged with /usr/lib

It is? The code even checks for /sbin, /bin and /lib – but that isn't
all that /usr-merge entails, and APT doesn't really want to be checking
for everything. Just for some easy-to-verify truths to ensure nothing
went south… like it seems to have happened on your system.


Best regards

David Kalnischkies




Bug#1042089: src:golang-defaults: fails to migrate to testing for too long: triggers autopkgtest failure on armhf

2023-07-26 Thread David Kalnischkies
On Wed, Jul 26, 2023 at 04:48:55PM +0200, Paul Gevers wrote:
> [2]. Hence, I am filing this bug. The version in unstable triggers an
> autopkgtest failure in vim-youcompleteme on armhf.

This used to be the case for armel, too, until today, when gccgo-13
13.1.0-9 migrated to testing – the day before, it still failed with
13.1.0-6. armhf seems to have followed just now with a passing grade:
https://ci.debian.net/packages/v/vim-youcompleteme/testing/armhf/


golang-defaults should hence migrate shortly *fingers crossed*.
If so, as vim-youcompleteme maintainer, I am quite happy, as it's kinda
scary to block the defaults meta package of a programming language
you know nothing about with your leaf package…


Best regards

David Kalnischkies




Bug#1024457: "apt changelog" fails to display the complete changelog

2022-11-19 Thread David Kalnischkies
Control: severity -1 important

On Sat, Nov 19, 2022 at 10:42:01PM +0200, Adrian Bunk wrote:
> Severity: serious

Well, no. You haven't provided a reason and I fail to find an obvious
one, as apt's key functionality is hardly affected by a changelog
sub-command that does not work (by default)… could Debian release with
this "bug" unfixed? Certainly, given that it might (and likely will)
fail for other reasons anyway, like the online repository not actually
providing changelogs.

That said, severity hardly makes a difference for us anyhow, as
assigning severity doesn't magically assign free time as well (at
"best", higher values have a negative effect), so grave or wishlist
doesn't really matter; but I suppose important is closer to your hope
of getting this fixed before the release somehow.


> debhelper recently started removing older changelog entries from
> binary packages, but the way to get them with apt does not work:

https://salsa.debian.org/apt-team/apt/-/merge_requests/261


Best regards

David Kalnischkies




Bug#1008759: marked as pending in apt

2022-04-01 Thread David Kalnischkies
Control: tag -1 pending

Hello,

Bug #1008759 in apt reported by you has been fixed in the
Git repository and is awaiting an upload. You can see the commit
message below and you can check the diff of the fix at:

https://salsa.debian.org/apt-team/apt/-/commit/889462ec33480940a355589b0ae57987f17a86e2


Recognize Static-Built-Using and order it below Built-Using

dpkg added a new field (see there for details) which breaks our
testcases due to an unknown field. apt doesn't make use of the field,
but we can at least order it nicely in output we generate.

References: dpkg commit 16c412439c5eac5f32930946df9006dfc13efc02
Closes: #1008759


(this message was generated automatically)
-- 
Greetings

https://bugs.debian.org/1008759



Bug#995115: /usr/bin/ruby: symbol lookup error: /lib/x86_64-linux-gnu/libruby-2.7.so.2.7: undefined symbol: rb_st_numhash

2021-09-26 Thread David Kalnischkies
Control: reassign -1 apt-listbugs 0.1.35

On Sun, Sep 26, 2021 at 03:27:19PM +0200, xiscu wrote:
> Justification: renders package unusable

Which package is unusable?


> [...]
> 274 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
> Need to get 0 B/745 MB of archives.
> After this operation, 313 MB of additional disk space will be used.
> Do you want to continue? [Y/n]
> /usr/bin/ruby: symbol lookup error: /lib/x86_64-linux-gnu/libruby-2.7.so.2.7: 
> undefined symbol: rb_st_numhash
> E: Sub-process /usr/bin/apt-listbugs apt returned an error code (127)
> E: Failure running script /usr/bin/apt-listbugs apt

The script exits unsuccessfully and as such the action is stopped.
apt hence works as intended; it is "just" not intended that the called
scripts crash in such ways, but that is a bug in those scripts or
their interpreters, not in apt itself.


> trying to deinstall apt-listbugs results on the same problem.
> trying to upagrade apt (listbugs) first, results in:
> 
> bin# apt-get install -t sid apt

If you want to upgrade apt-listbugs first you will have to use that
package name, not apt; apt doesn't contain apt-listbugs.

That said, there is no new version of apt-listbugs at the moment, so
there is nothing to upgrade to. Seems like a ruby upgrade broke it, but
I don't know if it is intended breakage (= to be fixed in apt-listbugs)
or unintended (= somewhere in ruby) or something in between. That is for
someone to investigate who has an idea about ruby, hence reassigning
down the chain.

You may want to add which versions of ruby packages and apt-listbugs are
involved.


Best regards

David Kalnischkies




Bug#984966: marked as pending in apt

2021-03-11 Thread David Kalnischkies
Control: tag -1 pending

Hello,

Bug #984966 in apt reported by you has been fixed in the
Git repository and is awaiting an upload. You can see the commit
message below and you can check the diff of the fix at:

https://salsa.debian.org/apt-team/apt/-/commit/0d25ce3d466ecddea02d171981f011f7dbf95e08


Harden test for no new acquires after transaction abort

If a transaction is doomed we want to gracefully shutdown our zoo of
worker processes. As explained in the referenced commit we do this by
stopping the main process from handing out new work and ignoring the
replies it gets from the workers, so that they eventually run out of
work.

We tested this previously by checking if a rred worker was given work
items at all, but depending on how lucky the stars of the machine
working on this are, the worker could have already gotten work before
the transaction was aborted – so we tried this 25 times in a row
(f35601e5d2). No machine can be this lucky, right?

Turns out the autopkgtest armhf machine is very lucky.

I feel a bit sorry for feeding grep such a long "line" to work with, but
it seems to work out. Porterbox amdahl (who is considerably less lucky;
had to turn down to 1 try to get it to fail sometimes) is now happily
running the test in an endless loop.

Of course, I could have broken the test now, but it's still a rather
generic grep (in some ways even more generic) and the main part of the
testcase – the update process finishes and fails – is untouched.

References: 38f8704e419ed93f433129e20df5611df6652620
Closes: #984966


(this message was generated automatically)
-- 
Greetings

https://bugs.debian.org/984966



Bug#983014: manpages-de: Fails to upgrade from 4.2.0-1 to 4.9.1-5: This installation run will require temporarily removing the essential package manpages-de:amd64 due to a Conflicts/Pre-Depends loop.

2021-02-18 Thread David Kalnischkies
On Thu, Feb 18, 2021 at 08:12:06AM +0100, Axel Beckert wrote:
> I though have no idea why apt regards manpages-de as
> essential. X-Debbugs-Cc'ing the APT developers at

Does the output of
$ apt rdepends manpages-de --important
include more than task-german and parl-desktop-eu?

In particular, does it include a local metapackage which is tagged
Essential/Important:yes?

(Important packages are considered like Essential, which here is
 probably wrong as we shouldn't make ordering guarantees for it… but
 then, I guess that is part of why libgcc ↔ libgcc_s works now. And
 btw: the command-line flag stands for "important dependencies", which
 in this case means Depends and Pre-Depends; it has nothing to do with
 the Important flag or the other billion things also called important)


If not, have a look in /var/backups for a dpkg.status file from before
your (failed) upgrade, so that we might be able to reproduce this.


Best regards

David Kalnischkies




Bug#982716: [Aptitude-devel] Bug#982716: aptitude: FTBFS: tests failed

2021-02-13 Thread David Kalnischkies
On Sat, Feb 13, 2021 at 06:11:03PM +0100, Lucas Nussbaum wrote:
> Relevant part (hopefully):
[…]
> > FAIL: cppunit_test
[…]
| aptitude_resolver.cc:680 ERROR - Invalid hint "-143 aptitude <4.3.0": the action "-143" should be "approve", "reject", or a number.

The test uses aptitude_resolver::hint::parse in
src/generic/apt/aptitude_resolver.cc, which in line 676 uses StrToNum
to parse the hint; this fails with apt >= 2.1.19 as StrToNum refuses
to parse negative numbers now.

The return type of StrToNum is unsigned and it uses strtoull
internally, which also works on an unsigned long long (ull), but
defines that for negative numbers "the negation of the result of the
conversion" is returned… which tends to be unexpected (negative numbers
played a minor role in e.g. CVE-2020-27350).

You could convert to using strtoul directly to replicate the old
behaviour, with something like

| char * endptr;
| auto score_tweaks = strtoul(action.c_str(), &endptr, 10);
| if (*endptr != '\0')

(ideally you would check errno for failures of the conversion, but
 StrToNum wasn't doing that either in the past, so to replicate bugs…
 it does do a few other things instead, but they are not relevant here
 aka: it was an odd choice from the start and the only place it is used
 in aptitude)

BUT a bit further down the number is reinterpreted as a signed int,
which suggests to me that aptitude wasn't actually expecting to get
a potentially huge positive value for a negative number, but would in
fact prefer to get a negative number if it parsed one, and it just
didn't matter for this test either way (and negative hints from users
are probably not that common either).

So I guess what is intended here is more like:
| char * endptr;
| errno = 0;
| auto score_tweaks = strtol(action.c_str(), &endptr, 10);
| if (errno != 0 || *endptr != '\0')


Note that I have not checked my hypotheses. (The code samples are also
typed in my mail client, so I have probably included some typos letting
them not even compile.)


Sorry for this breaking change this late in the cycle! If it's any
consolation, I am also angry that I not only failed to finish the
fuzzing project in time, but also failed to salvage the more useful
bit in a timely fashion.


Best regards

David Kalnischkies




Bug#982281: reportbug gets source package name wrong

2021-02-08 Thread David Kalnischkies
Hi,

(Disclaimer: I am not a pythonista, but a libapt c++ dev)

(Disclaimer 2: Changed to the cloned bug report 982281 as it's the one
 against reportbug, even if the request message didn't end up there, so
 as not to derail the udeb thread further. Subject changed accordingly.)


On Mon, Feb 08, 2021 at 01:24:36PM +0100, Nis Martensen wrote:
> What happens here is that the binary package lookup in the apt cache fails 
> (probably because this is an udeb package), but the SourceRecords().lookup() 
> with the given package name succeeds. Shouldn't this only succeed if there is 
> a source package with this name? The documentation does not mention binary 
> packages in the section on SourceRecords().lookup().

The documentation does not seem to indicate this, but in the bindings
code python/pkgsrcrecords.cc:175 you can read "Records->Find(Name,
false)". The 'false' is setting a parameter SrcOnly (which is optional
and always defaulted to false since the dawn of time… 1999).

The parameter controls, as the name might suggest, whether the lookup
happens only on the source package name field (true) or whether the
Binaries field is additionally inspected for inclusion of the name
(false).

I do not see a way in the python code to control this parameter,
but I haven't looked too closely either.


> I'm wondering whether reportbug is using apt's python bindings incorrectly. 
> What would be the correct way to obtain the source package name of an udeb 
> package using python3-apt?

I presume something like:

srcrecords = apt.apt_pkg.SourceRecords()
if srcrecords.lookup(package):
    return srcrecords.package

Would be more correct as not finding the source package name of an udeb
can't really be the intention of the code (which would be the result if
we somehow managed to set the parameter to true).

It's not too common that the first try block doesn't find the name
though, so it is likely just a matter of nobody noticing before…
(how often is someone really reporting a bug against a binary package
 not built for the architecture or sources.list components of the
 reporter's machine).


Best regards

David Kalnischkies




Bug#964475: dpkg breaks apt autopkgtest: dpkg: error: unknown option --foreign-architecture

2020-07-07 Thread David Kalnischkies
Control: tags -1 + pending

Hi,

On Tue, Jul 07, 2020 at 08:47:21PM +0200, Paul Gevers wrote:
> Currently this regression is blocking the migration of dpkg to testing
> [1]. Due to the nature of this issue, I filed this bug report against
> both packages. Can you please investigate the situation and reassign the
> bug to the right package?

We (apt & dpkg maintainers) talked about this on IRC already and
I pushed a workaround (of sorts) to apt.git [0] for this.


The underlying problem is that the hookscript pkg-config installs does
not support DPKG_ROOT, while the newer version of dpkg does support it
(more) now, reporting architectures from the DPKG_ROOT rather than the
host system as it previously did (arguably incorrectly). Based on that,
pkg-config-dpkghook attempts to create (or remove) symlinks in /usr/bin
for architectures added/removed in the DPKG_ROOT, which it isn't
allowed to do (as it isn't run as root), and fails.

APT's testcases detect these failures in "dpkg --add-architecture" and
assume they are working with a previous version of a dpkg Multi-Arch
implementation used e.g. in older Ubuntu releases, resulting in the
tests eventually using command-line options unknown to the dpkg
version actually in use here.

The workaround detects these failures and ignores them now – as the
hookscript is post-invoke, the architecture is added, so we got what
we were asking for – but ideally #824774 in pkg-config would be resolved.

(Ignoring that dpkg should also probably not read /etc/dpkg.cfg.d files
 from the root system, but that might be an even longer endeavour)


Best regards

David Kalnischkies

[0] 
https://salsa.debian.org/apt-team/apt/-/commit/3fe1419433f195d57b948b100b218cf14a2841d0




Bug#953875: runit - default installation can force init switch

2020-04-27 Thread David Kalnischkies
Hi,

On Sat, Apr 25, 2020 at 12:56:48AM +0200, Lorenzo wrote:
> I have a question:
> let's say that I change the recommends, like the following
>
> Recommends: runit-systemd | runit-sysv | runit-init
>
> What happens in a given system if the "runit-systemd" package is not
> in the apt index (the package is not in the repository)?

apt will ignore an or-group member if the candidate version is not able
to satisfy the dependency and look at the next one – as an unknown
package has no candidate, it is ignored. The same is true for a known
package which is pinned to a negative value. Or if the dependency is
versioned (>= 2) but the candidate is only version 1.


> What happens if the 'runit-systemd' package exists, but one of its
> dependency, like, for instance, 'systemd-sysv' does not exists in
> the repository?

Short answer: Strange things.

Currently apt will see that the first or-group member is itself
a satisfier and hence will try to install it. That this will not work
out is realized too late to do anything about it. The Recommends will
hence not be satisfied at all! (which is not an error as Recommends are
not *required*, but apt should of course try a little harder…)

That is a rabbit hole I went into over the weekend [0], so in the
future apt should behave better here, noticing earlier that the
dependencies of the or-group member will not be satisfiable and looking
at the others just like in the first question – aka: "apt will work as
expected™".


> I'm asking because there are downstreams (like Devuan) that blacklist
> systemd packages in their archive

If they are rebuilding packages it might make sense to change the order
of the or-group based on which distribution you are building for.
See dpkg-vendor and deb-substvars. In apt we are e.g. using this to
depend on the correct -archive-keyring package for the distribution we
are built for; there are probably easier/better examples though.


Best regards

David Kalnischkies

[0] https://salsa.debian.org/apt-team/apt/-/merge_requests/117




Bug#953875: runit - default installation can force init switch

2020-04-24 Thread David Kalnischkies
Control: reassign -1 runit 2.1.2-35

Hi,

On Fri, Apr 24, 2020 at 01:54:42PM +0200, Lorenzo Puliti wrote:
> Control: reassign -1 apt 2.0.2
> 
> Dear apt maintainers,

Tip: Your message does not automatically reach the maintainers of
a package you reassign a bug to; only the 'Processed' notification
might, and that is easy to miss, so you should CC them manually e.g.
with packagen...@packages.debian.org – see also dev-ref §5.8.3 (2).


> Please implement a logic where apt parse the alternative recommends in order 
> and pick up
> the first that does not require a removal of any installed package in the 
> system. If none
> of the listed recommends can be installed without removing any packages, then 
> I think it's
> ok to pick the first listed.

Such heuristic ideas come up every once in a while, but sadly they do
not work in the generic case. Your specific heuristic would currently be
near impossible to implement in apt, and it isn't easy to answer who it
is that "requires a removal": If A conflicts with B and you want to
install A, A seems to be the culprit, right? Well, what if you want to
install B? Is it B's fault that A has a conflict on it? What if C is an
M-A:same package where i386 wasn't built yet but amd64 is, and B depends
on the newer version? We will need to remove C:i386 if we want B, but
that isn't B's fault, is it?

A fan favourite is the heuristic "needs the least new packages". Seems
reasonable, right? Yes, until you consider that near no-op packages like
discarding MTAs or memory-only settings managers are the best choice
then.

Apart from that, the general problem with such heuristics is that we
get more "magic", meaning that you need to understand and keep a lot
more state in your head to follow what is going on. Many people don't
understand what apt is doing now; what happens if we make it even
harder…


What you want to express here is a conditional dependency:
if A is installed: install B (else C).

We don't have this kind of expression and I highly doubt we ever will,
as it gets funny rather quickly as well (if A is later installed, do
we need to install B now? Of course not, you might think, but look at
the recommends discussed here: It could make sense…).


That said, we might eventually solve certain sub-cases (think: language
packs, virtual package provider choice, …) but that is a LONG way off –
certainly nothing you could put this issue on hold for. Also: Even if
we had it tomorrow, there would still be the issue of apt/stable not
supporting it, so we would still have the exact same problem in
stable→stable upgrades. So, you have to resolve this one way or the
other, and I am therefore handing this bug back to you.


> Note that this bug is not easy to fix on runit side:
> * changing the order of the recommends will cause the same issue for sysvinit 
> users

Note that I have *NO IDEA* about runit, init systems or the various
dependencies surrounding them, so I have no idea whether runit is
a package a user is expected to install (in which case choosing the
right one is relatively easy) or whether it is e.g. a common dependency
of other packages, in which case that seems less okay.

I am probably stepping on a few landmines now, but the first choice in
an or-group should be the least surprising option for most users.
If you aren't sure, that is usually achieved by going the route of
"the default install needs this one", simply because everyone who isn't
on the default is more experienced and can deal with choosing
non-default options in related cases as well.

Also, if there is a reasonable way of making use of runit without either
of these recommends installed, then the recommends is indeed wrong.
Debian policy is quite clear that Recommends isn't a "just in case",
but "for everyone, except unusual installations".

It MIGHT (remember, I have no idea) also make sense to integrate with
all init systems by default instead of asking the user to pick one via
the recommends, and do something about it at runtime.


Anyway, as there is nothing apt can really do about this I have to
reassign back even if I dislike bug-pingpong. Feel free to ask if you
have come up with a specific solution for your problem and want some
feedback from us regarding apt, but be prepared to explain a lot in your
question as apt developers are "only" (FSVO) experts in apt, not in the
dependency trees of all of Debian (you may consider us the rubber duck
debugging of dependency resolution). 


Best regards

David Kalnischkies




Bug#945911: APT leaks repository credentials

2019-12-01 Thread David Kalnischkies
> 
> | deb http://public.mirror.example/debian buster main
> 
> In auth.conf:
> 
> | machine internal.server.example
> | login top
> | password secret
> 
> Now, when you request some package list or something from
> public.mirror.example, public.mirror.example could redirect you to
> http://internal.server.example/whatever/file, and APT would then treat that
> file as if it were the packages list that is found on public.mirror.example
> - I suppose?

Welcome to the internet and "man in the middle" (MITM) attacks. You are
a couple of years too late for that; see the "https everywhere" efforts
around the internet. Note though that whatever apt got still needs to
pass verification, so whatever you got is still signed by a key you
trust. apt also has a couple of other checks in place which will likely
trigger, like metadata mismatches compared to previously acquired data.

No idea though what you mean by "leaking content". The user is quite
obviously allowed to see the content as they have the password for it,
so nothing is leaked to the user nor "used in a context it wasn't meant
for" … if valid repository data isn't intended to be downloaded in this
very context, I don't know what is.



So, please explain in detail what attacks you envision which aren't
inherent in the used protocols (http) and targeting static content.
It is also a good idea to first understand what apt actually does in an
update command and the multitude of checks it deploys, as many things
are already covered by these that remain a huge problem for other, more
generic internet clients.

So far I see only very generic guesses and maybe-ifs which are not
actionable and very much not a release-critical bug – mostly because
I don't see an actual bug being described so far…
(Also, /if/ you found a security problem it is a good idea to disclose it
more responsibly to the security teams involved so they can coordinate
fixes, patches and uploads. If you don't do that you are forcing the
hand of the involved people, which tends to cause more aggressive and
crisp responses as they are forced to deal with things NOW.)


Best regards

David Kalnischkies




Bug#931566: Don't complain about suite changes (Acquire::AllowReleaseInfoChange::Suite should be "true")

2019-07-21 Thread David Kalnischkies
On Fri, Jul 12, 2019 at 04:52:36PM +0200, Julian Andres Klode wrote:
> On Tue, Jul 09, 2019 at 03:31:07PM +0200, David Kalnischkies wrote:
> > On Mon, Jul 08, 2019 at 12:19:36PM +0200, Julian Andres Klode wrote:
> > Anyway, that apt is enforcing the metadata isn't changing is a security
> > feature in that if you don't check an attacker can reply to a request to
> > foo-security with foo – perfectly signed and everything so apt has no
> > chance to detect this and the user believes everything is fine – even
> > seeing files downloaded from -security – while the user never receives
> > data for -security. foo has no Valid-Until header so the attacker can
> > keep that up for basically ever. Compared to that serving old versions
> > of -security itself is guarded by Valid-Until. Not serving any data has
> > basically the same result, but the errors involved might eventually
> > raise some alarm bells. So that is a good strategy to keep you from ever
> > installing security upgrades.
> 
> This should not happen silently, as there'll be a warning that the
> name of the distro in the sources.list entry matches neither Codename
> nor Suite.

Codename and Suite match for Debian and -security:
http://deb.debian.org/debian/dists/buster/Release
http://security.debian.org/debian-security/dists/buster/updates/Release

They differ only in Label (and Version) and Components. As such I can
easily hand you the Release file for Debian if you ask for security.

Components has its own verification, but as both styles ("updates/main"
and "main") exist in real-world scenarios we can't be that strict about it.

Version is a fun one to verify as that would imply enforcing a specific
version scheme.

Yes, they have different keys, but while the option exists nobody really
configures signed-by per sources.list entry (or per Release file or …)
for various reasons. !33 wants to help with that, but will/does depend
on the metadata to match key entries to sources.list entries.
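(For the record, such a per-entry key binding looks like this in sources.list – the keyring path is the one Debian's keyring package ships, but treat the exact filename as an assumption:)

```
# Only signatures from this specific keyring are accepted for this entry,
# so a perfectly-signed Release file from another repository is rejected:
deb [signed-by=/usr/share/keyrings/debian-archive-keyring.gpg] http://deb.debian.org/debian buster main
```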


bullseye has that "fixed" as Suite & Codename will vary for both
archives, but there weren't only positive replies about that and I am
not so arrogant as to call all other repositories "broken" just because
they don't exactly copy the choices Debian made – and even if they did
they would be a candidate for the flipping.


I guess Julian was thinking of the Docker example I had in a later
message, which is caught by having the wrong suite compared to the
sources.list; but I included that example mainly to show that Debian's
interpretation of what Suite and Codename are might not be applicable to
other repositories – for Docker, it makes perfect sense to say that
"buster" is the suite they release stuff for. It is at least quite
obviously not the codename for their product.


> We also can't do a rollback attack from current testing to an older
> testing either, as the Date is not allowed to roll backwards.

Assuming I don't catch you on your first update I just have to pick
a stable release (update) after the last security update you have
to switch you over to never receiving security updates again (perhaps
staling -security in the bounds of Valid-Until until stable has caught
up with the next point release).

(Me being a MITM or an evil [https-]mirror operator of course)


> I'm not convinced we are adding any actual security here - we should
> just upgrade the warning for name mismatch between sources.list and
> Release file to an error for that.

So, to be clear, you think about reverting the entire thing for all
fields – which is considerably more than what this bugreport is asking
for.

It is a valid option of course, but personally I would like to hear
reasons for allowing Origin/Label to change – so far I have only seen
complaints about (mostly) Suite and (some) Codename [and for the record,
I also got some positive replies for both, so regardless of the default,
I would at the very least like to retain the option] – as that
jeopardises stuff like !33 as well.


> > A) For a user with "debian stable" in sources.list the change of the
> > Codename from buster to bullseye is a giant leap which should not
> > be done carelessly or even automatically in the background.
> 
> That's true, but leaving the user stranded without updates is not
> helpful either.

I would personally consider no upgrades a better scenario than
not-for-me upgrades, but then I wouldn't classify a failing automatic
process as "stranding" given that human intervention is needed in both
scenarios.


> > B) For a user with "debian buster" in sources.list the change of the
> > Suite from e.g. stable to oldstable is an important event as well; not
> > right at this moment as there is a grace period, but security support is
> > about to end and plans should be set in m

Bug#931566: Don't complain about suite changes (Acquire::AllowReleaseInfoChange::Suite should be "true")

2019-07-10 Thread David Kalnischkies
On Tue, Jul 09, 2019 at 06:52:58PM +0200, Julien Cristau wrote:
> I think overall what you're trying to do here (the whole "notify the
> user they're out of date" thing) does not belong in apt.  IMO it belongs
> in higher level tools that are going to heavily depend on the use case
> and so there's not really a good generic answer you can come up with at
> the lowest (apt) level.

Well, all user-facing clients use libapt for their equivalent of "apt
update". So your one-off python script, apt, apt-get, aptitude,
synaptic, all software centers, unattended-upgrades, apt-file … they
all share the code – and the data downloaded by one of them is shared
by them all.

It is the choice of the frontend performing the update to present
encountered problems, and apt{,-get} chooses to push the messages to the
usual list at the end of the run as is common for apt. It could just
as well do nothing or open a blinking popup dialog, whatever seems
sensible both for the user using the current frontend AND for the other
frontends – so yes, there needs to be a good generic default answer at
the lowest level as we would otherwise need to tell users to pick ONE
frontend and use that exclusively for the rest of their life.

Earmarking potentially dangerous data in the hope that all frontends will
implement proper safety procedures while dealing with it doesn't work.
That was quite (in)visible for unauthenticated packages and I am very
happy we have mostly gotten rid of it by now.


apt{,-get} could choose to implement checks for whether the changed
metadata breaks its configuration – and we might do so if I am really
bored – but that would indeed be lots of code and we would still have
the problem that the other frontends might be broken by the new data.


#931524 and #931649 alone show that user config is easily broken & that
users frequently miss obvious and well-known changes to repositories. Or
did we already forget the recent outcry as oldoldstable left the
mirrors? And that is the main archive of your distribution. I doubt
users are following changes in 3rdparty repos they are using any closer.

I agree that apt{,-get} isn't the best place to keep you posted about
this stuff, but libapt is the best (because it is the only) place to
download the data all frontends can use and if present apt{,-get} should
make use of it as users expect it to help them. Even better if there are
higher level tools making better use of the data!


BUT this isn't about notifying users or breaking configuration even if
that is the most visible in practice. You don't have to look particularly
far for an opportunity to create disaster (at least in the eyes of the
user) if you allow an attacker to change one metadata field:

http://deb.devuan.org/devuan/dists/stable/Release
http://deb.devuan.org/merged/dists/stable/Release

The two release pockets differ by Suite only. [Okay, they differ by
Label, too, so apt would catch that – but that looks more like an
accident than a deliberate choice. And how much impact can a Label
change have… that should be automatic, too! btw "Debian stable" and
"Debian stable-security" differ by Label only in buster].
(not an endorsement btw, just a semi-random pick from the census)

Or because everyone loves docker lets look there:
https://download.docker.com/linux/debian/dists/buster/Release
https://download.docker.com/linux/debian/dists/stretch/Release
https://download.docker.com/linux/raspbian/dists/buster/Release
https://download.docker.com/linux/ubuntu/dists/artful/Release
https://download.docker.com/linux/ubuntu/dists/zesty/Release
These indeed differ by Suite only and are therefore freely exchangeable
if we allow unsanctioned Suite changes.

Having Oracle in your trusted keyring? Great! Then I will serve you this
the next time you request the Debian unstable repository:
https://oss.oracle.com/debian/dists/unstable/main/binary-i386/Release
[caught by codename – or more realistically by signed-by, except nobody
uses that. MR !33 to the rescue, but this requires correct and relatively
stable metadata as well]

The last one is just literally the first hit on duckduckgo for '"Origin:
Debian" Release' for me. It isn't particularly hard to find others with
better usability factors.

So if the proposed solution is over-engineered I am all ears for
alternatives which deal with these issues.


Best regards

David Kalnischkies

P.S.: Pinning and other actions in libapt by codename was the first big
feature I implemented a decade ago. So in Debian terms it isn't that old
and indeed lots of documentation still uses suite names for this – and
in many cases that isn't wrong. Other frontends got that feature even
later – assuming it is implemented at all by now. Reading mails worded
as if codenames were a universal truth is quite funny in that context.




Bug#931566: Don't complain about suite changes (Acquire::AllowReleaseInfoChange::Suite should be "true")

2019-07-09 Thread David Kalnischkies
Kinda like how that lists the options.
Anyway, as N: lines are notices they do not change the exit status of
clients and might not even be shown – apt-get e.g. doesn't show them in
non-interactive use.


On 2019-07-06, the release day:

$ cat buster/Release
Codename: buster
Suite: stable
Release-Notes: https://example.org/upgrading-to-buster

$ apt update  # from a system with the last update call after the 2019-06-11
N: Repository '…' changed its 'Suite' value from 'testing' to 'stable'.
N: More information about this can be found online in the Release notes at: 
https://example.org/upgrading-to-buster

That first transcript is what you get with 'debian buster' in sources.list; if you have 'debian stable' instead:

$ apt update
E: Repository '…' changed its 'Codename' value from 'buster' to 'bullseye'.
N: More information about this can be found online in the Release notes at: 
https://example.org/upgrading-to-buster
N: This must be accepted explicitly before updates for this repository can be 
applied. See apt-secure(8) manpage for details.
N: These changes can be acknowledged interactively with 'apt' or with 
--allow-releaseinfo-change.

If you happen not to upgrade between the announcement and the release,
the latter case will play out exactly the same, while the first one will
be an error akin to the Codename case, just with the Suite data instead.
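(The acknowledgement itself can also be scripted; the option spellings may still change while the code is in review, so treat this as a sketch:)

```
# one-time acceptance of whatever release-info changed:
apt-get update --allow-releaseinfo-change

# or permanently allow a single field to change via configuration:
apt-get update -o Acquire::AllowReleaseInfoChange::Suite=true
```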


I have the code for it mostly written, so that seems to work and isn't
too bad, but I am fairly open to feedback from everyone involved. Let's
just not wait another 2 years, shall we?

Code will be on salsa hopefully later today; if you want to have a look
or comment on specific implementation details (like field/option names,
wording of messages, …) I would encourage you to comment there and
leave this bugreport for the overarching "this sucks!" and "greatest
thing since sliced bread!" on the whole infrastructure, as for this to
work at least the release, ftp & publicity teams have to accept me
imposing work on them (which arguably they already do anyhow, but still)
and it hence quickly derails if we argue about Soon/Upcoming/Next/Future-
in here, too.

Best regards

David Kalnischkies




Bug#917660: NMU of pyhamcrest to fix FTBFS in vim-youcompleteme

2019-01-07 Thread David Kalnischkies
Hi,

On Mon, Dec 31, 2018 at 12:06:53AM +0100, David Kalnischkies wrote:
> As the freeze is drawing near I would appreciate a reply in the next
> week so that we can proceed accordingly – I am e.g. happy to sponsor
> uploads if need be. On the other hand, if I get no reply I plan to
> upload at least a no-change -2 revision soon to resolve at least the

I just uploaded -1.1 to DELAYED/2 as I wanted to wait for the 5-year
anniversary of the last upload. ;)

It's a strange diff in the binary python3-hamcrest package causing
a FTBFS (and hence serious) bug in at least one package I care about,
fixed just by a rebuild, not a patch, so I hope it's okay that I waited
only 7 (+2) days in between years before NMUing.

Feel free to drop me a line if I should cancel the upload OR override
with an upload of your own – I am also happy to sponsor an upload if
need be!


Debdiff attached – which besides my changelog-only change also includes
the so far not uploaded trivial changes in the VCS, as it seemed like
a good idea to point to an existing VCS (even if I have no access as
I am not a member of the python modules team) and to correct the b-d.
I have a branch locally with the NMU if you would prefer that.


Best regards

David Kalnischkies
diff -Nru pyhamcrest-1.8.0/debian/changelog pyhamcrest-1.8.0/debian/changelog
--- pyhamcrest-1.8.0/debian/changelog   2014-01-09 22:27:04.0 +0100
+++ pyhamcrest-1.8.0/debian/changelog   2019-01-07 18:49:27.0 +0100
@@ -1,3 +1,20 @@
+pyhamcrest (1.8.0-1.1) unstable; urgency=medium
+
+  [ David Kalnischkies ]
+  * No-change non-maintainer upload to have python3-hamcrest rebuild without
+the use of deprecated collections ABI usage causing FTBFS in at least
+src:vim-youcompleteme (Closes: #917660)
+
+  [ Ondřej Nový ]
+  * Fixed VCS URL (https)
+  * d/control: Set Vcs-* to salsa.debian.org
+  * Convert git repository from git-dpm to gbp layout
+
+  [ Piotr Ożarowski ]
+  * Add dh-python to Build-Depends
+
+ -- David Kalnischkies   Mon, 07 Jan 2019 18:49:27 +0100
+
 pyhamcrest (1.8.0-1) unstable; urgency=low
 
   * New release
diff -Nru pyhamcrest-1.8.0/debian/control pyhamcrest-1.8.0/debian/control
--- pyhamcrest-1.8.0/debian/control 2014-01-09 22:27:04.0 +0100
+++ pyhamcrest-1.8.0/debian/control 2019-01-07 18:49:27.0 +0100
@@ -5,14 +5,15 @@
 Uploaders: Debian Python Modules Team 

 Build-Depends:
  debhelper (>= 7.0.50),
+ dh-python,
  python-all (>= 2.6.6-3),
  python-setuptools (>= 0.6b3),
  python3-all,
  python3-setuptools
 Standards-Version: 3.9.5
 Homepage: http://code.google.com/p/hamcrest
-Vcs-Svn: svn://anonscm.debian.org/python-modules/packages/pyhamcrest/trunk/
-Vcs-Browser: 
http://anonscm.debian.org/viewvc/python-modules/packages/pyhamcrest/trunk/
+Vcs-Git: https://salsa.debian.org/python-team/modules/pyhamcrest.git
+Vcs-Browser: https://salsa.debian.org/python-team/modules/pyhamcrest
 
 Package: python-hamcrest
 Architecture: all




Bug#917660: vim-youcompleteme: FTBFS (failing tests)

2018-12-30 Thread David Kalnischkies
Control: merge -1 917682
Control: reassign -1 python3-hamcrest 1.8.0-1
Control: affects -1 vim-youcompleteme

Hi

On Sat, Dec 29, 2018 at 09:39:19PM +, Santiago Vila wrote:
>   File "/usr/lib/python3/dist-packages/hamcrest/core/helpers/hasmethod.py", 
> line 13, in hasmethod
> return isinstance(method, collections.Callable)
>   File "/usr/lib/python3.7/collections/__init__.py", line 52, in __getattr__
> DeprecationWarning, stacklevel=2)
> DeprecationWarning: Using or importing the ABCs from 'collections' instead of 
> from 'collections.abc' is deprecated, and in 3.8 it will stop working

I am not a pythonista, but running a simplified version [0] of the
indicated code produces the same message, so I would happily delegate
this to python-hamcrest maintainers – Hi not-me David :) – who hopefully
know more as I am just puzzled:

The source code of this file [1] is different in that it doesn't import
collections and the "bad line" mentioned above:
|13:return isinstance(method, collections.Callable)
is in the source just:
|12:return callable(method)

Which is also what the python-hamcrest package code is and what is
produced if I rebuild this package locally… o_O ?!?
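For illustration, the deprecation boils down to `isinstance(method, collections.Callable)` versus the builtin `callable()` that the source actually uses. A minimal sketch of both variants (the function names and the probed object are made up for the example; the old variant is written against `collections.abc` so it still runs on modern Python, since the bare `collections.Callable` alias is exactly what deprecated and later broke):

```python
import collections.abc


def hasmethod_deprecated(obj: object, methodname: str) -> bool:
    """Old-style check via the ABC; collections.Callable aliased this
    until Python 3.10 and emitted a DeprecationWarning before removal."""
    method = getattr(obj, methodname, None)
    return isinstance(method, collections.abc.Callable)


def hasmethod_fixed(obj: object, methodname: str) -> bool:
    """The fixed check: the builtin callable() needs no import at all."""
    return callable(getattr(obj, methodname, None))
```

Both return the same results; only the first ever touched the (now gone) deprecated alias.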


In general it seems like the pyhamcrest package could use some love, so
@David Villa Alises, let me add the question if you are still active in
Debian and/or if you are still interested in maintaining this package?

As the freeze is drawing near I would appreciate a reply in the next
week so that we can proceed accordingly – I am e.g. happy to sponsor
uploads if need be. On the other hand, if I get no reply I plan to
upload at least a no-change -2 revision soon to resolve at least the
immediate problem before following the MIA track.


Best regards & wishes for the upcoming new year

David Kalnischkies

[0]
$ python3
Python 3.7.2rc1 (default, Dec 12 2018, 06:25:49)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import collections
>>> collections.Callable
__main__:1: DeprecationWarning: Using or importing the ABCs from 'collections' 
instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working

>>>

[1] 
https://sources.debian.org/src/pyhamcrest/1.8.0-1/src/hamcrest/core/helpers/hasmethod.py/




Bug#909155: Bug #909155 in apt marked as pending

2018-09-20 Thread David Kalnischkies
Control: tag -1 pending

Hello,

Bug #909155 in apt reported by you has been fixed in the
Git repository and is awaiting an upload. You can see the commit
message below, and you can check the diff of the fix at:

https://salsa.debian.org/apt-team/apt/commit/6f1d622c84b3b7f821683bf69b8fcdb6dcf272a2


Deal with descriptions embedded in displayed record correctly

The implementation of "apt-cache show" (not "apt show") incorrectly
resets the currently used parser if the record itself and the
description to show come from the same file (as it is the case if no
Translation-* files are available e.g. after debootstrap).

The code is more complex than you would hope to support some rather
unusual setups involving Descriptions and their translations as tested
for by ./test-bug-712435-missing-descriptions as otherwise this could
be a one-line change.

Regression-Of: bf53f39c9a0221b67074053ed36fc502d5a0
Closes: #909155



(this message was generated automatically)
-- 
Greetings

https://bugs.debian.org/909155



Bug#897149: Package erroneously expects googletest headers in /usr/include

2018-04-29 Thread David Kalnischkies
On Sat, Apr 28, 2018 at 11:20:46PM -0500, Steve M. Robbins wrote:
> Your package relies on this behaviour and now fails to build since
> googletest version 1.8.0-9 no longer installs the duplicate header
> files.

*sigh* Seems like we can solve this by:

diff --git a/test/libapt/CMakeLists.txt b/test/libapt/CMakeLists.txt
index 86c0b28b5..c770d57bb 100644
--- a/test/libapt/CMakeLists.txt
+++ b/test/libapt/CMakeLists.txt
@@ -16,7 +16,7 @@ if(NOT GTEST_FOUND AND EXISTS ${GTEST_ROOT})
set(GTEST_LIBRARIES "-lgtest")
set(GTEST_DEPENDENCIES "gtest")
set(GTEST_FOUND TRUE)
-   find_path(GTEST_INCLUDE_DIRS NAMES gtest/gtest.h)
+   find_path(GTEST_INCLUDE_DIRS NAMES gtest/gtest.h PATHS /usr/src/googletest/googletest/include)
 
    message(STATUS "Found GTest at ${GTEST_ROOT}, headers at ${GTEST_INCLUDE_DIRS}")
 endif()


Not really knowledgeable enough about cmake, though, to know if that is
the best we can do – it looks kinda ugly/dirty.


We could also switch to using the prebuilt library in libgtest-dev; it
is happily picked up if available (which is why I didn't notice before).
Not sure how the version constraints would need to look to deal sanely
with the constant name-changing of gtest's packaging though…

(Or well, we probably need to end up with a mix of all that to keep
working everywhere, so… no patch, just a hint for others looking into
it, as I was a bit surprised it worked for me locally…)
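(For others looking into it: the fallback of building from the source-only googletest package would look roughly like this on the consumer side – paths as googletest shipped them at the time, `cmake -S/-B` syntax needs cmake ≥ 3.13, all treat-as-a-sketch:)

```
# build static libgtest from the shipped sources…
cmake -S /usr/src/googletest -B /tmp/gtest-build
cmake --build /tmp/gtest-build
# …while the matching headers live under
#   /usr/src/googletest/googletest/include
```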


Best regards

David Kalnischkies




Bug#887629: libc6: bad upgrade path: libexpat1 unpacked and python3 called before libc6 unpacked

2018-01-18 Thread David Kalnischkies
Hi,

On Thu, Jan 18, 2018 at 09:45:51PM +0100, Aurelien Jarno wrote:
> > > > [...]
> > > >   Preparing to unpack .../3-libglib2.0-dev_2.54.3-1_i386.deb ...
> > > >   /usr/bin/python3: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.25' 
> > > > not found (required by /lib/i386-linux-gnu/libexpat.so.1)
> > > >   dpkg: warning: subprocess old pre-removal script returned error exit 
> > > > status 1
> > > >   dpkg: trying script from the new package instead ...
> > > >   dpkg: error processing archive 
> > > > /tmp/apt-dpkg-install-wfemKS/3-libglib2.0-dev_2.54.3-1_i386.deb 
> > > > (--unpack):
> > > >there is no script in the new version of the package - giving up
> > > >   /usr/bin/python3: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.25' 
> > > > not found (required by /lib/i386-linux-gnu/libexpat.so.1)
> > > 
> > > This failure is normal given libexpat1 requires the new libc which has
> > > not been unpacked yet.
> > 
> > Yeah, well, it needs to Pre-Depend on it then I guess, if it's being used
> > in preinst actions. The thing is that Depends only after postinst ordering,
> > not unpack ordering.
> 
> Well it's not the preinst script, but the prerm script. The problem is
> unpacking libexpat1 before libc6 breaks libexpat1 and not usable
> anymore.

prerm is the very first script being called (see §6.6) and usually it is
the script of the installed version (only in error cases will the script
from the version being upgraded to be tried, as detailed in the dpkg
messages), so I would argue that the dependencies (maybe) satisfied are
the dependencies of the installed version, not the one being installed
(arguably the dependency sets of v1 and v2 could conflict with each
other, so if the dependencies of v2 were satisfied that means the v1
script would be bound to explode). But that's perhaps just the fear
talking, as going with the dependencies of v2 would probably result in
a lot of hard coding problems for apt & dpkg (and other low-level
package managers).

In any case, the unpack of these packages is in the same dpkg call, so
if dpkg had wanted to it could have reordered them, & apt has no idea
about maintainer scripts in general, so I would say this isn't an apt bug.

(Although, if we decide on v2, I guess apt needs to change anyhow as
that same-call thing might be just dumb luck in this case. Not even sure
if v1 is in any way "guaranteed", to be perfectly honest…)

Can't stop the feeling that we have had issues with python being called
from prerm before, and the general advice was: "don't – stick to essential".


Best regards

David Kalnischkies




Bug#879662: http and https as well

2017-10-25 Thread David Kalnischkies
On Wed, Oct 25, 2017 at 08:55:11PM +0200, nicodache wrote:
> [20:51] <nicodache@tcherepnin> ~ $ sudo apt-get update
> Get:1 http://ftp.belnet.be/debian sid InRelease [235 kB]
> Reading package lists... Done
> E: Method http has died unexpectedly!
> E: Sub-process http received signal 31

Signal 31 is SIGSYS, so likely, although…

> [20:51] <nicodache@tcherepnin> ~ $ apt --version
> apt 1.6~alpha1 (amd64)

… that is an architecture Julian has likely tested extensively – and it
works for me on amd64, too, so a coredump would be handy for Julian to
look at as he already said as your problem is unreproducible for us.


> I managed to update my Sid after modifying my source.list to point towards
> ftp://. It all went fine, but when reversing to http (for
> https://www.debian.org/News/2017/20170425), error is still present.

Please don't do this. Julian already quoted the NEWS.Debian file with
details on how to disable seccomp, so if that is your problem, disable
it for the time being and be happy – don't change to ftp, which will be
gone in a few days. (At the very least, ftp uses seccomp, too, so it's
not a generic problem like the kernel not supporting it or something,
but really some syscall not whitelisted which should be.)
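(For reference, the switches from said NEWS.Debian entry – option names as I remember them for the 1.6 alphas, so double-check against your apt version:)

```
# disable the seccomp sandbox entirely:
apt-get update -o APT::Sandbox::Seccomp=false

# or allow one additional syscall (name here is a placeholder) instead:
apt-get update -o APT::Sandbox::Seccomp::Allow::=ioctl
```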


Best regards

David Kalnischkies




Bug#871656: apt-offline: Does not validate Packages or .deb files in bundle

2017-08-20 Thread David Kalnischkies
Hi,

(Input from apt devs was requested on IRC, so here you go – please CC me
if there is something you think I could help with. Note that I am not an
apt-offline user nor do I know how it works; I have just read the
package description)


On Fri, Aug 18, 2017 at 04:33:01PM +0530, Ritesh Raj Sarraf wrote:
> Currently, our approach has a flaw. It completely misses to validate
> the Packages files. Instead, just after verifying the Release file, it
> assumes everything is clean and blindly copies the Packages files.

You are hardly the only one with this problem – and even if you did it
100% securely we as apt developers would probably not be 100% happy
about it, as it means that /var/lib/apt/lists must be handled like
a public interface, as in: no changes to the file naming or even bigger
changes to the storage (like e.g. compressing the files). Perhaps on
the apt side we should implement something like "apt-helper
import-lists-directory" to provide a way out of this mess in the
long term.

Interesting might be to implement a local (http) proxy as you can make
that work with every apt version, but that of course gives the user the
wrong impression that files are downloaded from "somewhere" while in
reality the proxy would just serve files from the bundle on request.

[I am thinking about implementing both more or less for a while,
but haven't made any actual progress and somehow doubt I will in
a reasonable timeframe on my own. If someone wanted to pick it up
I could probably help with reviews through]


> We may not need this validation for .debs.

You need to do this for debs as well. The quick test just works as
expected because the deb file has a different filesize than what is
expected, and apt checks the filesize as it can do that for free while
checking for file existence, and so deletes "obviously" bad files
silently.


As a workaround for this part, I think (= haven't tried) you can place
the deb files in partial/ – the download methods should pick up the
partial file and notice that it is already completely downloaded without
doing online requests. The files will then take their usual way through
the verification of checksums and end up in archives/ if everything is
fine.
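(In command form this untested workaround would be roughly the following – the paths are apt's defaults, the bundle path and package name are placeholders:)

```
# drop the bundled debs where the download methods look for partial files:
cp /path/to/bundle/*.deb /var/cache/apt/archives/partial/

# a following install should then only verify checksums, not fetch:
apt-get install somepackage
```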

That doesn't work for lists/ as Release files are always requested from
an online source (as apt can't know if its copy is complete or outdated
already) and the other files tend to be no longer compressed & you can't
be sure that if you compress a file again you would get the same hash
(as e.g. different versions of a compressor can generate different
compatible files).


Best regards

David Kalnischkies




Bug#872201: libc-bin: sometimes throws std::logic_error while processing triggers

2017-08-19 Thread David Kalnischkies

On Sat, Aug 19, 2017 at 11:58:09AM +0200, Andreas Beckmann wrote:
> >>>>>   terminate called after throwing an instance of 'std::logic_error'
> >>>>> what():  basic_string::_M_construct null not valid
> 
> Could this be related to #871275 "libapt-pkg5.0: requires rebuild
> against GCC 7 and symbols/shlibs bump" which was fixed recently in apt?
> IIRC this started after making gcc-7 the default ... I'll look if there
> are new occurrences of this bug.

Unlikely that it is related to this symbol change, as it is a single
symbol related to URIs, which apt uses while acquiring files – but at
the stage where packages are installed nothing worries about URIs anymore…

A core dump would really be handy. It could still be a problem with gcc,
or perhaps dpkg changed a log message slightly and apt stumbles over
parsing it correctly (although dpkg wasn't uploaded in a while)… or…
construction of std::strings is done all over the place, implicitly and
explicitly, so without details we can't even guess sensibly.

Can't really suggest debugging options either, as most of them involve
the construction of std::strings which could easily hide the problem
– and the debug output usually involves stuff happening before/after
dpkg is called, not while it's running, but oh well, we could try -o
Debug::pkgDPkgProgressReporting=1 perhaps – that is heavy on string
operations though, so it could turn out to be more confusing than
helpful… (and as said, dpkg hasn't changed in a while).


Best regards

David Kalnischkies




Bug#871275: libapt-pkg5.0: requires rebuild against GCC 7 and symbols/shlibs bump

2017-08-08 Thread David Kalnischkies

On Tue, Aug 08, 2017 at 04:33:52PM -0400, James Cowgill wrote:
> Maybe I misunderstood your question, but if you compile a library
> exporting an affected conversion operator using GCC 7, GCC will emit an
> alias to ensure that the old and new symbols both work. This is why

*doh* You said it right in the paragraph I quoted and still I missed
that both symbols are emitted and thought the symbols patch was a typo
missing a '-' … thanks brain, very good job…


Best regards

David Kalnischkies




Bug#871275: libapt-pkg5.0: requires rebuild against GCC 7 and symbols/shlibs bump

2017-08-08 Thread David Kalnischkies
On Mon, Aug 07, 2017 at 03:47:15PM +0100, jcowg...@debian.org wrote:
> In GCC 7, the name mangling for C++ conversion operators which return a
> type using the abi_tag attribute (most commonly std::string) has
> changed. When your library is compiled with GCC 7, it will now emit two
> symbols for the conversion operator using the new and old naming.
> Executables compiled with GCC 7 will always use the new symbol, while
> old executables compiled using <= GCC 6 will use the old symbol. For new
> executables to build without undefined references, your library will
> need rebuilding with GCC 7.

On the upside, going through the list of severity pushes [0] I can spot
only aptitude among our reverse build-dependencies – and while it is
indeed using that API, it will stop doing so in the next upload
(#853316), so the practical effect is rather low (assuming we can
convince mafm to upload before we do, I guess).

On a more theoretical note, isn't there some way to emit a function with
the old mangling calling the new mangling (or duplicating it)?
I can't really believe that all of libstdc++6 doesn't contain a single
abi-tagged conversion operator, so I presume they managed to pull it off
somehow (or we would be looking at v7 everywhere now).


Best regards

David Kalnischkies

[0] 
https://lists.debian.org/<handler.s.c.150187694331698.transcr...@bugs.debian.org>




Bug#863367: [Pkg-openssl-devel] Bug#863367: libecryptfs-dev: unable to install because of unmet dependency

2017-05-28 Thread David Kalnischkies
On Sat, May 27, 2017 at 04:31:46PM +0200, Kurt Roeckx wrote:
> In general, I disagree that we should declare a conflict at both
> sides of the conflict and that the package manager should be able
> to deal with a conflict on just one side. It's not a conflict that
> involves version numbers.

The idea behind not automatically having the conflict affect both sides
is that a package which declares a conflict has a competitive advantage
over the conflictee, as declaring it reduces the score of the conflictee,
making it easier for the conflictor to win the fight against it.
If apt applied the conflict automatically on both sides this
advantage would disappear. That would hinder the successful resolution of
the usual situation in which a conflict isn't declared on both sides: the
package without the conflict is the "old" package (not updated for
the release, e.g. because it was removed), which should lose against the
"new" package that declares the conflict.
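That scoring asymmetry can be illustrated with a toy model (the numbers
and mechanics here are made up for illustration only – apt's real problem
resolver scoring is considerably more involved):

```python
def resolver_scores(packages, conflicts):
    """Toy model: every package starts with the same base score and
    loses a point for each package that declares a Conflicts against
    it -- so declaring the conflict lowers the opponent's score."""
    scores = {p: 10 for p in packages}
    for conflictor, conflictee in conflicts:
        scores[conflictee] -= 1
    return scores

# one-sided conflict: the "new" package declares against the "old"
# one and so wins the 1on1 fight
s = resolver_scores(["new", "old"], [("new", "old")])
assert s["new"] > s["old"]

# declared on both sides: the advantage cancels out
s = resolver_scores(["new", "old"], [("new", "old"), ("old", "new")])
assert s["new"] == s["old"]
```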


Beside that little heuristic trickery I believe it to be cleaner and
more discoverable for a user that such a conflict exists and is intended
if it is declared on both sides.

And lastly, I guess 'domain knowledge' is involved, as we wouldn't be
talking if libssl-dev were a new mail-transport-agent. It would be
perfectly clear that it must conflict with the others even if there were
no technical reason for it, given that the other mail-transport-agents
already conflict with it.


Best regards

David Kalnischkies




Bug#863367: libecryptfs-dev: unable to install because of unmet dependency

2017-05-27 Thread David Kalnischkies
Control: reassign -1 libssl-dev 1.1.0e-2
Control: retitle -1 libssl-dev: declare conflict with libssl1.0-dev to help apt 
find solutions

On Sat, May 27, 2017 at 09:32:34AM +0300, Adrian Bunk wrote:
> Control: reassign -1 apt
> Control: retitle -1 apt does not find solutions that involve libssl1.0-dev -> 
> libssl-dev
> 
> On Thu, May 25, 2017 at 09:16:30PM +0200, s3v wrote:
> > Package: libecryptfs-dev
> > Severity: grave
> > Justification: renders package unusable

(technically wishlist, but people might disagree in practice, so I will
leave severity decisions at this stage to maintainers/release team –
please realize that this means this bug is RELEASE CRITICAL atm)

General advice:
Don't (re)assign package uninstallabilities to apt. The team has neither
the knowledge nor the manpower to deal with the installation problems of
more than 5 packages in existence. All reassigning achieves is that the
bug will get downgraded on the spot to normal or lower and left to die^Wbe
closed in a couple of years in the already existing bugpile; in short:


> libecryptfs-dev Is not actually uninstallable, the core problem is that 
> you have libssl1.0-dev installed and apt fails to find the solution to
> solve the dependencies:
> 
> # apt-get install libtspi-dev
[…]
> root@localhost:/# apt-get install libtspi-dev libssl-dev
[…]
> The other direction works:
> 
> # apt-get install libh323plus-dev

The defining difference between the two is that libssl1.0-dev conflicts
with libssl-dev while the latter doesn't conflict with the former.

As you are trying to express a mutually exclusive relationship between
two packages which should both be shipped in the release, it would be
a good idea to declare this exclusiveness on both sides – and indeed, in
a quick test that is already enough to give apt the hint it needs, as
it changes the scoring for the little 1on1 cagefights happening behind
the scenes.
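In debian/control terms, what the retitle asks for is roughly the
following (sketch only – whether plain Conflicts or a versioned
Breaks/Replaces fits better is up to the maintainers):

```
# libssl1.0-dev already declares its side:
Package: libssl1.0-dev
Conflicts: libssl-dev

# the suggested addition, so both sides declare it:
Package: libssl-dev
Conflicts: libssl1.0-dev
```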

Have a look at them with -o Debug::pkgProblemResolver=1
(kids-friendly as no violence is depicted)


That wasn't all too hard to figure out, and I am pretty sure it would
have happened just as fast/well if this had been assigned to one of the
involved packages rather than to apt, which always carries the risk of
getting ignored instead… I was actually 2 seconds away from tagging it
'wishlist'¹ for apt and getting on with never looking at it again in my
lifetime.

Note that this solution might not be a good one, but that requires
knowledge about the packages involved which I just don't have as hinted
above. Please CC de...@lists.debian.org if there are any questions you
think we could answer.


Best regards

David Kalnischkies

¹ The cagefights are a design decision in the current default resolver,
which is impossible^Whard to change and absolutely not going to happen
any time soon, let alone days before release. As such it would qualify
for 'wishlist'.




Bug#854554: dpkg: trigger problem with cracklib-runtime while upgrading libcrypt-cracklib-perl from jessie to stretch

2017-04-08 Thread David Kalnischkies
Control: reassign -1 libcrypt-cracklib-perl 1.7-2
# beware, this is an RC bug!

[CC'ed cracklib2 & perl maintainers as it seems to be some "funny"
interaction between packages in their responsibility space.]

On Sat, Apr 08, 2017 at 04:50:15AM +0200, Guillem Jover wrote:
> [ Please see the bug archive for more context. ]
[…]
> > So the fault is in apt ... and that's jessie's version of apt that is
> > running the upgrade :-(
> > 
> > If I start the upgrade with upgrading only apt (and its dependencies)
> > and thereafter running the dist-upgrade (with squeeze's version of apt),
> > I cannot reproduce the bug.
> 
> Thanks for verifying! Reassigned now to apt.

So, given that apt works in newer versions there remains no action for
the apt team. The responsibility for making an upgrade work with whatever
we have got (bugs included) lies with the individual package maintainer(s)
– simply because we (or the release team, or …) can't handle 5
source packages in a reasonable way. [And upgrading apt/dpkg/others
first doesn't work all the time due to … wait for it … dependencies.]


That said, we are happy to help of course. The basic idea of convincing
apt to do something it didn't do on its own is adding/removing
dependencies (mostly of the type Depends or Breaks/Conflicts) – that
tends to be helped by some "field knowledge" about the packages in
question which is why the maintainers are best, but given guessing and
creativity are involved anyone can help!

The easiest (although potentially time-consuming) way of finding the
right combination is to make a chroot with the "before upgrade" setup
(= as in the next step would be 'apt-get dist-upgrade' – so update
done), copy the entire chroot, modify the /var/lib/apt/lists/*Packages
file so that it has the "new" dependencies and run the dist-upgrade.
Repeat from copy until happy.

In your case you can potentially speed that up by looking at the output
of "-o Debug::pkgDpkgPm=1" – that shows how dpkg is called (it doesn't
call dpkg!) – which makes the iteration faster at the expense of you
trying to make sense of that: Having "--configure libcrack2" before the
other crack-related packages might be your target (see guess below).

Good luck & as said, ask if you are stuck (please provide the [lengthy]
output of -o Debug::pkgPackageManager=1 -o Debug::pkgDpkgPm=1). We are
reachable via de...@lists.debian.org or #debian-apt on IRC.


That said, educated guess follows:

Looks like apt acts on libcrypt-cracklib-perl early as it looks simple
enough (after upgrading perl): the dependency on libcrack2 is already
satisfied at the start of the upgrade (as it's a version from before jessie).
As the dependencies of libcrack2 are very lightweight (just libc6, which
is done at that point) it might already work if you artificially require
a stretch version here (= guess, not tested at all).


Best regards

David Kalnischkies, who is in a love-hate relationship with triggers




Bug#858934: apt FTBFS with po4a 0.50-1

2017-03-28 Thread David Kalnischkies
Control: reassign -1 po4a 0.50-1

Hi,

On Tue, Mar 28, 2017 at 10:05:52PM +0200, Helmut Grohne wrote:
> Since today apt fails to build from source in unstable on amd64. The
> typical failure looks like:
> 
> | cd "/<>/obj-x86_64-linux-gnu/doc" && po4a --previous --no-backups --package-name='apt-doc' --package-version='1.4~rc2' --msgid-bugs-address='APT\ Development\ Team\ …'
> | Invalid po file po/es.po:
> | msgfmt: error while opening "po/es.po" for reading: No such file or directory

The full command is:

cd /path/to/apt/build/doc && po4a --previous --no-backups 
--package-name='apt-doc' --package-version='1.4~rc2' --msgid-bugs-address='APT\ 
Development\ Team\ <de...@lists.debian.org>' --translate-only de/guide.de.dbk 
--srcdir /path/to/apt/doc --destdir /path/to/apt/build/doc 
/path/to/apt/doc/po4a.conf

As you can see we switch into a build directory and instruct po4a to
pick up all files it needs with --srcdir from the source. That used to
work, but it seems to no longer work in the new version – I can fix the
build with a simple symlink:

ln -s /path/to/apt/doc/po /path/to/apt/build/doc/po

As I don't see what the apt buildsystem could be doing wrong in the po4a
call, and because it worked before, I am reassigning to po4a to fix that
regression – especially as we are in freeze.

If there is something wrong with the call on the other hand it would be
nice if we could get some details on what to do instead and how to
achieve compatibility with "old" and "new" po4a.


Thanks Martin for picking up po4a development btw even if the timing is
a bit unfortunate for (accidental?) uploads to unstable…


Best regards

David Kalnischkies




Bug#851774: [pkg-gnupg-maint] Bug#851774: Stop using apt-key add to add keys in generators/60local

2017-02-05 Thread David Kalnischkies
On Sun, Feb 05, 2017 at 12:23:19AM -0500, Daniel Kahn Gillmor wrote:
> On Sat 2017-02-04 19:48:54 -0500, Cyril Brulebois wrote:
> > [ dkg wrote: ]
> >> Regardless of the choice of filesystem location (fragment directory or
> >> elsewhere), gpgv does want to see the curated keyrings it depends on
> >> in binary format, so on to the next bit:
> >
> > I'm a bit confused here: apt-get update (in a sid chroot, not attempted
> > in d-i) is fine with an armor key in the fragment directory; are you
> > saying that using the Signed-by option for sources.list would mean
> > having to have a (curated) keyring, and an non-armored version, hence
> > the need for the transformation you're suggesting below?
> 
> Sorry, i guess it's possible that apt is doing something fancier that i
> don't know about, then.
> 
> gpgv on its own expects the --keyring files it encounters to be either a
> sequence of raw OpenPGP packets that together form a series of OpenPGP
> certificates (a.k.a. "a keyring") or GnuPG's "keybox" format.  AFAIK,
> gpgv does not accept ascii-armored files for its --keyring argument.
> 
> maybe the apt folks can weight in on what's going on with armored
> fragments?  If it's converting them before handing them off to gpgv,
> maybe you can just count on it to convert the files that aren't in the
> fragment directory as well?

apt >= 1.4 uses basically the awk snippet (it is slightly more complex
to deal with two or more armored keys in one file, but that is implemented
more for our testcases than for a real external requirement) [see apt-key
for implementation details].

Note that you can NOT use files in keybox format instead, as apt merges
all keyrings into one big one with 'cat' to avoid both a dependency
on gnupg and running into limits on the number of keyring files
(gpg has a limit of 40 keyring files in a single invocation – and there
is always the looming threat of that becoming 1 one day…).

So, as long as you make it so that an armored file has the extension
'asc' and binary (OpenPGP packet) file has 'gpg' apt will do the right
thing with them in the fragment directory just as well as in Signed-By
[in stretch, but Signed-By is a new-in-stretch feature, too].
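A much-simplified stand-in for that armor handling (the real apt-key code
copes with multiple concatenated keys and more armor variants; a dummy
payload is used here instead of a real OpenPGP key):

```shell
# fake "binary" key material and an ASCII-armored wrapper around it
printf 'dummy-openpgp-packets' > payload.bin
b64=$(base64 payload.bin)
cat > key.asc <<EOF
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: Example

$b64
=ABCD
-----END PGP PUBLIC KEY BLOCK-----
EOF

# de-armor: keep only the base64 body (skip the BEGIN/END markers, the
# armor headers and the '=' CRC line), then decode back to binary
awk '/^-----BEGIN/{body=1; next}
     /^-----END/{body=0}
     body && /^=/{next}
     body && /^[A-Za-z0-9+\/=]+$/{print}' key.asc | base64 -d > key.gpg

cmp payload.bin key.gpg && echo "round-trip OK"
```

Binary fragments (*.gpg) can then simply be concatenated into one big
keyring for gpgv – which is exactly why keybox-format files break the
scheme.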


> > Remember we're talking about adding extra repositories with custom d-i
> > configuration, so I'm fine with people having broken stuff because they
> > pasted a whole mail…
> 
> agreed, we can expect these folks to get the details right.

For the same reason I wouldn't worry too much about people using *.asc
files with binary format contents and vice versa to be honest.


Best regards

David Kalnischkies




Bug#844300: nvidia-driver-libs:amd64: upgrade failure due to dependency issue

2016-11-22 Thread David Kalnischkies
reassign -1 dpkg 1.18.15

(cutting down heavily on the text)

On Tue, Nov 22, 2016 at 02:43:35PM +0100, Vincent Lefevre wrote:
> --\ Packages to be upgraded (17)
[…]
> iuA nvidia-driver-libs367.57-1 
> 367.57-2
[…]
> --\ Packages being removed because they are no longer used (27)
[…]
> idA nvidia-driver-libs:i386 -180 kB   367.57-1 
> 367.57-2
[…]
> dpkg: error processing package nvidia-driver-libs:amd64 (--configure):
>  package nvidia-driver-libs:amd64 367.57-2 cannot be configured because 
> nvidia-driver-libs:i386 is at a different version (367.57-1)

This looks like a bug in dpkg as it is not considering the removal of
nvidia-driver-libs:i386 as a solution to the problem it runs into here,
even though libapt has told it via selections that it wants it removed.

Reproducing is 'easy' with any M-A:same package which is installed for
two (or more) architectures in version 1 and one of the architectures is
upgraded to version 2 while the other is removed.


That said, you can see this bug with apt(itude) only, as libapt
incorrectly detects a crossgrade here, dropping the explicit remove.
As we (= libapt) want to eventually drop the explicit removes, and other
frontends arguably have already (like dselect), I am reassigning to dpkg
– "fixing" (it's closer to a workaround) this in libapt is partly done
already, so I don't need/want a clone.

In terms of the solution itself: I haven't looked closely, but apt tries
not to explore solutions caused by M-A:same version skew – aptitude
seems way more willing to suggest such solutions; that is okay I guess
as it is way more interactive, too.


Best regards

David Kalnischkies




Bug#844721: libgtest-dev isn't replacing dir with symlink on upgrade

2016-11-20 Thread David Kalnischkies
On Sat, Nov 19, 2016 at 10:06:17PM -0600, Steve M. Robbins wrote:
> On Fri, Nov 18, 2016 at 01:29:07PM +0100, David Kalnischkies wrote:
> > You should also update your README.Debian and the descriptions with the
> > new paths and the transitional package as [...]
> 
> Thanks.  Updated README.Debian.  Not sure what you mean about the
> descriptions -- there is nothing in the control file about the
> paths. (?)

I was referring to saying in the description of libgtest-dev perhaps
something to the effect that it is a (mostly) empty transitional
package.


Anyway, thanks for fix & upload!


Best regards

David Kalnischkies




Bug#844721: libgtest-dev isn't replacing dir with symlink on upgrade

2016-11-18 Thread David Kalnischkies
Package: libgtest-dev
Version: 1.8.0-1
Severity: serious

Hi,

libgtest-dev contains in 1.8.0-1 a symlink to the new on-disk location.
That works for new installs, but doesn't on upgrades – a user ends up
with an empty /usr/src/gtest in that case.  You need to work with
maintainer scripts here, see "man dpkg-maintscript-helper" and especially
the section about dir_to_symlink for details on how and why.
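For illustration, the declaration could look roughly like this (the
target path and prior version are assumptions on my part – adjust them
to where 1.8.0-1 actually ships the sources; dir_to_symlink needs dpkg
>= 1.17.14, and dh_installdeb picks the file up automatically):

```
# debian/libgtest-dev.maintscript (illustrative paths/versions)
dir_to_symlink /usr/src/gtest /usr/src/googletest/googletest 1.8.0-1~ libgtest-dev
```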

The justification for 'serious' is a bit of a stretch (pun intended) as
the policy isn't explicitly saying that upgrades must produce a working
package… but I hope you and/or the release team implicitly agree. :)


In all likelihood it's the same with mock, but I am not using it…

You should also update your README.Debian and the descriptions with the
new paths and the transitional package as I guess you want to retire the
old package/path some day and the longer the grace period the better…

btw: Upstream seems to have retired their remark on compiling googletest
on your own as I can't find it any longer on their website and e.g. in
the RPM/BSD worlds you get a binary only.


Best regards

David Kalnischkies




Bug#835094: in apt marked as pending

2016-08-23 Thread David Kalnischkies
Control: tag 835094 pending

Hello,

Bug #835094 in apt reported by you has been fixed in the Git repository. You can
see the commit message below, and you can check the diff of the fix at:

https://anonscm.debian.org/cgit/apt/apt.git/diff/?id=fb51ce3

(this message was generated automatically based on the git commit message)
---
commit fb51ce3295929947555f4883054f210a53d9fbdf
Author: David Kalnischkies <da...@kalnischkies.de>
Date:   Mon Aug 22 21:33:38 2016 +0200

do dpkg --configure before --remove/--purge --pending

Commit 7ec343309b7bc6001b465c870609b3c570026149 got us most of the way,
but the last mile was botched by having the pending calls in the wrong
order as this way we potentially 'force' dpkg to remove/purge a package
it doesn't want to as another package still depends on it and the
replacement isn't fully installed yet.

So what we do now is a configure before remove and purge (all with
--no-triggers) and finishing off with another configure pending call to
take care of the triggers.

Note that in the bugreport example our current planner is forcing dpkg
to remove the package earlier via --force-depends which we could do for
the pending calls as well and could be used as a workaround, but we want
to do less forcing eventually.

Closes: 835094



Bug#812173: apt fails to distinguish between different Provides of different package versions

2016-01-21 Thread David Kalnischkies
Control: severity -1 important

On Thu, Jan 21, 2016 at 08:07:27AM +0100, Johannes Schauer wrote:
> Severity: serious
> Justification: Policy 7.5

I am downgrading not because I disagree, but because that idiocy isn't
'new'; it's been here since day one (I assume as build-dep is very seldom
changed… basically only on bugreports from you ;) ), which shows how
serious it is in the grand scheme of things. But let's start at the
top:

> It seems automake (= 1:1.14.1-4) provides automake-1.14 while automake
> (= 1:1.15-3) provides automake-1.15. It might be that somehow apt sees
> that "automake" provides automake-1.14 but does not store which version
> of "automake" provides it

apt does store this information and the apt resolver 'happily' makes use
of that information…

> and just either installs the newest one in the
> "apt-get build-dep" case or

… it's just that build-dep has its own 'resolver' for the first level of
dependencies (the Build-Depends in the Sources files) which is very
suboptimal for various reasons, even if it were on feature parity with
the rest of apt – and it of course isn't even that.

I have actually been working for a few days now on retiring this
build-dep resolver – basically how sbuild and co do it too: create a dummy
package and call install on it, but that is easier said than done. It is
slowly beginning to work though, and I stumbled yesterday over a grossly
simplified instance of this bug:

On an amd64 machine, install foo:armel (v1, M-A:no), have a foo:armel (v2,
M-A:foreign) in your sources and run build-dep foo-depender
(Build-Depends: foo). build-dep happily believes it doesn't need to
upgrade foo (knowing that multi-arch is implemented with [versioned]
provides in apt helps to understand why that is the same bug).

[One of our tests was actually implementing this by accident, and I lost
a few hours debugging until I realized that the apt resolver is
actually right and the test (and build-dep) wrong…]
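The dummy-package trick mentioned above can be sketched like this
(hypothetical helper, not apt's actual code – it just shows the idea of
turning Build-Depends into the Depends of a synthetic binary package
that the full resolver then installs):

```python
def dummy_builddep_stanza(source, build_depends, build_conflicts=""):
    """Build a control stanza for a synthetic package whose Depends
    mirror the source package's Build-Depends, so the regular install
    resolver (provides, multi-arch and all) does the work."""
    stanza = (f"Package: builddeps-{source}\n"
              "Version: 1\n"
              "Architecture: all\n"
              f"Depends: {build_depends}\n")
    if build_conflicts:
        stanza += f"Conflicts: {build_conflicts}\n"
    return stanza

print(dummy_builddep_stanza("foo-depender", "foo, debhelper (>= 9)"))
```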


So, I think we will be able to close this bug (and plenty of
[unreported] others) relatively soon, but that patch isn't going to be
easily backportable to jessie. If someone feels like needing this I am
happy to help, but I will not invest time on it myself…

(after all, the result of that backport would just be an error until the
next paragraph is resolved)


> Ideally, apt should see that there are different versions of the same
> package with the same pin value but different provides and pick the one
> that satisfies the dependency even if it's of a lower version. After
> all, both packages are part of the same suite. Though with apt's
> behaviour of selecting only the highest version as the candidate I can
> see why this will not be happening and maybe instead automake should

Ideally, yes. We already have this problem with arch:all packages and
arch:any packages having =-depends on them, so eventually I want to work
on this, but the current architecture of the resolver (parts) does not
make that a very easy task, and it's hardly the only thing I want to work
on…


> change how they do their virtual packages. But then again, on what
> grounds should automake change their provides? Only because of an apt
> limitation? As far as I can see they are policy compliant and other

Debian 'frequently' changes packages to workaround bugs/limitations in
apt either because an upgrade is unfeasible (dist-upgrades) or because
nobody sits down and actually works on the problem in apt. The same is
done for many other "too big to fail" projects. So the ground is simply
reality. You have the moral high ground perhaps as "that should work"
– but as apt is a Debian native package it itself has the even higher
moral ground of "why is nobody working on me(=apt) then?".

We are happy to help out with specific answers if a package maintainer
can't make it work by themselves (if they pick the 'easy' way out), but
please understand that we can't pro-actively check all source packages,
and even if we are told the source package, we tend to lack the knowledge
in this area to provide help without the maintainer describing the
intent behind all this first.

[ btw: If we go all policy on this for a second, I wonder if §3.6 isn't
prohibiting this usage without discussion given that this isn't
a private virtual package, but a public interface… /nitpick ]


> resolvers like dose3 or aspcud will happily find a solution.

I hope so! Otherwise I would seriously question what researchers have
done in the last decade (we are actually getting close to two now)… ;)
Of course, they aren't perfect either, or someone would surely have put
in the effort of making them the default…


Best regards

David Kalnischkies




Bug#806475: in apt marked as pending

2015-11-28 Thread David Kalnischkies
Control: tag 806475 pending

Hello,

Bug #806475 in apt reported by you has been fixed in the Git repository. You can
see the commit message below, and you can check the diff of the fix at:

https://anonscm.debian.org/cgit/apt/apt.git/diff/?id=ebca2f2

(this message was generated automatically based on the git commit message)
---
commit ebca2f254ca96ad7ad855dca6e76c9d1c792c4a0
Author: David Kalnischkies <da...@kalnischkies.de>
Date:   Sat Nov 28 13:17:57 2015 +0100

disable privilege-drop verification by default as fakeroot trips over it

Dropping privileges is an involved process for code and system alike so
ideally we want to verify that all the work wasn't in vain. Stuff
designed to sidestep the usual privilege checks like fakeroot (and its
many alternatives) have their problem with this through, partly through
missing wrapping (#806521), partly as e.g. regaining root from an
unprivileged user is in their design. This commit therefore disables
most of these checks by default so that apt runs fine again in a
fakeroot environment.

Closes: 806475



Bug#806475: apt: Breaks debian-installer build, select with no read/write fds?

2015-11-27 Thread David Kalnischkies
On Fri, Nov 27, 2015 at 09:08:35PM +0100, Cyril Brulebois wrote:
> | E: Method gave invalid 400 URI Failure message: Could not get new groups - 
> getgroups (22: Invalid argument)
> | E: Method copy has died unexpectedly!
> | E: Sub-process copy returned an error code (112)

So, getgroups gets called there to verify that we really lost all groups
(besides the one _apt is in: nogroup). A few lines above we set the list
of (supplementary) groups to contain only this group, then we switch uid
and gid (the latter isn't enough for group switching, aka we would still
be in root's groups without the setgroups before).

So, us calling getgroups should really only return one group. Getting an
EINVAL suggests we get more than one… that is probably bad, but I have
a slight glimmer of hope that it's just two times the same group – even
if that makes no sense… anyway, I can't reproduce this at the moment, so
it would be nice if someone could try the attached patch, which could at
least tell us in which groups we remain (or it just works, if we really
see duplicated groups here). Everything is possible I guess.
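The fixed check (see the attached patch) boils down to: enumerate all
supplementary groups and fail if any of them differs from the sandbox
user's gid. A sketch of that logic (65534/25 are just example gids):

```python
import os

def verify_dropped_groups(expected_gid, groups=None):
    """True iff the process is in no supplementary group other than
    expected_gid.  The original code asked getgroups() for exactly one
    entry, which fails with EINVAL as soon as the process is in two or
    more groups (the schroot situation described in this bug)."""
    if groups is None:
        groups = os.getgroups()
    return all(g == expected_gid for g in groups)

assert verify_dropped_groups(65534, [65534])          # only nogroup: ok
assert not verify_dropped_groups(65534, [65534, 25])  # still in floppy
```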

Given that schroot is involved mentioning if your host has an _apt user
or not might also help. As I learned today schroot is copying users and
groups into the schroot which makes all of this kinda strange… (#565613)
[two years of testing and you are still surprised on release…]

btw: To not block anyone: you can set the config option
Debug::NoDropPrivs to true to disable privilege dropping for the moment.


Best regards

David Kalnischkies
diff --git a/apt-pkg/contrib/fileutl.cc b/apt-pkg/contrib/fileutl.cc
index 46de634..f754b31 100644
--- a/apt-pkg/contrib/fileutl.cc
+++ b/apt-pkg/contrib/fileutl.cc
@@ -2322,12 +2322,17 @@ bool DropPrivileges()			/*{{{*/
   return _error->Errno("seteuid", "Failed to seteuid");
 #endif
 
-   // Verify that the user has only a single group, and the correct one
-   gid_t groups[1];
-   if (getgroups(1, groups) != 1)
-  return _error->Errno("getgroups", "Could not get new groups");
-   if (groups[0] != pw->pw_gid)
-  return _error->Error("Could not switch group");
+   // Verify that the user isn't still in any supplementary groups
+   long const ngroups_max = sysconf(_SC_NGROUPS_MAX);
+   std::unique_ptr<gid_t[]> gidlist(new gid_t[ngroups_max]);
+   if (unlikely(gidlist == NULL))
+  return _error->Error("Allocation of a list of size %lu for getgroups failed", ngroups_max);
+   ssize_t gidlist_nr;
+   if ((gidlist_nr = getgroups(ngroups_max, gidlist.get())) < 0)
+  return _error->Errno("getgroups", "Could not get new groups (%lu)", ngroups_max);
+   for (ssize_t i = 0; i < gidlist_nr; ++i)
+  if (gidlist[i] != pw->pw_gid)
+	 return _error->Error("Could not switch group, user %s is still in group %d", toUser.c_str(), gidlist[i]);
 
// Verify that gid, egid, uid, and euid changed
if (getgid() != pw->pw_gid)




Bug#806475: apt: Breaks debian-installer build, select with no read/write fds?

2015-11-27 Thread David Kalnischkies
On Sat, Nov 28, 2015 at 12:30:52AM +0100, Cyril Brulebois wrote:
> Now if I log out of the schroot session, remove my user 'kibi' from the
> cdrom group and re-enter a schroot session, I'm now getting a failure on
> the next group:
> | (sid-amd64-devel)kibi@wodi:~/debian-installer/installer$ make -C build 
> build_netboot-gtk USE_UDEBS_FROM=sid 
> | make: Entering directory '/home/kibi/debian-installer/installer/build'
> | Using generated sources.list.udeb:
> |deb [trusted=yes] copy:/home/kibi/debian-installer/installer/build/ 
> localudebs/
> |deb http://localhost/debian sid main/debian-installer
> | make[2]: 'sources.list.udeb' is up to date.
> | Reading package lists... Done
> | E: Method gave invalid 400 URI Failure message: Could not switch group, 
> user _apt is still in group 25
> | E: Method gave invalid 400 URI Failure message: Could not switch group, 
> user _apt is still in group 25
> | E: Method copy has died unexpectedly!
> | E: Sub-process copy returned an error code (112)
> | 
> | (sid-amd64-devel)kibi@wodi:~/debian-installer/installer$ getent group floppy
> | floppy:x:25:kibi
> | 
> | (sid-amd64-devel)kibi@wodi:~/debian-installer/installer$ groups
> | kibi floppy audio dip video plugdev sbuild kvm libvirt
> 
> Iterating again, I'm now failing because of the audio group…

Mhh. apt is run as root (as we don't reach this codepath with uid !=
0), but it has all the groups of kibi and a setgroups is silently
ignored… wtf…

The code, if someone wants to look:
https://anonscm.debian.org/cgit/apt/apt.git/tree/apt-pkg/contrib/fileutl.cc#n2264
I will go to bed now; maybe I have an epiphany tomorrow.
(or manage to reproduce this for a start)


> While I've been experimenting with adding/removing myself from the said
> groups, I'm noticed this a few times, without being able to figure out
> what exactly causes this…
> | W: No sandbox user '_apt' on the system, can not drop privileges
> 
> In which case, going back to apt.git and "sudo debi -u" to reinstall all
> packages I've built seems to fix the issue.

As mentioned briefly schroot copies users & groups from your host
system, so if your host system has no _apt user, the _apt user in your
schroot will "disappear" next time it is copied over.


Best regards

David Kalnischkies




Bug#806408: apt: adequate: underlinking (undefined symbols)

2015-11-27 Thread David Kalnischkies
Control: severity -1 minor

On Fri, Nov 27, 2015 at 09:41:30AM +0100, Thorsten Glaser wrote:
> | Shared libraries MUST be linked against all libraries that they use
> | symbols from in the same way that binaries are.
> (emphasis mine)

> adequate reports:
> […]
> apt: undefined-symbol /usr/lib/x86_64-linux-gnux32/libapt-private.so.0.0.0 => 
> _Z8ShowHelpR11CommandLine
> apt: undefined-symbol /usr/lib/x86_64-linux-gnux32/libapt-private.so.0.0.0 => 
> _Z11GetCommandsv
> […]
>
> See also: Policy §8.6.1

This applies to SHARED libraries (emphasis mine), but libapt-private
isn't one. It's an internal library¹ used to share code between apt,
apt-get and co which is too specific to be moved into the real shared
library libapt-pkg (or its ugly cousin libapt-inst) – but creating "fat"
binaries isn't in our best interest either.

The mentioned symbols (which are implemented by the respective binary
itself, by the way) are the best examples for this: the --help text and
which commands (like install/remove/clean) a given binary supports (and
which function is used to deal with them) are of no concern for other
apt frontends and so do not need to clutter the ABI/API of libapt-pkg.
Even in libapt-private it makes little sense to define them, as they are
different for each binary – but then I am lazy and don't want to maintain
pretty much the same copycat calling code in each frontend…


I guess to appease adequate I can pull some weak-symbol magic, which
I initially intended anyhow, but partly forgot and partly found useful
while working on this change.


¹ if "-private" wasn't enough of a hint, headers aren't available, no
symbols/shlibs file and exactly nobody cares about ABI/API in there.


Best regards

David Kalnischkies




Bug#784548: apt-get update race condition in candidate dependency resolution

2015-05-07 Thread David Kalnischkies
Control: severity -1 normal
Control: merge -1 717679

On Wed, May 06, 2015 at 04:39:05PM +0200, Stefan Schlesinger wrote:
> Justification: could cause unwanted versions to be installed without notice
> Severity: grave

No*. At least if you are careful you have a few chances of noticing it.
The -V option e.g. tells you the versions. The download progress tells
you where stuff comes from.


> Running apt-get update on a system at the same time with other apt commands,
> causes apt to resolve package dependencies and policies inconsistently.
>
> Behavior causes race conditions during package cache update, resulting in
> altered candidate versions and you will end up with unwanted versions being
> installed (eg. when running 'apt-get upgrade -y' or working with
> unattended-upgrades).
>
> Current stable version in Jessie - 1.0.9.8 is also affected.

Every version ever in existence in the last 17 years is affected.
That it survived 17 years is reason enough not to give it RC severity.

Also note that you need root rights to run 'apt-get update' (or
equivalent) commands, much like you need root rights to run 'rm -rf /'.
Over the years rm gained safeguards to prevent this, but these will never
be perfect and we are much in the same position. In the end root is the
only account who can do everything, but it should be asked whether that
means it should do everything. Maybe running multiple apt commands in
parallel is just in general not a good idea… and btw, neither is it for
dpkg or any other package management tool. Many things are heavily
interlocked here, working in concert to make package installation
possible. Sometimes I think 'we' higher-level parts like apt just made it
too easy. On every other platform which can be updated nobody would
ever ask for running stuff in parallel, simply because those updates
happen in total lockdown…


 IMO, this should either be an atomic file/directory move, once package files 
 where
 downloaded successfully, or apt-get update should make use of locking as well.

This isn't really about the Packages files, but about the Release files
and only in a second step about all the other files apt downloads. What
sounds like a trivial implementation happily explodes into a code
nightmare with only slightly fewer edge cases than pi has decimal places.
We are slowly getting better, but there is only so much you can do with
the resources we have…


 We saw this issue affecting multiple apt commands and actions:

Everyone using libapt is affected if you will, so even stuff like the
typical software-center. And not all of these are started and/or always
working as root, so just 'locking' is not an option unless you happen
to forbid every use of libapt as non-root in the process and only allow
libapt to be loaded by one root application at a time. That would
immensely cripple the usability for next to no gain…


Best regards

David Kalnischkies




Bug#781696: [PATCH] apt-key del keyid is case sensitive

2015-04-09 Thread David Kalnischkies
Control: tags -1 - jessie sid
Control: found -1 0.9.10
# introduced by git commit 04937adc655ceda0b3367f540e76df10296cfba1

On Thu, Apr 09, 2015 at 01:30:19AM -0400, Nathan Kennedy wrote:
 Tagging this as a security issue since the effect is to allow
 installation of packages signed with keys that the administrator (or a
 an administrative script) specifically intended to remove, and this is
 a regression from wheezy.

I think 'security' is a bit much here, but well… The 'OK' was always the
case, removed or not, and changing this at this point in time is
completely out of the question as there might be maintainer scripts
depending on this behavior (many -keyring packages remove 'old' keys or
transition to fragment files, and not all of them ignore the exit code of
apt-key in this case). Behaviour changes two seconds before release are
no good.

The regression on lower-case (aka the grep -i) is under consideration
for jessie. A pre-approval unblock request with this (and other things)
was filled yesterday (see #782131).

Note that the output from gnupg as well as from apt is uppercase, so it's
likely that lowercase is only encountered if people interactively type
out the keyids, which is not a very common use case – after all apt-key
is supposed to be used mainly by -keyring packages and even those are
supposed to get away from using it, leaving next to nobody with a valid
use case for it…

… except apt itself in the future. See the experimental branch which
reworks apt-key to make what used to be considered the enemy (= the idea
was to remove the dependency on gnupg and ultimately drop apt-key) our
best friend (= a future gnupg2 is going to enforce some new rules, like
only one keyring, which work directly against e.g. trusted.gpg.d/, so we
need gpg to do all sorts of clever magic instead of resorting to gpgv
only :/ ).


 Pull request with fix to sid is attached. This doesn't fully restore
 previous behavior; before long keyids could be used as well, but it
 allows mixed case and fails if deletion fails (LP 1256565).

Please split up your patches into meaningful self-contained entities.
Ignoring case is independent from erroring on not-found, for example,
so that should be two commits, not bundled up in one.

As said, the latter isn't going to be considered for jessie, but we could
do e.g. a warning for stretch and an error for buster. Please report a new
bug for this – if you want to adapt your patch, all the better (please
against /experimental aka soon-to-be stretch).

btw: /experimental also restores fingerprint support (long keyid is
supported). What it doesn't restore is all the various other ways of
matching a key gnupg supports – whether those happen to work (or not) is
undefined by the manpage (it says only keyid).


 +msgtest Try to remove a 'nonexistent keyid'
 +testfailure --nomsg aptkey --fakeroot --keyring rootdir/etc/apt/trusted.gpg 
 del BOGUSKEY

Such a test is probably better off using a valid keyid – otherwise you
are testing whether bogus ids trigger an error, not whether a nonexistent
id triggers an error.

  msgtest Try to remove a key which exists, but isn't in the 'forced keyring'
  testsuccess --nomsg aptkey --fakeroot --keyring rootdir/etc/apt/trusted.gpg 
 del DBAC8DAE

Shouldn't (at least) this testcase fail if you fail on not acting?


Best regards

David Kalnischkies




Bug#781858: apt: dangling pointer crash

2015-04-06 Thread David Kalnischkies
Control: fixed -1 1.1~exp4

Hi,

On Mon, Apr 06, 2015 at 03:44:01PM +0200, Tomasz Buchert wrote:
 :D Of course! I focused so much on not breaking API  ABI, that I forgot
 about it.

The problem, which is also why this got a FIXME instead of a fix back
then, is that you not only have the problem of keeping the ABI, which is
already hard, but also the API – and the API doesn't indicate that this
string is stored on the heap and needs to be freed. In fact, it goes as
far as explicitly stating that it doesn't (which is why you were forced
to cast so much). That would all be fine if all the code accessing and
setting it were under our control, but it isn't. Nothing really
stops a libapt user from implementing its own acquire items (and e.g.
aptitude does this), which very well could set Mode as well (I think
aptitude doesn't), which in the best case means leaked memory and in the
worst case libapt trying to free non-heap memory in the item's
destructor later on, resulting in a crash.

The apt/experimental version does away with this problem by moving to
a std::string under a different name and deprecating Mode – which gets
assigned the c_str of the new std::string and hence isn't running out
of scope anymore¹ while also keeping the now deprecated Mode working.

I have a ¹ here as, in an implementation-theoretical way, the std::string
which is stored in Mode isn't running out of scope in many compiler
implementations of std::string as they deploy copy-on-write, and we don't
modify it, so the string we use here comes from the global config which is
always in scope. I know that is a lot of ifs and very brittle, but
that has worked since Aug 2009 for e.g. apt's progress reporting (where
Mode is used) – even though back then I really had no idea and just got
lucky, as this line was part of one of my first contributions (and my C++
background was equally thin – not that the latter is much better now) …
[you can 'easily' verify this by e.g. printing Mode in the item
destructor. The output is correct. Now go and append something to the
decompProg string before assigning it to Mode and you will notice that
the result is unpredictable garbage. All hail to optimization! ;) ]

I wonder what is so special about aptdaemon that it has problems now -
so can someone please verify that this is really the problem and not
just the first thing someone stumbled over while trying to find
a culprit (no blame, it would be my first bet, too)?

Anyway, if that is really a problem we can fix it in a more compatible
way: instead of assigning the decompProg string, we can go with 'decomp'
or 'unpack' or some such. It's used only for display purposes anyway,
and whether a user sees a "bzip2 21kB/42kB" or a "decomp 21kB/42kB"²
should not matter much (trivial diff attached).


Best regards

David Kalnischkies

² 'decomp' mostly because apt has a tendency to use incomprehensible
strings here – or does any normal user know what rred is?
Most who see it think it's a typo for 'read' after all. ;)
diff --git a/apt-pkg/acquire-item.cc b/apt-pkg/acquire-item.cc
index 253cbda..a603479 100644
--- a/apt-pkg/acquire-item.cc
+++ b/apt-pkg/acquire-item.cc
@@ -1194,8 +1194,10 @@ void pkgAcqIndex::Done(string Message,unsigned long long Size,string Hash,
   Desc.URI = decompProg + ":" + FileName;
QueueURI(Desc);
 
-   // FIXME: this points to a c++ string that goes out of scope
-   Mode = decompProg.c_str();
+   if (compExt == "uncompressed")
+  Mode = "copy";
+   else
+  Mode = "decomp";
 }
 	/*}}}*/
 // AcqIndexTrans::pkgAcqIndexTrans - Constructor			/*{{{*/




Bug#776910: apt: upgrade from wheezy to jessie breaks in the middle

2015-03-08 Thread David Kalnischkies
 will silently discard it.  So follow up with a separate mail
 afterwards to ensure we notice once you have done it.
 
 I've learned, that attachments are OK. So here is my (most relevant)
 dpkg.status after the upgrade. Regretably, at this point I don't have the
 one from the date of an upgrade.

Thanks, but the status is indeed 'too late'. It's your system already
fully upgraded to jessie. Good would have been your wheezy status, or
the one in between, but well, that can be attributed to me being such
a slowpoke in replying, sorry. :/

I am inclined to close this as no longer an issue instead of merging
it with the other trigger-loop bugs, as we have enough of them already
with very similar information (to be fair, I am inclined to close them
as well, but I guess it will be a jessie-ignore by Niels [or another
release teamer] instead to scare me).

Or is there anything left unanswered/open?


Best regards

David Kalnischkies




Bug#779592: [apt] /var/lib/apt/lists/partial/ gets filled by Diff_index file

2015-03-08 Thread David Kalnischkies
Control: fixed -1 1.1~exp4
Control: severity -1 normal

Hi,

On Mon, Mar 02, 2015 at 08:34:33PM +0100, Valerio Passini wrote:
 This bug it's tremendous: if in my source list there is this Debian mirror 
 line:
 
 deb http://ftp.it.debian.org/debian/ experimental main contrib non-free;
 
 the directory /var/lib/apt/lists/partial/ is quickly filled by a Diff_index 
 file 
 growing at a 30MB/s rate until the partition is full. Quite obviously the 
 next 

Well, I can't reproduce it here.

Was the file really just called Diff_index?

I presume it was:
ftp.it.debian.org_debian_dists_experimental_main_binary-amd64_Packages.IndexDiff
which is the filename for this file:
http://ftp.it.debian.org/debian/dists/experimental/main/binary-amd64/Packages.diff/Index


 boot is going to fail for the lack of disk space. I can't understand if this 
 bug it's in the mirror or in apt, but it's quite annoying and should really 
 be 
 fixed ASAP. Best regards

There isn't much we can do about it at the moment. apt/stretch (not
jessie!) will know (most) filesizes in advance and check that it isn't
getting fed too much, but that is just preventing a bad symptom from
appearing (= full disk); it isn't a solution for the (unknown) initial
problem.

Could be a misbehaving proxy (do you have one?), a misbehaving server,
your ISP (via a misbehaving proxy) or any classic man-in-the-middle
really. Hard to know without details. Can you even reproduce it?


I am not fully closing this bug as fixed in a future version just yet
because I would like to understand what is going on here in case we can
do anything (further) to prevent this from happening, but I am
downgrading drastically as this isn't a new issue (= always possible in
all versions of apt), apt isn't made unusable by it, we are not losing
any data (well, with a full disk we potentially are, but that is a bug
of other tools not handling this case) and it's not opening a security
hole. So none of the reasons for 'grave' apply here and hence it is not
release critical.


Best regards

David Kalnischkies




Bug#779294: /usr/bin/python: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by /usr/bin/python)

2015-02-28 Thread David Kalnischkies
On Fri, Feb 27, 2015 at 08:17:26PM +0100, Andreas Beckmann wrote:
Preparing to replace python2.7-minimal 2.7.3-6+deb7u2 (using 
  .../python2.7-minimal_2.7.8-11_i386.deb) ...
Unpacking replacement python2.7-minimal ...
[…]
Preparing to replace debconf 1.5.49 (using .../debconf_1.5.55_all.deb) 
  ...
/usr/bin/python: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.15' not 
  found (required by /usr/bin/python)
dpkg: warning: subprocess old pre-removal script returned error exit 
  status 1
dpkg: trying script from the new package instead ...
/usr/bin/python: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.15' not 
  found (required by /usr/bin/python)
dpkg: error processing /var/cache/apt/archives/debconf_1.5.55_all.deb 
  (--unpack):
 subprocess new pre-removal script returned error exit status 1
[…]
  This looks a bit like python was unpacked before the new glibc.
  
  debconf calls pycompile (and python).  It looks like this kind of thing can
  happen with any binary which needs the new glibc, and in this case it hits 
  python.

The dpkg error is talking about the prerm script of debconf.
Looking at it shows that it indeed calls python scripts (pyclean,
py3clean) generated by dh_python2 and dh_python3 respectively.

Now, the guarantees you have while prerm is running are not really
great: everything can be half-installed (in a new version), but was
configured (in an old version) [see §6.5]. Not really a 'problem' as
debconf has no dependency on python-minimal at all, so it can be in any
state anyway.

Looking at python-minimal (which contains the /usr/bin/python link) and
then at python2.7-minimal (which contains the link target) looks better:
python2.7-minimal pre-depends on glibc, which is a strong guarantee, and
given that the log contains the unpack of python2.7-minimal, it should
also contain unpack+configure of glibc – if the version already
installed isn't high enough.

The python2.7-minimal version 2.7.9-1 currently in sid pre-depends >=
2.15 on amd64 and i386 (and a bunch of other archs – not on all!), so we
should have seen glibc here, and before someone shows a log to
disprove me, I presume tagging 'sid' was a mistake.

The python2.7-minimal version 2.7.8-11 currently in jessie and the one
this log was talking about pre-depends >= 2.15 on amd64, but on i386 the
pre-depends is a relaxed >= 2.3.6-6~. That is satisfiable by wheezy's
libc6 (currently at 2.13-38+deb7u8) easily (the same or similar again
for other archs as well). I have my doubts this version contains 2.15
symbols though, but this is by definition not apt's fault. The question
is now how this pre-dependency came to be, but that is something the
python and glibc maintainers can work out.



Slightly unrelated sidenote: python-minimal might be better off
pre-depending on python2.7-minimal. I have my doubts it could actually
happen in practice, but in theory I could freshly install python and
upgrade debconf in the following order:
unpack python-minimal  (the pyclean script is installed)
unpack debconf (prerm finds pyclean script and calls it)
unpack python2.7-minimal (the python interpreter is installed)

The second one will fail as pyclean can't be executed because the
interpreter isn't installed. APT will avoid doing this in general, hence
my doubt that this is a problem in practice, but it is technically
allowed (as long as debconf has no python dependency). This probably gets
slightly more real if python-minimal ever decides to link to (e.g.)
python5 instead.



Best regards

David Kalnischkies




Bug#778375: apt-transport-https: segfaults

2015-02-23 Thread David Kalnischkies
On Mon, Feb 16, 2015 at 01:16:19AM +0100, Tomasz Buchert wrote:
 The tricky HTTPS server returns this line: HTTP/1.1 302. Note that
 there is no explanation for the status code 302 (it should be
 Found). Anyway, this is fine, the code seems to be prepared for
 that case: elements is set to 3 in server.cc:128.

apt has accepted this since 0.8.0~pre2 (23 Aug 2010). I think back then
it was also a sourceforge server triggering this. Note that this is
a violation of the HTTP/1.1 spec (see RFC 7230 section 3.1.2), which
allows for an empty reason-phrase, but the space before it is
non-optional.


 However, Owner is NULL (I don't know why, I don't know the code, but
 it is) so Owner-Debug fails in server.cc:132.
 
 The attached patch checks whether Owner is NULL before dereferencing
 it. This fixes this problem for me, but somebody who knows what Owner
 is should make sure that it makes sense.  Feel free to adjust the
 patch to your needs, it's in public domain.

<rambling>
That is a good catch! 'Owner' refers here to the ServerMethod owning the
ServerState (that was a very helpful explanation, wasn't it? ;) ).

It boils down to this: in Sep 2013 I wanted to fix some bugs in https by
using less curl and more of our own http code. For this I invented
a bunch of Server classes as parents for http and https – in hindsight,
I really should have used another name, but well, anyway – except that
both were completely different in their implementation.

Somehow I managed to pull it off anyway, with the result that they share
most of their state parsing/tracking, which is quite helpful. It also
means though that the actual Methods using the State are still very
different, so getting a common interface for them was hard. Somewhere
down that line I took a shortcut, giving the HttpsState a NULL for its
owner as it 'never' really needed it and could hence be fixed 'later'
correctly, right?

Fast forward one and a half years and the 'never' as well as the 'later'
is spoiled. It's a bit ironic that a debug message does this to me…
</rambling>

The proposed patch works just fine as the other users of 'Owner'
aren't used by https, and for http it's always properly set (and nobody
dies if a debug message isn't shown even if requested), and at this point
in the release I guess everyone will be happy about a one-line fix.
(Michael is uploading it any minute now.)

Attached is my full-blown 'proper' patch with a testcase, which I am
going to apply to our /experimental branch for comparison in the meantime.


Best regards

David Kalnischkies
diff --git a/methods/https.cc b/methods/https.cc
index 3a5981b..444bdef 100644
--- a/methods/https.cc
+++ b/methods/https.cc
@@ -109,7 +109,7 @@ HttpsMethod::progress_callback(void *clientp, double dltotal, double /*dlnow*/,
 }
 
 // HttpsServerState::HttpsServerState - Constructor			/*{{{*/
-HttpsServerState::HttpsServerState(URI Srv,HttpsMethod * /*Owner*/) : ServerState(Srv, NULL)
+HttpsServerState::HttpsServerState(URI Srv,HttpsMethod * Owner) : ServerState(Srv, Owner)
 {
   TimeOut = _config->FindI("Acquire::https::Timeout",TimeOut);
Reset();
@@ -313,13 +313,11 @@ bool HttpsMethod::Fetch(FetchItem *Itm)
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, timeout);
 
// set redirect options and default to 10 redirects
-   bool const AllowRedirect = _config->FindB("Acquire::https::AllowRedirect",
-	_config->FindB("Acquire::http::AllowRedirect",true));
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, AllowRedirect);
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 10);
 
// debug
-   if(_config->FindB("Debug::Acquire::https", false))
+   if (Debug == true)
   curl_easy_setopt(curl, CURLOPT_VERBOSE, true);
 
// error handling
@@ -356,7 +354,7 @@ bool HttpsMethod::Fetch(FetchItem *Itm)
 
// go for it - if the file exists, append on it
   File = new FileFd(Itm->DestFile, FileFd::WriteAny);
-   Server = new HttpsServerState(Itm->Uri, this);
+   Server = CreateServerState(Itm->Uri);
 
// keep apt updated
   Res.Filename = Itm->DestFile;
@@ -451,6 +449,25 @@ bool HttpsMethod::Fetch(FetchItem *Itm)
 
return true;
 }
+	/*}}}*/
+// HttpsMethod::Configuration - Handle a configuration message		/*{{{*/
+bool HttpsMethod::Configuration(string Message)
+{
+   if (ServerMethod::Configuration(Message) == false)
+  return false;
+
+   AllowRedirect = _config->FindB("Acquire::https::AllowRedirect",
+	_config->FindB("Acquire::http::AllowRedirect", true));
+   Debug = _config->FindB("Debug::Acquire::https",false);
+
+   return true;
+}
+	/*}}}*/
+ServerState * HttpsMethod::CreateServerState(URI uri)			/*{{{*/
+{
+   return new HttpsServerState(uri, this);
+}
+	/*}}}*/
 
 int main()
 {
diff --git a/methods/https.h b/methods/https.h
index 411b714..f8d302d 100644
--- a/methods/https.h
+++ b/methods/https.h
@@ -52,7 +52,7 @@ class HttpsServerState : public ServerState
virtual ~HttpsServerState() {Close();};
 };
 
-class HttpsMethod : public pkgAcqMethod
+class HttpsMethod : public ServerMethod
 {
// minimum

Bug#776910: apt: upgrade from wheezy to jessie breaks in the middle

2015-02-03 Thread David Kalnischkies
Control: tag -1 - newcomer

Hi Rafal,

On Tue, Feb 03, 2015 at 10:20:26AM +0100, Rafal Pietrak wrote:
 My guess is that some limit on number of errors was taken into account
 unneceserly during an upgrade - upgrades are expected to rise trancient 
 errors.

Your report seems to err in the total opposite direction, unfortunately,
by not mentioning a single error. Upgrading is a tricky business and
basically different for everyone (as which packages you have installed
can vary widely, given the sheer number to choose from). Your report is
hence as actionable as a weather report saying: it is going to rain
tomorrow somewhere on earth. That isn't really telling me whether I should
carry an umbrella around or not on my adventure around my little patch of
dirt. For this, as well as here, we need details, details, details to
actually do something about it.

But my crystal ball tells me that you might mean #776063, as dpkg shows
in this context a "too many errors" message (litmus test: the word 'dbus'
was printed all over the place, right?).

If not we need at the very least the actual error message(s). The
current system state (/var/lib/dpkg/status) as well as the state before
the upgrade (the /var/backups/dpkg.status* file dated before the update)
could also be helpful.


On a more general note: try not to guess in bug reports. You are the
eyewitness, you know the facts. I am the guy on jury duty who has to
come up with a coherent story of what happened and why. I know it's
tempting to add evidence as a witness, but that can spoil the whole
process.


Best regards

David Kalnischkies

P.S.: The 'newcomer' tag is for maintainers to indicate bite-sized bugs
which a newcomer to the project/package can try to tackle to get
started. I wouldn't recommend these sorts of upgrade bugs as a starting
point… and we certainly don't need to label bugs from newcomers as such
(which I guess is what you meant it to mean), as a bug is a bug; it isn't
worse just because a longtime contributor reported it.




Bug#776063: dbus fails to upgrade rendering entire apt unusable

2015-01-30 Thread David Kalnischkies
(sry about leaving you guys hanging… I am not exactly blessed with free
time at the moment and this stuff requires the exact opposite… anyway,
more a problem description/comment than a solution. Far from the latter…)


On Sun, Jan 25, 2015 at 12:45:10AM +, Simon McVittie wrote:
 Is the fix for https://bugs.debian.org/769609 expected to fix this
 particular issue, or am I misreading it?

No. It's not fixing any issue at the moment. It will be if dpkg/stretch
drops its compatibility "run all runnable pending triggers" code, but
that is (not) going to affect you in the jessie → stretch upgrade; not
now, nor would it help, as this trigger is not runnable (given that dpkg
tries, but ultimately fails to do so) and all this bug is supposed to
prevent is that triggers which should have been run are not.


So what's the problem? Let's pretend this universe (which is a simplified
real-world one):

Package: systemd-sysv
Depends: systemd

Package: systemd
Postinst: trigger dbus-trig

Package: dbus
Depends: libdbus-1-3
Trigger: interest-await dbus-trig

Package: libdbus-1-3

Now, for simplicity let's just assume all of these packages are already
installed and we are going to install them again. Let's further assume
we pick libdbus-1-3 to be unpacked first and then we deal with the rest,
like this:

dpkg --unpack libdbus-1-3*.deb
dpkg --install systemd*.deb
dpkg --configure libdbus-1-3

So, what you will see is that the second dpkg call will fail because
systemd will end up in 'iW' on dbus, which itself can't leave 'it' as it
depends on libdbus-1-3, which is just unpacked ('iU').

This never happened in the past as dpkg would just run the dbus trigger
anyway (not ideal, right?). This new style assumes on the other hand that
a dpkg frontend like apt is making stuff up as it goes rather than
planning out what to do once, as for a frontend like apt the need to run
the dbus trigger comes out of nowhere and derails the entire plan.


Now, the universe constructed above is totally made up in that it is so
simple. In this simple scenario apt doesn't call dpkg in that way. Why
it runs dpkg in this way and insists on it I haven't checked, thanks to
time issues, but given that this is the only trigger I know of where it
happens in real life, and it isn't even limited to wheezy upgrades (as
I got this on a sid system (only updated once in a blue moon though)
the other day), I presume it's a very special snowflake thing going on
around pseudo-essentials, pre-depends and conflicts which apt is trying
too eagerly to avoid and instead steps on this trigger trap (SCNR).


 Or if dropping it down to interest-noawait would help, that isn't
 really semantically correct, but it's probably acceptable in practice?

It's not helping the general case of course, but -noawait triggers can't
run into this problem as nothing can end up in 'iW' with them. So if
you think this is acceptable, I think it might be better than the
alternatives, like ripping this out of dpkg again or busy-waiting for me
to figure something out (especially as I doubt that it will be pretty or
even simple, if solvable at all for wheezy upgrades given we only have
apt/wheezy for it…).


Best regards

David Kalnischkies

P.S.: apt isn't recovering in this situation as it ignores 'iW' states,
while it should probably just half-ignore them by trying to get the
system into a state in which the package could be configured, but never
explicitly requesting the trigger processing; but the commit making it
ignore them is years old and as usual: changes to the install order scare
the shit out of me, especially five minutes before release.




Bug#774924: apt: Jessie version cannot find upgrade path (but Wheezy version can)

2015-01-10 Thread David Kalnischkies
Control: found -1 0.9.16
Control: tags -1 patch

On Fri, Jan 09, 2015 at 04:00:10PM +0100, David Kalnischkies wrote:
 In the meantime, I hopefully figure out what is the meaningful
 difference between wheezy and jessie score keeping here. I remember
 a few changes, but they should actually help in these cases rather than
 making it fail spectacularly…

(Assuming you implement them correctly of course…)

Commit 9ec748ff103840c4c65471ca00d3b72984131ce4 from Feb 23 last year
adds a version check after 8daf68e366fa9fa2794ae667f51562663856237c
added, 8 days earlier, negative points for breaks/conflicts with the
intent that only dependencies which are satisfied propagate points
(aka: old conflicts do not).

The implementation was needlessly complex and flawed by preventing
positive dependencies from gaining points like they did before these
commits, making library transitions harder instead of simpler. It worked
out anyhow most of the time out of pure 'luck' (and other ways of
gaining points) or got misattributed to being a temporary hiccup.
(Changing the priorities would still be a good idea anyhow)


Best regards

David schrödinbug Kalnischkies
commit 77b6f202e1629b7794a03b6522d636ff1436d074
Author: David Kalnischkies da...@kalnischkies.de
Date:   Sat Jan 10 12:31:18 2015 +0100

award points for positive dependencies again

Commit 9ec748ff103840c4c65471ca00d3b72984131ce4 from Feb 23 last year
adds a version check after 8daf68e366fa9fa2794ae667f51562663856237c
added 8 days earlier negative points for breaks/conflicts with the
intended that only dependencies which are satisfied propagate points
(aka: old conflicts do not).

The implementation was needlessly complex and flawed through preventing
positive dependencies from gaining points like they did before these
commits making library transitions harder instead of simpler. It worked
out anyhow most of the time out of pure 'luck' (and other ways of
gaining points) or got miss attributed to being a temporary hick-up.

Closes: 774924

diff --git a/apt-pkg/algorithms.cc b/apt-pkg/algorithms.cc
index 608ec7f..b838310 100644
--- a/apt-pkg/algorithms.cc
+++ b/apt-pkg/algorithms.cc
@@ -468,7 +468,7 @@ void pkgProblemResolver::MakeScores()
	 if (D->Version != 0)
 	 {
 	pkgCache::VerIterator const IV = Cache[T].InstVerIter(Cache);
-	if (IV.end() == true || D.IsSatisfied(IV) != D.IsNegative())
+	if (IV.end() == true || D.IsSatisfied(IV) == false)
 	   continue;
 	 }
	 Scores[T->ID] += DepMap[D->Type];
diff --git a/test/integration/test-allow-scores-for-all-dependency-types b/test/integration/test-allow-scores-for-all-dependency-types
index a5c98f3..d60cb8d 100755
--- a/test/integration/test-allow-scores-for-all-dependency-types
+++ b/test/integration/test-allow-scores-for-all-dependency-types
@@ -32,6 +32,11 @@ insertpackage 'multipleyes' 'foo' 'amd64' '2.2' 'Conflicts: bar (= 3)'
 # having foo multiple times as conflict is a non-advisable hack in general
 insertpackage 'multipleyes' 'bar' 'amd64' '2.2' 'Conflicts: foo (= 3), foo (= 3)'
 
+#774924 - slightly simplified
+insertpackage 'jessie' 'login' 'amd64' '2' 'Pre-Depends: libaudit1 (>= 0)'
+insertpackage 'jessie' 'libaudit1' 'amd64' '2' 'Depends: libaudit-common (>= 0)'
+insertpackage 'jessie' 'libaudit-common' 'amd64' '2' 'Breaks: libaudit0, libaudit1 (<< 2)'
+
 cp rootdir/var/lib/dpkg/status rootdir/var/lib/dpkg/status-backup
 setupaptarchive
 
@@ -142,3 +147,26 @@ Inst foo [1] (2 versioned [amd64])
 Inst baz (2 versioned [amd64])
 Conf foo (2 versioned [amd64])
 Conf baz (2 versioned [amd64])' aptget install baz -st versioned
+
+# recreating the exact situation is hard, so we pull tricks to get the score
+cp -f rootdir/var/lib/dpkg/status-backup rootdir/var/lib/dpkg/status
+insertinstalledpackage 'gdm3' 'amd64' '1' 'Depends: libaudit0, libaudit0'
+insertinstalledpackage 'login' 'amd64' '1' 'Essential: yes'
+insertinstalledpackage 'libaudit0' 'amd64' '1'
+testequal 'Reading package lists...
+Building dependency tree...
+The following packages will be REMOVED:
+  gdm3 libaudit0
+The following NEW packages will be installed:
+  libaudit-common libaudit1
+The following packages will be upgraded:
+  login
+1 upgraded, 2 newly installed, 2 to remove and 0 not upgraded.
+Remv gdm3 [1]
+Remv libaudit0 [1]
+Inst libaudit-common (2 jessie [amd64])
+Conf libaudit-common (2 jessie [amd64])
+Inst libaudit1 (2 jessie [amd64])
+Conf libaudit1 (2 jessie [amd64])
+Inst login [1] (2 jessie [amd64])
+Conf login (2 jessie [amd64])' aptget dist-upgrade -st jessie




Bug#774924: apt: Jessie version cannot find upgrade path (but Wheezy version can)

2015-01-09 Thread David Kalnischkies
On Fri, Jan 09, 2015 at 09:01:00AM +0100, Niels Thykier wrote:
> In the dpkg + APT first run[2], APT ends up concluding that
> login should be removed and aborts as it refuses to uninstall an
> essential package. In the regular run[1], the login package is
> (eventually) upgraded without any issues.

The reason is that [2] decides to keep libaudit0 installed. I haven't
figured out yet why it's a -1 draw between libaudit0 and libaudit-common
in the apt-first case rather than a 'clear' 0 vs 8 as in the regular
case, but I have to note that it's quite strange that the audit family of
packages is prio:optional while they are a direct pre-dependency of an
essential package (login)… not to mention systemd and sudo (which is
probably the reason this package appears in another instance).

Getting the priority of the audit packages to reflect reality with an
override should fix that specific instance and probably wouldn't hurt
clarity anyhow.

In the meantime, I will hopefully figure out what the meaningful
difference between the wheezy and jessie score keeping is here. I remember
a few changes, but they should actually help in these cases rather than
make it fail spectacularly…
(Remove an essential pkg? Seriously, bro?)


Best regards

David Kalnischkies


signature.asc
Description: Digital signature


Bug#772641: apt: E: Setting TIOCSCTTY for slave fd fd failed when run as a session leader

2014-12-10 Thread David Kalnischkies
Hi,

On Tue, Dec 09, 2014 at 03:57:05PM +0200, Apollon Oikonomopoulos wrote:
> apt 1.0.9.4 does not work correctly when run as a session leader,
> reporting a failed ioctl on the pty used by dpkg. When called by puppet,
> it emits the following output:
[…]
> Apart from the error message, it also appears that apt is trying to close its
> own control terminal, thus SIGHUP'ing itself, signaling an unclean exit:

There was a time in which I read "Why isn't X the year of the Linux
Desktop" articles and the answer was always: because of terminals.
I couldn't understand what could be so wrong with terminals. I loved them.

Then, I started hacking on the PTY handling in apt…
And all of a sudden I understand…


But enough about my first world problems:
The setup is like this: the apt process itself is keeping
a reference open to the pseudo terminal slave as Linux is upset
otherwise (see 299aea924ccef428219ed6f1a026c122678429e6).

That is all nice and dandy up to the point where the apt process has no
controlling terminal, so that opening the pseudo terminal slave will
make this terminal our controlling terminal! In our cleanup at the end
we close the pseudo terminal, which in that case is a terminal mistake
as it's our controlling terminal, which quite literally means: we hang
ourselves.


So, the proper thing to do is to rule with our hard iron fist and show
our primitive little slave that it isn't right for him to have
aspirations beyond what its puppet-master intended for him.
In other words: We add O_NOCTTY to the open(2) call to stop the slave
terminal from becoming our controlling terminal.


Attached is a patch which hopefully does exactly this. It is against
experimental, but that shouldn't matter (except for the testcase,
I think). I have run it on Linux amd64 (and armel) hardware as well
as on a kfreebsd kvm, so I have some hope that it isn't regressing, but
it would be nice if you could try it with puppet just to be sure that we
are really fixing the problem completely and that I haven't just resolved
the problem in the setsid testcase.


Thanks in any case for the report and the testcase; especially the
latter helped tremendously in reproducing the problem!


Best regards

David Kalnischkies
commit c6bc9735cf1486d40d85bba90cfc3aaa6537a9c0
Author: David Kalnischkies da...@kalnischkies.de
Date:   Wed Dec 10 22:26:59 2014 +0100

do not make PTY slave the controlling terminal

If we have no controlling terminal, opening a terminal will make this
terminal our controller, which is a serious problem if it happens to
be the pseudo terminal we created to run dpkg in, as we will close this
terminal at the end, hanging ourselves up in the process…

The offending open is the one we do to have at least one slave fd open
all the time, but for good measure we apply the flag also to the slave
fd opened in the child process, as we set the controlling terminal
explicitly there.

This is a regression from 150bdc9ca5d656f9fba94d37c5f4f183b02bd746, with
the slight twist that this use case was silently broken before in that it
wasn't logging the output in term.log (as a pseudo terminal wasn't
created).

Closes: 772641

diff --git a/apt-pkg/deb/dpkgpm.cc b/apt-pkg/deb/dpkgpm.cc
index 79120f6..8a8214c 100644
--- a/apt-pkg/deb/dpkgpm.cc
+++ b/apt-pkg/deb/dpkgpm.cc
@@ -1127,7 +1127,7 @@ void pkgDPkgPM::StartPtyMagic()
 	   on kfreebsd we get an incorrect (step like) output then while it has
 	   no problem with closing all references… so to avoid platform specific
 	   code here we combine both and be happy once more */
-	d->protect_slave_from_dying = open(d->slave, O_RDWR | O_CLOEXEC);
+	d->protect_slave_from_dying = open(d->slave, O_RDWR | O_CLOEXEC | O_NOCTTY);
 	 }
   }
}
@@ -1159,7 +1159,7 @@ void pkgDPkgPM::SetupSlavePtyMagic()
if (setsid() == -1)
   _error->FatalE("setsid", "Starting a new session for child failed!");
 
-   int const slaveFd = open(d->slave, O_RDWR);
+   int const slaveFd = open(d->slave, O_RDWR | O_NOCTTY);
if (slaveFd == -1)
   _error->FatalE("open", _("Can not write log (%s)"), _("Is /dev/pts mounted?"));
else if (ioctl(slaveFd, TIOCSCTTY, 0) < 0)
diff --git a/test/integration/test-no-fds-leaked-to-maintainer-scripts b/test/integration/test-no-fds-leaked-to-maintainer-scripts
index cde987b..a7d556b 100755
--- a/test/integration/test-no-fds-leaked-to-maintainer-scripts
+++ b/test/integration/test-no-fds-leaked-to-maintainer-scripts
@@ -26,20 +26,25 @@ setupaptarchive
 
 rm -f rootdir/var/log/dpkg.log rootdir/var/log/apt/term.log
testsuccess aptget install -y fdleaks -qq > /dev/null
-msgtest 'Check if fds were not' 'leaked'
-if [ $(grep 'root root' rootdir/tmp/testsuccess.output | wc -l) = '8' ]; then
-	msgpass
-else
-	echo
-	cat rootdir/tmp/testsuccess.output
-	msgfail
-fi
 
-cp rootdir/tmp/testsuccess.output terminal.output
-tail -n +3 rootdir/var/log/apt/term.log | head -n -1 > terminal.log
-testfileequal

Bug#766758: apt: does not process pending triggers

2014-11-23 Thread David Kalnischkies
On Sat, Nov 15, 2014 at 12:28:07AM +0100, Guillem Jover wrote:
> > I dislike bug-pingpong, but in this case I have to move it back to dpkg
> > as we can't change apt to make upgrades work (at least it was never
> > allowed in the past, so I doubt it is an option now) and it's a behaviour
> > change in dpkg, not an apt regression per se, so dpkg/jessie has to behave
> > as expected by libapt-pkg/wheezy here regardless of how dumb that might
> > be.
>
> Sure, although the current apt behavior goes against the written
> triggers spec, where apt/aptitude even have their own section. :)

I don't want to be seen as picky, but it doesn't. Especially the
mentioned section isn't violated. We know these states and we call
configure for them if we see them, but the next line says we usually
will not see them. What you did now is changing the "usually" in this
sentence to "in the way you are using it, it will be close to always".

Triggers are from our viewpoint an implementation detail of dpkg (which
is also what the spec suggests), which leaks into our domain more and
more for good reasons, but at the same time it's bad as we can't really
deal with them as there is no way to predict what will happen…


> > If you agree just clone the bug back to us and I will take care of it
> > from the apt side. You might want to clone it to other dpkg-callers as
> > well as I presume that at least some have the same problem. Otherwise,
> > I am all ears for alternative solutions.
>
> Only apt seems to be affected. dselect properly uses “dpkg transactions”
> and as such queues all configuration in a final «--configure --pending»
> call. And cupt seems to behave correctly by calling dpkg with
> «--triggers-only --pending», but Eugene might know for sure.
>
> If you know of other frontends, I'd be interested to know.

Well, I don't know, but I would guess that at least the various
(cross-)bootstrappers need to be checked. smartpm (although, it might be
better to just remove it). d-i maybe, but I guess it doesn't use dpkg
directly (and/or later states with apt will fix that up). codesearch
might help if you can come up with a good search pattern (I couldn't).


> > So apt needs to either pass man-db to the --configure call, or just
> > do a final --triggers-only/--configure --pending call. A trivial fix
> > would be to change the default value for DPkg::TriggersPending to
> > true.

I just realized that we also have a DPkg::ConfigurePending option
causing apt to run a dpkg --configure --pending after all other dpkg
calls, so I will opt for this one as it is more future-proof and does
what we need just as well.


Reasoning: I just tried the following sequence:
dpkg -i trigdepends-interest_1.0_all.deb triggerable-interest_1.0_all.deb
# ^ dependency   ^ interest /usr/share/doc
dpkg --unpack trigdepends-interest_1.0_all.deb
dpkg --unpack trigstuff_1.0_all.deb
dpkg --configure trigstuff
# ^ trigstuff is iW as the dependencies of the trigger aren't satisfied
dpkg --triggers-only --pending

The expectation I expressed in the previous mail was that the last
command here would fail as a pending trigger can't be run. It doesn't,
so my biggest concern with DPkg::TriggersPending doesn't really exist,
but I still think that running it all the time isn't needed if we can
just do the more general ConfigurePending once.


Best regards

David Kalnischkies

P.S.: I will respond to other parts of the mail/thread in other
threads/bugs to keep all reasonably ordered… if that is possible.


signature.asc
Description: Digital signature


Bug#769609: apt: does not process pending triggers

2014-11-23 Thread David Kalnischkies
Hi,

as mentioned in the dpkg part of this bugreport, I am favoring enabling
the DPkg::ConfigurePending option to fix this from our side.

It causes apt to schedule a dpkg --configure --pending after all other
dpkg calls are done. In the best case this does nothing; in the
worst it runs some triggers to get all packages out of the trigger
states.

I think it is better to do this more general call than --triggers-only,
as it's more future-proof and will be something we will
be using (more) in future versions anyway.
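For reference, spelled out as configuration the whole change amounts to a one-line apt.conf fragment (the file path below is illustrative; the attached commit flips the built-in default instead):

```
// /etc/apt/apt.conf.d/90configure-pending (illustrative path)
// Schedule a final 'dpkg --configure --pending' after all other dpkg calls.
DPkg::ConfigurePending "true";
```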


The attached git commit also fixes the progress reporting, as otherwise
this scheduled call would be run at 100%. Included is a testcase for
this, but it obviously requires a broken dpkg version to see that it
actually works.


Best regards

David Kalnischkies
commit 1a46b9499017105f0d6a8c6319521088eadff6b2
Author: David Kalnischkies da...@kalnischkies.de
Date:   Tue Nov 18 19:53:56 2014 +0100

always run 'dpkg --configure -a' at the end of our dpkg callings

dpkg checks now for dependencies before running triggers, so that
packages can now end up in trigger states (especially those we are not
touching at all with our calls) after apt is done running.

The solution to this is trivial: Just tell dpkg to configure everything
after we have (supposedly) configured everything already. In the worst
case this means dpkg will have to run a bunch of triggers; usually it
will just do nothing though.

The code to make this happen was already available, so we just flip a
config option here to cause it to be run. This way we can keep
pretending that triggers are an implementation detail of dpkg.
--triggers-only would supposedly work as well, but --configure is more
robust in regards to future changes to dpkg and something we will
hopefully make use of in future versions anyway (as was planned at the
time this and related options were implemented).

Closes: 769609

diff --git a/apt-pkg/deb/dpkgpm.cc b/apt-pkg/deb/dpkgpm.cc
index 5938750..56e9d75 100644
--- a/apt-pkg/deb/dpkgpm.cc
+++ b/apt-pkg/deb/dpkgpm.cc
@@ -1047,6 +1047,12 @@ void pkgDPkgPM::BuildPackagesProgressMap()
 	 PackagesTotal++;
   }
}
+   /* one extra: We don't want the progress bar to reach 100%, especially not
+  if we call dpkg --configure --pending and process a bunch of triggers
+  while showing 100%. Also, spindown takes a while, so never reaching 100%
+  is way more correct than reaching 100% while still doing stuff even if
+  doing it this way is slightly bending the rules */
+   ++PackagesTotal;
 }
 /*}}}*/
 bool pkgDPkgPM::Go(int StatusFd)
@@ -1274,9 +1280,8 @@ bool pkgDPkgPM::Go(APT::Progress::PackageManager *progress)
 
// support subpressing of triggers processing for special
// cases like d-i that runs the triggers handling manually
-   bool const SmartConf = (_config->Find("PackageManager::Configure", "all") != "all");
bool const TriggersPending = _config->FindB("DPkg::TriggersPending", false);
-   if (_config->FindB("DPkg::ConfigurePending", SmartConf) == true)
+   if (_config->FindB("DPkg::ConfigurePending", true) == true)
   List.push_back(Item(Item::ConfigurePending, PkgIterator()));
 
// for the progress
diff --git a/test/integration/framework b/test/integration/framework
index ff059f5..36deccf 100644
--- a/test/integration/framework
+++ b/test/integration/framework
@@ -1241,10 +1241,13 @@ testnopackage() {
 	fi
 }
 
-testdpkginstalled() {
-	msgtest "Test for correctly installed package(s) with" "dpkg -l $*"
-	local PKGS="$(dpkg -l "$@" 2>/dev/null | grep '^i' | wc -l)"
-	if [ "$PKGS" != $# ]; then
+testdpkgstatus() {
+	local STATE="$1"
+	local NR="$2"
+	shift 2
+	msgtest "Test that $NR package(s) are in state $STATE with" "dpkg -l $*"
+	local PKGS="$(dpkg -l "$@" 2>/dev/null | grep "^${STATE}" | wc -l)"
+	if [ "$PKGS" != "$NR" ]; then
 		echo >&2 "$PKGS"
 		dpkg -l "$@" | grep '^[a-z]' >&2
 		msgfail
@@ -1253,16 +1256,12 @@ testdpkginstalled() {
 	fi
 }
 
+testdpkginstalled() {
+	testdpkgstatus 'ii' "$#" "$@"
+}
+
 testdpkgnotinstalled() {
-	msgtest "Test for correctly not-installed package(s) with" "dpkg -l $*"
-	local PKGS="$(dpkg -l "$@" 2>/dev/null | grep '^i' | wc -l)"
-	if [ "$PKGS" != 0 ]; then
-		echo
-		dpkg -l "$@" | grep '^[a-z]' >&2
-		msgfail
-	else
-		msgpass
-	fi
+	testdpkgstatus 'ii' '0' "$@"
 }
 
 testmarkedauto() {
diff --git a/test/integration/test-apt-progress-fd b/test/integration/test-apt-progress-fd
index af022f5..90e6ef7 100755
--- a/test/integration/test-apt-progress-fd
+++ b/test/integration/test-apt-progress-fd
@@ -19,13 +19,14 @@ testequal dlstatus:1:0:Retrieving file 1 of 1
 dlstatus:1:20:Retrieving file 1 of 1
 pmstatus:dpkg-exec:0:Running dpkg
 pmstatus:testing:0:Installing testing (amd64)
-pmstatus:testing:20:Preparing testing (amd64)
-pmstatus:testing:40:Unpacking testing (amd64)
-pmstatus:testing:60:Preparing to configure testing (amd64)
-pmstatus:dpkg-exec:60

Bug#767103: irssi-plugin-otr doesn't work with irssi 0.8.17

2014-11-07 Thread David Kalnischkies
On Fri, Nov 07, 2014 at 10:15:22AM +0100, intrigeri wrote:
> David Kalnischkies wrote (06 Nov 2014 21:52:10 GMT) :
> > On Tue, Nov 04, 2014 at 08:14:24PM +0100, intrigeri wrote:
> > David Kalnischkies wrote (28 Oct 2014 14:00:40 GMT) :
> > > Upgrading irssi from 0.8.16-1+b1 to 0.8.17-1 seems to break the OTR
> > > plugin for me.
> >
> > I'm wondering if this could be a side-effect of #767230.
> > Can you reproduce this after upgrading libotr5 to 4.1.0-1?
>
> > Sounds like it and I had some hope, but trying with:
> >
> > irssi  0.8.17-1
> > irssi-plugin-otr   1.0.0-1+b1  (+b1 for rebuild against libgcrypt20)
> > libgcrypt20:amd64  1.6.2-4
> > libgcrypt20:i386   1.6.2-4
> > libotr5  4.1.0-2
> >
> > I still have this problem. :(
>
> OK, thanks.

Just to highlight, you asked for 4.1.0-_1_, but I am at -_2_ – which
according to the changelog is the first one with the symbols file.


> > I see that irssi-plugin-otr has an unversioned dependency on libotr5.
> > Doing an apt-get source irssi-plugin-otr -b results in a package with
> > a versioned dependency libotr5 (>= 4.0.0) and after installing and
> > restarting irssi I can run /otr init without the mentioned error message
> > and the remote gets the '?OTRv23?', so that looks about right.
>
> Can you confirm you've built it against libotr from sid (that
> introduces a proper symbols file)?

Yes, libotr5-dev is at 4.1.0-2, just as libotr5. After all, the -dev
package has an equal dependency on the library, so my beloved apt would
be pretty pissed if it were at an earlier version. ;)


Best regards

David Kalnischkies


signature.asc
Description: Digital signature


Bug#767103: irssi-plugin-otr doesn't work with irssi 0.8.17

2014-11-06 Thread David Kalnischkies
Hi,

On Tue, Nov 04, 2014 at 08:14:24PM +0100, intrigeri wrote:
> David Kalnischkies wrote (28 Oct 2014 14:00:40 GMT) :
> > Upgrading irssi from 0.8.16-1+b1 to 0.8.17-1 seems to break the OTR
> > plugin for me.
>
> I'm wondering if this could be a side-effect of #767230.
> Can you reproduce this after upgrading libotr5 to 4.1.0-1?

Sounds like it and I had some hope, but trying with:

irssi  0.8.17-1
irssi-plugin-otr   1.0.0-1+b1  (+b1 for rebuild against libgcrypt20)
libgcrypt20:amd64  1.6.2-4
libgcrypt20:i386   1.6.2-4
libotr5  4.1.0-2

I still have this problem. :(

I see that irssi-plugin-otr has an unversioned dependency on libotr5.
Doing an apt-get source irssi-plugin-otr -b results in a package with
a versioned dependency libotr5 (>= 4.0.0) and after installing and
restarting irssi I can run /otr init without the mentioned error message
and the remote gets the '?OTRv23?', so that looks about right.
(sorry, I can't test with a real remote at the moment)

Looks like libotr5 still has an ABI break somewhere – or the +b1
happened at the time libotr5 had one, so that it picked it up
accidentally (at least in the amd64 rebuild)?

So, is the next action reassigning to the release team for another binNMU,
to libotr5 to find the possible regression, or …?


Best regards

David Kalnischkies


signature.asc
Description: Digital signature


Bug#767734: upgrade failure: perl-modules depends on perl which is not configured yet

2014-11-02 Thread David Kalnischkies
On Sun, Nov 02, 2014 at 12:51:02PM +0100, Sven Joachim wrote:
> [CC'ing apt maintainers.]

[ for every one that asks receives ]

> The circular dependency between perl and perl-modules has been around
> for ages, and it can be broken by configuring both perl and perl-modules
> in one run and letting dpkg figure out the order.  It seems as if apt
> told dpkg to only configure perl-modules which cannot work.
>
> I think this is the same problem as in
> https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1347721, and it's
> fixed in apt 1.0.7 (you have apt 1.0.6, 0.9.9 introduced the bug).

Thanks for the analysis! It should indeed be this problem, showing again
how nasty circular dependencies can be…

The mentioned bug can't really be worked around and it only affects some
apt versions in unstable and especially not stable, so I would close it
and be done. It is at the very least not release critical.


Best regards

David Kalnischkies


signature.asc
Description: Digital signature


Bug#767103: irssi-plugin-otr doesn't work with irssi 0.8.17

2014-10-28 Thread David Kalnischkies
Package: irssi-plugin-otr
Version: 1.0.0-1
Severity: grave
X-Debbugs-CC: ir...@packages.debian.org

Hello!

Upgrading irssi from 0.8.16-1+b1 to 0.8.17-1 seems to break the OTR
plugin for me. Opening a new query window and executing /otr init
usually resulted in the initialisation of an OTR session.

Doing it now seems to not send anything to the remote user I tried to
init an OTR session with and instead a message like this shows up in the
status window:

14:50:16 [oftc] OTR: Initiating OTR session...
14:50:20 [oftc] -!- H�G(�ff.�: No such nick/channel

OTR inits from the remote end don't trigger any interesting status messages,
but the query window contains the usual Gone secure message, but nearly
after every message from the remote (sprinkled in with a message telling me
that this wasn't sent inside the OTR session) and the indicator claims it
is still plaintext. I haven't really investigated which one is true as that
smells fishy either way.

(I think it is unrelated, but for completeness: the remote end is Pidgin,
but a simple webclient as remote doesn't show any message along the lines of
?OTRv23? if I try to init either; irssi is connected via ZNC).


Downgrading irssi to the previous version solves this issue.
I have CC'ed the irssi maintainers in case they have an idea what is wrong
and/or because this, if unsolved, affects jessie and might warrant a Breaks.


Best regards

David Kalnischkies


signature.asc
Description: Digital signature


Bug#765458: apt: broken cdrom support, breaking installation from weekly ISO images

2014-10-17 Thread David Kalnischkies
On Fri, Oct 17, 2014 at 11:28:12AM +0200, Cyril Brulebois wrote:
> Cyril Brulebois k...@debian.org (2014-10-16):
> > I'm tempted to make apt migrate to testing soon, possibly today, because
> > bug reports are piling up. From your maintainer point of view, is there
> > anything speaking against such a move?
>
> Having received no negative feedback, I did that yesterday and fixed apt
> is now in testing. I've also started a rebuild of the weekly images[1],
> and images dated 2014-10-17 should have a fixed apt.

I was about to answer that from our side of the fence we see no problem
with quick-aging (okay, it's a bit unfortunate that now build profiles in
apt are changed before dpkg, but the drivers' response to an earlier
inquiry on compatibility requirements was "no need", so I am sure they
will manage just fine as this was bound to happen one way or another
anyway).

Thanks for testing & hinting and until next time in apt-cdrom bugland ;)


Best regards

David Kalnischkies


signature.asc
Description: Digital signature


Bug#765458: apt: broken cdrom support, breaking installation from weekly ISO images

2014-10-15 Thread David Kalnischkies
Hi,

On Wed, Oct 15, 2014 at 11:47:44AM +0200, Cyril Brulebois wrote:
> [ X-D-Cc: debian-boot@, please keep it in the loop when replying. ]

[Done, although I don't see the header… (bad mutt, bad).]


> we received several bug reports about weekly installation images being
> unable to find a kernel package to install on the freshly debootstrapped
> system. I've been able to replicate this issue with apt 1.0.9.2. Various

Is the apt-get update call from which you have included the output
a recent addition or was it 'always' there?

I am asking as 'apt-cdrom add' actually does the work of copying the
indexes to disk, which (should) mean that 'apt-get update' is a no-op,
making the call useless if cdroms are really the only possible source in
that stage.

That 'update' isn't really supposed to be called here is reinforced by
the ugly warnings/errors you showcase, which have existed forever. Our
only testcase covering apt-cdrom also doesn't include such a call…

Why that is the case, I have no idea, as I would expect at least some
people to have multiple sources, including cdrom, which would call for
an update, so that should really work without being scary (= assuming
warnings from apt are regarded as scary…).

The irony is that the suspected bad-boy 80174528 actually fixes this
longstanding problem as the regression was that apt exited nonzero, not
that it printed warnings (so much for scary).

The problematic commit is b0f4b486 (and therefore not a regression in
a security fix – everyone rejoice): while fine by itself, merged with
the suspected bad-boy we still have no warnings and a successful exit,
but our helpful list-cleanup kicks in, removing files from the lists/
directory which seem to be orphaned, given that it is looking e.g. for
a Packages.gz file and not for Packages, as the part fixing up the
filename is skipped if a cdrom source is encountered.


The attached patch should merge both in a better working way, at least
that is what the testcase is promising me – which I have extended a bit
to cover a bit more ground, too. Nothing near proper testing though, so
someone giving it a proper testspin would be nice, but if that is too
hard I guess Michael could just upload it and let the world test it for
us (now that he doesn't have to fear another security upload).  ;)


Best regards

David Kalnischkies
commit 5afcfe2a51a9e47e95023b99bcab065d1975e950
Author: David Kalnischkies da...@kalnischkies.de
Date:   Wed Oct 15 15:56:53 2014 +0200

don't cleanup cdrom files in apt-get update

Regression from merging 801745284905e7962aa77a9f37a6b4e7fcdc19d0 and
b0f4b486e6850c5f98520ccf19da71d0ed748ae4. While each is fine by itself,
merged together the part fixing the filename is skipped if a cdrom
source is encountered, so that our list-cleanup removes what seem to be
orphaned files.

Closes: 765458

diff --git a/apt-pkg/acquire-item.cc b/apt-pkg/acquire-item.cc
index 2401364..253cbda 100644
--- a/apt-pkg/acquire-item.cc
+++ b/apt-pkg/acquire-item.cc
@@ -1144,16 +1144,12 @@ void pkgAcqIndex::Done(string Message,unsigned long long Size,string Hash,
else
   Local = true;
 
-   // do not reverify cdrom sources as apt-cdrom may rewrite the Packages
-   // file when its doing the indexcopy
-   if (RealURI.substr(0,6) == "cdrom:" &&
-   StringToBool(LookupTag(Message,"IMS-Hit"),false) == true)
-  return;
-
// The files timestamp matches, for non-local URLs reverify the local
// file, for local file, uncompress again to ensure the hashsum is still
// matching the Release file
-   if (!Local && StringToBool(LookupTag(Message,"IMS-Hit"),false) == true)
+   bool const IsCDROM = RealURI.substr(0,6) == "cdrom:";
+   if ((Local == false || IsCDROM == true) &&
+	 StringToBool(LookupTag(Message,"IMS-Hit"),false) == true)
{
   // set destfile to the final destfile
   if(_config->FindB("Acquire::GzipIndexes",false) == false)
@@ -1162,7 +1158,10 @@ void pkgAcqIndex::Done(string Message,unsigned long long Size,string Hash,
  DestFile += URItoFileName(RealURI);
   }
 
-  ReverifyAfterIMS(FileName);
+  // do not reverify cdrom sources as apt-cdrom may rewrite the Packages
+  // file when its doing the indexcopy
+  if (IsCDROM == false)
+	 ReverifyAfterIMS(FileName);
   return;
}
string decompProg;
diff --git a/test/integration/test-apt-cdrom b/test/integration/test-apt-cdrom
index 8d8fdf1..44eccb7 100755
--- a/test/integration/test-apt-cdrom
+++ b/test/integration/test-apt-cdrom
CD_LABEL="$(echo "$ident" | grep "^Stored label:" | head -n1 | sed "s/^[^:]*: //")"
testequal "CD::${CD_ID} \"${CD_LABEL}\";
CD::${CD_ID}::Label \"${CD_LABEL}\";" cat rootdir/var/lib/apt/cdroms.list
 
-testequal 'Reading package lists...
+testcdromusage() {
+	touch rootdir/var/lib/apt/extended_states
+
+	testequal 'Reading package lists...
 Building dependency tree...
+Reading state information...
 The following NEW packages will be installed:
   testing
 0 upgraded, 1 newly

Bug#758857: buildbot: Unable to upgrade master

2014-09-09 Thread David Kalnischkies
On Sat, Sep 06, 2014 at 11:01:29AM +0300, Andrii Senkovych wrote:
> Closing the bug by agreement with the reporter. The bug cannot be
> reproduced on several buildbot instances on the maintainer's machine
> and the end user's problem has been resolved.

I unfortunately had the same problem and I think this is a really bad
upgrade experience, so in case someone else finds this bugreport in
a "what the hell, why isn't that working…" moment (or maybe it even
helps the maintainer reproduce it, who knows):

Funnily, simply deleting the state.sqlite file didn't change anything;
I just got a new empty state.sqlite file (with the wrong owner 'root',
as all the other files are owned by the 'buildbot' user – maybe the
message should indicate how to run the upgrade command as another user
than root), but the message remained the same. What helped was in fact
correcting what the upgrade-master command complained about in the
warning (as in the initial mail): "WARNING: rotateLength is a string, it
should be a number".

I can't remember ever touching the buildbot.tac file at all, but well,
I had these values in the file:
rotateLength = '1000'
maxRotatedFiles = '10'

[possibly upstream bug #2588 ; my instance was setup in January 2014]

After removing the quotes from both (after fixing rotateLength you get
the same warning about maxRotatedFiles) I could run the upgrade-master
command and it finished successfully. These should really be errors
instead of warnings if they make the command fail completely…


Trying to start buildmaster again made it die without any message in the
logs though, but the upgrade command had created a new master.cfg.sample
file and comparing this with my file indicated that the way the port has
to be set changed – notice that I haven't changed the port at all…
anyway, changing the old line to:
c['protocols'] = {'pb': {'port': 9989}}
made the buildmaster start again with all of its old state.


Sidenote: In public_html/ there are also some *.new files for me,
namely for robots.txt and default.css – I doubt I had changed them
either, so it would have been nice if at least default.css was upgraded
automatically (I see why robots.txt wasn't) [even better if it were
handled like the template files, as I have no intention of changing the
CSS], but at least a friendly message that I should do that would be
good.


Best regards

David Kalnischkies


signature.asc
Description: Digital signature


Bug#753941: libapt-pkg4.12: segfaults at debListParser::NewVersion

2014-07-07 Thread David Kalnischkies

Control: severity -1 important

Hi,

On Sun, Jul 06, 2014 at 04:00:29PM +0200, Zakaria ElQotbi wrote:
 Package: libapt-pkg4.12
 Version: 1.0.5
 Severity: grave
 Justification: renders package unusable

Thanks for the report!

These bugs are traditionally hard to tackle as they are hard to
reproduce (if at all possible). They depend on the order of sources.list
entries and the contents of the files downloaded as a result of that.
I am therefore downgrading a bit as this will very very likely resolve
itself with the next 'apt-get update'. Alternatively the posted
workaround works just as well. (And because you can hit it with any
version of apt from the last four years – so an RC bug would at most
discourage people from upgrading to a security fix we had
recently…)


That said, I happen to know what is wrong this time as I saw it while
rewriting this code area for a (still unfinished… damn) experimental
branch, which should have been public weeks ago… anyway, the simple fix
is in our debian/sid branch now, waiting for the next upload.

As a (very) small reward for reporting this issue: The workaround will
actually make apt's cache generation slightly faster and is totally
harmless (if you don't happen to use insanely high values; the equivalent
of 100MB should be enough, the current default is ~25MB). The speedup is
very small though, as it is probably not measurable…


Technical background:
In the dark ages (=before squeeze) if the cache was too small apt would
just error out (mmap ran out of room). In many many iterations
I worked on making the cache generation relocatable at runtime, so that
we can grow the underlying mmap by moving to a different address (as
growing but keeping the address is unlikely to work). We can't just
increase the cache size by default as on embedded devices we would eat
a good part of the available RAM this way… really bad if the kernel
OOM-killer is triggered…

The offender here is the line:
Ver->Section = UniqFindTagWrite(Section);

So the compiler figures out the storage location (Ver->Section), then
it calculates the value (the return value of the function call) and stores
the value at the storage location – just that this is the old location,
as the function call could potentially have moved the mmap… segfault.

So the solution is something like:
tempvalue = UniqFindTagWrite(Section);
Ver->Section = tempvalue;

Seems trivial, right? It is also the reason why regardless of how hard
you try to find all these instances, one or two are always slipping
through (but after 4 years, there can't be that many left, right? ;) )


Best regards

David Kalnischkies


signature.asc
Description: Digital signature


Bug#749795: holes in secure apt

2014-06-17 Thread David Kalnischkies
On Mon, Jun 16, 2014 at 12:04:51PM +0200, Thorsten Glaser wrote:
 On Thu, 12 Jun 2014, David Kalnischkies wrote:
  For your attack to be (always) successful, you need a full-sources
  mirror on which you modify all tarballs, so that you can build a valid
  Sources file. You can't just build your attack tarball on demand as the
 
 Erm, no? You can just cache a working Sources file and exchange
 the paragraph you are interested in. That’s something that would
 be easy in a CGI written in shell, *and* perform well. Trivial.

The "always" refers to the small problem that a MITM isn't in control of
which source package the user acquires later on. Modifying the Sources
file is of course trivial; the hard part is making the modification
count, given that at the time the request for the Sources file is made
you have no idea what (if any) source package the user will request in
the 10 seconds/days following this 'apt-get update' (or equivalent) –
assuming the user isn't on to you already, given that you have thrown
away the signatures for binary packages too, so that he can't even get
his build-dependencies without saying yes to a (default: no) warning.

From a theoretical standpoint, this is of course all negligible, but in
practice it's so annoying/fragile that way better alternatives exist.
(Me messing up InRelease parsing [twice] for example, with ironically
far less coverage – it's all about catchy titles, I guess.)


Best regards

David Kalnischkies



Bug#749795: holes in secure apt

2014-06-12 Thread David Kalnischkies
On Thu, Jun 12, 2014 at 01:06:28AM +0200, Christoph Anton Mitterer wrote:
 In my opinion this is really some horrible bug... probably it could have
 been very easily found by others, and we have no idea whether it was
 exploited already or not.

Probably yes. Someone in the last ~11 years could have, but that nobody
did tells you a lot about how many people actively work on what so many
people seem to assume just has to work – and complain loudly if it
doesn't work the way it always did (or was assumed to)… so, to get
anything useful out of this: Should we do a kickstarter now or wait for
a libreapt fork?


 Anyone who believed in getting trusted sources might have been attacked
 with forged packages, and even the plain build of such package might
 have undermined users' security integrity.

Worst case. In practice you will have installed build-dependencies
before, which has resulted in an error for those, which should have been
enough for you to recognise that something fishy is going on. It is at
least what all automatic builders will run into. Assuming of course you
don't ignore such errors, which many users/scripts happily do…


Also, keep in mind that the chain is broken at the Release - Sources
level, not at the Sources - tarball level, so if you ship modified
tarballs to your target you have to also ship a modified Sources file.

For your attack to be (always) successful, you need a full-sources
mirror on which you modify all tarballs, so that you can build a valid
Sources file. You can't just build your attack tarball on demand as the
hash (and filesize) isn't going to match with what Sources declares.
(assuming you aren't good at pre-imaging, but then, why do you bother
with this one here?) Combine that with the problems of being a good MITM
in general and you might understand why my heart isn't bleeding that
much about this particular bug. We had worse and nobody really cared…


 It's really saddening to see that such an issue could slip through,
 especially when I've personally started already a few threads on
 debian-devel about the security of secure APT.
 The most recent one was IIRC:
 https://lists.debian.org/debian-devel/2012/03/msg00549.html
 but I've had one before, I think.

What is really sad is that many people keep talking about how much more
secure everything should be but don't do the smallest bit of work
to make it happen or even do a basic level of research themselves.

So instead of answering all your questions, I will instead leave them
unanswered and say: Go on and check for yourself! You shouldn't trust
a random guy like me anyway and if that leads to even one person
contributing to apt (or the security team or anything else really) in
this area, we have a phenomenally massive increase in manpower …
(for apt, in the 50% ballpark!)


 - I think per default APT should refuse to work with unsigned
 repos/packages. One should really need some configuration switch or
 option that allows this.

I will comment on this one though: Michael has wanted to look into this
for a while now. The plan I was suggesting was something like: jessie:
support-unauth=true by default; jessie+1: support-unauth=false by
default; jessie+2: gone. We will see if this can be implemented at all.
Contributions welcome as always.


 I don't think it's a big issue, since all the major repos are signed and
 even the end-user tools to make own repos (like debarchiver) support
 signing.

Think again. People do it all the time. It is the default mode of
operation for plugging in repos into builders for example. If you are
bored, just search for the usage of --allow-unauthenticated.


I half-jokingly mentioned along with the plan last time that a bunker
is nearby, so I would be safe; half-jokingly, as I have received murder
threats for far less. I doubt it will be any different with this "not
big issue". So be careful with what you assume to be simple and
uncontroversial. See also xkcd#1172.


Some use cases can probably be transitioned to [trusted=yes], but I am
not sure we really gain that much this way (as it actually makes things
worse from a security standpoint), so we really shouldn't press the
"security: don't care" crowd in this direction. Hence the slow-ride plan.


 People should simply be taught to not use unsigned repos...

Yeah. I will try my luck with world peace first though. Might be easier…
But I am a naive kid. 5 years ago I wondered why a small bug – one
even I could provide a patch for – wasn't fixed. Now I wonder how the
team manages to keep up with reading bugs at all; but it's the same for
many other Debian-native packages. aka: It took me a while to
understand what "no upstream" really means …


Best regards

David Kalnischkies

P.S.: Dropping security@, bug@ and everyone else from Reply-To as this
chit-chat thread is just noise for them. Please don't pick up cc's at
random … If you want to /work/ on anything you could move to deity@ as
already suggested. Otherwise let's just talk here… (and no, you don't
have to cc me either)

Bug#749795: apt: no authentication checks for source packages

2014-05-30 Thread David Kalnischkies
On Fri, May 30, 2014 at 03:21:20PM +0200, Michael Vogt wrote:
 From b7f501b5cc8583f61467f0c7a0282acbb88e4b29 Mon Sep 17 00:00:00 2001
 From: Michael Vogt m...@debian.org
 Date: Fri, 30 May 2014 14:47:56 +0200
 Subject: [PATCH] Show unauthenticated warning for source packages as well
 
 This will show the same unauthenticated warning for source packages
 as for binary packages and will not download a source package if
 it is unauthenticated. This can be overriden with

typo: overridden

 +   // check authentication status of the source as well
 +   if (UntrustedList != "" && !AuthPrompt(UntrustedList, true))
 +  return false;

As said, I don't think 'apt-get source' should be interactive, so this
true should be a false, right?

Reasons (as a repeat):
- it was not interactive before
- the error message on 'no' talks about install, so we would need a new
  string
- 'apt-get download' isn't interactive either
(- it is more in line with your own commit summary)

Counter arguments?


Best regards

David Kalnischkies



Bug#749795: apt: no authentication checks for source packages

2014-05-29 Thread David Kalnischkies
 be in dreamland
for a while now, so I could be horribly wrong about all this of course.

Not a lot of time tomorrow^Wtoday (and I can't upload anyway), so,
Michael, could you please have a look and talk to the security teams?


Best regards

David Kalnischkies



Bug#749020: apt: policykit-1_0.105-5_amd64 crashes apt-get 1.0.3

2014-05-23 Thread David Kalnischkies
Control: severity -1 normal

Hi!

On Thu, May 22, 2014 at 11:32:22PM -0400, Martin Furter wrote:
 Package: apt
 Version: 1.0.3
 Severity: critical
 Justification: breaks the whole system

Well, if apt segfaults before anything is installed, it by definition
isn't breaking the system. You can't upgrade your system, which defeats
the purpose of apt, but your system is still okay. Hence the downgrade,
and the following hopefully describes why I set it so low…


 Before the dist-upgrade I upgraded apt. Then I ran apt-get dist-upgrade. It
 downloaded all packages and then crashed with segmentation fault.

Old system, policykit (and therefore libpam-systemd) in the bug title
and the list of downloaded packages includes systemd-sysv…
I bet you are one of those hit by #748355. See there for the
nitty-gritty details, but in essence: APT encounters a situation here
that is impossible to upgrade safely and forbidden by Debian policy,
which the involved packages have to solve so that a good upgrade path
exists and apt can do its assigned work.


I finally managed to get a patch together yesterday to restore apt's
detection of these kinds of situations, so the next upload should fix
the segfault, and instead a user should in the future be greeted again
with the following old error message:
| E: This installation run will require temporarily removing the essential
| package sysvinit:amd64 due to a Conflicts/Pre-Depends loop. This is
| often bad, but if you really want to do it, activate the
| APT::Force-LoopBreak option.

Not really a lot better from a user's point of view, as you still can't
really upgrade; it is just slightly less scary than a segfault. But that
bug really has to be resolved on the systemd/sysvinit side …


Best regards

David Kalnischkies



Bug#748355: Upgrading from sysvinit/wheezy to systemd-sysv/sid impossible due to loop

2014-05-16 Thread David Kalnischkies
Package: systemd-sysv
Version: 204-10
Severity: serious
Justification: triggers a debian-policy defined dpkg error (§7.4)
X-Debbugs-CC: pkg-sysvinit-de...@lists.alioth.debian.org

Hi *,

I got a report (now…) that apt segfaults in a wheezy → sid upgrade.
Debugging this leads to the following universe (hugely simplified):

Package: sysvinit
Version: 1
Essential: yes

Package: sysvinit
Version: 2
Pre-Depends: sysvinit-core | systemd-sysv
Essential: yes

Package: sysvinit-core
Version: 2

Package: systemd-sysv
Version: 2
Conflicts: sysvinit (<< 2)
Breaks: sysvinit-core


If we have sysvinit v1 installed and want to install systemd-sysv now,
we not only run into the previously mentioned segfault but, if the
segfault did not appear and dpkg actually executed, we would get:
| Selecting previously unselected package systemd-sysv.
| dpkg: considering deconfiguration of sysvinit, which would be broken by
|  installation of systemd-sysv ...
| dpkg: no, sysvinit is essential, will not deconfigure
|  it in order to enable installation of systemd-sysv
| dpkg: error processing archive
|  /tmp/tmp.W9nkJhRQvg/aptarchive/pool/systemd-sysv_2_amd64.deb (--unpack):
|  installing systemd-sysv would break existing software
| Errors were encountered while processing:
|  /tmp/tmp.W9nkJhRQvg/aptarchive/pool/systemd-sysv_2_amd64.deb
| E: Sub-process fakeroot returned an error code (1)

The reason is simple:
Unpacking systemd-sysv is not possible before we have gotten rid of
sysvinit 1. The normal solution is to just upgrade it to version 2, but
this requires us to first unpack systemd-sysv – loop.
The other solution is to temporarily remove sysvinit 1 and reinstall it
later on. Such a practice isn't allowed for essential packages.
Debian policy §7.4 even explicitly defines that dpkg should error out if
it is attempted, which is what you get at the moment (minus a segfault).
Note that Breaks will not work either (same message).


I see two probably good-enough solutions:
1. Downgrade the Pre-Depends in sysvinit to a Depends
Technically it isn't entirely correct, as it would then be allowed to
have an unpacked sysvinit but not a working init system. In theory I
assume this window of opportunity to be very small, as APT treats the
Depends of a (pseudo-)essential package as Pre-Depends if at all
possible and also tries to configure them as soon as possible. In
practice it should mean that they are unpacked in the same dpkg call, so
if you can write your maintainer scripts without requiring runlevel,
shutdown and co, this should work out.
2. Remove Conflicts: sysvinit (<< 2) from systemd-sysv
It is only for the file replacement, right? (In which case Breaks would
have equally (not) worked.) dpkg is happy as long as it has Replaces, so
we are talking mostly about partial upgrades and downgrades here, and
while I tried to come up with a good scenario in which something would
break, I failed to find one, given that both seem to work with the
binaries of the other at least somehow.
(This 2nd solution is btw deployed in sysvinit-core, so just in case
 someone requests adding a Conflicts/Breaks there: be careful)


I guess 'solution' 2 is preferable, so I report against systemd, but
have CC'ed sysvinit maintainers. Feel free to disagree and reassign
and/or invent another (better) solution.


Best regards

David Kalnischkies

P.S.: Full-disclosure bla bla: At the moment a third solution would be
for apt to temporarily install sysvinit-core, to be able to install the
new version of sysvinit, so that it in turn can remove sysvinit-core
again and replace it with systemd-sysv. Yes, that would be insane and
is not even close to being supportable as a scenario by apt …



Bug#745866: FileFd::Size failure on all big-endian architectures (patch attached)

2014-04-26 Thread David Kalnischkies
Hi,

On Fri, Apr 25, 2014 at 07:49:17PM -0600, Adam Conrad wrote:
 Package: apt
 Version: 1.0.2
 Severity: serious
 Tags: patch
 Justification: fails to build from source (but built successfully in the past)

The testcase failing the build is new (in this form), so this bug has
existed for a longer time (somewhere in the 2011-to-2013 ballpark)
and was just detected now.
(I didn't think writing it would pay off so early, and now it has found
 three bugs/oddities already…)


Affected are only operations which act on a gzip-compressed file
directly, asking for the content size; uncompressing them first is fine,
as is just reading from the file.

The only default configuration using this was pdiff, which might be the
reason why nobody has found this so far in Ubuntu/Debian (stable), as
pdiffs do not exist there, and even on Debian unstable you had the
chance to never see it if you could spare enough memory (the size was
used to request an mmap of this size). Add to that that even pdiff isn't
affected anymore since the rewrite at the beginning of the year (it
still does the uncompressing on the fly, but doesn't use an mmap
anymore).


 This patch should be fairly self-explanatory for people who grok
 binary math and endian flips.  Fixes the FTBFS on big-endian arches.

After a bit of talking on IRC, we agreed that both patches – and then
some – should be applied. Besides hopefully still fixing the build
(testers welcome to confirm), it also removes the dependency on binary
math and endian-flip grokking – and even reduces code size. ;)


Best regards

David Kalnischkies
commit 05eab8afb692823f86c53c4c2ced783a7c185cf9
Author: Adam Conrad adcon...@debian.org
Date:   Sat Apr 26 10:24:40 2014 +0200

fix FileFd::Size bitswap on big-endian architectures

gzip only gives us 32 bits of size; storing it in a 64bit container and
doing a 32bit flip on it therefore has unintended results.
So we just go with an exact-size container and let the flipping be
handled by the eglibc-provided le32toh, removing our #ifdef machinery.

Closes: 745866

diff --git a/apt-pkg/contrib/fileutl.cc b/apt-pkg/contrib/fileutl.cc
index de73a7f..b77c7ff 100644
--- a/apt-pkg/contrib/fileutl.cc
+++ b/apt-pkg/contrib/fileutl.cc
@@ -58,13 +58,10 @@
 	#include bzlib.h
 #endif
 #ifdef HAVE_LZMA
-	#include stdint.h
 	#include lzma.h
 #endif
-
-#ifdef WORDS_BIGENDIAN
-#include inttypes.h
-#endif
+#include endian.h
+#include stdint.h
 
 #include apti18n.h
 	/*}}}*/
@@ -1880,19 +1877,13 @@ unsigned long long FileFd::Size()
 	  FileFdErrno("lseek","Unable to seek to end of gzipped file");
 	  return 0;
 }
-   size = 0;
+   uint32_t size = 0;
 if (read(iFd, &size, 4) != 4)
 {
 	  FileFdErrno("read","Unable to read original size of gzipped file");
 	  return 0;
 }
-
-#ifdef WORDS_BIGENDIAN
-   uint32_t tmp_size = size;
-   uint8_t const * const p = (uint8_t const * const) &tmp_size;
-   tmp_size = (p[3] << 24) | (p[2] << 16) | (p[1] << 8) | p[0];
-   size = tmp_size;
-#endif
+   size = le32toh(size);
 
 if (lseek(iFd, oldPos, SEEK_SET) < 0)
 {



Bug#745354: apt-get fails on cdrom added with apt-cdrom while updating

2014-04-21 Thread David Kalnischkies
Control: severity -1 normal
Control: tags -1 - d-i

Hi,

On Sun, Apr 20, 2014 at 09:44:38PM +0200, msrd0 wrote:
 Package: apt
 Version: 0.9.7.9+deb7u1
 Severity: grave
 Tags: d-i security
 Justification: user security hole

The very sparse description indicates this has nothing to do with d-i,
so removing that tag, the security tag was already dropped by someone
else. Severity also lowered to normal levels as error messages rarely
fit into this category. We can inflate it again later if need be.
(mentally this message also adds a 'moreinfo' tag, see below)

Please do not set them at random. If you aren't sure, leave them at
their defaults. Every error is of course 'critical' in the perception of
the user facing it, but as long as it isn't eating your data, the world
is probably able to continue to spin a little while longer.


 -- Some information added by myself:
 
 I have added the architecture i386 to install skype, but my computer has 
 architekture amd64. It could be that the output of apt-get tries to say that 
 there is an
 error with architekture i386 - but I don't know real about it.

Well, for starters it would be nice if you could tell us the actual
commands you executed and error messages you are seeing, otherwise
we have no idea what you are talking about.


Best regards

David Kalnischkies



Bug#740673: apt-cdrom ident started requesting to insert cd even if cd is already mounted

2014-03-08 Thread David Kalnischkies
On Sat, Mar 08, 2014 at 01:37:22AM +0100, Cyril Brulebois wrote:
 Gabriele Giacone 1o5g4...@gmail.com (2014-03-04):
  On hurd, apt-cdrom ident started requesting to insert cdrom even if cdrom 
  is
  already mounted.
  That breaks debian-installer given it's called by load-install-cd.
  Recent debian-installer builds get stuck at Configuring apt - Scanning the
  CD-ROM.
  See https://bugs.debian.org/728153
 
 Weekly installation image builds are indeed broken; this is slightly
 annoying since I was aiming at releasing an alpha 1 image for jessie
 soon, so that we can perform regression tests against it.

Sorry about that. I am not really able to test cdrom stuff at the moment
and had hoped that this was actually tested with d-i, as the buglog
indicated to me (yes I know, silly me). John, as the author of the
patch, can you shed some light on what you have tested this with (out
of my own interest)? One of your mails suggested to me it was useful for
d-i, but now that I read it again I seem to have overlooked
a "possibly".

(btw: I had shortly looked at the bug before, but couldn't find anything
 with a non-udev codepath POV, so that cluebat here really helped)


 And more precisely:
 | Author: John Ogness john.ogn...@linutronix.de
 | Date:   Fri Dec 13 20:59:31 2013 +0100
 | 
 | apt-cdrom should succeed if any drive succeeds
 | 
 | If there are multiple CD-ROM drives, `apt-cdrom add` will abort with an
 | error if any of the drives do not contain a Debian CD which is against
 | the documentation we have saying a CD-ROM and also scripts do not
 | expect it this way.
 | 
 | This patch modifies apt-cdrom to return success if any of the drives
 | succeeded. If failures occur, apt-cdrom will still continue trying all
 | the drives and report the last failure (if none of them succeeded).
 | 
 | The 'ident' command was also changed to match the new 'add' behavior.
 | 
 | Closes: 728153
 
 I'm pretty sure making apt-cdrom ident hang wasn't part of the plan,
 and that's what's happening nonetheless when called from apt-setup's
 generators/40cdrom script.

Mhh, I see. Me and my "let's reduce code duplication" striking again.
I guess you can give 'apt-cdrom ident' /dev/null as stdin, as you do for
'add', to hotfix that for the moment (but not tested).

Will have to do some code-staring to find out what is really messed up,
as I see this "Please insert a Disc in the drive and press enter" in the
new output of the 'add' command only, but the code for it is in the old
one as well. And my impression is that it shouldn't be in any. Fishy.

At least it reminds me that I have to find a way to make a testcase which
doesn't use --no-mount as this is of course hiding the issue…

Sidenote: Why are you allowing apt-cdrom to do the mounting by itself
here if you have mounted it already and remount it after the run?


 [ Also, if you're going to change semantics, it probably would be nice
 to warn your users (e.g. -boot@ in that particular case); heads-up on
 topics with possible big consequences are always appreciated. ]

It wasn't supposed to change semantics – at least not in a negative way.
If you have one drive, nothing changes (at least that was the idea), and
if you have two, apt-cdrom will not fail if it looks at the empty drive
first – which sounds like a good idea and should be fine for d-i (and
even fix some issues mentioned in the buglog) if it worked as intended.

Beside: I have to admit that I don't know who is using apt and how.
I get a remote idea of what is using apt and how each time we break
something (like description fieldname in 'apt-cache show' for cdrom
creation scripts), but as much as I would like to, I can't remember them
all and especially can't test them all. And I am pretty sure you don't
want to be cc'ed on all changes in apt just because I have no idea what
could possibly break if I change anything (as the fieldname example
shows, I can't assume anything in general). So sorry again for the
trouble caused, but please don't assume bad faith here…

Sidenote: The manpage says 'ident' is a debugging tool. You will
hopefully understand that even if I had anticipated that the commit
would cause trouble I would have assumed nobody would use it.
(I see now that apt-setup is using it and why, and while the information
 is in the 'add' output as well, it is probably a bit harder to get it
 from there. Point taken – but that is easy to say after the fact.)


Best regards

David Kalnischkies



Bug#738909: apt: Can not reproduce

2014-02-16 Thread David Kalnischkies
Control: severity -1 normal
Control: tags -1 moreinfo unreproducible

On Sat, Feb 15, 2014 at 02:59:05PM +, Claudio Moretti wrote:
 IMHO, the reporter added experimental sources (like I did) and pinned
 them in some way that prioritizes experimental over unstable (or lower).

(I tried (dist-)upgrading to experimental and apt isn't suggesting a
 libc6 removal, and as said, I think it's pretty unlikely that this
 could happen without a lot of force… )

 Unless somebody is able to reproduce it and/or the reporter gives more
 details, I propose this bug report is closed, because it may scare
 people into not upgrading while the problem is user-specific.

Agreed. I hoped for quick feedback telling us what is wrong, but didn't
get any so far, and nobody else seems to be affected, so severity
downgraded and tagged accordingly. We (infrequently) close bugs with
the moreinfo tag without any additional info, so consider this a first
warning.


Best regards

David Kalnischkies



Bug#738567: uses futimens, which is supported only on linux-any

2014-02-13 Thread David Kalnischkies
On Thu, Feb 13, 2014 at 02:21:40PM +0100, Samuel Thibault wrote:
 David Kalnischkies, le Tue 11 Feb 2014 19:36:59 +0100, a écrit :
  On Mon, Feb 10, 2014 at 06:35:37PM +0100, Petr Salinger wrote:
   The apt 0.9.15.1 started to use futimens instead of previous utime.
  
   The futimens() is not supported on kfreebsd.
  
  Could this be added to the manpage utimensat(2)? I had looked there and
  assumed that by POSIX1.2008 and that it is in glibc it would be safe to
  use it as utime replacement…
 
 Well, but with old Linux kernels you would have the same issue.

Sure, but I don't exactly see the point here. The manpage talks about
2.6.26 which is … not even oldstable. I think glibc requires newer
kernels (at least I think I had this problem last year with armel).

My point was more that I would have expected the manpage to give at
least a passing mention of the non-availability on !linux as they
usually do and are (therefore) my primary source for such stuff while
being offline… (and the cppcheck message was not helping either).

Of course, completely my fault; I just wanted to mention how I ended
up on the wrong track so that others aren't able to follow my 'lead'.


   The futimes() is currently supported (at least) on linux, kfreebsd, hurd.
  
  It isn't part of any standard though, so I would worry now that we could
  run into problems with it as well.
 
 Indeed.  But you could at least check for them in configure.ac and use
 what is available (autoconf will properly figure out that futimens are
 ENOSYS stubs in glibc on !linux, BTW).

Codesearch suggests this, but we really don't care enough to invest the
time to add that much trickery. Anyway, with the recent upload we just
switched (back) to utimes and should be done. Sorry for the trouble.


Best regards

David Kalnischkies



Bug#738909: [apt] Package System Broken, libc6 etc

2014-02-13 Thread David Kalnischkies
On Thu, Feb 13, 2014 at 10:49:19PM +0200, David Baron wrote:
 Will remove everything, libc6 and all if I let it.

This is unlikely, like really, so please mention the EXACT command you
ran and include the COMPLETE output of it. With a one-line summary
nobody can help you.

You can also attach the file /var/lib/dpkg/status, which includes
information about all packages you have installed. It will enable others
to reproduce your problem more easily (but it exposes the mentioned
info, so if you don't want to expose it to the public, you can also send
it to me and I will try what I can).

Also, why is this grave if you can still say 'no'?
Details please.

 Following packages are broken:
 libc-bin, libc6 libc6-dbg, libc6-i686, libc6:amd64

How are they broken? Could dpkg not install them, or what?
Again, details are the key. And why are they broken if you haven't
approved the solution apt proposed?

Also, the selection is obscure. Why do you have multiarch enabled on an
i386 system? The infos reportbug attached suggest that you use a PAE
kernel, so it's not for the kernel…


 Apt-get upgrade tried to install experimental versions and experimental libc6 
 was installed, not the others.

Why do you use experimental if you can't deal with the breakage it could
include? I don't think it is a good idea to ever upgrade/dist-upgrade
against experimental…


 Attempt to downgrade will remove most everything as well.

Downgrades are not supported and usually not a good idea.
Trying to downgrade really important stuff like libc6 will not just not
work, but explode big time.


Best regards

David Kalnischkies



Bug#738567: uses futimens, which is supported only on linux-any

2014-02-11 Thread David Kalnischkies
Hi,

thanks for the report!

On Mon, Feb 10, 2014 at 06:35:37PM +0100, Petr Salinger wrote:
 The apt 0.9.15.1 started to use futimens instead of previous utime.

 The futimens() is not supported on kfreebsd.

Could this be added to the manpage utimensat(2)? I had looked there and
assumed that, it being POSIX.1-2008 and in glibc, it would be safe to
use as a utime replacement…

The suggestion came from cppcheck btw, as utime is obsolete, so this
might happen a bit more often in the future if more people follow this
advice (but only in a very picky 'cppcheck --enable=all --std=posix'
call).

Oh, and you don't mention utimensat, but I presume it has the same
problem (coming from the same manpage and all)…


 Please could you switch to futimes() ?
 It seems that the subsecond part is set to zero in all apt cases,
 so there is no difference between nanosecond and microsecond precision at all.

Yes, there is no difference. We have at most seconds precision as this
is done to 'store' the modification time we got from the server.


 The futimes() is currently supported (at least) on linux, kfreebsd, hurd.

It isn't part of any standard though, so I would worry now that we could
run into problems with it as well. I guess I will just opt to revert the
utime change for the moment – it's not like we are in a hurry with that;
I was just fixing issues reported by various static analysis tools…
Or maybe utimes: We have the filename around anyway, and it will
silence cppcheck (and I don't have to remember to ignore the remark).


Best regards

David Kalnischkies



Bug#717613: systemd-udevd failes to execute /lib/udev/socket:@/org/freedesktop/hal/udev_event

2014-01-30 Thread David Kalnischkies
On Thu, Jan 30, 2014 at 05:51:44PM +0100, Michael Biebl wrote:
 Am 30.01.2014 17:32, schrieb Martin Pitt:
  Michael Biebl [2014-01-30 17:24 +0100]:
  c/ Add Breaks: hal to udev so it is automatically uninstalled on Linux.
  Since hal on Linux is no longer really functional and actually broken by
  that udev change, this might be the right thing to do.
  
  I'm usuallly a bit wary with adding Breaks since they have the tendency
  to confuse apt on dist-upgrades.
 
  I concur. I've been pondering doing the same on Ubuntu as we still get
  the odd bug report about it as well (and we entirely removed hal some
  time ago). Would a Breaks:/Replaces: help out apt more than a single
  Breaks:?
 
 Dunno, using Replaces seems a bit odd here.

Replaces will do exactly nothing in APT, so don't add it if you don't
replace files and want to tell dpkg about it (and please tell this to
anyone you meet, as it is a common misunderstanding).
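For illustration (version number hypothetical), the conventional pairing
when one package actually takes over files from another looks like this,
whereas for a pure "kick the old package out" a lone Breaks is the tool:

```
Package: udev
Breaks: hal (<< 0.5.14-8)
Replaces: hal (<< 0.5.14-8)
```

Here Breaks is what the dependency resolvers act on (deconfigure or
remove the old hal first), while Replaces only tells dpkg that silently
overwriting hal's files is fine – it influences no resolver decision.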


 That said, since udev is a rather central package it's highly unlikely
 that this Breaks would cause udev to be uninstalled.

The problem is not so much that APT could decide to remove udev, as a
better solution exists: hold udev at its installed version
(especially as hal itself depends on udev).

That said, it is unlikely that this will happen. On my system udev has a
score of 86 points¹ at the moment. That doesn't make it invincible,
but it should win against an obsolete package easily (which at most has
some points left from other obsolete packages) – especially as this
obsolete package depends on udev itself, so the score it has also gets
added to udev's…

¹ apt-get dist-upgrade -s -o Debug::pkgProblemResolver::ShowScores=1 2>&1 | grep ' udev '
  [I somehow suspect for most people, the score is (a lot) higher.]

(Removing hal from the archive would btw only lower the score by one
 point for hal, so it is not that effective from an APT point of view.
 And it's not that effective as a hint anyway, as many people also have
 older sources in their lists, so packages are never not downloadable)


 IIRC the general recommendation is to *not* use Breaks to kick out
 obsolete packages but instead let apt-get autoremove cleanup such
 packages.

This is indeed the preferred way: beside APT possibly deciding against
the removal based on the score, you could also have a *user* deciding
against it. Some actually check before saying 'Y', and frankly, the
description of hal would suggest, at least to me, that it might not be a
clever idea to let it go without deeper investigation…
(but maybe I am just too used to unstable).


 But in this case not kicking out hal forcefully leads to those
 scary boot messages (and already quite a few duplicate bug reports).
 Once this udev version enters stable, we might get even more.
 So I'm also inclined to add the Breaks.

Usually I would suggest a transitional package in addition, but in this
case I am going a bit further:
The error message suggests to me (who has absolutely no idea what he is
talking about, though) that hal configures udev to send messages to hal.
Why not just drop this configuration if it doesn't work anyway…?


Best regards

David Kalnischkies




Bug#726047: Bug#726055: libapt-pkg.so.4.12: segmentation fault in pkgDPkgPM::ProcessDpkgStatusLine

2013-10-11 Thread David Kalnischkies
package aptitude libapt-pkg4.12
severity 726055 grave
reassign 726047 libapt-pkg4.12 0.9.12
merge 726055 726047
affects 726055 aptitude
thanks

Hi *,

On Fri, Oct 11, 2013 at 6:30 PM, Sven Hartge s...@svenhartge.de wrote:
 dpkg: error processing /var/cache/apt/archives/msr-tools_1.3-1_i386.deb 
 (--unpack):
  trying to overwrite '/usr/share/man/man1/cpuid.1.gz', which is also in 
 package cpuid 20130610-2

 Program received signal SIGSEGV, Segmentation fault.
 0xf7f5436b in pkgDPkgPM::ProcessDpkgStatusLine(int, char*) ()
from /usr/lib/i386-linux-gnu/libapt-pkg.so.4.12

With symbols and everything attached gdb says a bit more:
#1  pkgDPkgPM::ProcessDpkgStatusLine (this=this@entry=0x626190,
OutStatusFd=OutStatusFd@entry=-1, line=optimized out) at
deb/dpkgpm.cc:603
[apt-dbg doesn't exist because somehow we always fall for the
 "this time we are going to get automatic -dbg generation" goal… oh my.]

This is a regression in 0.9.12, the buggy change being:
  * Fix status-fd progress calculation for certain multi-arch install/upgrade
situations
which triggers on dpkg errors (like not declared file overrides as shown here)
or on conffile prompts (not tested yet, just assuming from the code).
#726001 seems to be different and aptitude related.
[The error being that the 4th element of an array with 3 elements is read]

Also, the advertised fix isn't complete, as it assumes every package which
dpkg doesn't qualify with an architecture is native, which isn't the case: dpkg
only qualifies :same packages, but not foreign packages (with the logic that
only one architecture could be meant at all times, so there is no need to qualify).
So while it will be correct for many, it certainly isn't for all, and somehow
throwing in the architecture suddenly smells like our front-ends are going
to hate us… (at least if they parse what we hand to them).

[The code parsing dpkg status lines is a bloody mess, but I hope I will
 find some time in-between vintage this weekend to have a closer look]


Best regards

David Kalnischkies


--
To UNSUBSCRIBE, email to debian-bugs-rc-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Bug#724995: apt: Apt fails in upgrade to jessie when setting up upower (0.9.21-3)

2013-09-30 Thread David Kalnischkies
Control: severity -1 normal
Control: tags -1 - d-i

Hi Richard,

On Mon, Sep 30, 2013 at 11:31 AM, richard rich...@mail.sheugh.com wrote:
 Subject: apt: Apt fails in upgrade to jessie when setting up upower (0.9.21-3)
 Package: apt
 Version: 0.9.9.4
 Justification: Policy 2.5 Priorities required
 Severity: serious
 Tags: d-i

huh? Have you copied this from somewhere?
I at least don't see the seriousness and have absolutely no idea where
we would violate §2.5 or affect d-i, hence downgrading to normal for now.


 I was in the process of upgrading from “stable” to “testing” (jessie) and
 apt-get dist upgrade informed me that the upgrade could not proceed because of
 package conflicts. I then began, using dselect and apt-get, to upgrade 
 selected
 packages, beginning with required. At some point in the process apt broke,
 leaving the upgrade in a locked condition.

If it's the same thing I experienced yesterday, the upower thingy froze and did
nothing anymore. Interestingly, killing upower made the upgrade proceed
without any further complaint. I was a bit surprised, but had to carry on,
so did no further investigation on that front.


 This appears to be related to bug #722612 but as apt said to report this bug 
 against apt, that is where it is filed.

In a way. term.log looks like you managed to interrupt the upgrade completely,
so you (potentially) have a lot of half-configured packages on your system.

APT isn't tested a lot in those situations, as that depends on other packages
being buggy enough to interrupt the upgrade (and autoremove isn't exactly
the first step you should take to fix the situation).

It would be nice if you could upload/attach/send the following two files to the
bug report (or to me only; they include information about which packages you
have installed in which version on your system):
/var/lib/apt/extended_states
/var/lib/dpkg/status


After you have saved the files, you should be able to fix your system with
dpkg --configure --pending
and after that repeat the APT command which failed for you, e.g.
apt-get dist-upgrade
to finish whatever is left to do.


 # apt-get autoremove && apt-get clean && apt-get autoclean

Pro-tip: There is no point in calling clean and autoclean together,
as clean will delete every already downloaded *.deb file, while
autoclean will only delete those which can't be downloaded anymore;
so choose whichever you prefer instead of calling both needlessly.

And frankly, autoremove is a command which requires the user to check
whether the packages considered for autoremoval are really okay to remove,
as it is a guess, not definite knowledge. The stuff deleted by the clean
commands, on the other hand, is really not needed anymore and/or is
re-downloaded by APT automatically if it needs it.
So I wouldn't run them together, as they don't belong together.
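The difference between the two clean commands can be sketched as a toy
model in Python (file names and the `downloadable` set are made up; the
real apt consults its package indexes, of course):

```python
# Toy model of /var/cache/apt/archives cleanup behaviour: each function
# returns the set of archives that remain in the cache afterwards.
cache = {"foo_1.0_amd64.deb", "bar_2.0_amd64.deb", "old_0.9_amd64.deb"}
downloadable = {"foo_1.0_amd64.deb", "bar_2.0_amd64.deb"}  # still indexed

def clean(cache):
    """'apt-get clean': drop every cached archive."""
    return set()

def autoclean(cache, downloadable):
    """'apt-get autoclean': drop only archives no longer downloadable."""
    return cache & downloadable

print(sorted(autoclean(cache, downloadable)))  # old_0.9 is gone, rest stays
print(sorted(clean(cache)))                    # nothing is left
```

Running clean after autoclean (or vice versa) adds nothing: clean's
result is always a subset of autoclean's.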


Best regards

David Kalnischkies





Bug#724744: 'apt-get source' does not stop if signatures can't be checked

2013-09-27 Thread David Kalnischkies
Control: severity -1 wishlist

On Fri, Sep 27, 2013 at 2:05 PM, Eduard - Gabriel Munteanu
edg...@gmail.com wrote:
 Source packages are signed, therefore it's fair to expect 'apt-get
 source' to enforce signature verification. But it merely prints a
 warning and continues if it can't check a signature because of a missing
 key (e.g. when you forgot to install the developer keyring). This seems
 to be caused by dpkg-source needing the --require-valid-signature option
 to enable strict checking (*).

APT doesn't need to validate the signature of the source package to ensure
it is indeed the source package the maintainer uploaded.

The signature is used by dak (and other repository creators) to ensure that what
they get is indeed coming from someone they trust. Only if that is the case
is it integrated into the archive.

In the archive the files are indexed with their checksums in the Sources
file, which is itself indexed in the (In)Release file, which is (clear)signed
by the maintainers of the repository: a key APT has available (and one which
changes a lot less often than the keys of people allowed to upload source
packages, as DDs get accepted and retire all the time, not to forget DMs … –
also, those people can retire and therefore be removed from the keyrings, but
their uploaded packages don't magically become invalid then).

So given that we know the signature of the Release file is correct, we know
that the checksums in Sources are correct, and hence we can use the checksums
included in that file to verify the integrity of the files we download.

No need to require all users to download multiple multi-MB keyrings they
would have to constantly keep up to date just for such a basic operation.
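At the bottom of this trust chain sits a plain checksum comparison; a
minimal sketch in Python (the hash choice and helper name are illustrative,
not APT's actual code):

```python
import hashlib

def verify_integrity(data, expected_sha256):
    """Compare a downloaded file's checksum against the value taken
    from a signature-protected index (illustrative helper)."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# The index entry's checksum is protected by the Release signature:
source = b"fake source package content"
indexed = hashlib.sha256(source).hexdigest()

print(verify_integrity(source, indexed))            # genuine download: True
print(verify_integrity(source + b"evil", indexed))  # tampered download: False
```

A mirror serving a tampered source package would fail this check, with no
developer keyring involved at all.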


(I have the strong feeling that this is a duplicate, but I have no time now
 to check, just wanted to remove the RC-bug indicator so nobody is scared.)


Best regards

David Kalnischkies

On Fri, Sep 27, 2013 at 2:05 PM, Eduard - Gabriel Munteanu
edg...@gmail.com wrote:
 Package: apt
 Version: 0.9.7.9
 Severity: grave
 Tags: security

 Source packages are signed, therefore it's fair to expect 'apt-get
 source' to enforce signature verification. But it merely prints a
 warning and continues if it can't check a signature because of a missing
 key (e.g. when you forgot to install the developer keyring). This seems
 to be caused by dpkg-source needing the --require-valid-signature option
 to enable strict checking (*).

 Freenode's #debian suggested I should file a bug on 'apt' since it's the
 frontend, and set a 'wishlist' severity. However I decided to give it a
 'grave' severity because Debian policy says that's appropriate when a
 package introduces a command that exposes the user accounts to attacks
 when ran ( http://release.debian.org/stable/rc_policy.txt ). I'm hoping
 this gets treated more seriously than 'wishlist' (**).

 The security hole in this case involves introducing a compromised source
 package on a Debian mirror. Then apt will happily take it, unpack it,
 patch stuff and possibly execute arbitrary code from it, without
 quitting if it can't check signatures. It breaks the reasonable
 assumption that the package manager will check source package signatures
 for official packages just as it checks binary packages.

 (*) I'd also argue --require-valid-signature is an incredibly poor
 default in itself, and that's what should be fixed. It essentially makes
 security a long option to a core Debian command and it's off by default.

 (**) I should remind you my somewhat related #722906 issue on downloads
 being exceedingly difficult to check correctly from non-Debian machines
 also got a 'wishlist' status (initially 'important' and not tagged as a
 security issue) and had its subject change to something more benign.
 I'm hoping my report was misunderstood.








Bug#723705: apt: Saves some downloaded packages under truncated filenames

2013-09-19 Thread David Kalnischkies
On Thu, Sep 19, 2013 at 2:48 PM, Cyril Brulebois k...@debian.org wrote:
 Good luck fixing the scanner. :-)

I have to test this a bit more, but I fear that could be the fix:

diff --git a/apt-pkg/tagfile.cc b/apt-pkg/tagfile.cc
index b91e868..e0802e3 100644
--- a/apt-pkg/tagfile.cc
+++ b/apt-pkg/tagfile.cc
@@ -164,7 +164,7 @@ bool pkgTagFile::Fill()
   unsigned long long const dataSize = d->Size - ((d->End - d->Buffer) + 1);
   if (d->Fd.Read(d->End, dataSize, Actual) == false)
     return false;
-  if (Actual != dataSize || d->Fd.Eof() == true)
+  if (Actual != dataSize)
     d->Done = true;
   d->End += Actual;
}


The Eof check was added (by me of course) in
0aae6d14390193e25ab6d0fd49295bd7b131954f
as part of a fix up ~a month ago (at DebConf).

The idea is not that bad, but it doesn't make that much sense either,
as this bit is set by the FileFd based on Actual as well, so this is
basically doing the same check again – with the difference that the
HitEof bit can still linger from a previous Read we did at the end of the
file, even though we have seek'd away from it by now (so as a fix for this
we could just as well fix the naming of zlib1g … ;) ).

The most interesting part will be writing a testcase for that…
(the rest of the commit doesn't look completely bulletproof either, mmh)


Best regards

David Kalnischkies





Bug#712481: apt: installing package with terminal gives errors...

2013-06-16 Thread David Kalnischkies
Control: reassign -1 dpkg
Control: severity -1 normal

Hello Andre,

On Sun, Jun 16, 2013 at 12:42 PM, Andre Verwijs verwijs...@gmail.com wrote:
Severity: serious
Tags: upstream
Justification: required

Yes, indeed, a justification is required … (sry, couldn't resist)


 root@Debian-Jessie:/home/verwijs# apt-get install python-vte
[…]
 dpkg: warning: parsing file '/var/lib/dpkg/available' near line 294 package
 'libtext-wrapi18n-perl':
  ontbrekende description
 dpkg: warning: parsing file '/var/lib/dpkg/available' near line 414 package
 'libustr-1.0-1:amd64':
  ontbrekende description
[and many more of these lines]

APT isn't touching this file (yeah okay, it kind of does if you happen to use
dselect with APT, but you don't seem to do that), so I am reassigning to dpkg, as
it is their file and they will know best what might be wrong on your system
and how to fix it (I presume it isn't a bug in the end, but they will know).

Downgraded the severity though, as a warning by itself isn't destroying anything
and your transcript shows that the dpkg/APT run was indeed successful.


Best regards

David Kalnischkies





Bug#707578: apt: yields dependency problems with apt-get install --purge libreoffice

2013-05-23 Thread David Kalnischkies
Control: severity -1 normal

(I thought I had replied already, sorry for being so late)

Thanks Vincent & Andrey for the status files! They helped a lot in
identifying the problem here, which needs quite a bit of a loop
and a bit of bad luck (= the order of the Depends line is important)
to get triggered.


Even though apt bailing out is never nice, I am setting the severity down
to normal, as the recovery in this case is pretty simple (re-run apt-get)
and the loops needed are usually a bug in themselves, so it's unlikely to see
them in the wild (in stable), and for unstable I have cooked up a patch which
should raise the loop-complexity bar even higher.
(It took us ~3 years to hit this bar, let's see how long it takes this time.)

Indeed, Rene Engelhard chopped the loop we failed on here down to
non-existence, as the loop was in fact incorrect, so let's hope the reordered
code for APT will keep working for a while. :)


Best regards

David Kalnischkies





Bug#708831: upgrade-reports: [sid-sid] left system barely unusable (manually fixed with dpkg --install)

2013-05-19 Thread David Kalnischkies
Hi,

The dpkg/status file from before the upgrade could be helpful for reproducing.
You can (hopefully) find it in /var/backups/

Helpful config options (in case someone wants to try):
-o dir::state::status=./dpkg/status
-o debug::pkgdpkgpm=true  # displays dpkg calls rather than executing them
-o debug::pkgorderlist=true # first stage order choices
-o debug::pkgpackagemanager=true # second (and final) stage order choices

On Sun, May 19, 2013 at 3:50 PM, Aurelien Jarno aurel...@aurel32.net wrote:
 /usr/bin/python: /lib/i386-linux-gnu/i686/cmov/libc.so.6: version
 `GLIBC_2.15' not found (required by /usr/bin/python)
 /usr/bin/python: /lib/i386-linux-gnu/i686/cmov/libc.so.6: version
 `GLIBC_2.16' not found (required by /usr/bin/python)
 dpkg: warning: subprocess old pre-removal script returned error exit
 status 1

While APT usually avoids it, it's actually fine for half-installed packages
to have unsatisfied dependencies, and prerm scripts only get the
guarantee that their dependencies will be at least half-installed.

In the case of debconf, we don't even have a dependency on python,
so it's even more fine to choose this order for the unpack.


I haven't had the time yet to debug why APT is choosing this route
(and as said, a dpkg/status file would help), but while this might not be
ideal, it's not a bug in APT.

This smells more like a bug in dh_python2, which adds this prerm code that
assumes pyclean can be executed even if it isn't configured (aka that
it behaves like an essential application), but I am not in the mood for
bug ping-pong, so I just CC'ed the python maintainers for now, so they can
have a look and comment on it while we will see what's up with APT deciding
on this route (did I mention that a dpkg/status file would help? ;) ).


Best regards

David Kalnischkies





Bug#708831: upgrade-reports: [sid-sid] left system barely unusable (manually fixed with dpkg --install)

2013-05-19 Thread David Kalnischkies
On Sun, May 19, 2013 at 9:10 PM, Matthias Klose d...@debian.org wrote:
 Am 19.05.2013 17:15, schrieb David Kalnischkies:
 The dpkg/status file before the upgrade could be helpful for reproducing.
 You can (hopefully) find it in /var/backup/

 Helpful config options (I case someone wants to try)
 -o dir::state::status=./dpkg/status
 -o debug::pkgdpkgpm=true  # displays dpkg calls rather than executing them
 -o debug::pkgorderlist=true # first stage order choices
 -o debug::pkgpackagemanager=true # second (and final) stage order choices

 On Sun, May 19, 2013 at 3:50 PM, Aurelien Jarno aurel...@aurel32.net wrote:
 /usr/bin/python: /lib/i386-linux-gnu/i686/cmov/libc.so.6: version
 `GLIBC_2.15' not found (required by /usr/bin/python)
 /usr/bin/python: /lib/i386-linux-gnu/i686/cmov/libc.so.6: version
 `GLIBC_2.16' not found (required by /usr/bin/python)
 dpkg: warning: subprocess old pre-removal script returned error exit
 status 1

 While APT usually avoids it, its actually fine to have dependencies not
 satisfied for half-installed packages and prerm scripts do only give you
 the guarantee that your dependencies will be at least half-installed.

 well, only seen on i386, and apparently libc6-i686 is installed. Now why 
 aren't
 the new libc6 packages unpacked first? I would assume to see more reports for
 this upgrade path.

It's caused by the removal of libescpr1 (src:epson-inkjet-printer-escpr) in
Richard's case, which, ignoring the ubuntu-popcon spike, has a 25.000 score,
so it is not installed by everyone; and even if you have it installed, you might
have upgraded your libc6 earlier (as the libescpr1 drop is just two days
old), or APT might choose a different package to handle first [removes are done
relatively early compared to other actions].
I guess there are other situations where you can reach this problem, but
you need to be a bit lucky. Describing exactly what's going on is not only
offtopic, but also complicated, so let's just say that APT is choosing a
bad way of handling things here; while it's not a good idea, it is a valid one.
(People wanting to explore the misery can follow the word DepRemove.)


 This smells more like a bug in dh_python2 which adds this prerm code which
 assumes that pyclean can be executed even if it isn't configured (aka that
 it behaves like an essential application), but I am not in the mood for
 bug-ping-pong so just CC'ed python maintainers for now, so they can have a
 look and comment on it while we will see whats up with APT to decide on
 this route (did I mention that a dpkg/status file would help? ;) ).

 so for now, I'm adding the libc6 dependency as a pre-dependency in
 python2.7-minimal, like perl-base is doing.

While this should work to fix this exact bug, it's still papering over the
problem indicated by the bug: python isn't essential, so pre* scripts can't
assume that python is working. Usually you are lucky enough that it works
by accident, but bugs like this one show that you can very well end up in a
system state which is allowed by policy but in which python isn't working.
(assuming the chosen order is valid; e.g. loops can trigger this)

To really fix this you have to ensure that
a) python always works even in half-installed state  OR
b) all packages using dh_python2 prerm snippets pre-depend on python  OR
c) cleaning is moved to post* scripts and just depend on python  OR
d) the dh_python2 prerm snippet works without a configured python by using
 shell/perl/$essential tools only.
Same goes for python3 and dh_python3.


I am CC'ing the debconf maintainers btw, as their prerm script looks
really strange, with 3 different ways of cleaning up pyc and pyo files:
one handwritten in shell and two added by dh_python2 (one requiring python
and a fallback in shell).


Best regards

David Kalnischkies





Bug#707578: apt: yields dependency problems with apt-get install --purge libreoffice

2013-05-09 Thread David Kalnischkies
Hello Vincent!

On Thu, May 9, 2013 at 4:10 PM, Vincent Lefevre vinc...@vinc17.net wrote:
 Severity: grave
 Justification: renders package unusable

That is a bold statement …


 xvii:/home/vinc17# apt-get install --purge libreoffice

It would help if you could look into /var/backups/ and find the status file
representing the state before you issued this command, for easier
reproduction, as it is a bit hard to guess otherwise.

 (Reading database ... 480141 files and directories currently installed.)
 Removing libreoffice-style-galaxy ...
 (Reading database ... 480136 files and directories currently installed.)
 Preparing to replace libreoffice-style-tango 1:3.5.4+dfsg2-1 (using 
 .../libreoffice-style-tango_1%3a4.0.3-1_all.deb) ...
 Unpacking replacement libreoffice-style-tango ...
 dpkg: dependency problems prevent configuration of libreoffice-style-tango:
  libreoffice-common (1:3.5.4+dfsg2-1) breaks libreoffice-style-tango (<< 1:3.6~) and is installed.
   Version of libreoffice-style-tango to be configured is 1:4.0.3-1.

 dpkg: error processing libreoffice-style-tango (--configure):
  dependency problems - leaving unconfigured
 Errors were encountered while processing:
  libreoffice-style-tango
 E: Sub-process /usr/bin/dpkg returned an error code (1)

I presume APT tries to unpack & configure the new style after removing
the old style, before unpacking libreoffice-common, as it depends on a
style (via Provides) while having << and >= breaks on the real styles.
The unpack of tango should have caused dpkg to auto-deconfigure common
though, if I see that right (but I haven't the time to look too closely now).

As said, it would be really helpful if you could find the status file.


Best regards

David Kalnischkies





Bug#645713: fails to upgrade a default GNOME desktop installation from squeeze → sid

2013-04-28 Thread David Kalnischkies
On Wed, Apr 24, 2013 at 9:07 PM, Adam D. Barratt
a...@adam-barratt.org.uk wrote:
 [-openoffice dropped from Cc, added Andreas]

 On Fri, 2013-04-19 at 17:51 +0200, Julien Cristau wrote:
 In the mean time I've applied the following change to the release notes:

 Author: jcristau jcristau@313b444b-1b9f-4f58-a734-7bb04f332e8d
 Date:   Fri Apr 19 15:47:54 2013 +

 upgrading: mention the 'Could not perform immediate configuration' issue
 [...]
 I'm leaving this bug open for now because of the issue brought up by
 Andreas, but if you want to reassign to release-notes that would be fine
 with me.

 Any opinions / thoughts on how we progress / resolve this welcome.

[ just to have something on public record ]

As I already told Julien on IRC, the option isn't going to prevent this
bug from happening. It might work for some cases, but not for all.
(or: the bug still happens - which is not a problem per se -, you just might
 be lucky enough to not hit the other bug down the road to which this message
 actually belongs)

What works depends heavily on random things, like the point at which a package
stanza was encountered while parsing (that's why openoffice.org-core worked
 for me, or why it works sometimes just by still having the squeeze
 sources.list entry – or not having it in other cases).



Possibly every remove in the solution can cause this, and as it's unlikely that
removes can be avoided, you have a variety of options which might or might not
help you out of the misery:

Try 'apt-get dist-upgrade' with the option enabled (default) and disabled after:
* If not done already: apt-get upgrade (= no removes).
* apt-get install apt
* apt-get install the package mentioned in the error (rinse and repeat)
* apt-get remove the package which dist-upgrade wants to remove (^)

All of them have their own set of downsides, and depending on who the user
is (and what the machine looks like) I would suggest a different order for
trying them out.


Best regards

David Kalnischkies





Bug#645713: many squeeze-wheezy upgrades fail with Could not perform immediate configuration

2013-04-16 Thread David Kalnischkies
On Sun, Apr 14, 2013 at 2:21 AM, Andreas Beckmann a...@debian.org wrote:
 I'm attaching another status file that fails with

   E: Could not perform immediate configuration on 'libgstreamer0.10-0'.
   Please see man 5 apt.conf under APT::Immediate-Configure for details.
   (2)

 This is an openoffice free testcase, generated by
 * creating a minimal squeeze chroot
 * installing gnome-accessibility without Recommends
 * sed s/squeeze/wheezy/ ; apt-get update
 * apt-get dist-upgrade

Thanks! I am not that far with it so far, but it is the same function,
different path (one not used at all by the previous testcase), and it looks
like it has something to do with libseed0.

Is it just me, or does adding a wheezy source instead of replacing squeeze
with it really fix the issue in that case, without changing the solution
in terms of packages, but slightly in terms of ordering?
(aka: am I crazy yet, or what the hell is going on)


Best regards

David Kalnischkies





Bug#645713: fails to upgrade a default GNOME desktop installation from squeeze → sid

2013-04-09 Thread David Kalnischkies
On Thu, Apr 4, 2013 at 10:46 PM, Julien Cristau jcris...@debian.org wrote:
 On Sun, Mar 24, 2013 at 18:17:46 +0100, David Kalnischkies wrote:

 Pictures^Wdpkg-status files or it didn't happen, as I said multiple times 
 now.

 You'll find the (compressed) status file attached.
[…]
 E: Could not perform immediate configuration on 'openjdk-6-jre'. Please see 
 man 5 apt.conf under APT::Immediate-Configure for details. (2)

Thanks.

After wasting a few days on this now I know at least who is at fault: me.
And as I am not a dependency I pass the blame on to Java5 and OpenOffice.

[I will spare you the details now, you will find some below as very long P.S.]

I have done very few tests, but it seems like bringing openoffice.org-core
back (as a transitional package) is the simplest workaround. If I remember
correctly, all status files I have seen so far about this issue (not that
 many, but yeah) included openoffice (I wondered why it was touched so early
 and wanted to investigate this after wheezy), so while this sounds indeed
crazy, I guess it would solve all known issues. Hence CC'ing the previous
maintainers to gain some intelligence on how feasible this is.

(Such a transitional package needs to break at least
 openoffice.org-report-builder-bin, as it is otherwise not removed
 on upgrade. I have no idea what else / how the depends should look.)


Alternatively we could provide a fixed apt/squeeze which removes the most
offending lines (and reopens bugs), but the usual problems arise from that.


Best regards

David Kalnischkies, who works on a timemachine now to travel ~3 years into
the past to prevent a certain someone from committing something harmful …
(… and ~10 more to prevent the other half by various others).

P.S.:
I mentioned the Provides change earlier, which, after carefully looking at it,
breaks into pieces now, 3 years later, and I wonder how it worked at all …
(not that the code before it was bug-free; I just enhanced it)

Describing what's going on is a bit hard, and frankly I haven't fully worked
out myself what the code does, just what it should do, and while
the two overlap at many points, in edge cases the code runs amok …

This code is run for all dependencies affected by a remove, but a problem
will only arise if you have an or-group somewhere behind that dependency
which features virtual packages AND [pretty freaky ordering conditions –
 I will refer to them as magic].

In the testcase we happen to meet these conditions once: while removing
openoffice.org-core we call this code for many openoffice.org-* packages,
one of which is openoffice.org-officebean, which features such a magic
or-group beginning with default-jre | …. The magic will help us skip
over most of the or-group, but we will end up with java5-runtime being
mistakenly chosen for immediate promotion to a package needing immediate
configuration (better: a provider, as java5 is virtual).
So we end up with stuff like 'openjdk6-jre' as kinda pseudo-essential.
(I am positive that it works with other openoffice.org-* packages too,
 as long as they need the core package and java [in that order], which is
 true in the example for openoffice.org-base too – and it works with
 promoting others like gcj-jre as well. Which one is chosen exactly depends
 on magic, just like promotions in real life do [SCNR].)

That these packages become kinda pseudo-essential is a bug, but in most
cases apt/squeeze can work with that. Sometimes it can't, which is a known
bug [hopefully] fixed in apt/wheezy, which I suspected as being at fault here,
as the usual symptoms apply (and disappear with apt/wheezy).

I have been working for a while now on fixing this code, which basically
means a complete rewrite of the DepRemove method, but as the bug usually
leads just to a non-optimal ordering, we could (and I tend towards should)
happily delay this until jessie and work around it as we usually do with
bugs in APT.

P.P.S.: The Conf Broken error(s) can be ignored as they just happen in
simulation. In the real run APT will tell dpkg to do the right thing™ and
it usually does (like configuring two packages at the same time).





Bug#704257: missing libgl1-mesa-dri in upgrades

2013-04-02 Thread David Kalnischkies
On Mon, Apr 1, 2013 at 11:16 PM, Daniel Pocock dan...@pocock.com.au wrote:
 On 01/04/13 22:04, John Paul Adrian Glaubitz wrote:
 On 04/01/2013 09:59 PM, Daniel Pocock wrote:
 Agreed, but that doesn't complete the picture, as libgl1-mesa-glx
 doesn't depend on libgl1-mesa-dri:

 $ apt-cache depends libgl1-mesa-glx
...
  Recommends: libgl1-mesa-dri


 Well, Recommends are installed by default, aren't they? However, I'm

 Not during upgrade or dist-upgrade operations.  This is specifically an
 upgrading issue.  From man apt-get:

   upgrade:
  ...  under no circumstances are currently installed packages removed,
 or packages not already installed retrieved and installed.

Correct for apt/squeeze, partly wrong for apt/wheezy (since 0.8.15.3).
In apt/wheezy, a package requiring a new recommends which was previously in
a non-broken policy state will be held back, just like other packages
requiring a new depends.
In apt/squeeze the policy will break, which you could fix with
apt-get install --fix-policy, but that is going to fix ALL recommends.

We are going to be fine in this regard, as many packages gain a new
dependency in a new release (upgrade is mostly used between releases).
In this case it is at least multiarch-support.


 dist-upgrade:
 ... intelligently handles changing dependencies with new versions of
 packages

dist-upgrade, on the other hand, has installed new recommends since the
introduction of recommends. The keyword is "new": if you had recommends
disabled previously and/or removed a recommends, apt will not install this
recommendation again.
(It compares the recommends list of the old version with the new version, and
 only uninstalled recommends present in the new, but not in the old, version
 are marked for installation.)
Of course, if the recommends isn't installable you will still get a solution
which doesn't include this recommends, which will be displayed as usual.
You then have to install it later by hand, as it is now an old recommends …
(In stable, uninstallability shouldn't happen though.)
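The old-versus-new comparison described above can be sketched in a few lines. This is a simplified Python model of the idea, not APT's actual C++ implementation; the package names are purely illustrative:

```python
def new_recommends_to_install(old_recommends, new_recommends, installed):
    """Mark for installation only those recommends present in the new
    version but not in the old one, and not already installed."""
    gained = set(new_recommends) - set(old_recommends)
    return sorted(gained - set(installed))

# The libgl1-mesa-glx case: the new version gained a Recommends
# on libgl1-mesa-dri which the old version did not have.
print(new_recommends_to_install(
    old_recommends=[],
    new_recommends=["libgl1-mesa-dri"],
    installed=[]))
# A recommends the user removed (present in both versions) is NOT re-added:
print(new_recommends_to_install(["libfoo"], ["libfoo"], []))
```

The second call returns an empty list: since "libfoo" was already recommended by the old version, it is not treated as new, so a deliberate removal by the user is respected.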

I guess the confusion comes from the word "dependencies":
in APT's vocabulary, a dependency means any relation which is allowed,
not just a Depends.

So the sentence should be read as "… handles changing Pre-Depends, Depends,
Conflicts, Breaks, Replaces, Provides, Recommends (if enabled, default yes)
and Suggests (if enabled, default no) with new versions …"
(for the sake of completeness: Enhances are not handled).
It's just that a user shouldn't really be required to know what those are.

(If you dig deeper [usually in non-user-facing texts] you will come across
 "hard", "important", "soft", "negative" and "positive" dependencies to
 complete the confusion. I will leave it as an exercise for now which subsets
 are meant with those adjectives.)
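As one possible answer to that exercise, here is a guessed classification. The mapping below is my assumption from how the terms are commonly used, not taken from APT's source, so treat it as illustration only:

```python
# GUESSED mapping of Debian relation fields to APT's internal adjectives.
# This is an assumption for illustration, not APT's authoritative grouping.
DEPENDENCY_KINDS = {
    "Pre-Depends": {"hard", "important", "positive"},
    "Depends":     {"hard", "important", "positive"},
    "Recommends":  {"soft", "positive"},
    "Suggests":    {"soft", "positive"},
    "Conflicts":   {"negative"},
    "Breaks":      {"negative"},
}

def kinds_of(relation):
    # Unhandled relations (e.g. "Enhances") yield no classification here.
    return DEPENDENCY_KINDS.get(relation, set())

print(sorted(kinds_of("Recommends")))
```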


Best regards

David Kalnischkies





Bug#645713: fails to upgrade a default GNOME desktop installation from squeeze → sid

2013-03-25 Thread David Kalnischkies
(breaking my promise, but as it is marked as a release-blocker now)

On Thu, Mar 21, 2013 at 8:09 PM, Julien Cristau jcris...@debian.org wrote:
 On Thu, Mar 21, 2013 at 19:59:58 +0100, Michael Biebl wrote:
 Fwiw, I can no longer reproduce this myself.
 I've done some recent test dist-upgrades from a default squeeze GNOME
 installation to wheezy, which worked fine.

 From my POV, this bug can be closed.

 From mine it can't, because I've upgraded a machine last week and had
 a similar failure (except with some java stuff instead of gstreamer).

Pictures^Wdpkg-status files or it didn't happen, as I said multiple times now.

The problem is that there is no "this bug", as this message is a kind of
catch-all, so it's extremely likely that different bugs in APT (and in
dependency chains) result in the very same message. Even more so if the
message is completely different (and yes, a different package makes it
completely different).

There is no workaround besides the config option the message mentions,
and there is also no such thing as a fix we could backport.

I also want to mention that there wasn't a lot of change in this code between
lenny and squeeze - besides some corner cases, debug enhancements and
*drum roll* changing the string of the message to the current form from
"Internal error, could not immediate configure %s".
So I wonder a bit what changed in how we write down dependencies after
lenny vs. after squeeze, as this wasn't such a popular error condition before …
(I especially wonder how all these packages manage to become pseudo-essential.)

I am already scared of what will creep up for jessie, now that we have had a
lot of code changes in that area in apt/wheezy. It will be so much fun … not.
So it would be interesting to know which versions we are talking about here.
The original bug report e.g. is against an early post-squeeze APT version;
are other instances now against apt/squeeze, against apt/wheezy, or
some version in-between?


Best regards

David Kalnischkies





Bug#689519: libapt-pkg4.12: SIGSEGV when used by apt-get or aptitude

2012-10-11 Thread David Kalnischkies
Hi Rainer,

On Wed, Oct 3, 2012 at 3:55 PM, Rainer Poisel rainer.poi...@fhstp.ac.at wrote:
 The same applies to aptitude (same method invokation led to a segfault). 
 Please let me know if you need any further information.

Is this still reproducible?
Only with this package or with any command involving any package?
Any special settings or sources?

If it is reproducible please tell us your sources and attach
your /var/lib/dpkg/status file (compressed, it might be 2 MB big).

The file includes details about your installed packages - if you don't
want to expose these details to the general public feel free to send
it directly to me.


Best regards

David Kalnischkies





Bug#686346: closed by Michael Vogt m...@debian.org (Bug#686346: fixed in apt 0.9.7.5)

2012-09-14 Thread David Kalnischkies
On Fri, Sep 14, 2012 at 2:54 AM, Daniel Hartwig mand...@gmail.com wrote:
 On 13 September 2012 23:17, Vincent Lefevre vinc...@vinc17.net wrote:
 On 2012-09-11 15:36:15 +, Debian Bug Tracking System wrote:
[ David Kalnischkies ]
* handle packages without a mandatory architecture (debian-policy §5.3)
  by introducing a pseudo-architecture 'none' so that the small group of
  users with these packages can get rid of them without introducing too
  much hassle for other users (Closes: #686346)

 Package 'docbook-mathml' is not installed, so not removed. Did you mean 
 'docbook-mathml:none'?

 This error highlights something: the lack of architecture should not
 extend the fullname like that and interfere with locating the package.
 ?

If we have no-architecture == native on the commandline and :none isn't
native, we have exactly this. It is the usual problem in Multi-Arch.
And not only for the commandline, but also for the library usage itself,
as Cache.FindPkg("docbook-mathml"); should NOT return the arch-less
package, as it is not what is expected. Expected is that we get the native
architecture, not this useless cra^Wmp architecture (I will get back to that
in just a second).

The idea behind supporting these packages at all is that I can write a
request for the release notes to include an
apt-get purge '.*:none'
and be done with that architecture.

Doing this with dpkg is a bit harder, and as your system is quite likely in
a broken state, we at least have a tool which can recover from that mess.
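The `.*:none` selector works because apt-get roughly treats command-line arguments containing regex characters as a match against package names, which with multiarch includes the arch-qualified fullnames. A rough sketch of that match, using a hypothetical installed-package list:

```python
import re

# Hypothetical fullnames (name:architecture) of installed packages.
installed = ["docbook-mathml:none", "docbook-mathml:amd64", "libc6:i386"]

# apt-get purge '.*:none' -- select every package of the "none" architecture.
to_purge = [p for p in installed if re.fullmatch(r".*:none", p)]
print(to_purge)
```

Only the arch-less package matches; the properly arch-qualified packages of the same name are left alone.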

But yes, I was many times quite near to just printing an
E: arch-less package detected. Exterminate! Exterminate! EXTERMINATE!
but I can't expect that everyone has watched Doctor Who S02x12 recently,
so the dramatic effect is mostly lost - what is left is that a system which
previously worked and was upgraded by APT ends up in a state which is
so broken that even APT refuses to help fixing it. That is not nice.


 Is there a reason for introducing this pseudo-arch. rather than using
 “I-Pkg.Arch() == 0”?

Yes there is. As I said in my previous mail these arch-less packages are
pretty useless as they form a new architecture, but that is how dpkg wants
it, so be it. This specifically means that the system we are talking about
here is broken before any package is removed as a package with an
architecture can't satisfy a dependency on a package without one
(If we don't accept arch-less packages as native, we can't let them
 satisfy dependencies in native - or anywhere else as this means these
 packages would be implicitly M-A:foreign).


And that is the problem here: a little optimization wreaked havoc.
APT isn't seeing the unsatisfied dependencies because it doesn't
even create them (see apt-cache showpkg docbook-mathml:none),
because it hasn't seen these none-packages. That's okay if the parent
package is != none (as we will create them later if we need them),
but for a package == none these dependencies should be created …

=== modified file 'apt-pkg/pkgcachegen.cc'
--- apt-pkg/pkgcachegen.cc  2012-09-09 19:22:54 +0000
+++ apt-pkg/pkgcachegen.cc  2012-09-14 10:16:35 +0000
@@ -922,7 +925,7 @@
    // Locate the target package
    pkgCache::PkgIterator Pkg = Grp.FindPkg(Arch);
    // we don't create 'none' packages and their dependencies if we can avoid it …
-   if (Pkg.end() == true && Arch == "none")
+   if (Pkg.end() == true && Arch == "none" &&
+       strcmp(Ver.ParentPkg().Arch(), "none") != 0)
       return true;
    Dynamic<pkgCache::PkgIterator> DynPkg(Pkg);
    if (Pkg.end() == true) {


The joy of testing, in a well-defined self-created environment, a feature
which is supposed to handle quite the opposite …
Vincent, could you mail me your status file maybe, so I can run some
real world tests on it?


 $ dpkg -C

dpkg doesn't check dependencies after it has installed packages, so you
will not see broken dependencies with it. Try it with an unpacked
docbook-mathml:none and dpkg --configure -a if you don't trust me
(and as usual, you shouldn't) and you will see that dpkg sees the
dependency as not satisfied.

That's why the FullName() for these packages is 'pkg:none' even though
we could easily print just 'pkg' -- it doesn't give a single hint why
a package depending on pkg(:native) isn't satisfiable by pkg(:none).
And we have the problem of needing to tell the user that we remove a
pkg(:none) while installing a pkg(:native) …
(A display issue dpkg completely ignores as you will see)

Attached is a testcase for APT to play with it.
Additional to the one included in 0.9.7.5:
test/integration/test-bug-686346-package-missing-architecture


Best regards

David Kalnischkies


test-fun-with-arch-less-packages
Description: Binary data


Bug#686346: dpkg is wrong about the install state of docbook-mathml, making the system in inconsistent state

2012-09-03 Thread David Kalnischkies
(cc'ing debian-dpkg@ as this possibly is a problem for any dpkg user)

On Fri, Aug 31, 2012 at 5:26 PM, Guillem Jover guil...@debian.org wrote:
 So it would seem to me the arch-qualifying logic in apt is not right,
 it really only ever needs to arch-qualify if:

   * dpkg supports --assert-multi-arch
   AND
   * the package is Multi-Arch:same

As I said in earlier discussions of Multi-Arch, APT only checks for the first,
and if this is true it will always call dpkg with an architecture, regardless
of whether dpkg might or might not need it for this specific package, simply
because that is a lot easier than trying to work out if this dpkg is a
debian-dpkg or an ubuntu-dpkg in a pre-multiarch or post-multiarch state and
therefore needs to be called with architecture, without architecture, or just
sometimes either.
I think you agreed with this, but my memory might trick me here.
I at least can't remember anyone saying that clients shouldn't - so they did.


 Because M-A:same packages are guaranteed to always have a valid
 architecture, something that cannot be expected from non-M-A:same
 packages due to legacy reasons.

Really? (I never had a package without an architecture installed …)
Anyway, dpkg does some internal defaulting, doesn't it, as otherwise
I don't see how such a package can satisfy any dependency on this name,
so it would be nice if dpkg could accept whatever default it assumes as
explicitly mentioned architecture, too.

Otherwise we need to clone this to aptitude (as it does some direct dpkg
calling on its own as far as I know) and whatever other dpkg front-end assumed
that it could arch-qualify everything in a multi-arch universe.


Best regards

David Kalnischkies


--
To UNSUBSCRIBE, email to debian-bugs-rc-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Bug#686346: dpkg is wrong about the install state of docbook-mathml, making the system in inconsistent state

2012-09-03 Thread David Kalnischkies
On Mon, Sep 3, 2012 at 9:05 PM, Guillem Jover guil...@debian.org wrote:
 On Mon, 2012-09-03 at 13:53:47 +0200, David Kalnischkies wrote:
 On Fri, Aug 31, 2012 at 5:26 PM, Guillem Jover guil...@debian.org wrote:
  So it would seem to me the arch-qualifying logic in apt is not right,
  it really only ever needs to arch-qualify if:
 
* dpkg supports --assert-multi-arch
AND
* the package is Multi-Arch:same

 As I said in earlier discussions of Multi-Arch APT only checks for the first
 and if this is true will call dpkg always with an architecture regardless of
 if dpkg might or might not need it for this specific package simply because
 that is a lot easier than trying to work out if this dpkg is a debian-dpkg or
 an ubuntu-dpkg in a pre-multiarch or post-multiarch state and therefore needs
 to spill out with architecture, without architecture or just sometimes 
 either.

 I think you agreed with this, but my memory might trick me here.
 I at least can't remember anyone saying that clients shouldn't - so
 they did.

 That's right (that's why I said “needs”, not must :), dpkg is fine with
 clients always arch-qualifying package names, only as long as the
 architecture matches what's on the system. And as such, arch-qualifying
 a package w/o an architecture is inherently wrong. :)

 I guess I keep forgetting about the Ubuntu dpkg, as in: not my problem.

It's not really mine either, but fewer checks = lower chance to screw them up.
It just coincides with not breaking other people's toys if I can avoid it.

  Because M-A:same packages are guaranteed to always have a valid
  architecture, something that cannot be expected from non-M-A:same
  packages due to legacy reasons.

 Really? (I never had a package without an architecture installed …)

 Yeah, unfortunately there was a time when packages didn't need to have
 an architecture field (it was not mandatory in policy), and some
 users do still have those around (!). There's also an old bug from
 dpkg, which would forget about the architecture field for some states,
 so it's actually common to find systems in those states.

 See #620958 for an assortment of users having those.

 Anyway, dpkg does some internal defaulting, doesn't it, as otherwise
 I don't see how such a package can satisfy any dependency on this name,
 so it would be nice if dpkg could accept whatever default it assumes as
 explicitly mentioned architecture, too.

 dpkg before multiarch never took the architecture field into account
 in any of its dependency resolution logic; it only verified that the
 architecture of the package being installed matched the native one
 and errored out otherwise, as long as no --force-architecture was
 specified.

 As such treating them as native architecture packages would be risky
 and most probably wrong (they could also be arch:all), and dpkg just
 keeps treating them as arch-less packages.

Let's reword with an example:

Package: A
Architecture: armel
Version: 2
Depends: B

Package: B
Version: 1

I would assume that A is installable, but as you say B is arch-less, it can't
satisfy the dependency A has … this makes B a pretty useless package for me,
especially if I upgraded A from version 1 (without an architecture).
Making B native or all (APT says it's native, but the difference isn't
that big) sounds like a reasonable choice to me. Sure, that changes if you
cross-grade dpkg, but I think you deserve some pain for ignoring warnings
from dpkg while attempting cross-grades, and the alternative seems to be worse.
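The A/B example can be turned into a toy satisfiability check. This is my own sketch of the rules as described in this thread, not dpkg's or APT's actual resolver:

```python
def satisfies(dep_name, pkg_name, pkg_arch, native="armel",
              none_is_native=False):
    """Toy model: can package pkg_name (of pkg_arch) satisfy an
    architecture-unqualified dependency on dep_name, on a system
    whose native architecture is `native`?"""
    if dep_name != pkg_name:
        return False
    if pkg_arch == "none":           # arch-less legacy package
        return none_is_native        # only counts if treated as native
    return pkg_arch in (native, "all")

# A:armel has "Depends: B", and B is an arch-less package:
print(satisfies("B", "B", "none"))                       # strict view: unsatisfied
print(satisfies("B", "B", "none", none_is_native=True))  # B treated as native: satisfied
```

With `none_is_native=False` the dependency is unsatisfiable, which is exactly why an arch-less B is "pretty useless"; flipping the flag models the proposal of treating such packages as native (or all).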


Best regards

David Kalnischkies





Bug#685192: apt: redirection handling changes in 0.9.4 may break aptitude

2012-08-28 Thread David Kalnischkies
On Thu, Aug 23, 2012 at 6:47 PM, Raphael Geissert geiss...@debian.org wrote:
 One day later than expected...

(Several days later than expected …)

 On Tuesday 21 August 2012 10:56:06 Raphael Geissert wrote:
 If you do consider those cases, then Breaks should probably be used
 instead. Recommends is not enough even for the scenario where this bug
 was reproduced: grml - recommends are disabled by default.

 I haven't tested a squeeze-wheezy upgrade with Breaks, though. Will try
 to get around it today so that I can report back...

 It went fine. APT of course had to be deconfigured due to the Breaks, but it
 was handled just fine.

 I used a Breaks: apt (<< 0.9.4~).

Which is after a bit of thinking not that surprising:
libapt-pkg is already unpacked and configured before apt is unpacked anyway
(as APT handles itself as essential), so the solution we arrive at is more or
less the same - good to know that at least sometimes theory isn't disproved by
the implementation. :)

Scheduled for 0.9.7.5
ETA: After we know what will happen with 0.9.7.4 (#685155)


Best regards

David Kalnischkies





Bug#685192: apt: redirection handling changes in 0.9.4 may break aptitude

2012-08-21 Thread David Kalnischkies
For clarity: this partial-upgrade thing affects not only aptitude, but APT
itself, and by extension all front-ends, even if the message just talks
about how aptitude is unable to handle the internal change in libapt and
how it talks to its own http method shipped in 'apt'.

And I doubt that a bug containing the words "partial upgrade" and
"unofficial sources" (which http.debian.net still is, even as a well-received
mirror of official content) fits very well in the severity "grave" bucket,
but I let it slide for the moment.


On Sat, Aug 18, 2012 at 2:53 AM, Raphael Geissert geiss...@debian.org wrote:
 Now, the easiest way to prevent this kind of conflict would be by adding a
 Depends: apt (>= 0.9.4) to libapt-pkg4.12. Not sure how much trouble it would
 cause to a squeeze-wheezy upgrade, as it would force apt to also be
 upgraded when upgrading aptitude (upgrading apt already requires upgrading
 aptitude.) It also introduces a soft dependency loop, but it seems harmless.

I think Depends are a bit hard in that case. It's not only a loop;
libapt-pkg can be used without the method binaries in a lot of cases, so a
Recommends: apt (= ${binary:Version})
feels more appropriate and should trigger an upgrade of 'apt' in this
partial-upgrade situation as well (as long as the installation of Recommends
is not disabled), without negative consequences on the installation order.


The only thing not covered by this Recommends is that you can still remove
apt from your system and possibly break aptitude (and other packages using
the acquire system from libapt) - for any libapt user this will be equal to
the removal of an essential package though, however the specific front-end
handles this (apt-get is e.g. very vocal about that).

The net-result would be that front-ends should depend on 'apt' if they use
the acquire system (some do, even if they don't, packagesearch for example
 seems to be such a candidate).

Yet, this might be wrong in the (less likely case) that a user uses only
debtorrent or https which is provided by other packages and therefore the
acquire system could be used without needing the standard methods in 'apt'.
So again, a Recommends would be more in order maybe.

On the other hand: a Depends could be added automatically via our symbols
file if an acquire symbol is used; a Recommends can't be added in this way.
Maybe we should add such a feature to dpkg-dev, as it could come in handy for
(big) libraries using other tools internally in certain code paths.
It might be better than requiring the library user to declare such a relation.


In the end we are talking about a Priority: important package, so a user
who wants to remove it should be able to handle the pain s/he has to suffer.
'apt' doesn't depend on a network manager, even though it is likely that
you need some sort of network access to get packages from somewhere else …

Same case if s/he prefers to disable installation of recommends.
And with this back to the initial topic: adding a Recommends, okay?


Best regards

David Kalnischkies





Bug#673815: libapt-pkg: segfault in pkgPackageManager::SmartUnPack()

2012-05-21 Thread David Kalnischkies
forcemerge 673536 673815
thanks

On Mon, May 21, 2012 at 5:01 PM, Sebastian Harl tok...@debian.org wrote:
 Justification: renders package unusable

I am interested: which package is completely unusable?
You found a way around the problem yourself, so APT can't be
the unusable one … And somehow I only know this sentence with
the addition of an "unrelated", but yeah.
Fits "important" at most - at least until a maintainer disagrees.

[…]
 nagios-plugins-common
[…]

See the master bug report for details on the issue.
You can find a patch in there and the notice that it is fixed,
but the upload was unfortunately broken. A real misfortune that
pbuilder has network access while the buildds haven't …


Best regards

David Kalnischkies





