Bug#1081313: python3-pcl: FTBFS with Python 3.12
Jochen Sprickerhof writes:

> There is also:
>
> https://github.com/PointCloudLibrary/clang-bind
> https://pypi.org/project/pcl-py/
> https://github.com/davidcaron/pclpy

Thank you for those links. They all look unmaintained and quite unfriendly. Do you (or anybody else) have plans to package any of these?
Bug#1081313: python3-pcl: FTBFS with Python 3.12
Package: python3-pcl
Version: 0.3.0~rc1+dfsg-14+b2
Severity: serious

Hello!

python3-pcl doesn't build with Python 3.12: it throws many Cython errors. I looked briefly, and fixing them appears non-trivial, at least to those with no prior Cython experience.

Upstream is dead:

  https://github.com/strawlab/python-pcl

And no obvious successor has stepped up:

  https://github.com/strawlab/python-pcl/issues/395

Do we want to backport fixes? Are any Debian people adept at keeping this going?
Bug#1076236: python3-numpy: python3-numpy doesn't support cross-building well
I just looked at the python3-numpy = 1:2.1.1+ds-1 currently in experimental. Overall it works! Thanks for doing that.

I built these packages for amd64 and arm64 (natively; I want to think about one thing at a time). Then I installed python3-numpy-dev:amd64 and python3-numpy-dev:arm64 on my amd64 machine; they co-installed nicely.

Then I tried to build a Python extension module using these. The test module was mrcal. Natively:

  make mrcal-pywrap.o

And cross:

  PKG_CONFIG=aarch64-linux-gnu-pkgconf CC=aarch64-linux-gnu-gcc make mrcal-pywrap.o

Both of these worked nicely. The mrcal upstream needed this patch to work with numpy 2:

  https://github.com/dkogan/mrcal/commit/0934dfe97099873f6b4415de23b3e784c745673c

That's due to a questionable upstream decision; not OUR thing to fix.

Now for some details. I see the python3-numpy-dev package separates ALL the headers into /usr/lib/ARCH/. 99% of them are identical across arches, so that's overkill, but it just costs our users a bit of extra disk space, so it's probably OK. Along the same lines, 99% of all files in numpy in general are identical across arches, so we can bring more stuff into the -dev package if we need to. The -dev package is Multi-Arch: same, so that will make more stuff available to cross-arch usages. I THINK the current split is fine, and we can bring more stuff over later, if we find out that we need it.

I think the way you have the new package set up is good:

> Package: python3-numpy
> Architecture: any
> Multi-Arch: allowed   ## more on this further down
> Depends: python3-numpy-dev
>
> and
>
> Package: python3-numpy-dev
> Architecture: any
> Multi-Arch: same

So anybody that had Depends: python3-numpy will get the same stuff as before. And if they want to do fancier things, they can Build-Depends: python3-numpy-dev.

> most people can continue build-depending on python3-numpy, and if you need to
> be able to run numpy code while cross-building, you can use
> Build-Depends: python3-numpy:any, python3-numpy-dev.
Am I wrong? I want to say that:

- we don't want Multi-Arch: allowed
- cross-builders will want to Build-Depends: python3-numpy-dev
- packages using numpy in their tests will want to Build-Depends: python3-numpy

What's your thought about using :allowed/:any the way you noted above? That would allow a native python3-numpy and a foreign python3-numpy-dev to be installed, which most of the time isn't useful. I definitely may be missing some nuances here.

> Everything might be complicated further by the fact that built
> extensions usually do not depend on python3-numpy itself, but on
> python3-numpy-abiX which is a virtual package provided by
> python3-numpy. My guess/hope is that this will still work with
> Multi-Arch.

I think this is OK. The files that follow the ABI are the .so files, which live in python3-numpy, which is the package that Provides those virtual abi packages. The current effort is about cross-BUILDING (with python3-numpy-dev), not cross-EXECUTING, which is what python3-numpy and the abi stuff does.

Other notes:

I tried swig and numpy.i. It works, but we should add a small note to debian/patches/0005-Adapt-SWIG-documentation-to-Debian.patch to say that the user should %include "numpy/numpy.i" and not %include "numpy.i". I'll send you a patch right after this.

We ship .a files in python3-numpy-dev. I don't know what those do. Any idea? It'd be good to test those.

The python3-numpy-doc package should be arch-independent, but the generated docs do have some differences in their data examples, and in some places they explicitly talk about the arch. I don't think that BREAKS anything, other than making the build unreproducible.
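Restating the two usage patterns above as debian/control fragments (a sketch of the intent, not tested packaging):

  # A package that cross-builds an extension module against the numpy C API
  # only needs the Multi-Arch:same -dev package:
  Build-Depends: python3-numpy-dev

  # A package that runs numpy code in its build-time tests keeps the
  # old-style dependency:
  Build-Depends: python3-numpy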
For the record I'm attaching the doc difference.

python3-numpy ships arch-less symlinks:

  /usr/lib/python3/dist-packages/numpy/_core/include -> ../../../../x86_64-linux-gnu/python3-numpy/numpy/_core/include
  /usr/lib/python3/dist-packages/numpy/_core/lib -> ../../../../x86_64-linux-gnu/python3-numpy/numpy/_core/lib
  /usr/lib/python3/dist-packages/numpy/f2py/src -> ../../../../x86_64-linux-gnu/python3-numpy/numpy/f2py/src
  /usr/lib/python3/dist-packages/numpy/random/lib -> ../../../../x86_64-linux-gnu/python3-numpy/numpy/random/lib

I guess this preserves backwards compatibility, but could it cause confusion for cross-builders? Do we know what will break if we get rid of the symlinks and only make the new paths available?

This all looks great!

diff --exclude '*.js' -Naur doc1/usr/share/doc/python-numpy/html/reference/arrays.scalars.html doc2/usr/share/doc/python-numpy/html/reference/arrays.scalars.html
--- doc1/usr/share/doc/python-numpy/html/reference/arrays.scalars.html 2024-09-06 05:18:38.0 -0700
+++ doc2/usr/share/doc/python-numpy/html/reference/arrays.scalars.html 2024-09-06 05:18:38.0 -0700
@@ -1236,7 +1236,7 @@
 Canonical name: numpy.byte
-Alias on this platform (Linux aarch64):
+Alias on this platform (Linux x86_64):
 nu
Bug#1077722: schroot: Should /var/cache/apt/archives be added to the default fstab?
Hi. Thanks for replying.

I don't disagree with anything you said, but I still think this would be good at least as a commented-out-by-default bit of config. In my day-to-day use of the machine I routinely use /var/cache/apt/archives to, for instance, access previously-installed packages; older versions perhaps. This cache is always there, and I don't manage it. 99% of the time the packages I want are there. Extending this idea to schroots doesn't seem unreasonable, even if there are corner cases that would make it not perfectly nice in 100% of cases.

If I propose a patch to add this as a commented-out-by-default option in the config, with your concerns placed in a comment, would y'all accept it?

Thanks
Bug#1076236: python3-numpy: python3-numpy doesn't support cross-building well
Hi Timo.

I have a mostly-working prototype of a python3-numpy package that is Multi-Arch: same. It looks like you can ALMOST avoid splitting the package, so let's see if we can follow that path. The branch (with a single commit at this time) is here:

  https://salsa.debian.org/python-team/packages/numpy/-/tree/multi-arch-same?ref_type=heads

The commit message describes how we disambiguate each conflicting file. Prior to this work these are the differing files in different-architecture builds:

  $ diff -rq python3-numpy_1%3a1.26.4+ds-11_a* | grep -v -E '(aarch64|x86_64)-linux-gnu.so'
  ...
  /usr/lib/python3/dist-packages/numpy/__config__.py
  /usr/lib/python3/dist-packages/numpy-1.26.4.dist-info/WHEEL
  /usr/lib/python3/dist-packages/numpy/core/lib/libnpymath.a
  /usr/lib/python3/dist-packages/numpy/random/lib/libnpyrandom.a

This patch disambiguates each of these:

- The differing parts of __config__.py are moved to arch-specific paths, and the right one is pulled in at runtime
- The WHEEL doesn't matter, so I simply delete it
- I moved the .a files to arch-specific directories

I don't know what the "right" thing to do with the .a files is, but it looks like changes in this area are coming with numpy 2.0, so until we move I don't want to think too hard about it:

  https://github.com/numpy/numpy/issues/20880

The last missing piece that makes Multi-Arch: same not work is the Depends: python3. This is due to dh_python3 doing something I don't understand:

  https://lists.debian.org/debian-python/2024/08/msg00036.html

So this could be addressed in dh_python3, or we can move the Depends to a Recommends, or we can split the package.

So numpy 2.0 is coming very soon, right? If so, I'm considering this done for right now, and I'll propose a new patch when that comes. If you see anything in the current proposal that you're very much against, I'd like to know about it. Thanks!
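The "the right one is pulled in at runtime" piece could look something like the sketch below. This is purely illustrative: the module naming and layout here are my invention, not what the actual patch does.

```python
import importlib
import sysconfig


def load_arch_config(package="numpy_config"):
    """Load an arch-specific config module, if one is installed.

    The module name is derived from the multiarch tuple, e.g. the
    hypothetical "numpy_config_x86_64_linux_gnu" on amd64. Returns the
    module, or None if no module exists for this architecture.
    """
    # On Debian this is e.g. "x86_64-linux-gnu"; may be unset elsewhere
    multiarch = sysconfig.get_config_var("MULTIARCH") or "unknown"
    modname = f"{package}_{multiarch.replace('-', '_')}"
    try:
        return importlib.import_module(modname)
    except ImportError:
        # No arch-specific module installed for this architecture
        return None
```

The point is that a single arch-independent __config__.py can dispatch to whichever arch-specific fragment the Multi-Arch:same package actually installed.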
Bug#1076236: python3-numpy: python3-numpy doesn't support cross-building well
Hi Timo.

> I'm not opposed to splitting the package per se, but I want to point
> out that long before I became NumPy maintainer, there used to be a
> separate python-numpy-dev package already, and I'd like to find out
> why it was discontinued and if the reason is still relevant before I
> go ahead with a new split.

Looking at version control I see there was a python-numpy-ext that was transitional even at the start of the git history in 2007. Is that what you're referring to? I don't see any other such packages in version control.

I'm currently implementing the split, to see what that would look like, and to confirm that it would solve the issues I'd like it to solve. Would it be true to say that you're planning to push numpy 2.0 into Debian soon-ish? If so, it would be good to do this package split (if we proceed with that) at the same time.

So I'm experimenting on my local machine, and when I get something that's functional, I'll report back here to talk about merging. Thank you!
Bug#1076236: python3-numpy: python3-numpy doesn't support cross-building well
> I THINK this is currently impossible, or at least I can't figure out a
> set of Build-Depends that would achieve this result. It maybe would be
> enough to add a Multi-Arch tag, but it would be clearer to split the dev
> stuff (*.h and *.pc) into a separate package, and that package should be
> Multi-Arch: same.

Thinking about this some more, I'm fairly certain that splitting the package is the correct way to do this, because the -dev package shouldn't require python3 to be installed, which is currently required by numpy. Installing a foreign-architecture python3 requires qemu, so that's never what we want.

Furthermore, some packages (like "mrcal", for instance) require numpy at build time for two purposes:

- Build-time code generation. This needs numpy:native
- Accessing numpy.pc to build the extension module. This needs numpy:foreign

So if we really wanted to make this work without splitting the current python3-numpy package, it would need to be Multi-Arch: same AND be happy installing only the native python3. This isn't obviously possible, and even if it were, splitting the package would make this much clearer.

Unless I hear an objection or a better idea here, I'm going to implement this, and propose a patch in a reply to this bug.
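For concreteness, the split described above might look like this in debian/control (a sketch of the idea only; field contents are assumptions, not the final packaging):

  Package: python3-numpy
  Architecture: any
  Depends: python3, python3-numpy-dev
  Description: numpy Python code; needs a native python3 to run

  Package: python3-numpy-dev
  Architecture: any
  Multi-Arch: same
  Description: headers and numpy.pc for building extension modules;
   co-installable for a foreign architecture without a foreign python3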
Bug#1077722: schroot: Should /var/cache/apt/archives be added to the default fstab?
Package: schroot
Version: 1.6.13-3+b2
Severity: wishlist

Hi. This isn't a "bug", but a question/feature request.

To speed up package installs in schroots (both with sbuild and without) I usually add /var/cache/apt/archives to the set of bind mounts in the fstab of most profiles. This creates a global package cache, which saves lots of time and bandwidth.

Is there a strong reason to not do this by default? If not for sbuild, then maybe we can do this for the other profiles. Or if not even that, we can add a commented-out line to the fstab files to make it easy for users to turn that on. I can give you a patch once we decide what is appropriate.

Thanks!
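Concretely, the commented-out line could look like this in a profile fstab (e.g. /etc/schroot/default/fstab); schroot fstabs use the usual fstab syntax, and the bind-mount options here mirror the existing entries:

  # Share the host's package cache with the chroot, to avoid
  # re-downloading .debs. Uncomment to enable:
  #/var/cache/apt/archives /var/cache/apt/archives none rw,bind 0 0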
Bug#1077709: lintian: Patch: small improvements to documentation
Package: lintian
Version: 2.117.0
Severity: normal

Hello. Here are two patches to document the previously undocumented --debug option and to more clearly note what to do with changes and dbgsym overrides.

Also, I tried to push into a branch on salsa to create an MR instead, but I didn't have the rights to do that. Maybe at least all DDs should be allowed to push to their own branches?

Thanks!

>From 155d9c964f5c7ba661c7fca7e80bc004b858d520 Mon Sep 17 00:00:00 2001
From: Dima Kogan
Date: Thu, 1 Aug 2024 12:14:33 +0900
Subject: [PATCH 1/2] Documented --debug

---
 bin/lintian     | 1 +
 man/lintian.pod | 7 +++
 2 files changed, 8 insertions(+)

diff --git a/bin/lintian b/bin/lintian
index 4f44e6fc1..2d65dd50c 100755
--- a/bin/lintian
+++ b/bin/lintian
@@ -1010,6 +1010,7 @@ General options:
 --print-version   print unadorned version number and exit
 -q, --quiet       suppress all informational messages
 -v, --verbose     verbose messages
+-d, --debug       extra-verbose messages
 -V, --version     display Lintian version and exit
 Behavior options:
 --color never/always/auto disable, enable, or enable color for TTY
diff --git a/man/lintian.pod b/man/lintian.pod
index 5204aeeb6..cf7967b4f 100644
--- a/man/lintian.pod
+++ b/man/lintian.pod
@@ -137,6 +137,13 @@ In the configuration file, this option is enabled by using B
 variable. The B and B variables may not both appear in the config
 file.
 
+=item B<-d>, B<--debug>
+
+Display extra-verbose messages.
+
+This is a deeper B<--verbose>, and implies B<--verbose>. Pass >= 3 times to get
+memory-usage information as well
+
 =item B<-V>, B<--version>
 
 Display lintian version number and exit.
-- 
2.42.0

>From 5951e508517acd55f313a00a70204beb101d3aff Mon Sep 17 00:00:00 2001
From: Dima Kogan
Date: Thu, 1 Aug 2024 12:18:39 +0900
Subject: [PATCH 2/2] Improved documentation for lintian-override files

---
 man/lintian.pod | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/man/lintian.pod b/man/lintian.pod
index cf7967b4f..6a3997702 100644
--- a/man/lintian.pod
+++ b/man/lintian.pod
@@ -569,14 +569,20 @@ Utility scripts used by the other lintian scripts.
 =back
 
 For binary packages, Lintian looks for overrides in a file named
-IpackageE> inside the binary
-package, where IpackageE> is the name of the binary
-package. For source packages, Lintian looks for overrides in
+IpackageE> inside the binary package,
+where IpackageE> is the name of the binary package. This is usually
+created automatically by debhelper from the
+IpackageE.lintian-overrides> file.
+
+For source packages, Lintian looks for overrides in
 I and then in I if the first
 file is not found. The first path is preferred.
 
 See the Lintian User's Manual for the syntax of overrides.
 
+Lintian errors in automatically-generated packages (such as I) and
+in the I file are currently not overrideable.
+
 =head1 CONFIGURATION FILE
 
 The configuration file can be used to specify default values for some
-- 
2.42.0
Bug#1076236: python3-numpy: python3-numpy doesn't support cross-building well
Package: python3-numpy
Version: 1:1.26.4+ds-6
Severity: normal

Hello. Currently the python3-numpy package serves at least two independent purposes:

- To make it possible to run numpy Python code
- To support building extension modules using the C numpy API

It should be possible to install the foreign numpy libraries to cross-build extension modules. This doesn't work well today. When cross-building you usually would want:

- /usr/lib/FOREIGN/pkgconfig/numpy.pc
- python3:native

I THINK this is currently impossible, or at least I can't figure out a set of Build-Depends that would achieve this result. It maybe would be enough to add a Multi-Arch tag, but it would be clearer to split the dev stuff (*.h and *.pc) into a separate package, and that package should be Multi-Arch: same.

Thanks

-- System Information:
Debian Release: trixie/sid
  APT prefers unstable
  APT policy: (800, 'unstable'), (500, 'unstable-debug'), (500, 'stable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 6.6.13-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages python3-numpy depends on:
ii  libatlas3-base [liblapack.so.3]        3.10.3-14
ii  libblas3 [libblas.so.3]                3.11.0-2
ii  libc6                                  2.38-11
ii  libgcc-s1                              14.1.0-3
ii  liblapack3 [liblapack.so.3]            3.12.0-3
ii  libopenblas0-openmp [liblapack.so.3]   0.3.26+ds-1
ii  libopenblas0-pthread [liblapack.so.3]  0.3.26+ds-1
ii  python3                                3.11.8-1
ii  python3-pkg-resources                  68.1.2-2

python3-numpy recommends no packages.

Versions of packages python3-numpy suggests:
ii  gcc             4:14-20240120-6
ii  gfortran        4:14-20240120-6
ii  python3-dev     3.11.8-1
ii  python3-pytest  8.2.2-1

-- no debconf information
Bug#1074016: ITP: rosbags -- The pure python library for everything rosbag
Package: wnpp
Owner: Dima Kogan
Severity: wishlist

* Package name    : rosbags
  Version         : 0.10.3
  Upstream Author : Ternaris
* URL or Web page : https://gitlab.com/ternaris/rosbags
* License         : Apache-2.0
  Description     : The pure python library for everything rosbag

It contains:

- highlevel easy-to-use interfaces,
- rosbag2 reader and writer,
- rosbag1 reader and writer,
- extensible type system with serializers and deserializers,
- efficient converter between rosbag1 and rosbag2,
- and more.

Rosbags does not have any dependencies on the ROS software stacks and can be used on its own or alongside ROS1 or ROS2.
Bug#745706: Reopening bug
I'm reopening this bug with the upload of falcosecurity-libs 0.15.1-4.

In 0.15.1-3 I added logic to add a "scap" group that has permissions to talk to the scap driver. But the previous issues (this doesn't also grant the required access to /proc) apparently weren't resolved yet. So I reverted that logic, and the bug is back. The relevant commit from git:

  commit 793391d31ecd700a0913773c70591824c8e7d519
  Author: Dima Kogan
  Date:   Fri May 24 21:18:18 2024 -0700

      Reverted the use-group-to-access-scap-device patches

      These patches:

        5682cde Dima Kogan    2024-05-24 Added missing Depends:adduser
        ea3ef71 Dima Kogan    2024-05-17 Tiny fixes to the use-group-to-access-scap-device
        b43bda3 Gerald Combs  2024-05-16 Add a udev rule and module config for falcosecurity-scap-dkms

      Reopens this bug:

        https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=745706

      I did some testing earlier to confirm that this bug was actually fixed,
      and it seemed like it was. But apparently I didn't look thoroughly
      enough, and this bug is still problematic. So I'm reverting the patches
      that effected this insufficient fix.

Gerald Combs said:

> Hi Dima,
>
> I haven't had a chance to try out the new package, but I did ask around
> about the required capture permissions internally at Sysdig. It's possible
> to capture without root using the eBPF driver:
>
>   https://falco.org/docs/install-operate/running/#least-privileged
>
> However, the kmod driver requires root in order to scan through /proc for
> process information other than your own. This matches my tests here; I see
> many more syscalls when I capture as root vs when I capture as an
> unprivileged user with read+write access to /dev/scap*.
>
> I'm going to update Logray's local Debian packaging to make falcodump
> setuid and accessible by the "scap" group:
>
>   https://gitlab.com/wireshark/wireshark/-/merge_requests/15673
>
> Hopefully at some point we can change that to a set of capabilities.
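For reference, the reverted scheme hinged on a udev rule along these lines; this is my reconstruction of the idea from the commit subjects above, not the exact rule that was shipped:

  # grant members of the "scap" group read/write access to the capture devices
  KERNEL=="scap[0-9]*", GROUP="scap", MODE="0660"

As the discussion above shows, device-node permissions alone aren't enough for the kmod driver, since it also needs root to scan /proc.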
Bug#1063587: More info
I have these:

  bpftrace            0.20.2-1       amd64
  libbpfcc:amd64      0.29.1+ds-1.1  amd64
  libbpfcc-dev:amd64  0.29.1+ds-1.1  amd64

I'm seeing this problem as well, but my error message is slightly different. I have a tst.c:

  #include <stdio.h>
  #include <unistd.h>

  void f(int x)
  {
      printf("%d\n", x);
      sleep(1);
  }

  int main(void)
  {
      for(int i=0; i<3; i++)
          f(i);
      return 0;
  }

I build it:

  gcc -o tst tst.c

And this happens:

  dima@shorty:/tmp$ sudo bpftrace -e 'u:tst:f {print(arg0)}' -c './tst'
  Attaching 1 probe...
  ERROR: Could not resolve symbol: tst:f

Same as with this bug, specifying a wildcard makes it work. It also works if the path to the ELF file being instrumented is given with a directory, not just as a bare file. These both work:

  sudo bpftrace -e 'u:./tst:f {print(arg0)}' -c './tst'
  sudo bpftrace -e 'u:tst:f* {print(arg0)}' -c './tst'

I debugged it a little bit. The different-behavior-with-a-directory happens because of this branch:

  https://sources.debian.org/src/bpfcc/0.29.1%2Bds-1.1/src/cc/bcc_syms.cc/#L798

For whatever reason, using a wildcard doesn't go through this chunk of code at all.

This looks like a bug that should be filed upstream. Can one of you please do that, if that's appropriate? Thanks!
Bug#1071233: bpftrace: Debug symbols are stripped, and not usable
Package: bpftrace
Version: 0.20.2-1
Severity: normal

Hi. This happens with "bpftrace" and "bpftrace-dbgsym" installed:

  dima@shorty:~$ gdb /usr/bin/bpftrace
  GNU gdb (Debian 13.2-1) 13.2
  ...
  Reading symbols from /usr/bin/bpftrace...
  (No debugging symbols found in /usr/bin/bpftrace)
  (gdb) b main
  Function "main" not defined.

I.e. there are no useful debugging symbols. The bpftrace-dbgsym package has some stuff, but whatever that is, it doesn't know about the 'main' symbol, so it's not right:

  dima@shorty:~$ dpkg -L bpftrace-dbgsym
  /.
  /usr
  /usr/lib
  /usr/lib/debug
  /usr/lib/debug/.dwz
  /usr/lib/debug/.dwz/x86_64-linux-gnu
  /usr/lib/debug/.dwz/x86_64-linux-gnu/bpftrace.debug
  /usr/share
  /usr/share/doc
  /usr/share/doc/bpftrace-dbgsym

  dima@shorty:~$ readelf -w /usr/lib/debug/.dwz/x86_64-linux-gnu/bpftrace.debug | grep 'DW_AT_name.* main$'
  [ no output ]

If I rebuild the package, the non-stripped executable that comes out of the build knows about 'main' and is debuggable:

  dima@shorty:~/debianstuff/bpftrace$ readelf -w obj-x86_64-linux-gnu/src/bpftrace | grep 'DW_AT_name.* main$'
    <21f287>  DW_AT_name : (indirect string, offset: 0x42426a): main

  dima@shorty:~/debianstuff/bpftrace$ gdb obj-x86_64-linux-gnu/src/bpftrace
  ...
  Reading symbols from obj-x86_64-linux-gnu/src/bpftrace...
  (gdb) b main
  Breakpoint 1 at 0x5b3e0: file ./src/main.cpp, line 756.

Looking at debian/rules, it looks like there's some custom logic for stripping off the debug symbols. Perhaps that logic is wrong? There are no comments or version-control notes about the rationale behind that logic, other than the need to preserve SOME of the DWARF info. A comment about that would be good.

Thanks.
-- System Information:
Debian Release: trixie/sid
  APT prefers unstable
  APT policy: (800, 'unstable'), (500, 'unstable-debug'), (500, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 6.6.13-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages bpftrace depends on:
ii  libbpf1          1:1.4.1-1
ii  libbpfcc         0.29.1+ds-1.1
ii  libc6            2.38-11
ii  libclang1-17t64  1:17.0.6-12
ii  libdw1t64        0.191-1
ii  libgcc-s1        14.1.0-1
ii  libllvm17t64     1:17.0.6-12
ii  libstdc++6       14.1.0-1
ii  zlib1g           1:1.3.dfsg+really1.3.1-1

Versions of packages bpftrace recommends:
ii  libc6-dev  2.38-11

bpftrace suggests no packages.

-- no debconf information
Bug#1071230: bpftrace: Build fails if older clang/llvm are installed
Package: bpftrace
Version: 0.20.2-1
Severity: normal

Hi. I'm running Debian/sid. I just ran "apt build-dep bpftrace" and tried building the latest bpftrace from git. This is at the latest tag:

  dima@shorty:~/debianstuff/bpftrace$ git describe --tags
  debian/0.20.2-1

The build fails during the cmake configuration:

  -- Found LLVM 17.0.6: /usr/lib/llvm-17/lib/cmake/llvm
  CMake Error at /usr/lib/llvm-14/lib/cmake/clang/ClangTargets.cmake:756 (message):
    The imported target "clangBasic" references the file

      "/usr/lib/llvm-14/lib/libclangBasic.a"

    but this file does not exist.  Possible reasons include:

    * The file was deleted, renamed, or moved to another location.

    * An install or uninstall procedure did not complete successfully.

    * The installation package was faulty and contained

      "/usr/lib/llvm-14/lib/cmake/clang/ClangTargets.cmake"

    but not all the files it references.

  Call Stack (most recent call first):
    /usr/lib/cmake/clang-14/ClangConfig.cmake:19 (include)
    CMakeLists.txt:158 (find_package)

To make it work, I had to remove these packages, one at a time:

  clang-13
  clang-14
  clang-16

I didn't have clang-15 installed, but presumably it would have confused the build as well. These extra packages being installed shouldn't confuse anything.

Thanks.
-- System Information:
Debian Release: trixie/sid
  APT prefers unstable
  APT policy: (800, 'unstable'), (500, 'unstable-debug'), (500, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 6.6.13-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages bpftrace depends on:
ii  libbpf1          1:1.4.1-1
ii  libbpfcc         0.29.1+ds-1.1
ii  libc6            2.38-11
ii  libclang1-17t64  1:17.0.6-12
ii  libdw1t64        0.191-1
ii  libgcc-s1        14.1.0-1
ii  libllvm17t64     1:17.0.6-12
ii  libstdc++6       14.1.0-1
ii  zlib1g           1:1.3.dfsg+really1.3.1-1

Versions of packages bpftrace recommends:
ii  libc6-dev  2.38-11

bpftrace suggests no packages.

-- no debconf information
Bug#1067096: ITP: galvani -- reads data from a device with graphical plots and evaluation
Hi. Sorry it took me so long to reply. I've been busy.

> I created 2 tags (v0.34 and v0.34-2, the latter for some corrections I
> had to make in the debian-directory).

One minor note: there's nothing inherently wrong here, but your life will be a bit easier if you avoid "-" in your upstream version strings. Debian packages have version UPSTREAM-DEBIAN. For instance:

  dima@shorty:~$ dpkg -l firefox
  ...
  ii  firefox  122.0-1  amd64  Mozilla Firefox web browser

So this is the firefox 122.0 release from upstream, and it's the first package release. If the package was modified, but shipping the same 122.0 release, the next version would be 122.0-2, and so on. So avoiding "-" in upstream versions would make this easy.

> I created a release on gitlab. Should I create it on salsa too?

No. You tag a release upstream. Then you import that release tarball into salsa. The "gbp import-orig" command in the last email should do everything: update the 3 branches, make the upstream/VERSION tag, etc.

I don't see the tags in your salsa repo, and I see you were manually cleaning up some stuff in the "upstream" branch. You can manage this repo however you like, but following conventions will make it easier to collaborate. Look at the version control of other packages to see examples.

> I created the branch pristine-tar (took me some time to find out how it
> works ...). The master branch is called "main" in my repository. Is
> that ok?

It does take a bit of time to figure out how to work the tools and what the conventions are. Definitely look at other packages, and experiment with the tools. The "gbp-import-orig" manpage says that the master branch default is "master". If that's correct, then you'll need a gbp.conf file to tell it that "main" is the branch name. Or just call it "master".

>> Sure. Try to make a debianized repository as I described above, and
>> let me know when you're done. Or if you need help.
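By the way, if you do keep "main" as the packaging branch, a minimal debian/gbp.conf along these lines tells gbp about it:

  [DEFAULT]
  debian-branch = main
  upstream-branch = upstream
  pristine-tar = True

With that in place, "gbp import-orig --pristine-tar" will update all three branches without further options.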
> Perhaps you could check if everything is fine:
>
>   https://salsa.debian.org/blutz/galvani
>   https://gitlab.com/b.lutz1/galvani

Sorry, I don't have the cycles to do this quickly right now. A quick look tells me that your upstream repo (the one on gitlab) doesn't just contain source. Unless there's a specific reason, the sources should be things you, the human, wrote. Here I see src/galvani: something the compiler made, and you did not write. I see src/Makefile: something autotools made, and you did not write. And so on. These are build products, and should not be in your repo. There are probably others, like more generated autotools stuff. Do you really want to ship a po/Makefile.in.in? That's not a mistake?

The least fun part of the debianization is writing the debian/copyright. This should ideally describe each file in the sources. You should "git grep -ir copyright" on your repo, and everything should end up in the copyright file. This can be long and tedious. Lots of stuff in m4/ is copyrighted by the FSF. If you really need that stuff (do you?) it should be mentioned in the debian/copyright. Similarly, stuff in po/ has various different copyrights that should be mentioned.

> I'm looking forward to joining the debian-science-team with my project.
> I read the policy. What to do now? Sign in and/or subscribe to the
> mailing list?

Yeah. Subscribe to the list, move your project to salsa.debian.org/science-team/, update the Maintainer: and Uploader: fields, etc.
Bug#1069220: mrcal ftbfs with Python 3.12
This is this bug:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1067398

You need both of:

  python3-numpy 1:1.26.4+ds-6
  mrbuild 1.9

The failing build you have has the former but not the latter (you have mrbuild 1.8). What is being built? Is this Ubuntu's version of experimental? I believe both of these packages are in Debian/unstable right now, hence it builds fine there (yes, unstable has Python 3.11, but the error you're seeing is exactly what mrbuild 1.9 fixes). mrcal 2.4.1 made it to Ubuntu/noble, right?
Bug#1069220: mrcal ftbfs with Python 3.12
Thanks for the report, but I cannot reproduce. I upgraded my python3 install to what's currently in experimental: "python3" starts up 3.12.3, and mrcal builds just fine. Can I get the versions of these packages please:

- mrbuild
- python3-numpy

Any other specific suggestions would be good too. Thanks.
Bug#1068718: freeimage: consider packaging r1909?
Hi. It looks like the current 3.18.0 release is at r1806. Are there features in r1909 that you want that aren't in our 3.18.0? If there are useful things there, I think it would be best to talk to upstream about releasing a 3.19. Is upstream completely gone, or just slow?
Bug#1067582: gnuplot: please provide a profile to break B-D cycle
I added the requested profile, and fixed a few build bugs I encountered in the process. The patch series is here: https://salsa.debian.org/science-team/gnuplot/-/commits/bug-1067582 Anton: can you please review and upload, if it looks good? Or let me know, and I'll make a Team upload.
Bug#1067582: gnuplot: please provide a profile to break B-D cycle
OK. I see what you're saying. I can do this today or tomorrow. Anton: are you good with that? I'd make a profile "nox-only" that elides the gnuplot-x11 and gnuplot-qt packages.
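A sketch of how that might look in gnuplot's debian/control (the profile name "nox-only" is from this discussion; the exact stanzas are an assumption, not the final patch):

  Package: gnuplot-x11
  Architecture: any
  Build-Profiles: <!nox-only>

  Package: gnuplot-qt
  Architecture: any
  Build-Profiles: <!nox-only>

Building with "DEB_BUILD_PROFILES=nox-only" would then skip those binary packages and their X/Qt build dependencies, producing only gnuplot-nox.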
Bug#1067582: gnuplot: please provide a profile to break B-D cycle
Hi. I might be misunderstanding what you're asking specifically, but two notes:

- Today leptonlib Build-Depends on gnuplot-nox only if !nocheck. So if you build leptonlib with testing disabled, there's no dependency on gnuplot

- Today the gnuplot source package has a hard Build-Depends on wxt and qt. Removing either of those (even in a specific profile) would make it impossible to build gnuplot-qt and gnuplot-x11 packages with the same set of functionality they normally have

If a profile was added to loosen either of these dependencies, but that dramatically changed the end product, would that be useful?

Thanks
Bug#1067096: ITP: galvani -- reads data from a device with graphical plots and evaluation
"Dr. Burkard Lutz" writes:

> The actual version ("0.34") is the first which contains all desired
> functions, and after extensive testing I hope that there are only
> minor bugs left.

Thanks for explaining.

> Therefore I decided to make an attempt at publishing it on Debian.
> Should I rename it to "0.10"?

No. 0.34 is fine. I just wanted to understand the state of things.

> Now you can see the project under the following address:
>
>   https://gitlab.com/b.lutz1/galvani
>
> I changed the group name to "galvani" but the path to the project
> remained the same.

OK. Excellent. A distro-agnostic location to host the upstream version control is desirable. You do your development there, and when you're ready to release, you should make a tag. Currently there aren't any:

  https://gitlab.com/b.lutz1/galvani/-/tags

To indicate which commit, exactly, is being released, you should make a tag called 'v0.34' or '0.34' or something like that. Once you make a tag, gitlab will create a tarball with your sources at that tag. This is your "release tarball".

The debianization repo should live on salsa. Generally you have 3 branches:

- "pristine-tar" contains the release tarballs
- "upstream" contains the unpacked upstream sources. Each upstream release is one commit
- "master" branches off "upstream"; contains the debianization

This isn't the "best" way to do it, but it's how most packages are set up. Look around on salsa; you'll see this layout everywhere. The "gbp" tool is useful to manipulate the debianized repo. In particular, you can import new release tarballs with

  gbp import-orig --pristine-tar whatever.tar.gz

The upstream release tarball location is encoded in the debian/watch file. The "uscan" tool is used to interpret this file, to see if new release tarballs are available, and to download them. In order for this to work, debian/watch has to be written properly.
This is described here: https://wiki.debian.org/debian/watch It looks like gitlab keeps changing their file layout, so you'll need to play with it until uscan --verbose --report-status sees your tarball. > I saw that you are a member of debian-science-team. Did you have some > time so far to have a look at my project? Do you think debian-science- > team could be interested in that project? Yes. Joining a team is what you usually want. It doesn't mean that somebody else will fix all your problems (you're still the primary maintainer), but it's a signal that if a team member wants to fix stuff while you're not available, you're ok with that. debian-science is a fine place for this. Follow the policy: https://science-team.pages.debian.net/policy/ Mostly it means that you put your debianization into their subdirectory on salsa: https://salsa.debian.org/science-team/ And that you set the team to be the Maintainer and yourself as the Uploader. Read the policy. > I'm looking for a sponsor to publish the project on debian. Can you > perhaps help me in that issue? Sure. Try to make a debianized repository as I described above, and let me know when you're done. Or if you need help.
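For reference, a watch file for a GitLab-hosted project often looks roughly like this. This is only a sketch: the tag-page URL and the regexes are assumptions you'd need to verify with uscan --verbose, since GitLab keeps changing its layout, and @PACKAGE@ expands to the source package name (galvani here).

```
version=4
opts=filenamemangle=s%(?:.*?)?v?(\d[\d.]*)\.tar\.gz%@PACKAGE@-$1.tar.gz% \
  https://gitlab.com/b.lutz1/galvani/-/tags?sort=updated_desc \
  (?:.*?/archive/.*?/\S+-)?v?(\d[\d.]*)\.tar\.gz
```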
Bug#1067349: closed by Dima Kogan (Done)
Thanks for pointing this out. There was a missing Depends, and I just pushed mrbuild=1.9-2 to fix that. Works for me in sbuild now.
Bug#1067096: ITP: galvani -- reads data from a device with graphical plots and evaluation
> You wrote: "- Each release of "galvani" should have a git tag". Does > that mean, that every file in the release should have a tag "v_0.34" > or similar? Tags apply to the whole repository, not to individual files. I'm still confused, though. Are you the author of this software? Is there version control somewhere? You're packaging version 0.34; is there a version 0.33? Where is it? What are the differences between the two? What about version 0.35? Is it going to be developed? Where are those commits going to go? I'm guessing your development happens in some non-public place, and this is the first public release. Is there a non-public version control? Or a non-public place to download this software? If so, can that version control (or the release tarballs) be made public as well? I think there are some (mostly older) Debian packages where upstream develops behind closed doors, and releases a tarball to the public periodically. It's not ideal, but you can do that too, if that's what you want. Whatever we're doing, there has to be a clear idea of where upstream lives. Sorry to be a pain; you're just trying to do something nonstandard, and I don't know what specifically to suggest, yet.
Bug#1067398: closed by Debian FTP Masters (reply to Timo Röhling ) (Bug#1067398: fixed in numpy 1:1.26.4+ds-6)
I just pushed mrbuild 1.9 to use the .pc file. Thank you!
Bug#1067398: python3-numpy: Missing /usr/include/python3.11/numpy link breaks builds
> Backporting sounds like a reasonable approach. If that does not work > as expected, I'll restore the symlink. Excellent. Let me know when this is done, and I'll then update mrbuild to use it, and the builds will then work again. I see you just tagged 1:1.26.4+ds-6 in git with the .pc stuff. I'll make my mrbuild changes now against that, and will push them when you tell me that you're done and that you have pushed it. Thanks!
Bug#1067398: python3-numpy: Missing /usr/include/python3.11/numpy link breaks builds
Hi Timo. Thanks for replying. My feeling is that being confused by that symlink is a bug, but I didn't read #998084 in great detail, so maybe I'm missing some nuance. So let's make this work.

** Short version **

Proposal: if pkg-config files are going to be added in the near future, can we wait until those are available before removing the symlink? Or, can you backport them into the current package? Essentially, moving the information that was previously in the symlink into the .pc file? If I can assume the symlink exists, I will use it in mrbuild and/or the builds of projects that use it. But if something was confused by the symlink, would it also be confused by the .pc file?

** Long version **

The breakage is in packages where I'm upstream. These use my build system: mrbuild. This asks Python for its header directory: sysconfig.get_config_var('INCLUDEPY'), and asks the compiler to look there. Today this ends up passing -I/usr/include/python3.11, which would previously resolve the numpy #includes successfully (via the now-missing symlink). mrbuild very explicitly does not use setup.py or anything of that nature. It's used for mixed-language projects, and I don't want multiple build systems, one per language, each trying to rewrite the world. Some discussion and rationale in a blog post: https://notes.secretsauce.net/notes/2017/11/14_python-extension-modules-without-setuptools-or-distutils.html

I just tried to patch mrbuild to use np.get_include(). This works, but it's slow. After warming up the caches, I see this:

$ time python3 -c 'import sysconfig'
python3 -c 'import sysconfig'  0.02s user 0.01s system 97% cpu 0.025 total

$ time python3 -c 'import numpy'
python3 -c 'import numpy'  0.51s user 0.67s system 634% cpu 0.185 total

Currently the Makefile launches Python and imports sysconfig, which is relatively fast. As we can see, importing numpy is MUCH slower. All I want is the include path; I shouldn't need to initialize all of numpy to do that. Thanks much.
Hopefully we can find a nice way to satisfy everybody
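To make the cost comparison above concrete, here's a minimal sketch of the two lookups (the numpy import is guarded, since the whole point is that one may not want to pay for it):

```python
# Fast path: ask sysconfig for the Python header directory. This needs
# only the cheap sysconfig import.
import sysconfig

includepy = sysconfig.get_config_var('INCLUDEPY')
print(includepy)  # e.g. /usr/include/python3.11

# Slow path: numpy.get_include() returns numpy's own header directory,
# but calling it means importing (and initializing) all of numpy first.
try:
    import numpy
    print(numpy.get_include())
except ImportError:
    # numpy may not be installed; the fast path above still works
    pass
```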
Bug#1067398: python3-numpy: Missing /usr/include/python3.11/numpy link breaks builds
Package: python3-numpy Version: 1:1.26.4+ds-5 Severity: important X-Debbugs-Cc: none, Dima Kogan Hi. Many of my packages just started to FTBFS. For instance this: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1067270 This is due to the /usr/include/python3.11/numpy (and/or other versions) symlink not being shipped in python3-numpy anymore. I haven't bisected, but it has worked as recently as python3-numpy = 1:1.24.2-2. It no longer works as of 1:1.26.4+ds-5. Can we get that symlink back, please? Thanks! -- System Information: Debian Release: trixie/sid APT prefers unstable APT policy: (800, 'unstable'), (500, 'unstable-debug'), (500, 'stable') Architecture: amd64 (x86_64) Kernel: Linux 6.6.13-amd64 (SMP w/4 CPU threads; PREEMPT) Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE Locale: LANG=en_US.utf-8, LC_CTYPE=en_US.utf-8 (charmap=UTF-8) (ignored: LC_ALL set to en_US.utf-8), LANGUAGE=en_US.utf-8 Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system) Versions of packages python3-numpy depends on: ii libatlas3-base [liblapack.so.3] 3.10.3-13 ii libblas3 [libblas.so.3] 3.11.0-2 ii libc6 2.37-7 ii libgcc-s1 13.2.0-2 ii liblapack3 [liblapack.so.3] 3.11.0-2 ii libopenblas0-openmp [liblapack.so.3] 0.3.25+ds-1 ii libopenblas0-pthread [liblapack.so.3] 0.3.25+ds-1 ii python3 3.11.4-5+b1 ii python3-pkg-resources 68.1.2-1 python3-numpy recommends no packages. Versions of packages python3-numpy suggests: ii gcc 4:13.2.0-1 ii gfortran 4:13.2.0-1 ii python3-dev 3.11.4-5+b1 ii python3-pytest 7.4.0-2 -- no debconf information
Bug#1066959: sysdig: wrong runtime dependency on old falcosecurity binary
Gianfranco Costamagna writes: > Hello, for some reasons sysdig has an hardcoded runtime dependency on > libfalcosecurity0, now renamed in libfalcosecurity0t64. You can remove > it and let debhelper create it via shlibs:Depends automatically Thank you very much for catching and fixing this. The falco ABIs weren't obviously stable earlier, but that might be better now, so hopefully we can get away without a versioned dependency. I'll ask falco upstream about stability in a bit.
Bug#1067096: ITP: galvani -- reads data from a device with graphical plots and evaluation
"Dr. Burkard Lutz" writes: > there is no other upstream source except salsa.debian.org > Is that sufficient? Hi. This is certainly sufficient, but it raises more questions. These tools weren't available to the public before this, I'm guessing, and this is the initial public release? Most programs in Debian (and every other distro) are separated into the "upstream" part that contains the program being packaged, and the "debianization": the packaging logic. While not strictly required, it would be good to do that here as well. What if somebody finds these tools, and wants to use them in some other distro? Hosting the sources on salsa implies that there's something debian-specific in galvani, and from reading the description, it sounds like there isn't. So, unless you really feel strongly about doing it this way, I would suggest that you - Create a new "galvani" project someplace non-debian-specific (github, gitlab, etc...) with a README that tells people how to get the software. It can say "please use Debian and 'apt install galvani'" if that's what you want to communicate. - Each release of "galvani" should have a git tag - The repo on salsa should have the canonical structure used by most packages: an "upstream" branch containing the upstream sources from a release tarball and a "master" branch containing these sources + the debianization. One can debate about the technical pros/cons of doing this, but it's the standard, and will make it easier for you and others to manage this package. Look at other packages for examples of how to structure this. You want to have a debian/watch file that points to your repo; something like this: https://salsa.debian.org/science-team/mrcal/-/blob/master/debian/watch And you want to use the "uscan" program to read this file, to download the sources. And you want to use the "gbp import-orig" tool to ingest new tarballs. Furthermore, I would encourage you to do this as part of a team. 
For instance, the debian-science team: https://salsa.debian.org/science-team/ Doing this sends a signal that you are OK with other people helping maintain this package. Their policies are described here: https://wiki.debian.org/Teams/DebianScience I would suggest that you subscribe to their mailing list, and ask for help there, if you need it. Or feel free to talk to me further on this bug.
Bug#1067096: ITP: galvani -- reads data from a device with graphical plots and evaluation
Hi. Where's the upstream source for this? I would expect to see a link here:

- debian/copyright (Source field)
- debian/control (Homepage field)
- debian/upstream/metadata

Usually the upstream source would live somewhere outside of Debian (for any non-debian-specific programs, like this one). salsa.debian.org would contain the debianized sources. Thanks
Bug#1063051: vnlog: NMU diff for 64-bit time_t transition
Steve Langasek writes: > What I'm unclear on is why you don't run vnl-gen-header at build time > and output the "generated" header in the -dev package with a > comprehensive description of all the ABI entry points? Each user of libvnlog-dev would give different arguments to vnl-gen-header, and would get a different generated header file. So there isn't a single generated header I can produce when building the vnlog packages.
Bug#1063051: vnlog: NMU diff for 64-bit time_t transition
Thanks for replying. I'll revert the changes. > ... however, I will say it's very strange to ship a shared library, > that has a public shlibs file, and has a -dev package that depends on > it, but the headers shipped in that -dev package are NOT the > authoritative api for that library? That's how I did it, and while it sounds odd, I believe this is right. The public interface is vnl-gen-header ... > generated.h and #include "generated.h" The generated header contains some user-facing macros that call the functions in vnlog.h with specific arguments. That's the API. From the compiler's perspective, the functions declared in vnlog.h are the interface, and the ABI in those symbols must be stable, and putting them into the .symbols file is appropriate. Let me know if I'm doing something wrong. Thanks
Bug#1063051: vnlog: NMU diff for 64-bit time_t transition
Hi. vnlog does not depend on time_t. Is it too late to stop this update? The abi-compliance-checker failure is here: https://adrien.dcln.fr/misc/armhf-time_t/2024-02-01T09%3A53%3A00/logs/libvnlog-dev/base/log.txt That error message says what the problem is: you are not supposed to #include vnlog.h directly. Instead you're supposed to use the "vnl-gen-header" tool (also in the "libvnlog-dev" package) to produce usable headers that themselves #include vnlog.h. For instance: vnl-gen-header 'int w' 'uint8_t x' 'char* y' 'double z' > vnlog_fields_generated.h If you then run vnlog_fields_generated.h (which, again, #includes vnlog.h) through abi-compliance-checker, you'll see that it passes. vnl-gen-header doesn't support any time-related types, so this is y2k38 safe. Thanks.
Bug#1064982: gnuplot-qt: gnuplot displays a window with nothing in it
Can you see if other wxt applications work on a system that's exhibiting this problem?
Bug#1064982: gnuplot-qt: gnuplot displays a window with nothing in it
Hi. I'd like to get more clarity. - You see the issue when you try to plot anything at all? - You say "plot x" and you get a plot window, but it's all white, or something? - Only with the "qt" terminal? You can try to change your window manager, qt versions, etc, etc. If no trigger is found, it would be good to bisect the gnuplot sources to find the cause. Are you able to do that? I cannot reproduce at the moment, so I cannot do it myself.
Bug#1064320: libeigen3-dev: linking objects compiled with different flags may cause crashes
Package: libeigen3-dev Version: 3.4.0-4 Severity: normal X-Debbugs-Cc: none, Dima Kogan Hello. I'm making this report to track the report in this mailing list thread: https://www.mail-archive.com/debian-science@lists.debian.org/msg13666.html In short: there's a known issue in Eigen that can create crashing binaries when using a very reasonable workflow. A description of the issue and minimized reproducer are here: https://www.mail-archive.com/debian-science@lists.debian.org/msg13710.html I propose to patch this in Debian and/or talk to Eigen upstream to eliminate the cause of the crash. A proposed patch appears here: https://www.mail-archive.com/debian-science@lists.debian.org/msg13857.html In my view, a questionable design choice in C++ allows the user to create crashing code, and Eigen exploits this design choice to make this crashing possible. We cannot fix C++, but we can fix Eigen. The issue is that a templated function defined in a header generates a (weak symbol) copy of this function in EACH compile unit, and the linker then picks an arbitrary copy from the many compile units it is given. It is thus imperative that each copy is compatible with every other copy. Eigen breaks this requirement by using the preprocessor to select incompatible behaviors that might crash when linked together. The proposed patch eliminates this preprocessor-based variability.
Bug#1062952: This package is not affected by time_t
Hi. libmrcal-dev does not use time_t. I'm seeing the abi-compliance-checker failure here: https://adrien.dcln.fr/misc/armhf-time_t/2024-02-01T09%3A53%3A00/logs/libmrcal-dev/base/log.txt The cause is that the tool takes all the headers in /usr/include/mrcal in an arbitrary order, and tries to #include them. That does not work here. "mrcal-internal.h" should not be #included explicitly since it is already #included by mrcal.h. Removing that header from the command in the error log above makes the errors disappear. Let me know what needs to happen to ingest that logic, to exclude mrcal from this transition. This will make my life easier. Thanks.
Bug#1063380: ITP: libuio -- Linux Kernel UserspaceIO helper library
Hi. Thanks for your contribution. I looked at the upstream code a tiny bit, and it looks like it might have a portability bug, at least on big-endian architectures. For instance: https://github.com/missinglinkelectronics/libuio/blob/6ef3d8d096a641686bfdd112035aa04aa16fe81a/irq.c#L78 This assumes that sizeof(long)==4. Maybe this is benign, but it would be nice to fix. Are you upstream or do you know upstream? Can yall fix these? Thanks!
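To illustrate why the sizeof(long)==4 assumption matters on big-endian machines, here's a hypothetical miniature in Python (not the C code itself): a device delivers a 4-byte count, but the buffer it's read into is sized like an 8-byte long.

```python
# The kernel hands back a 32-bit interrupt count.
count_from_kernel = (1).to_bytes(4, 'big')  # the value 1, big-endian

# Correct: interpret exactly the 4 bytes that were read.
correct = int.from_bytes(count_from_kernel, 'big')

# Buggy pattern: read those 4 bytes into the start of an 8-byte long.
# On little-endian this accidentally works; on 64-bit big-endian the
# bytes land in the high half of the value.
long_buffer = count_from_kernel + bytes(4)
buggy = int.from_bytes(long_buffer, 'big')

print(correct, buggy)  # 1 vs 4294967296 (i.e. 1 << 32)
```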
Bug#1037136: How to fix a-c-c for this package?
Hi. libmrcal-dev does not use time_t. I'm seeing the abi-compliance-checker failure here: https://adrien.dcln.fr/misc/armhf-time_t/2024-02-01T09%3A53%3A00/logs/libmrcal-dev/base/log.txt The cause is that the tool takes all the headers in /usr/include/mrcal in an arbitrary order, and tries to #include them. That does not work here. "mrcal-internal.h" should not be #included explicitly since it is already #included by mrcal.h. Removing that header from the command in the error log above makes the errors disappear. Can we do that, and remove libmrcal-dev from this transition? Thanks.
Bug#1062545: Processed: Re: falcosecurity-libs: NMU diff for 64-bit time_t transition
Oops. I was trying to save yall time, but I guess I didn't do it right. Please advise. Here's what happened, in order: - 0.14.1-3 was in the archive - 0.14.1-3.1 the NMU in experimental - 0.14.1-4 I fixed an unrelated bug; no time64 changes - 0.14.1-5 I added the time64 stuff to my unrelated bug fix So what should we do to get both the bug fix in -4 and the time64 stuff?
Bug#1061646: falcosecurity-libs: build-depends on unavailable libluajit-5.1-dev
Thanks for the note. 0.14.1-2 makes the build work on arm64, and I wanted to get that done before thinking about other arches. I'm about to apply your suggested patches.
Bug#1061049: libsuitesparse-dev: libsuitesparse-dev 7.4.0 has an ABI break in libcholmod5 without bumping to "libcholmod6"
Thanks. In case you're unaware, there're tools that can report ABI and API breaks. I usually use abi-compliance-checker (works great). And there's also abigail (haven't tried it myself, but supposedly works well). Both are in Debian. Cheers.
Bug#1061049: libsuitesparse-dev: libsuitesparse-dev 7.4.0 has an ABI break in libcholmod5 without bumping to "libcholmod6"
Package: libsuitesparse-dev Version: 1:7.3.1+dfsg-2 Severity: serious X-Debbugs-Cc: none, Dima Kogan Hi. I'm chasing down http://bugs.debian.org/1060986 The problem is that mrcal uses libdogleg, which contains typedef struct { cholmod_common common; } dogleg_solverContext_t; The existing "libdogleg2" package was built against libsuitesparse-dev 7.3, so it must be linked with packages that use that ABI. But in suitesparse 7.4 the cholmod_common structure has a new member at the end: FILE *blas_dump ; // only used if CHOLMOD is compiled with -DBLAS_DUMP This is in CHOLMOD/Include/cholmod.h This extra member changes sizeof(cholmod_common), which changes the ABI, causing the crash. One way to fix this is to bump the SONAME of libcholmod. Thanks.
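A toy model of the break, sketched in Python's ctypes (hypothetical field subset; the real cholmod_common has many more members), shows how appending one member ripples into every struct that embeds it by value:

```python
import ctypes

# Old layout of the embedded struct.
class CommonOld(ctypes.Structure):
    _fields_ = [("status", ctypes.c_int),
                ("fl", ctypes.c_double)]

# New layout: one member appended at the end, as in suitesparse 7.4.
class CommonNew(ctypes.Structure):
    _fields_ = [("status", ctypes.c_int),
                ("fl", ctypes.c_double),
                ("blas_dump", ctypes.c_void_p)]

# libdogleg embeds cholmod_common by value, so its context struct's size
# changes too; code compiled against the old layout then reads and
# writes the wrong offsets when linked with the new library.
class SolverContextOld(ctypes.Structure):
    _fields_ = [("common", CommonOld)]

class SolverContextNew(ctypes.Structure):
    _fields_ = [("common", CommonNew)]

print(ctypes.sizeof(SolverContextOld), ctypes.sizeof(SolverContextNew))
```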
Bug#1059342: live-build: Can we please install net-tools?
Package: live-build Severity: normal Hi. This is a feature request. Can we please include net-tools in the set of packages we ship with debian-live? It is small, and would make many people's lives easier. I personally use this as a rescue disk, and configuring the network is a common need for such an application. And like many, I prefer the older net-tools tools. Thanks. -- System Information: Debian Release: trixie/sid APT prefers unstable APT policy: (800, 'unstable'), (500, 'unstable-debug'), (500, 'stable') Architecture: amd64 (x86_64) Foreign Architectures: armhf, armel Kernel: Linux 6.4.0-3-amd64 (SMP w/4 CPU threads; PREEMPT) Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system)
Bug#1056556: Debugging techniques
Hi. Johannes Schauer Marin Rodrigues writes:

> By default, mmdebstrap does not print the output of the commands it runs. It does that though when something goes wrong. So if "apt install" fails, then you get its output. In your case, you missed a "note" (not even a warning) in the "apt update" output. You would've seen that if you had run mmdebstrap with the --verbose option.

No. I was running mmdebstrap --verbose, and the note wasn't in the output. I only saw the note when I set up a similar situation in a real arm64 install (no schroot, no mmdebstrap), and tried to "apt update" there. mmdebstrap passing on the note would have helped.

> You initially suggested mmdebstrap to drop down to a shell and print a command for the user to re-run. Lets assume that this were possible (I don't think it is feasible). In that case, you still would not've known how to proceed or that it is the apt pinning that is at fault.

If a shell were available, you could run lots of quick experiments and narrow down the problem fast. I routinely use tools like sysdig to report all system-wide syscalls, and that output is enough to figure out lots of problems. You can use it without the shell too, without reproducing the problem in isolation, but it's harder to interpret the logs.

> So given all of this, I don't think your initial suggestion of adding a facility to drop to a shell and re-run a command in shell would've fixed your problem.

Maybe. Maybe not. Can we agree that the capability to do this in sbuild is useful, at least? I find it extremely valuable.

> I have no problem helping you with this and it doesn't bother me but I also don't see anything that is to be learned from all of this.

OK. Hopefully I don't need to bug you many more times. Thanks a lot for the help!
Bug#1056556: Debugging techniques
Johannes Schauer Marin Rodrigues writes: >> > mmdebstrap ... --variant=apt --chrooted-customize-hook=bash unstable >> > /dev/null >> >> Would that work, though? > Yes. Did you try it and it did not work? What was the error message? No :) I wanted to read about what it did first. I tried it just now, though. With the --included metapackage it has the same behavior as before: complains that it can't install libopencv-dev. If I don't ask it to --install the problematic package, intending to manually poke apt, then I can't do that: the problematic package is nowhere to be found. It was originally on disk locally, but without --include, it was never copied into the bind mount. I do want to have a metapackage: this allows the metapackage to be updated in the future, and have users be able to "apt update && apt upgrade". I guess I could do this differently for testing. By making the metapackage available in an apt server, and using an undocumented option. That's a heavy lift though. If I was so expert to know to do these things, I probably wouldn't need to debug stuff in the first place >> In any case, I figured out my specific problem by creating a similar >> scenario on a native arm64 box. I was naming the pinning file .conf >> instead of .pref which apparently matters. > > Yes, in the man page of apt_preferences it says: "The files have either no or > "pref" as filename extension". In one of my last mails to you I also suggested > you add: > > --setup-hook='{ echo "Package: XXX"; echo "Pin: origin \"YYY\""; echo > "Pin-Priority: 1"; } > "$1"/etc/apt/preferences.d/mypinnings.pref' > > Notice that I used the correct filename extension. Yep. I liked my extension better, but didn't realize that the name was significant. >> On the arm64 box this produced a clear error message ("apt" told me to >> rename the file). But with mmdebstrap there was no specific error at all, as >> you saw. Any idea why? > > Which apt command produced the error? I also don't think it was an error. 
> It was only a warning, right? Did you get it for "apt-get update" or for "apt-get install"?

Great questions. I just tried it again:

$ sudo apt update
N: Ignoring file 'mypinnings.conf' in directory '/etc/apt/preferences.d/' as it has an invalid filename extension

So I don't know what the answer is. But this felt undebuggable, and I wish I could figure this stuff out without sinking many hours into it or asking you every time. Thanks for the help, as always.
Bug#1056556: Debugging techniques
Hi josch. I sorta expected that there was extra complexity here that made debugging difficult. It's unfortunate. > mmdebstrap ... --variant=apt --chrooted-customize-hook=bash unstable /dev/null Would that work, though? --chrooted-customize-hook isn't in the manpage --customize-hook runs after everything was installed (so past where the failure was happening here) --essential-hook was running at the right time, but the "apt-get" executable wasn't available In any case, I figured out my specific problem by creating a similar scenario on a native arm64 box. I was naming the pinning file .conf instead of .pref which apparently matters. On the arm64 box this produced a clear error message ("apt" told me to rename the file). But with mmdebstrap there was no specific error at all, as you saw. Any idea why? Thanks for all the help.
Bug#1056556: Debugging techniques
Hi. I tried to do that apt pinning today, as you suggested. It still fails in the same way as before: $ mmdebstrap I: installing remaining packages inside the chroot... Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ros-noetic-cv-bridge : Depends: libopencv-dev but it is not installable ros-noetic-grid-map-filters : Depends: libopencv-dev but it is not installable ros-noetic-image-geometry : Depends: libopencv-dev but it is not installable E: Unable to correct problems, you have held broken packages. The preferences file is there, but it isn't obviously doing anything. I did some debugging just now to try to figure out why, and I'm reminded of https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1036929 Ideally the workflow I would like is 1. mmdebstrap does stuff. When it runs a command, it prints out EXACTLY what it does, in a way that I can copy/paste it, and get the same output 2. If it fails to do something, it drops me into a shell. Where I can paste the command to reproduce the problem, and then poke around to fix it This probably isn't 100% possible here; but how would you debug this otherwise? I'm attaching a patch that does some of (1.) above. This uses String::ShellQuote to quote all the arguments so that the command string can be pasted. I'm guessing you'd want to do some of that differently, so it isn't super thorough. OK. Then I gave myself a shell in a spot that (I think?) 
sits right before the failing "apt-get install": mmdebstrap \ --essential-hook 'echo $$1; bash -i' \ And pasting the command didn't work as I had hoped: root@fatty:# apt-get -o Dir::Bin::dpkg=env -o DPkg::Options::=--unset=TMPDIR -o DPkg::Options::=dpkg -o DPkg::Chroot-Directory=/tmp/mmdebstrap.i1wpW0WLMS --yes install -oDpkg::Use-Pty=false tst-libopoencv '?narrow(?or(?archive(^focal$),?codename(^focal$)),?architecture(arm64),?and(?or(?priority(required),?priority(important)),?not(?essential)))' E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied) E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root? Any idea what the difference is? Here I'm running the external "apt-get". Is the "apt-get install" supposed to happen inside the chroot? There is no "apt-get" binary there yet. If I can quickly and repeatedly reproduce the failure, I can "apt-cache policy libopencv-dev", and I can see if it's trying to use the preferences file and such. Maybe I mistyped something. If the above diagnostic sequence cannot work, what would? Thanks much diff --git a/debian/control b/debian/control index c82cdb3..e3f56a8 100644 --- a/debian/control +++ b/debian/control @@ -17,6 +17,7 @@ Architecture: all Depends: apt (>= 2.3.14), python3 (>= 3.10), + libstring-shellquote-perl, ${misc:Depends}, ${perl:Depends}, Recommends: diff --git a/mmdebstrap b/mmdebstrap index 0abdfc3..d52496d 100755 --- a/mmdebstrap +++ b/mmdebstrap @@ -46,6 +46,7 @@ use Socket; use Time::HiRes; use Math::BigInt; use Text::ParseWords; +use String::ShellQuote qw(shell_quote); use version; ## no critic (InputOutput::RequireBriefOpen) @@ -815,8 +816,6 @@ sub run_progress { info "run_progress() received signal $_[0]: waiting for child..."; }; -debug("run_progress: exec " . 
(join ' ', ($get_exec->('${FD}'; - # delay signals so that we can fork and change behaviour of the signal # handler in parent and child without getting interrupted my $sigset = POSIX::SigSet->new(SIGINT, SIGHUP, SIGPIPE, SIGTERM); @@ -845,6 +844,8 @@ sub run_progress { # redirect stderr to stdout so that we can capture it open(STDERR, '>&', STDOUT) or error "cannot open STDOUT: $!"; my @execargs = $get_exec->($fd); +my $cmd_string = shell_quote(@execargs); + # before apt 1.5, "apt-get update" attempted to chdir() into the # working directory. This will fail if the current working directory # is not accessible by the user (for example in unshare mode). See @@ -853,8 +854,11 @@ sub run_progress { chdir $chdir or error "failed chdir() to $chdir: $!"; } eval { Devel::Cover::set_coverage("none") } if $is_covering; + +info("run_progress: running $cmd_string"); + exec { $execargs[0] } @execargs - or error 'cannot exec() ' . (join ' ', @execargs); + or error 'cannot exec() $cmd_string'; } close $wfh; @@ -947,7 +951,7 @@ sub run_progress { if ($verbosity_level >= 1) { print STDERR $output; } -error((join ' ', $get_exec->('<$fd>')) . '
Bug#1056556: mmdebstrap: mmdebstrap error resolving installed packages
Package: mmdebstrap Version: 1.4.0-1 Severity: normal Hi. I'm seeing a failure that I understand very well, yet don't know how to debug or fix. Any suggestions would be appreciated. I'm making an Ubuntu/focal image that has a bunch of stuff installed, and can serve as a base for development. This runs on arm64. There're a number of ugly external APT repos that have semi-broken packages, but it should all still work. I define the stuff I want to install into the image with a meta-package. tst-libopencv.equivs:

Source: tst-libopencv
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: tst-libopencv
Version: 1
Maintainer: Dima Kogan
Depends: ros-noetic-cv-bridge, libopencv-dev (<< 4.5)
Architecture: arm64
Description: Test

And I build the meta-package: equivs-build -aarm64 tst-libopencv.equivs And I can use mmdebstrap to create a base image with this package installed:

mmdebstrap \
  --verbose \
  --architectures=arm64 \
  --hook-dir=/usr/share/mmdebstrap/hooks/file-mirror-automount \
  --include ./tst-libopencv_1_arm64.deb \
  focal \
  tst.tar.gz \
  "deb [trusted=yes] http://ports.ubuntu.com/ubuntu-ports/ focal main restricted universe multiverse" \
  "deb [trusted=yes] http://ports.ubuntu.com/ubuntu-ports/ focal-updates main restricted universe multiverse" \
  "deb [trusted=yes] http://ports.ubuntu.com/ubuntu-ports/ focal-backports main restricted universe multiverse" \
  "deb [trusted=yes] http://ports.ubuntu.com/ubuntu-ports/ focal-security main restricted universe multiverse" \
  "deb [trusted=yes] http://packages.ros.org/ros/ubuntu focal main"

This works great. ros-noetic-cv-bridge is an external-apt-repo package. It Depends: libopencv-dev. Ubuntu/focal ships 4.2, so the requirement libopencv-dev (<< 4.5) in the meta-package is satisfied. OK.
But let's say I want to add another, also-heinous external repo into the mix, and I do this:

mmdebstrap \
  --verbose \
  --architectures=arm64 \
  --hook-dir=/usr/share/mmdebstrap/hooks/file-mirror-automount \
  --include ./tst-libopencv_1_arm64.deb \
  focal \
  tst.tar.gz \
  "deb [trusted=yes] http://ports.ubuntu.com/ubuntu-ports/ focal main restricted universe multiverse" \
  "deb [trusted=yes] http://ports.ubuntu.com/ubuntu-ports/ focal-updates main restricted universe multiverse" \
  "deb [trusted=yes] http://ports.ubuntu.com/ubuntu-ports/ focal-backports main restricted universe multiverse" \
  "deb [trusted=yes] http://ports.ubuntu.com/ubuntu-ports/ focal-security main restricted universe multiverse" \
  "deb [trusted=yes] https://repo.download.nvidia.com/jetson/common r35.4 main" \
  "deb [trusted=yes] https://repo.download.nvidia.com/jetson/t234 r35.4 main" \
  "deb [trusted=yes] http://packages.ros.org/ros/ubuntu focal main"

This is the same command, but I also make some nvidia packages available. THAT repo ships its own copy of libopencv-dev: version 4.5.x. When building the image I explicitly do NOT want it to pick up that version, but to use the normal Ubuntu/focal ones: that restriction in the meta-package should do that for me. There's no reason this shouldn't work, and I can easily create this situation with some apt commands after I chroot into the image. But mmdebstrap cannot create this image: the above command fails:

The following packages have unmet dependencies: tst-libopencv : Depends: libopencv-dev (< 4.5) but 4.5.4-8-g3e4c
Bug#1056157: libfalcosecurity0-dev: libsinsp.pc lists wrong libs: -lgRPC::grpc++ -lgRPC::grpc -lgRPC::gpr
Hello. Thanks for the report. I fixed the original issue you reported in git, but haven't tested it yet, or released the fixed packages. I'll look at this in a bit. This package has bigger problems, unfortunately. Let me know if you want to help fix them.
Bug#1053729: RFP: SAIL image decoding library
Andrius Merkys writes: > Do you know any software already in Debian which would benefit from > having SAIL in Debian? There aren't many C image-reading libraries. libfreeimage is mostly-dead upstream, and is kinda weird. If SAIL was in Debian and is all the things that its website claims, I would consider moving my upstream software to use it instead of libfreeimage. So I'd like to see this in Debian, but have too much of a backlog to do the packaging myself, sadly.
Bug#1051499: ITP: ros-image-transport-plugins -- ROS1 plugins to the image transport system
Package: wnpp
Owner: Dima Kogan
Severity: wishlist

* Package name    : ros-image-transport-plugins
  Version         : 1.15.0
  Upstream Author : Willow Garage
* URL or Web page : https://github.com/ros-perception/image_transport_plugins
* License         : BSD-3
  Description     : ROS1 plugins to the image transport system
Bug#1041410: libdogleg-dev: missing Breaks+Replaces: libdogleg-doc (<< 0.16-2)
Thank you very much for the report. I completely forgot about these. Fixed just now.
Bug#1041059: FTBFS against suitesparse 7
Hello. Thank you for the report. This is already fixed in the libdogleg upstream repo. I will push a new package when a new libdogleg is released or when the new suitesparse moves to unstable, whichever comes first.
Bug#1040942: rosbash: Most binary tools (roscd, rosd, rosls, ....) are unavailable in this package
Package: rosbash
Version: 1.15.8-5
Severity: normal
X-Debbugs-Cc: none, Dima Kogan

Hello. I'm using the package from bookworm.

The "rosbash" package should provide several commandline tools, documented here:

  http://wiki.ros.org/rosbash

But only "rosrun" is provided in the package. This is because most of the tools are not binaries, but shell functions. These are supposed to be defined in /usr/share/rosbash/rosbash, but our rosbash package does not ship this file. The file exists in the package sources in tools/rosbash/rosbash, but it is not installed anywhere. This is the bug. Our package does reference the tools (we include all the tab completions). And the package scripts ask for it:

  dima@fatty:$ source /usr/share/rosbash/catkin_env_hook/15.rosbash.bash
  bash: /usr/share/rosbash/rosbash: No such file or directory

So if we install that file, that would probably fix this bug. Thanks
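[Editorial note: assuming the usual debhelper layout, the fix suggested above could be as small as one line in the package's dh_install file; the exact packaging file name is an assumption, not taken from the rosbash source package.]

  # debian/rosbash.install   (hypothetical addition)
  # Ship the shell-function definitions that 15.rosbash.bash sources
  tools/rosbash/rosbash usr/share/rosbash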
Bug#773385: Ping
Niels Thykier writes:

> From my PoV, what you experience here with find is a completely different
> problem.
>
> By default, apt-file uses the `APT::Architectures` configuration variable to
> determine which architectures to search for[1]. If APT's default is not
> correct here and you do want APT to see arm64, then please add the corrected
> `APT::Architectures` to `/etc/apt/apt-file.conf`.

Well yeah. I totally get that this is what it's doing; I'm just unconvinced that it should be doing this. I can give you patches, but let's agree on what the patch should do before I do any work.

apt-file has databases for every enabled architecture. So the proposal is to search ALL of them and report ALL the results. If the user wants to limit the search, they can pass -a or grep the output, or whatever. Would you accept such a patch?

Thanks for working on apt-file!
Bug#1037200: please consider backporting valijson to Bullseye
Hi. I'll gladly accept help on this. If you can do this yourself, that would be great! Thanks
Bug#1036929: mmdebstrap: Feature request: "mmdebstrap --anything-failed-commands '%s'" should exist, like in sbuild
Johannes Schauer Marin Rodrigues writes:

> let me tell you about another trick. Instead of running
>
>   --customize-hook='chroot "$1" i-might-fail || chroot "$1" bash'
>
> you can also run:
>
>   --chrooted-customize-hook='i-might-fail || bash'
>
> In contrast to the --X-hook options, the --chrooted-X-hook options run their
> arguments inside the chroot and thus save you quite a bit of typing. The
> --chrooted-X-hook options do a bit more than that but those things are not
> relevant here. The options are currently undocumented but I think I'll make
> them official with the next release.

Good to know. Thanks. Not relevant to me today because I don't have any customization hooks, but I'm sure I will at some point.

> To give you an idea of why it's really far from simple to just re-run a
> command run by mmdebstrap have a look at this:
>
>   https://sources.debian.org/src/mmdebstrap/1.3.5-7/mmdebstrap/#L486
>
> If any of that fails, what good is an interactive shell going to do?

Yeah. Certainly for some things there isn't a simple command you can give to an interactive shell. But for some other things there is, like my apt server failure. I don't have a good sense of which case is more common.

>>> The commands should be printed if you increase verbosity with --verbose or
>>> even with --debug. If the command is not printed, then that is a bug that I
>>> will fix.
>>
>> Good to know. I admittedly haven't spent a ton of time working on it.
>
> I think what you ultimately want with the interactive bash shell is to figure
> out why the stuff that broke for you did break. But I can get you the same
> information by increasing either the --verbose or --debug output as necessary.

Yes and no. I can imagine that my apt server is misconfigured, and the server will need a change to make this work. And to test potential server fixes it would be much easier to run "apt update" repeatedly in an interactive shell than doing a full mmdebstrap run.
Doing a full run each time can be much slower if the failing thing doesn't happen right at the start. --verbose or --debug are good for diagnosing problems, but not for testing potential fixes.

> Could you run your mmdebstrap invocation with --debug and paste(bin)
> the error you get?

Let me try to get that tomorrow.
Bug#773385: Ping
This really should work. It's maybe sorta OK for "apt-file list", but it also affects "apt-file find". Look:

  dima@fatty:~$ apt-file find /usr/lib/aarch64-linux-gnu/libOpenCL.so
  dima@fatty:~$ apt-file -aarm64 find /usr/lib/aarch64-linux-gnu/libOpenCL.so
  nvidia-libopencl1: /usr/lib/aarch64-linux-gnu/libOpenCL.so.1
  nvidia-libopencl1: /usr/lib/aarch64-linux-gnu/libOpenCL.so.1.0.0
  ocl-icd-libopencl1: /usr/lib/aarch64-linux-gnu/libOpenCL.so.1
  ocl-icd-libopencl1: /usr/lib/aarch64-linux-gnu/libOpenCL.so.1.0.0
  ocl-icd-opencl-dev: /usr/lib/aarch64-linux-gnu/libOpenCL.so

I.e. I asked it to tell me what package provides a file, and I had to tell it which architecture to look at. The whole point of apt-file is to look up the package name from a path; if I have to tell IT things like the architecture, it loses a lot of its utility. Thanks.
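[Editorial note: the configuration-level workaround Niels described earlier in this thread would look roughly like this; the architecture list is an example, and apt-file.conf uses apt's usual configuration syntax.]

  # /etc/apt/apt-file.conf   (example values)
  # Make apt-file search the Contents indices of both the native and the
  # foreign architecture by default, so "apt-file find" covers arm64 too.
  APT::Architectures { "amd64"; "arm64"; };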
Bug#1036929: mmdebstrap: Feature request: "mmdebstrap --anything-failed-commands '%s'" should exist, like in sbuild
Johannes Schauer Marin Rodrigues writes:

> ah I see our main difference might be that I run mmdebstrap mostly
> from other scripts whereas you are running it interactively and thus
> you want a shell if something goes wrong.

I usually run it from scripts too. But if something goes wrong, I re-run it manually, and having an easy way to get a shell at the failing point would be nice.

>>> I'd also like to add that you can already emulate this behaviour by
>>> running a hook like this:
>>>
>>>   --customize-hook='chroot "$1" i-might-fail || chroot "$1" bash'
>>
>> I would want to add the '|| chroot "$1" bash' to everything mmdebstrap
>> does: downloading packages, installing them, doing customization hooks,
>> etc, etc. The above just applies to customization hooks, right?
>
> Oh, so you want the interactive shell on other things than failing hooks? You
> also want that shell when any command run by mmdebstrap failed?

Yessir. A shell where the failure is quickly reproduced makes fixing problems MUCH faster. That's what sbuild does, and I've used this countless times.

>> The actual failure I'd like to fix today is a failing "apt update"
>> trying to talk to my apt-cacher-ng server (for some reason the server
>> returns 502 only when mmdebstrap tries to talk to it). I don't believe
>> there's a nice way to debug this with mmdebstrap today, right? I tried
>> to use --SOMETHING-hook (don't remember what SOMETHING was), but it
>> wasn't clear what the exact failing command was, so I moved on to
>> something else. Printing the exact failing command for easy
>> reproducibility would be important. Maybe there's already a verbosity
>> level that does this?
>
> The commands should be printed if you increase verbosity with --verbose or
> even with --debug. If the command is not printed, then that is a bug that I
> will fix.

Good to know. I admittedly haven't spent a ton of time working on it.
> For your specific problem I would first try to take mmdebstrap out of the
> loop and see if the problem can be replicated with plain apt as well.

I did that. The problem only shows up with mmdebstrap. I doubt it's a bug in mmdebstrap, but that's the only place I see this.

> The man page contains a small shell snippet that does the essential things
> that mmdebstrap does but without mmdebstrap in the section OPERATION:
>
>   https://manpages.debian.org/unstable/mmdebstrap/mmdebstrap.1.en.html#OPERATION
>
> You could try if that script with your apt-cacher-ng setup produces the same
> error and then you've already reduced the number of moving parts.

I can do that. But fixing this hasn't been very high priority for me today, so I haven't put in the work. I'm just using this as an example of a case where the --failure-hook option would be useful.

Thanks much.
Bug#1036929: mmdebstrap: Feature request: "mmdebstrap --anything-failed-commands '%s'" should exist, like in sbuild
Johannes Schauer Marin Rodrigues writes:

> how about an option like this:
>
>   --failure-hook='chroot "$1" bash'

I don't care about the exact command, as long as it's documented. This suggestion sounds reasonable.

> Since all hooks have the MMDEBSTRAP_HOOK variable set, whatever is run in the
> hook would have access to the type of hook that failed.
>
> The information that would be missing would be *which* hook of a certain type
> was the one failing. I do not see a good way to communicate this information.

Ideally, mmdebstrap will tell you which command failed, so the user can cut/paste the failing command to reproduce the failure. This maybe is the most important thing to communicate? I might be missing the subtleties of what you're thinking.

> Another question: what should be done if the failure-hook failed?

Hmmm. The obvious thing to say would be "It doesn't matter; we failed, so mmdebstrap should just exit regardless". But maybe the hook can fix whatever the failure was, and if the hook callback succeeds, mmdebstrap can try again? In my usage of these in sbuild I'm always debugging failures, so just exiting regardless is the right thing. But maybe something smarter would be good too.

> Do you know of another software besides sbuild that has a similar interface?
> I'd like to get some more ideas first before I add another interface that
> mmdebstrap would have to support forever.

I can only think of sbuild off the top of my head. But mmdebstrap already has a hook system, so extending that in the way you suggested above sounds like a self-consistent way to do it.

> I would rather not add the percent escapes from sbuild as that would
> mean that any percent sign in the hooks has to be escaped as well.
> This would break existing users of hooks.

Yeah.
Let's conform to the existing mmdebstrap conventions.

> I'd also like to add that you can already emulate this behaviour by
> running a hook like this:
>
>   --customize-hook='chroot "$1" i-might-fail || chroot "$1" bash'

I would want to add the '|| chroot "$1" bash' to everything mmdebstrap does: downloading packages, installing them, running customization hooks, etc. The above just applies to customization hooks, right?

The actual failure I'd like to fix today is a failing "apt update" trying to talk to my apt-cacher-ng server (for some reason the server returns 502 only when mmdebstrap tries to talk to it). I don't believe there's a nice way to debug this with mmdebstrap today, right? I tried to use --SOMETHING-hook (don't remember what SOMETHING was), but it wasn't clear what the exact failing command was, so I moved on to something else. Printing the exact failing command for easy reproducibility would be important. Maybe there's already a verbosity level that does this?

Thanks much!
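[Editorial note: the emulation trick quoted above, spelled out as a full invocation. The distribution, tarball name, and mirror are placeholders; this is a sketch of the documented hook mechanism, not a verified reproduction of the 502 failure, and customize hooks run late in the bootstrap, so an early "apt update" failure may never reach this hook.]

  # Drop into an interactive shell inside the chroot if the command fails
  # there; "$1" is the chroot directory, as mmdebstrap passes to hooks.
  mmdebstrap \
      --customize-hook='chroot "$1" apt-get update || chroot "$1" bash' \
      unstable chroot.tar http://deb.debian.org/debian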
Bug#1036929: mmdebstrap: Feature request: "mmdebstrap --anything-failed-commands '%s'" should exist, like in sbuild
Package: mmdebstrap Version: 1.3.3-6.1 Severity: wishlist X-Debbugs-Cc: none, Dima Kogan Hi. Currently it's possible to do sbuild --anything-failed-commands '%s' to get an interactive shell in response to any step of the process failing. This makes it much easier to debug problems. It would be great if mmdebstrap had a similar function. I'm currently trying to debug an issue with an apt-cacher-based server failing when mmdebstrap is pulling from it (but not when anything else is pulling from it), and that option would make this process much easier. Thanks
Bug#1034881: falcosecurity-scap-dkms: Cannot compile linux kernel 6.2.12 due to failure with scap dkms
Hi. Thanks for the report. Debian is currently in a freeze while the bookworm release is being prepared. bookworm is unaffected (it ships with linux 6.1). I will look at this after the release is out (in a few months probably).
Bug#1034414: libspectra-dev: libspectra-dev should be Multi-Arch:foreign
Package: libspectra-dev
Version: 1.0.1-2
Severity: normal
X-Debbugs-Cc: Dima Kogan

Adding "Multi-Arch: foreign" to this package would allow cross-building packages that depend on it. I'm hitting this when trying to cross-build the gtsam package (not in Debian yet, but in progress).
Bug#1033626: sbuild: Dependencies should not be required outside the chroot (--no-clean should be the default)
> How would a resolution to this bug look like from your point of view? An extra line in the error message that reiterates that "dh clean" runs outside the chroot, and needs manual Build-Depends would be sufficient I think. Then the user knows it's not a bug, and can go read the manpage for more detail. Even better (but more work) would be to identify the missing package. It's almost always dh-SOMETHING. Is it easy to grep the Build-Depends for all packages that match ^dh-.*, and say "try installing THIS and THAT"?
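[Editorial note: the grep idea can be sketched like this; it's a hypothetical helper, not sbuild code, and the Build-Depends line is made up for illustration.]

```shell
# Hypothetical: pick out dh-* entries from a Build-Depends line, as
# candidates to suggest installing on the host for the 'clean' target.
bd='debhelper-compat (= 13), dh-python, dh-sequence-maven-repo-helper, libopencv-dev'
# split on commas, strip leading spaces and version constraints,
# keep only names matching ^dh-
printf '%s\n' "$bd" | tr ',' '\n' | sed 's/^ *//; s/ .*//' | grep '^dh-'
# prints:
#   dh-python
#   dh-sequence-maven-repo-helper
```

In real sbuild this would of course parse debian/control rather than a shell variable; the point is only that the candidate package names are easy to extract.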
Bug#1033626: sbuild: Dependencies should not be required outside the chroot (--no-clean should be the default)
Hi.

> Note though, that in the sbuild.conf man page it already says:
>
>> When running sbuild from within an unpacked source tree, run the
>> 'clean' target before generating the source package. This might
>> require some of the build dependencies necessary for running the
>> 'clean' target to be installed on the host machine. Only disable if
>> you start from a clean checkout and you know what you are doing.
>
> Does that paragraph say everything you would've liked to know or is
> there anything you'd add there?

That paragraph says what I would have liked to know, yes. But I never went looking for it in the docs. If one thinks of sbuild as handling all of the Build-Depends for you, then those failures just look like weird bugs, and I wouldn't expect the manpage to say anything about it. Maybe it's all fine. I don't know.
Bug#1028623: apt: "apt info" should report Multi-Arch fields
Thanks for replying. I get the rationale, but I'd like to find some kind of better solution here.

DonKult just pointed out to me on IRC that I can get the output I want with "apt-cache show" instead of "apt show". Which is great. But it exposes a different problem: "apt" and "apt-get"/"apt-cache" and friends act VERY similarly, but have unclear differences. Before DonKult told me about "apt-cache show" just now, I had assumed that "apt show" was a synonym. And if I, a Debian user for decades and a DD, am confused by this, we can probably assume that almost everybody else is too. This is probably a bigger discussion than this bug.

There are ways to improve this. For instance, you can have "apt show package" limit itself to commonly-used fields (what it does today), with an extra note at the bottom:

  N: Additional fields are displayed with -v

And "apt show -v package" would show everything (this is what "apt-cache show" does?). "apt show" already has N: notes at the bottom, so this would be consistent with the way it works today.

Adding more docs to the manpage wouldn't help: the tools take identical options and produce 99% identical output. Anybody who sees that would just assume the tools are the same.

Thanks
Bug#1028623: apt: "apt info" should report Multi-Arch fields
I just realized that it also doesn't report the Architecture field, so it's impossible to tell if a given package is Architecture:all or not. This info is there in /var/lib/apt/lists, so it's available to the tool. Can we please make "apt info PACKAGE" and "apt show PACKAGE" report these fields? Thanks
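[Editorial note: for reference, the difference under discussion looks roughly like this; output is abbreviated, the package name is an example, and the exact set of fields varies by apt version — the claim that "apt show" omits these fields is this bug's, not verified here.]

  $ apt show coreutils
  # curated subset of the stanza; per this bug, Multi-Arch and
  # Architecture are among the fields omitted

  $ apt-cache show coreutils
  # the full stanza as stored under /var/lib/apt/lists, including
  # Multi-Arch and Architecture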
Bug#1033678: installation-reports: Unbootable install: MBR partition unusable with UEFI
Hi all. Thanks for the replies. I was just able to get it installed. And here are some notes about what happened, and about how we can do better.

I got it running by using a friend's USB installer. HIS USB disk was a valid UEFI boot disk, so I could boot in UEFI mode and do the normal install, which completed successfully.

As stated earlier, I made my USB install disk like this:

> I downloaded this:
>
>   debian-bookworm-DI-alpha2-amd64-netinst.iso
>
> from here:
>
>   https://cdimage.debian.org/cdimage/bookworm_di_alpha2/amd64/iso-cd/
>
> and I wrote that .iso to /dev/sde
>
>   cp debian-bookworm-DI-alpha2-amd64-netinst.iso /dev/sde

This worked, but apparently this was not a valid UEFI thing. Which I didn't know. Maybe some clearer instructions on the website would help. I was here:

  https://www.debian.org/devel/debian-installer/

Clicking on "amd64" under "other images (netboot, USB stick, etc.)" gives me listings of files that I don't know what to do with. I ended up getting the "CD" image, which gave me an .iso file that I did know what to do with. The iso-cd page:

  https://cdimage.debian.org/cdimage/bookworm_di_alpha2/amd64/iso-cd/

has some quick instructions which maybe would be helpful for those that don't know what to do with an .iso. It does mention UEFI, but only when describing the "mac" image. So better UEFI notes on the iso-cd page, and any kind of notes on the USB page, would be good.

Next. Steve McIntyre suggested installing in "expert mode", and then explicitly creating a GPT partition table. This worked, but I didn't read his suggestion closely enough, and didn't add an ESP partition, because I didn't know anything about it. The installer allowed me to do that, and once again created an unbootable installation. Should the installer have yelled at me? Just because I was in "expert mode" doesn't mean I know what I'm doing :)

I guess that's it.
In the default path, where the installer just picks the partition table kind (MBR, GPT, ...), I don't think it ever said anything about that being a choice at all. If it at least had text somewhere about creating an "MBR", or something, that would probably be good. Thanks.
Bug#1033626: sbuild: Dependencies should not be required outside the chroot (--no-clean should be the default)
Hi. Thanks for all the explanations. I just re-read this whole sequence of emails, and I'm mostly clear on this now.

First off, I think the last email confused things a little bit. I run sbuild on modified source, as you expect. The sequence in the previous email was just a simple example of something that, surprisingly to me, doesn't work in sbuild.

All packages I ever work on live in git. So the "clean" state is defined as the result of "git clean -fdx && git reset --hard". I know that sbuild can't assume git. So my workflow is usually something like

  - git clean -fdx && git reset --hard
  - sbuild

If I forgot the clean step above, dpkg usually yells at me. That feels like enough, without needing a "dh clean" also.

If there's no way to make this always work in all cases, can we make the error message better somehow? How about

  "dh clean" failed. Note: this runs outside the chroot, so the
  required Build-Depends may not be installed

Or something like that. This bit of info would have saved me some time: I spent time looking around before filing this report. Is that reasonable? Can we do even better?

Thanks much
Bug#1033678: installation-reports: Unbootable install: MBR partition unusable with UEFI
Pascal Hambourg writes:

> On 30/03/2023 at 01:21, Dima Kogan wrote:
>> I had to turn off secure-boot and UEFI in the BIOS.
>
> Why? What happens if UEFI boot is enabled?

If UEFI was enabled, the USB device isn't seen by the machine in its list of valid boot devices.

> How did you prepare the USB drive? What installation image did you
> use (full file name and URL please)?

From yesterday's email: I downloaded this:

  debian-bookworm-DI-alpha2-amd64-netinst.iso

from here:

  https://cdimage.debian.org/cdimage/bookworm_di_alpha2/amd64/iso-cd/

and I wrote that .iso to /dev/sde. I did "cp debian-bookworm-DI-alpha2-amd64-netinst.iso /dev/sde".

>> I'm not 100% sure of the exact cause. But I strongly suspect that
>> booting the install media without UEFI broke installing to a UEFI-only
>> disk.
>
> If the installer was booted in BIOS/legacy mode, it installed GRUB for
> legacy boot.

Was this a choice the installer made, or was it the only option? I don't actually have a workaround yet. And if the installer had a check box to ask for a GPT even though the install media was booted without UEFI, then I could at least get this working after some fiddling.
Bug#1033678: installation-reports: Unbootable install: MBR partition unusable with UEFI
Hi. Thank you both for replying.

Tim Bell writes:

> Just to confirm - you were not able to configure the USB Drive for EFI
> boot?

Correct. For whatever reason this wasn't possible in this BIOS, at least not in any way I could figure out. Possibly I created the install media incorrectly? I downloaded this:

  debian-bookworm-DI-alpha2-amd64-netinst.iso

from here:

  https://cdimage.debian.org/cdimage/bookworm_di_alpha2/amd64/iso-cd/

and I wrote that .iso to /dev/sde. There was no obvious "usb image", but just using the CD image appeared to work. I could boot and run the installer, at least with UEFI turned off.

Cyril Brulebois writes:

> For the avoidance of doubt: which one? Alpha 1 or Alpha 2?
> Also, which image did you use?

Alpha 2. The link is above.

>> This is an amd64 recent-ish laptop. The disk is a PCIe SSD, not SATA.
>
> You have not given a single detail about that machine.

I'm trying to give relevant detail. This is a Dell Latitude 5420 Rugged. What else do you want to know?

>> I'm installing from a USB drive. To make this work, I had to turn off
>> secure-boot and UEFI in the BIOS.
>
> Why did you need that in the first place? How did you put the
> installation image onto that USB drive?

See above. Even if I didn't do this properly, installing an unbootable OS is not very nice.

> In a nutshell, BIOS means MBR, UEFI means GPT. (This is a very gross
> oversimplification though.)

OK. Sorry, I managed to be blissfully ignorant for decades, and this is the first time I'm touching GPT or UEFI. So I'm not well-versed in this at all.

> I'm not sure why the firmware would allow running an installer in BIOS
> mode and not boot off from the installed system… in BIOS mode too.

You would expect the Debian installer to write an MBR partition table, and then you would expect the machine (running with UEFI disabled) to be able to use this MBR partition? I would expect this too, I think. I'm reading Dell's notes a bit.
This suggests that PCIe SSD devices are UEFI-only:

  https://www.dell.com/support/kbdoc/en-us/000132410/what-are-pcie-ssds-and-how-to-use-them-as-a-boot-drive-for-a-dell-pc

This makes me think that installing to an MBR on the SSD on this machine is never correct. It also makes me think that creating my install media in a way that would make UEFI boot with it would have avoided this.

But this failure mode isn't great. Can we detect these UEFI-only drives in any way? Can I ask the installer to create a GPT instead of an MBR somehow?

Thanks
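[Editorial note: for anyone debugging a similar situation, whether a running Linux system (including the booted installer) came up in UEFI mode can be checked from a shell. This is a generic sysfs check, not installer-specific advice.]

```shell
# If /sys/firmware/efi exists, the kernel was booted via UEFI and the
# installer will set up GPT + ESP; if it is absent, the boot was
# BIOS/legacy and the installer will default to an MBR partition table.
if [ -d /sys/firmware/efi ]; then
    echo "booted in UEFI mode"
else
    echo "booted in BIOS/legacy mode"
fi
```

This makes it easy to catch the mismatch described in this bug before partitioning, rather than after the first failed reboot.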
Bug#1033678: installation-reports: Unbootable install: MBR partition unusable with UEFI
Package: installation-reports
Severity: grave

Hi. I just installed a bookworm candidate. This worked OK through partitioning and reboot, but I cannot boot into the system.

This is an amd64 recent-ish laptop. The disk is a PCIe SSD, not SATA. I'm installing from a USB drive. To make this work, I had to turn off secure-boot and UEFI in the BIOS.

I believe that as a result of this, the Debian partitioner defaulted to an MBR partition table, not GPT. The BIOS of this laptop only allows booting from the PCIe SSD in UEFI mode (so I need to change the BIOS setting before even trying). But even after that, the machine doesn't let me boot off that disk. Some searching tells me this is because GPT partition tables are required for UEFI booting, but Debian made an MBR one.

I'm not 100% sure of the exact cause. But I strongly suspect that booting the install media without UEFI broke installing to a UEFI-only disk.

Thanks.
Bug#1033626: sbuild: Dependencies should not be required outside the chroot (--no-clean should be the default)
Johannes Schauer Marin Rodrigues writes:

> I fear I do not quite understand what kind of feature you are asking for. Do
> you really think it would be a good idea if sbuild, every time you run it,
> first locates a .dsc, unpacks the .dsc, compares the unpacked .dsc to your
> current directory and only invokes the clean target if it finds differences?
> Would there not almost always be differences because you only invoke sbuild
> *after* you've made some changes to the unpacked source directory? And what
> should sbuild do if it has detected changes? It would still need to run the
> clean target before it can create the new source package.
>
>> Other than that, can we run "dh clean" inside the chroot?
>
> What would that accomplish? At the point where the .dsc is unpacked
> inside the chroot, it already is clean. You need a clean unpacked
> source directory, so that you can build a .dsc so that it can be
> copied into the chroot. So this cleaning has to happen on the outside.

Those questions are all valid, of course, if you think of the .dsc as the input to sbuild. Up until today I was not even aware that this is how it works.

The feature I'm asking for is that on a brand-new Debian install I think I should be able to

  1. apt install sbuild
  2. create a schroot for sbuild in whatever way
  3. apt source package
  4. cd package
  5. sbuild

Today this doesn't always work, because sbuild wants to "dh clean" outside the chroot. Omitting the "dh clean" (by relying on dpkg complaining) would be one way to get this working. Doing the "dh clean" inside the chroot after the Build-Depends have been installed is another.

Maybe the above sequence shouldn't be expected to work, but that makes sbuild less useful in my view. I can make --no-clean the default in my config, I suppose. Probably others use sbuild in this way too? I guess I have no way of knowing.
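[Editorial note: making --no-clean the default per-user can be sketched as an ~/.sbuildrc entry; check sbuild.conf(5) for the exact key and semantics in your sbuild version — the name below is taken from that man page's description of the clean-source behaviour.]

  # ~/.sbuildrc: do not run the 'clean' target on the host before
  # generating the source package (same effect as passing --no-clean).
  # Only safe when starting from a clean checkout.
  $clean_source = 0;

  1;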
Bug#1033626: sbuild: Dependencies should not be required outside the chroot (--no-clean should be the default)
Hi. Thanks for the explanation.

I have never once in my life run sbuild from a .dsc file. In fact I don't think I've ever done anything with .dsc files directly. I'm always sitting on the sources, with a ../whatever.orig.tar.gz on disk. If I've been using it wrong this whole time, I guess that's on me. But starting from sources feels like the natural flow to me, so can we make this work a bit better?

If we wanted to make the clean step optional, sbuild could check for source differences, and barf if any are detected. It mostly does that already. I believe it doesn't complain if there are extra files on disk, but we could make it do that too.

Other than that, can we run "dh clean" inside the chroot?

Thanks!
Bug#1033626: sbuild: Dependencies should not be required outside the chroot (--no-clean should be the default)
Package: sbuild
Version: 0.85.2
Severity: normal

Hi. This just happened:

  dima@shorty:/tmp/opencv-4.6.0+dfsg$ sbuild -c sid-amd64 -d unstable -s -A --anything-failed-commands '%s'
  dh clean
  dh: error: unable to load addon maven-repo-helper: Can't locate Debian/Debhelper/Sequence/maven_repo_helper.pm in @INC (you may need to install the Debian::Debhelper::Sequence::maven_repo_helper module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.36.0 /usr/local/share/perl/5.36.0 /usr/lib/x86_64-linux-gnu/perl5/5.36 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.36 /usr/share/perl/5.36 /usr/local/lib/site_perl) at (eval 14) line 1.
  BEGIN failed--compilation aborted at (eval 14) line 1.
  make: *** [debian/rules:126: clean] Error 255
  E: Failed to clean source directory /tmp/opencv-4.6.0+dfsg (/tmp/opencv_4.6.0+dfsg-11.dsc)

The user expectation is that sbuild takes care of all the Build-Depends (by installing them in the chroot), but this apparently isn't 100% true: it runs "dh clean" outside the chroot, so any extra debhelper bits must be installed outside.

Can we fix this by not doing anything outside the chroot that sbuild itself doesn't Depend on? The simplest way to do that is to make --no-clean the default. Or we can run "dh clean" inside the chroot. Can we do something like that?

Thanks
Bug#1032489: mmdebstrap without root: newuidmap: write to uid_map failed: Operation not permitted
Johannes Schauer Marin Rodrigues writes:

> I recently (with version 1.3.2) extended the documentation for unshare mode
> in the mmdebstrap manual page to also cover these two files:
>
>   https://gitlab.mister-muffin.de/josch/mmdebstrap/commit/46fc269b549abe89d99e63addba0813bcbc938ac
>
> Does this answer some of the questions you had or do you think I should add
> more?

I like the docs. When debugging problems it's helpful to

- have a clear error message that says what the problem is
- have a clear connection between the error message and a chunk of the docs that talks about that failure

Here I had:

  W: no entry in /etc/subuid for dima
  E: invalid idmap

and with the older mmdebstrap:

  newuidmap: uid range [1-2) -> [10-11) not allowed
  E: newuidmap 2086656 0 60017 1 1 10 1 failed:
  E: child had a non-zero exit status: 1
  E: chown failed

Can we change "W: no entry in /etc/subuid for dima" to something like "W: no entry in /etc/subuid for dima: mode=unshare will fail; see THIS section of the docs", or maybe make it an error? If the docs contained the exact error message we would see with this issue, that would be super helpful too.

Do you know why the older mmdebstrap has a different error message? Is it something you changed in the code, or is there something about that machine that's causing it?

> There are two problems:
>
> 2) whatever method you use to create new users does not create these
> entries

I don't know why they're missing. It's an old install of sid, continually being updated: /etc goes back to 2006! I don't think I ever did anything funky with the users, but who knows. It's not an mmdebstrap problem, in any case.

> I have a patch for you that should fix this problem in the sense that
> mmdebstrap should not choose the unshare mode anymore. If you like, apply the
> following to mmdebstrap from unstable:
>
>   https://mister-muffin.de/p/ZwXV.diff

Neither of your patches applies to the current mmdebstrap from unstable (I'm at 5d24b65 in the git tree).
If you want me to test, you should give me another patch. But I trust you to fix it, and I don't NEED the patch, since I now know to fix the /etc/subuid and /etc/subgid. So you can just apply the patch to the tree and close this bug. Thank you very much for your help!
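For anybody else who lands here: a sketch of the fix I applied. The user name and range below are from my machine (pick a free range on yours), and the usermod flags are an assumption that you have a reasonably recent shadow (the flags were added in shadow 4.9); on older systems you can edit the files by hand in the same format.

```shell
# The real fix needs root; it appends "dima:<start>:65536" to /etc/subuid
# and /etc/subgid:
#   usermod --add-subuids 1476256-1541791 --add-subgids 1476256-1541791 dima
#
# Each line in those files is user:start:count. A quick format check,
# run here against a sample line rather than the real file:
printf 'dima:1476256:65536\n' |
  awk -F: 'NF == 3 && $3 + 0 > 0 { print "valid entry for " $1 }'
# prints: valid entry for dima
```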
Bug#1032489: mmdebstrap without root: newuidmap: write to uid_map failed: Operation not permitted
I see this on a machine where the user is missing from /etc/subuid:

  dima@shorty:~$ /tmp/mmdebstrap bookworm /tmp/tst.tar.gz http://deb.debian.org/debian
  E: unable to pick chroot mode automatically

  dima@shorty:~$ /tmp/mmdebstrap --mode=unshare bookworm /tmp/tst.tar.gz http://deb.debian.org/debian
  W: no entry in /etc/subuid for dima
  E: failed to parse /etc/subuid and /etc/subgid

Is this right? Can we get better error messages? The "normal" command a user
would type is the first one, and "unable to pick chroot mode automatically" is
unhelpful. It tells the user nothing about what went wrong, or how to even
look for a solution.

Thanks.
Bug#1032489: mmdebstrap without root: newuidmap: write to uid_map failed: Operation not permitted
Johannes Schauer Marin Rodrigues writes:

> Thank you for your feedback! How about:
>
> E: unable to pick chroot mode automatically (use --mode for manual selection)
>
> This will make the user look up the --mode argument and its possible values in
> the man page. If the user then selects --mode=unshare, the error message
> indicates what is wrong.

That's better. What's the internal logic? I guess mmdebstrap tried "unshare",
and it didn't work. Did it try all the others too, and they didn't work also?
It doesn't hurt to have ridiculously long error messages. We COULD say

  E: unable to pick chroot mode automatically (use --mode for manual
  selection). Tried A, which didn't work because X; tried B, which didn't
  work because Y...

So if mmdebstrap already knows that --mode=unshare would produce

  W: no entry in /etc/subuid for dima
  E: failed to parse /etc/subuid and /etc/subgid

it could say that initially. Maybe that's overkill and too much typing for
you. What you have already tells the user what to read about and play with
(--mode), so maybe that's fine.

Thanks!
Bug#1032489: mmdebstrap without root: newuidmap: write to uid_map failed: Operation not permitted
Johannes Schauer Marin Rodrigues writes:

> The problem with ridiculously long error messages is, that mmdebstrap
> currently has no way to wrap long error messages after 80 columns or
> so. A very long error message is hard to read if it doesn't get
> wrapped similar to how you did it in your example.

I don't think this is something that mmdebstrap should be thinking about.
Error messages aren't something that needs to be immediately fully consumable
at a glance. Debugging takes time, and if we can save the user even a bit of
debugging time, then the extra minute it takes for them to wrap the line is
worth it. And does it really take any time at all? I use either xterm or the
emacs shell 100% of the time, and both of those will wrap long lines to make
them legible, without me having to ask.

> The second reason is, that it would not be easy to store and forward
> the reason why the other modes failed. Especially the unshare mode can
> fail for 26 different reasons if I counted correctly. Letting the
> test-function silently fail when checking for the mode but extracting
> the error message would turn the code even more into spagetti.

Yeah. I was wondering if this was the case. I think what you have is great.
Ship it! And thanks.
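For what it's worth, if long one-line error messages ever do become a problem, the wrapping doesn't have to live in mmdebstrap at all; standard tools can do it after the fact. A sketch (the message text below is invented for illustration):

```shell
# Wrap a long single-line error message at word boundaries, 60 columns.
msg="E: unable to pick chroot mode automatically (use --mode for manual selection); tried unshare, which failed because /etc/subuid has no entry for this user"
printf '%s\n' "$msg" | fold -s -w 60
```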
Bug#1032489: mmdebstrap without root: newuidmap: write to uid_map failed: Operation not permitted
Hi Josch. Thanks for replying. Notes inline.

Johannes Schauer Marin Rodrigues writes:

> Quoting Dima Kogan (2023-03-08 00:46:18)
>> Package: mmdebstrap
>> Version: 1.3.1-2
>
> where is this version from? Debian stable has 0.7.5 and testing is at
> 1.3.3.

I run sid, manually updating periodically. I guess I last updated at 1.3.1-2.
The breakage doesn't appear to be version-dependent, although I see different
behaviors. I tried it on 3 machines:

- My workstation (amd64, sid, mmdebstrap=1.3.3-6.1). Works fine. It has this:

  dima@fatty:~$ id
  uid=60017(dima) gid=60017(dima) groups=60017(dima),4(adm),20(dialout),24(cdrom),25(floppy),29(audio),30(dip),33(www-data),44(video),46(plugdev),108(netdev),110(lpadmin),112(x),113(scanner),119(bluetooth),131(sbuild),1002(yumsters),1003(mock),1004(pub)

  dima@fatty:~$ cat /etc/subuid
  systemd-timesync:10:65536
  systemd-network:165536:65536
  systemd-resolve:231072:65536
  :296608:65536
  messagebus:362144:65536
  avahi:427680:65536
  uuidd:493216:65536
  Debian-exim:558752:65536
  statd:624288:65536
  avahi-autoipd:689824:65536
  colord:755360:65536
  dnsmasq:820896:65536
  geoclue:886432:65536
  rtkit:951968:65536
  pulse:1017504:65536
  sshd:1083040:65536
  sbuild:1148576:65536
  saned:1214112:65536
  usbmux:1279648:65536
  hplip:1345184:65536
  Debian-gdm:1410720:65536
  dima:1476256:65536
  _apt:1541792:65536
  BBB:1607328:65536
  pub:1672864:65536
  bitlbee:1738400:65536
  testman:1803936:65536
  C:1869472:65536
  mysql:1935008:65536
  tftp:2000544:65536
  DD:2066080:65536
  EE:2131616:65536
  :2197152:65536
  G:2262688:65536
  :2328224:65536
  I:2393760:65536
  JJJ:2459296:65536
  KK:2524832:65536
  L:2590368:65536
  M:2655904:65536
  NNN:2721440:65536
  OOO:2786976:65536
  PP:2852512:65536
  Q:2918048:65536
  :2983584:65536
  :3049120:65536
  :3114656:65536
  U:3180192:65536
  :3245728:65536
  WWW:3311264:65536
  :3376800:65536

- My laptop (amd64, sid, mmdebstrap=1.3.3-6.1; same as the workstation). Does
  NOT work fine:

  dima@shorty:~$ mmdebstrap bookworm /tmp/tst.tar.gz http://deb.debian.org/debian
  I: automatically chosen mode: unshare
  I: chroot architecture amd64 is equal to the host's architecture
  I: finding correct signed-by value... done
  I: automatically chosen format: tar
  I: using /tmp/mmdebstrap.VLwVKQsx19 as tempdir
  W: no entry in /etc/subuid for dima
  E: invalid idmap

  The user id situation is different:

  dima@shorty:~$ id
  uid=1000(dima) gid=1000(dima) groups=1000(dima),4(adm),5(tty),6(disk),7(lp),12(man),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),44(video),50(staff),100(users),107(netdev),111(nvram),112(fuse),120(stapdev),125(postgres),129(davfs2),132(motion),137(systemd-journal),140(sbuild),145(input)

  dima@shorty:~$ cat /etc/subuid
  bitlbee:10:65536
  stunnel4:165536:65536
  sbuild:231072:65536
  iodine:296608:65536
  systemd-timesync:362144:65536
  systemd-network:427680:65536
  systemd-resolve:493216:65536
  pulse:624288:65536
  AAA:689824:65536
  debian-tor:755360:65536
  _apt:820896:65536
  pub:886432:65536
  BBB:558752:65536

- The server (amd64, Ubuntu 20.04, mmdebstrap=0.4.1-6). Also does not work
  fine:

  kogan@cadredev:~$ mmdebstrap bookworm /tmp/tst.tar.gz http://deb.debian.org/debian
  I: automatically chosen mode: unshare
  I: chroot architecture amd64 is equal to the host's architecture
  I: using /tmp/mmdebstrap.42KXYMZJtF as tempdir
  newuidmap: uid range [1-2) -> [10-11) not allowed
  E: newuidmap 2086656 0 60017 1 1 10 1 failed:
  E: child had a non-zero exit status: 1
  E: chown failed

  kogan@cadredev:~$ id
  uid=60017(kogan) gid=1000(AAA) groups=1000(AAA),10773(perf),22373(BBB)

  kogan@cadredev:~$ cat /etc/subuid
  ssa:10:65536

  This is clearly running a much older mmdebstrap. It's also a more complex
  beast regarding users, since it's a shared server with LDAP. This is the
  machine I was complaining about originally; I had forgotten that it's an
  old distro when making the report; sorry. But it looks like similar
  failures are happening on other boxes too.
In all cases the "unshare" mode was selected.

I'm guessing I need to add an entry for my user to /etc/subuid? How is this
managed? I've never heard of this file before today, and I've certainly never
added anything to it. Why am I listed in it on one machine, but not on the
other two?

>> but I don't understand the
>> problem, and would like ask here. For a little while now I've been using
>> mmdebstrap to create bookworm tarballs. This works very nicely. As a
>> non-root
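To partially answer my own question: the check that produces the warning is presumably something like the sketch below (simplified, and run against inline sample data rather than the real /etc/subuid; the awk program is mine, not mmdebstrap's actual code):

```shell
# /etc/subuid lines are user:start:count; unshare mode needs an entry for
# the current user. Here "dima" is looked up in two sample entries that
# don't include him:
printf '%s\n' 'bitlbee:10:65536' 'sbuild:231072:65536' |
  awk -F: -v u="dima" '
    $1 == u { found = 1; print u " has subuids " $2 "-" $2 + $3 - 1 }
    END     { if (!found) print "no entry in /etc/subuid for " u }'
# prints: no entry in /etc/subuid for dima
```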
Bug#1032809: ITP: python3-cogapp -- Cog content generation tool. Small bits of computation for static files
Package: wnpp
Owner: Dima Kogan
Severity: wishlist

* Package name    : python3-cogapp
  Version         : 3.3.0
  Upstream Author : Ned Batchelder
* URL or Web page : https://github.com/nedbat/cog
* License         : MIT
  Description     : python3-cogapp
Bug#1032691: rinse: fedora-37 is not installable
Package: rinse
Version: 4.1
Severity: normal
X-Debbugs-Cc: none, Dima Kogan

Hi. This might not be a RINSE bug, but an issue with the fedora servers.
Nevertheless...

Today I can use rinse to reliably create a fedora-36 install. Just tried it
twice; worked both times. The same command reliably fails with fedora-37. The
exact failure mode varies. Sometimes it does this:

  $ sudo rinse --distribution fedora-37 --directory root_fedora2 --arch amd64
  Failed to fetch : http://download.fedoraproject.org/pub/fedora/linux/releases/37/Everything/x86_64/os/Packages//z/
  404 Not Found

Which is odd because navigating there with a browser works. Sometimes it does
this instead:

  $ sudo rinse --distribution fedora-37 --directory root_fedora2 --arch amd64
  [Harmless] Failed to find download link for acl
  [Harmless] Failed to find download link for alternatives
  [Harmless] Failed to find download link for audit-libs
  [Harmless] Failed to find download link for basesystem
  [Harmless] Failed to find download link for xz-libs
  [Harmless] Failed to find download link for zchunk-libs
  [Harmless] Failed to find download link for zlib
  Running post-install script /usr/lib/rinse/common/10-resolv.conf.sh:
  Running post-install script /usr/lib/rinse/common/15-mount-proc.sh:
  Running post-install script /usr/lib/rinse/common/20-dev-zero.sh:
  Running post-install script /usr/lib/rinse/fedora-37/post-install.sh:
  Setting up DNF cache
  mv: cannot stat 'root_fedora2/*.rpm': No such file or directory
  cp: cannot stat '/var/cache/rinse//fedora-37.amd64/*': No such file or directory
  mv: cannot stat 'root_fedora2/etc/yum.repos.d': No such file or directory
  Bootstrapping DNF
  chroot: failed to run command '/usr/bin/dnf': No such file or directory
  mv: cannot stat 'root_fedora2/etc/yum.repos.d.orig': No such file or directory
  chroot: failed to run command 'update-ca-trust': No such file or directory
  Updating packages
  chroot: failed to run command '/usr/bin/dnf': No such file or directory
  chroot: failed to run command '/usr/bin/dnf': No such file or directory
  Installation complete.

It claims to have succeeded, but it did not at all. This is a bug too (it
should know that it failed).

Thanks

-- System Information:
Debian Release: bookworm/sid
  APT prefers unstable
  APT policy: (800, 'unstable'), (700, 'testing'), (500, 'unstable-debug'), (500, 'stable')
merged-usr: no
Architecture: amd64 (x86_64)
Foreign Architectures: armhf, armel

Kernel: Linux 6.1.0-2-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages rinse depends on:
ii  cpio               2.13+dfsg-7
ii  libterm-size-perl  0.211-1+b2
ii  libwww-perl        6.67-1
ii  perl               5.36.0-4
ii  rpm                4.17.0+dfsg1-4+b1
ii  rpm2cpio           4.17.0+dfsg1-4+b1
ii  wget               1.21.3-1+b2

rinse recommends no packages.
rinse suggests no packages.

-- no debconf information
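On the "it should know that it failed" point: a sketch of how the post-install steps could propagate failure instead of unconditionally printing "Installation complete." The step runner and the stand-in commands below are illustrative, not rinse's actual code:

```shell
# Run each step, remember whether any of them failed.
run_step() {
  "$@" && return 0
  echo "E: step failed: $*" >&2
  return 1
}

failed=0
run_step true  || failed=1   # stand-in for a step that works
run_step false || failed=1   # stand-in for e.g. the chroot'ed dnf call

if [ "$failed" -ne 0 ]; then
  echo "Installation FAILED"
else
  echo "Installation complete."
fi
# prints (stdout): Installation FAILED
```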
Bug#1032689: rinse: Manpage is incorrect
Package: rinse
Version: 4.1
Severity: normal
X-Debbugs-Cc: none, Dima Kogan

Hi. The manpage says

  Basic usage is as simple as:

    rinse --distribution fedora-core-6 --directory /tmp/test

  This will download the required RPM files and unpack them into a minimal
  installation of Fedora Core 6.

This is incorrect. This just happened:

  $ sudo rinse --distribution fedora-37 --directory root_fedora
  The name of the architecture is mandatory.
  Please specify i386, amd64 or arm64.

So the architecture is a required argument, and the manpage should include
that in its example.

Thanks!

-- System Information:
Debian Release: bookworm/sid
  APT prefers unstable
  APT policy: (800, 'unstable'), (700, 'testing'), (500, 'unstable-debug'), (500, 'stable')
merged-usr: no
Architecture: amd64 (x86_64)
Foreign Architectures: armhf, armel

Kernel: Linux 6.1.0-2-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages rinse depends on:
ii  cpio               2.13+dfsg-7
ii  libterm-size-perl  0.211-1+b2
ii  libwww-perl        6.67-1
ii  perl               5.36.0-4
ii  rpm                4.17.0+dfsg1-4+b1
ii  rpm2cpio           4.17.0+dfsg1-4+b1
ii  wget               1.21.3-1+b2

rinse recommends no packages.
rinse suggests no packages.

-- no debconf information
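Presumably the fix is just to add the architecture to the manpage example, something along these lines (untested suggestion):

```
rinse --distribution fedora-core-6 --directory /tmp/test --arch amd64
```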
Bug#1032606: ITP: etlcpp -- Embedded template library: a C++ template library for embedded applications
Package: wnpp
Owner: Dima Kogan
Severity: wishlist

* Package name    : etlcpp
  Version         : 20.35.14
  Upstream Author : John Wellbelove
* URL or Web page : https://www.etlcpp.com/
* License         : MIT
  Description     : Embedded template library: a C++ template library for embedded applications
Bug#1032489: mmdebstrap without root: newuidmap: write to uid_map failed: Operation not permitted
Package: mmdebstrap
Version: 1.3.1-2
Severity: normal
X-Debbugs-Cc: none, Dima Kogan

Hi. This is almost certainly not a bug, but I don't understand the problem,
and would like to ask here. For a little while now I've been using mmdebstrap
to create bookworm tarballs. This works very nicely. As a non-root user I
would do this:

  mmdebstrap     \
    bookworm     \
    image.tar.gz \
    http://deb.debian.org/debian

Today I tried this on a different machine. It's also running Debian, but
something is different about it, because this happens:

  mmdebstrap     \
    bookworm     \
    image.tar.gz \
    http://deb.debian.org/debian
  I: automatically chosen mode: unshare
  I: chroot architecture amd64 is equal to the host's architecture
  I: finding correct signed-by value... done
  I: automatically chosen format: tar
  I: using /tmp/mmdebstrap.hu5TsS_2_C as tempdir
  newuidmap: write to uid_map failed: Operation not permitted
  E: newuidmap 1474581 0 60017 1 1 1476256 1 failed:
  E: child had a non-zero exit status: 1
  E: chown failed

I'm reading the "newuidmap" manpage, but the issue isn't clear to me. In an
attempt to debug, I did this on the working machine:

  strace -f -o /tmp/stlog mmdebstrap

Doing that makes it fail with that error! So adding strace to a working
mmdebstrap invocation causes this error too. If I just run the failing
"newuidmap" command all by itself in the shell, it consistently produces that
error. This makes me think that when mmdebstrap is working for me, it's
somehow not actually running newuidmap. I don't know why.

In all cases I see this:

  I: automatically chosen mode: unshare

The mmdebstrap manpage talks about this option, but it's still not clear to
me. Can you please comment? Is the above supposed to work? If so, any idea
why it would fail on some machines and not others? Does it make sense that
strace breaks it?

Thanks!
Bug#1032478: qemu-user-static: Python intermittently segfaults when emulating amd64 from arm64
Package: qemu-user-static
Version: 1:7.2+dfsg-4
Severity: normal
X-Debbugs-Cc: none, Dima Kogan

Hi. I'm running bookworm on an arm64 machine. I have an amd64 foreign arch
enabled, and running python3:amd64 in a loop sometimes segfaults. I'm doing
this:

  for i in {1..400}; do echo $i; python3 -c "exit()"; done

I see 1-2 crashes usually. The symptoms look exactly like:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988174

But this is emulating arm64->amd64, not the other way, like in that bug. And
the issue in that bug was /proc not being mounted, while it doesn't appear to
make a difference here.

I cannot debug deeper right now, but it appears to be very reproducible. If
nobody beats me to it in the next few weeks, I'll try to dig into it.

Thanks!
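A small refinement to that loop, to count failures instead of eyeballing the scrollback (a sketch; "sh -c 'exit 0'" below stands in for the real "python3 -c 'exit()'" so the snippet runs anywhere):

```shell
# Count non-zero exits over many iterations. On the affected arm64 box,
# substitute the emulated command for the stand-in.
fails=0
for i in $(seq 1 100); do
  sh -c 'exit 0' || fails=$((fails + 1))   # real test: python3 -c "exit()"
done
echo "failures: $fails/100"
# prints: failures: 0/100
```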
Bug#1032275: gcc-12-cross: gfortran-12-ARCH is missing Provides: virtual packages
Package: gfortran-12-aarch64-linux-gnu
Severity: normal
X-Debbugs-Cc: debian-cr...@lists.debian.org, Dima Kogan
Control: affects 983600

Hi. This is the underlying cause of

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983600

Installing libopenmpi-dev:foreign is impossible because it depends on some
virtual gfortran packages that the cross-compiler is not providing. I see
this:

  # dpkg --print-architecture
  amd64
  # dpkg --print-foreign-architectures
  arm64
  # apt install libopenmpi-dev:arm64
  ...
  The following packages have unmet dependencies:
   libopenmpi-dev:arm64 : Depends: gfortran-12:arm64 but it is not going to be installed or
                                   gfortran-mod-15:arm64

  # apt show libopenmpi-dev:arm64
  Package: libopenmpi-dev:arm64
  Depends: gfortran-12 | gfortran-mod-15, ...

So to install libopenmpi-dev:arm64 we need gfortran-mod-15. This is provided
by the native compiler:

  # apt show gfortran-12 | grep Provides
  Provides: fortran95-compiler, gfortran-mod-15

But not by the cross compiler:

  # apt show gfortran-12-aarch64-linux-gnu | grep Provides
  [nothing printed]

Should the cross-compiler Provide this? Or is libopenmpi-dev wrong to Depend
on it?

Thanks
Bug#1031800: mmdebstrap: --keyring doesn't work properly
Hi.

Johannes Schauer Marin Rodrigues writes:

> It seems that /etc/apt/trusted.gpg is a historic relic and keys from it are
> removed by the postinst of debian-archive-keyring with the following code
> comment next to it:
>
> # remove keys from the trusted.gpg file as they are now shipped in fragment
> # files in trusted.gpg.d

OK. Good to know. Thanks for looking it up.

> I probably never should've added the --keyring argument. Its documentation
> already states:
>
>> Since apt only supports a single keyring file and directory, respectively,
>> you can not use this option to pass multiple files and/or directories.

I did see that note. But for most other stuff in /etc the main config lives
in /etc/thing, and optional extra stuff lives in /etc/thing.d/, so my
(incorrect!) assumption was that the main keys live in /etc/apt/trusted.gpg,
and if I added my extra thing to /etc/apt/trusted.gpg.d/ then I'd have the
full set of stuff. If we transitioned to /etc/apt/trusted.gpg.d/ being the
main set of keys, we REALLY should delete /etc/apt/trusted.gpg to avoid any
confusion.

I do think --keyring can be useful if we change what it does. mmdebstrap can
gather all the keys in all the --keyring arguments, put them all into a new
directory, feed that to Dir::Etc::TrustedParts, and put that into
/etc/apt/trusted.gpg.d/ in the final chroot. You can say that without any
--keyring arguments it uses /etc/apt/trusted.gpg and /etc/apt/trusted.gpg.d/,
but with any --keyring you have to specify them all explicitly, including
/etc/apt/

> You can create a directory and copy your keys into it, yes. But the docs for
> --keyring also suggest that you use signed-by instead. Is that not a better
> solution than copying keys from debian-archive-keyring around? If you use
> signed-by you also do not need the --keyring argument anymore.

I saw that too. I had a reason to not do that, but I now think that reason is
wrong.
I was concerned that I could have different keys for signing the repository (InRelease file) and for signing the various packages inside it. But the only key I care about here is the repo-signing key, so that signed-by would have been just fine, I think. I like your documentation patch. And now that I realize that the repository key is the main one to care about, maybe --keyring isn't needed most of the time, as you say. Thanks for looking at this.
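For the record, the signed-by approach the docs suggest scopes the key to a single repository instead of trusting it globally. The sources.list entry would look something like this (the keyring path and URL are placeholders, not something taken from this report):

```
deb [signed-by=/usr/share/keyrings/myrepo-archive-keyring.gpg] http://MY_REPO_DOMAIN/public/debian/bookworm bookworm main
```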
Bug#1031800: mmdebstrap: --keyring doesn't work properly
Johannes Schauer Marin Rodrigues writes:

> The weirdest thing about your bug is that copying your key to
> /etc/apt/trusted.gpg.d/ makes it work for you because when you changed the
> location of Dir::Etc::TrustedParts it just pointed to a different directory.
> Apt should not treat keys differently just because the path to them looks
> different...

Hi Josch. Thanks for looking into this.

You're right, it sounds weird that apt would care about the name of the
directory, so I just poked at it again. It's not actually that weird; I just
wasn't looking at the error messages closely enough. The /etc/apt/sources.list
has two repos:

- main bookworm repo. Signed with the Debian keys
- my repo. Signed with its own key

If I "mmdebstrap --keyring MY-KEY-DIRECTORY" then apt actually does find the
keys to my repo, and it's happy about it. The problem is that it then doesn't
look in /etc/apt/trusted.gpg.d, and it thinks the main bookworm repo is
unverifiable. So there's no mystery here, but my use case still doesn't work.

Some questions, if I may:

- By default apt has /etc/apt/trusted.gpg and /etc/apt/trusted.gpg.d/*. Which
  of these is expected to contain the keys for Debian?

- I want mmdebstrap to use the extra repo and the keys, so what's the right
  way to do that? I guess I need to:

  - Create new key directory
  - Copy /etc/apt/trusted.gpg and /etc/apt/trusted.gpg.d/* and my new keys
    into it
  - Pass that to mmdebstrap --keyring
  - Add my new keys into the chroot with an mmdebstrap hook so that these are
    available inside the chroot

Is that right? If so, can we make this explicit in the manpage?

Thank you very much!
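That workflow sketched as commands (my untested reading of it, not something confirmed by the maintainer; the key file name is a stand-in, and the mmdebstrap invocation at the end is shown as a comment since it needs a real repo):

```shell
# Gather the host's trusted keys plus the repo key into one directory:
merged=$(mktemp -d)
mkdir -p keys && : > keys/myrepo.gpg          # stand-in for the real repo key
cp /etc/apt/trusted.gpg.d/* "$merged"/ 2>/dev/null || true
cp keys/*.gpg "$merged"/

# Then point mmdebstrap at it, and copy the repo key into the chroot with a
# hook so a later "apt update" inside the chroot also works:
#   mmdebstrap --keyring="$merged" \
#     --customize-hook='cp keys/myrepo.gpg "$1"/etc/apt/trusted.gpg.d/' \
#     bookworm chroot.tar http://deb.debian.org/debian ...

ls "$merged"/myrepo.gpg >/dev/null && echo "merged keyring ready"
# prints: merged keyring ready
```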
Bug#1031800: mmdebstrap: --keyring doesn't work properly
Johannes Schauer Marin Rodrigues writes:

> you were now able to reproduce the problem without mmdebstrap but with
> plain apt. This suggests that your problem is not an mmdebstrap
> problem.

OK. Good to know.

>> And I have another related question. I can workaround this by copying my keys
>> to /etc/apt/trusted.gpg.d/ on the host. This makes mmdebstrap happy, but the
>> resulting chroot doesn't have my keys in ITS /etc/apt/trusted.gpg.d. So an
>> "apt update" inside the chroot has the same problem as before: complaining
>> that my repo is unverifiable. The docs aren't clear on whether those keys are
>> supposed to be copied or not. Are they? If not, am I supposed to do that
>> manually via an mmdebstrap hook?
>
> mmdebstrap will not automatically copy the keys it needs to some location into
> the chroot. If your chroot needs extra key material for later "apt update"
> runs it's up to you to copy the keys into the chroot at a location you like.

Thanks.

> I also think I found the source of your problem. I reproduced your issue
> locally like this:
>
> sq key generate --userid "" --export juliet.key.pgp
> sq key extract-cert --output juliet.cert.pgp juliet.key.pgp
> apt-ftparchive release . > Release
> sq sign --signer-key juliet.key.pgp --cleartext-signature --output=InRelease Release
> mmdebstrap --keyring=/home/josch/repo/ --variant=apt unstable /dev/null http://deb.debian.org/debian "deb copy:///home/josch/repo ./"
> [...]
> I: running apt-get update...
> done
> Get:1 copy:/home/josch/repo ./ InRelease [1190 B]
> Get:2 http://deb.debian.org/debian unstable InRelease [180 kB]
> Err:1 copy:/home/josch/repo ./ InRelease
>   The following signatures couldn't be verified because the public key is
>   not available: NO_PUBKEY FC8F3FACCD368D66
> Get:3 http://deb.debian.org/debian unstable/main arm64 Packages [9282 kB]
> Reading package lists...
> W: GPG error: copy:/home/josch/repo ./ InRelease: The following signatures
> couldn't be verified because the public key is not available: NO_PUBKEY
> FC8F3FACCD368D66
> E: The repository 'copy:/home/josch/repo ./ InRelease' is not signed.
>
> This is your problem, right?

This looks exactly like my problem, yes.

> mv juliet.cert.pgp juliet.cert.asc
>
> The clue can be found in the man page of apt-key:
>
>    Alternatively, if all systems which should be using the created keyring
>    have at least apt version >= 1.4 installed, you can use the ASCII
>    armored format with the "asc" extension instead which can be created
>    with gpg --armor --export.
>
> Can you confirm that you also had a ASCII armored key stored with the .gpg
> extension instead of .asc and that changing the extension makes apt happy?

Doesn't work for me. I exported the public key both in binary and ascii
formats, put them both in the keys/ directory (given to --keyring), and I get
the same error as before. The keys are there:

  $ file keys/KEY.{asc,gpg}
  keys/KEY.asc: PGP public key block Public-Key (old)
  keys/KEY.gpg: OpenPGP Public Key Version 4, Created Wed Feb 22 22:07:13 2023, RSA (Encrypt or Sign, 4096 bits); User ID; Signature; OpenPGP Certificate

And once again, I can confirm that the keys are right because copying them
(or just one) to /etc/apt/trusted.gpg.d/ makes it happy.

Is there no way to ask apt for diagnostics? Should I reassign this bug report
to apt?

Thanks
Bug#1031800: mmdebstrap: --keyring doesn't work properly
Hi josch. Thanks for replying!

I just ran your script up to the "apt update", having the shell substitute
$1 <- "bookworm" and $2 <- "DIRECTORY_FOR_CHROOT", and adding my new repo:

  mkdir -p "$2/etc/apt" "$2/var/cache" "$2/var/lib"
  cat << END > "$2/apt.conf"
  Apt::Architecture "$(dpkg --print-architecture)";
  Apt::Architectures "$(dpkg --print-architecture)";
  Dir "$(cd "$2" && pwd)";
  Dir::Etc::Trusted "$(eval "$(apt-config shell v Dir::Etc::Trusted/f)"; printf "$v")";
  Dir::Etc::TrustedParts "$(eval "$(apt-config shell v Dir::Etc::TrustedParts/d)"; printf "$v")";
  END
  echo "deb http://deb.debian.org/debian/ $1 main" > "$2/etc/apt/sources.list"
  echo "deb http://MYREPO $1 main" >> "$2/etc/apt/sources.list"

After I do this, DIRECTORY_FOR_CHROOT/apt.conf contains:

  Apt::Architecture "amd64";
  Apt::Architectures "amd64";
  Dir "/home/dima/cadre/packaging/bookworm2-tst";
  Dir::Etc::Trusted "/etc/apt/trusted.gpg";
  Dir::Etc::TrustedParts "/etc/apt/trusted.gpg.d/";

Note that the Trusted keys are in the host, NOT in the chroot, so naturally
the "apt update" complains about the missing keys. If I change the last line
to

  Dir::Etc::TrustedParts "MY_KEYRING_DIRECTORY";

then "apt update" still complains. And once again sysdig tells me that it IS
actually finding and using my keys. Suggestions?

And I have another related question. I can workaround this by copying my keys
to /etc/apt/trusted.gpg.d/ on the host. This makes mmdebstrap happy, but the
resulting chroot doesn't have my keys in ITS /etc/apt/trusted.gpg.d. So an
"apt update" inside the chroot has the same problem as before: complaining
that my repo is unverifiable. The docs aren't clear on whether those keys are
supposed to be copied or not. Are they? If not, am I supposed to do that
manually via an mmdebstrap hook?

Thanks
Bug#1031800: mmdebstrap: --keyring doesn't work properly
Package: mmdebstrap
Version: 1.3.1-2
Severity: normal
X-Debbugs-Cc: none, Dima Kogan

Hi. I'm using mmdebstrap to bootstrap an install that uses the normal Debian
repos AND my own repo. My repo is signed with a key that lives in
$PWD/keys/something.gpg. I pass --keyring=$PWD/keys as suggested in the docs,
but this doesn't work for some mysterious reason. No clear diagnostics are
available, with --verbose saying nothing extra. This is what I see:

  $ sudo mmdebstrap \
      --architectures=arm64 \
      --keyring=$PWD/keys \
      --aptopt 'Acquire::https::MY_REPO_DOMAIN::Verify-Peer "false"' \
      bookworm \
      bookworm-tst \
      http://deb.debian.org/debian \
      http://MY_REPO_DOMAIN/public/debian/bookworm
  I: automatically chosen mode: root
  I: arm64 cannot be executed natively, but transparently using qemu-user binfmt emulation
  I: finding correct signed-by value...
  I: automatically chosen format: directory
  I: running apt-get update...
  Get:1 https://MY_REPO_DOMAIN/public/debian/bookworm bookworm InRelease [5136 B]
  Get:2 http://deb.debian.org/debian bookworm InRelease [177 kB]
  Err:1 https://MY_REPO_DOMAIN/public/debian/bookworm bookworm InRelease
    The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 221CA67104340B68
  Get:3 http://deb.debian.org/debian bookworm/main arm64 Packages [8909 kB]
  Reading package lists...
  W: GPG error: https://MY_REPO_DOMAIN/public/debian/bookworm bookworm InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 221CA67104340B68
  E: The repository 'http://MY_REPO_DOMAIN/public/debian/bookworm bookworm InRelease' is not signed.
  E: apt-get update --error-on=any -oAPT::Status-Fd=<$fd> -oDpkg::Use-Pty=false failed
  I: main() received signal PIPE: waiting for setup...
  E: mmdebstrap failed to run

This should work, but it doesn't. I used sysdig to confirm that something is
indeed looking in $PWD/keys/ and something is indeed calling read() on the
relevant key.
I have also confirmed that if I copy my keys to /etc/apt/trusted.gpg.d/ then
it does work properly. But I don't want to do that. Ideally I'd like
mmdebstrap to grab all the keys in $PWD/keys and add them to
/etc/apt/trusted.gpg.d/ in the chroot, but NOT on the host machine. Any clear
way to do that? Any debugging tricks I'm missing?

Thanks!

-- System Information:
Debian Release: bookworm/sid
  APT prefers unstable
  APT policy: (800, 'unstable'), (700, 'testing'), (500, 'unstable-debug'), (500, 'stable')
merged-usr: no
Architecture: amd64 (x86_64)
Foreign Architectures: armhf, armel

Kernel: Linux 6.1.0-2-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages mmdebstrap depends on:
ii  apt      2.5.2
ii  perl     5.36.0-4
ii  python3  3.10.6-1

Versions of packages mmdebstrap recommends:
pn  arch-test
pn  fakechroot
ii  fakeroot             1.29-1
ii  gpg                  2.2.35-3
ii  libdistro-info-perl  1.1
ii  libdpkg-perl         1.21.19
ii  mount                2.38.1-1
pn  uidmap

Versions of packages mmdebstrap suggests:
pn  apt-transport-tor
ii  apt-utils          2.5.2
ii  binfmt-support     2.2.2-1
ii  ca-certificates    20211016
ii  debootstrap        1.0.127
ii  distro-info-data   0.54
ii  dpkg-dev           1.21.19
pn  genext2fs
ii  perl-doc           5.36.0-4
pn  qemu-user
ii  qemu-user-static   1:7.0+dfsg-7+b1
pn  squashfs-tools-ng

-- no debconf information
Bug#1031420: Acknowledgement (libgoogle-glog-dev: CMake config doesn't work out of the box)
Sorry for the repeated emails. I figured out the problem and fixed it.

This is a bug introduced by usrmerge. The necessary module path was already
being set, but it was trying to find /usr/share/ by a relative traversal from
/lib/. It was expecting /usr/lib, not /lib, so the relative path had the
wrong number of ../ in it.

The attached patch fixes the issue for us: Debian can just use the correct
absolute path. Upstream would presumably need to do something else, but they
can figure it out.

Thanks.

--- a/CMakeLists.txt.original	2023-02-16 16:14:41.891485974 -0800
+++ b/CMakeLists.txt	2023-02-16 16:15:57.970855143 -0800
@@ -1034,24 +1034,14 @@
 get_filename_component (_PREFIX "${CMAKE_INSTALL_PREFIX}" ABSOLUTE)
 
-# Directory containing the find modules relative to the config install
-# directory.
-file (RELATIVE_PATH glog_REL_CMake_MODULES
-  ${_PREFIX}/${_glog_CMake_INSTALLDIR}
-  ${_PREFIX}/${_glog_CMake_DATADIR}/glog-modules.cmake)
-
-get_filename_component (glog_REL_CMake_DATADIR ${glog_REL_CMake_MODULES}
-  DIRECTORY)
-
-set (glog_FULL_CMake_DATADIR
-  ${CMAKE_CURRENT_BINARY_DIR}/${_glog_CMake_DATADIR})
+set (glog_FULL_CMake_DATADIR /usr/share/glog/cmake)
 
 configure_file (glog-modules.cmake.in
   ${CMAKE_CURRENT_BINARY_DIR}/glog-modules.cmake @ONLY)
 
 install (CODE
 "
-set (glog_FULL_CMake_DATADIR \"\\\${CMAKE_CURRENT_LIST_DIR}/${glog_REL_CMake_DATADIR}\")
+set (glog_FULL_CMake_DATADIR \"/usr/share/glog/cmake\")
 set (glog_DATADIR_DESTINATION ${_glog_CMake_INSTALLDIR})
 
 if (NOT IS_ABSOLUTE ${_glog_CMake_INSTALLDIR})
Bug#1031420: Acknowledgement (libgoogle-glog-dev: CMake config doesn't work out of the box)
I can "fix" this by adding to the top of
/usr/lib/x86_64-linux-gnu/cmake/glog/glog-config.cmake:

  set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} /usr/share/glog/cmake/)

But this doesn't sound right. Maybe we should be shipping
/usr/share/glog/cmake/* someplace else?
Bug#1031420: libgoogle-glog-dev: CMake config doesn't work out of the box
Package: libgoogle-glog-dev
Version: 0.6.0-1
Severity: normal
X-Debbugs-Cc: none, Dima Kogan

Hi. I'm not a cmake expert, so this might not be a bug. It might also not be
a bug in THIS package. Apologies if that is the case.

The libgoogle-glog-dev package includes cmake scripts in
/lib/ARCH/cmake/glog/. But they don't work by default. I have a tiny
CMakeLists.txt:

  cmake_minimum_required(VERSION 3.14)
  project(test)
  find_package(glog REQUIRED)

This happens:

  /tmp/glog-test$ cmake .
  -- The C compiler identification is GNU 12.2.0
  -- The CXX compiler identification is GNU 12.2.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/lib/ccache/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/lib/ccache/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  CMake Error at /usr/share/cmake-3.25/Modules/CMakeFindDependencyMacro.cmake:47 (find_package):
    By not providing "FindUnwind.cmake" in CMAKE_MODULE_PATH this project has
    asked CMake to find a package configuration file provided by "Unwind", but
    CMake did not find one.

    Could not find a package configuration file provided by "Unwind"
    (requested version 1.3.2) with any of the following names:

      UnwindConfig.cmake
      unwind-config.cmake

    Add the installation prefix of "Unwind" to CMAKE_PREFIX_PATH or set
    "Unwind_DIR" to a directory containing one of the above files. If
    "Unwind" provides a separate development package or SDK, be sure it has
    been installed.
  Call Stack (most recent call first):
    /lib/x86_64-linux-gnu/cmake/glog/glog-config.cmake:35 (find_dependency)
    CMakeLists.txt:3 (find_package)

  -- Configuring incomplete, errors occurred!
  See also "/tmp/glog-test/CMakeFiles/CMakeOutput.log".

It DOES work if I invoke it like this:

  $ cmake -DCMAKE_MODULE_PATH=/usr/share/glog/cmake/ .
I shouldn't need to do that. The package should configure everything by itself. Thanks!

-- System Information:
Debian Release: bookworm/sid
  APT prefers unstable
  APT policy: (800, 'unstable'), (700, 'testing'), (500, 'unstable-debug'), (500, 'stable')
merged-usr: no
Architecture: amd64 (x86_64)
Foreign Architectures: armhf, armel

Kernel: Linux 6.1.0-2-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages libgoogle-glog-dev depends on:
ii  libgflags-dev      2.2.2-2
ii  libgoogle-glog0v6  0.6.0-1
ii  libunwind-dev      1.3.2-2

libgoogle-glog-dev recommends no packages.
libgoogle-glog-dev suggests no packages.

-- no debconf information
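For reference, the workaround above suggests what a proper fix could look like: the packaged glog-config.cmake could make the directory containing FindUnwind.cmake visible before calling find_dependency(Unwind), instead of requiring every consumer to set CMAKE_MODULE_PATH by hand. A minimal sketch (hypothetical; the exact path is an assumption taken from the workaround, and the real fix belongs in the glog packaging, not in user projects):

  # Hypothetical fragment near the top of glog-config.cmake, so that
  # find_dependency(Unwind) can locate the FindUnwind.cmake module
  # shipped by the package. The path is assumed from the workaround above.
  list(APPEND CMAKE_MODULE_PATH "/usr/share/glog/cmake")
  find_dependency(Unwind 1.3.2)

With something like that in place, the plain `cmake .` invocation shown earlier would succeed without extra flags.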
Bug#982864: More info
The issue is a failing test in test/run_tests.bash:

  head fish1.png > ${tmpdir}/fake.png
  "$pdiff" --verbose fish1.png ${tmpdir}/fake.png 2>&1 | grep -q 'Failed to load'
  rm -f ${tmpdir}/fake.png

Here it's making sure that we are able to detect a corrupt .png file, and to throw an error. The actual image load is done by libfreeimage. For whatever reason, on amd64 (and the other non-breaking platforms) FreeImage_Load() returns NULL when given this corrupt file, which is what the test expects. But on the failing platforms it throws a C++ exception instead. The test doesn't catch this exception and crashes, causing this FTBFS.

I tried to catch this exception nicely with the attached patch, but for some reason it doesn't work. Since this problem isn't in the main part of the library, we should simply disable this particular test to resolve the FTBFS and this RC bug. If I don't hear back in a few days, I'm going to do an NMU with this patch. Thanks.

diff --git a/rgba_image.cpp b/rgba_image.cpp
index 2ba9a67..b91407c 100644
--- a/rgba_image.cpp
+++ b/rgba_image.cpp
@@ -147,10 +147,17 @@ namespace pdiff
     }
 
     FIBITMAP *free_image = nullptr;
-    if (auto temporary = FreeImage_Load(file_type, filename.c_str(), 0))
+    try
     {
-        free_image = FreeImage_ConvertTo32Bits(temporary);
-        FreeImage_Unload(temporary);
+        if (auto temporary = FreeImage_Load(file_type, filename.c_str(), 0))
+        {
+            free_image = FreeImage_ConvertTo32Bits(temporary);
+            FreeImage_Unload(temporary);
+        }
+    }
+    catch (...)
+    {
+        throw RGBImageException("Failed to load the image " + filename);
     }
 
     if (not free_image)
     {
diff --git a/test/run_tests.bash b/test/run_tests.bash
index 757a164..2b25c29 100755
--- a/test/run_tests.bash
+++ b/test/run_tests.bash
@@ -84,10 +84,6 @@ rm -f diff.png
 ls ${tmpdir}/diff.png
 rm -f ${tmpdir}/diff.png
 
-head fish1.png > ${tmpdir}/fake.png
-"$pdiff" --verbose fish1.png ${tmpdir}/fake.png 2>&1 | grep -q 'Failed to load'
-rm -f ${tmpdir}/fake.png
-
 mkdir -p ${tmpdir}/unwritable.png
 "$pdiff" --output ${tmpdir}/unwritable.png --verbose fish{1,2}.png 2>&1 | grep -q 'Failed to save'
 rmdir ${tmpdir}/unwritable.png
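As an alternative to deleting the test outright, the corrupt-file check could be made tolerant of pdiff dying abnormally, so that an uncaught exception registers as "corruption detected" instead of killing the whole suite. Here is a sketch of that pattern; it uses a stub function in place of the real "$pdiff" binary, and the stub's diagnostic text and exit status are assumptions for illustration only:

```shell
#!/bin/sh
# Sketch: corrupt-file test that tolerates the checker crashing.
tmpdir=$(mktemp -d)
head -c 100 /dev/urandom > "$tmpdir/fake.png"   # a deliberately corrupt "image"

# Stand-in for the real "$pdiff" (assumption): prints the expected
# diagnostic and exits the way an aborted process would (128+SIGABRT=134).
pdiff_stub() { echo 'Failed to load' >&2; return 134; }

# Capture both output and exit status, then accept either the explicit
# diagnostic or an abnormal exit. A thrown-and-uncaught C++ exception
# would then count as detection rather than crashing the test suite.
out=$(pdiff_stub --verbose fish1.png "$tmpdir/fake.png" 2>&1); status=$?
if printf '%s\n' "$out" | grep -q 'Failed to load' || [ "$status" -ge 128 ]; then
    echo 'corrupt file detected'
else
    echo 'ERROR: corruption not detected'
fi
rm -rf "$tmpdir"
```

The key difference from the original test is that the pipeline's exit status is inspected explicitly rather than being left to crash the script.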
Bug#1031098: Acknowledgement (ITP: gtsam -- sensor fusion using factor graphs)
A mostly complete packaging is available here:

  https://salsa.debian.org/science-team/gtsam

I still need to do a few things. Then I'll push it into experimental, and to unstable once the bookworm transition is complete.