Bug#1023870: dpkg: Problems in buildds due to slow compression

2022-11-12 Thread Guillem Jover
Hi!

On Sun, 2022-11-13 at 00:17:36 +0100, Aurelien Jarno wrote:
> On 2022-11-12 22:28, Guillem Jover wrote:
> > On Fri, 2022-11-11 at 19:15:59 +0100, Manuel A. Fernandez Montecelo wrote:
> > > Package: dpkg
> > > Version: 1.21.9
> > > Severity: normal
> > > X-Debbugs-Cc: m...@debian.org, debian-wb-t...@lists.debian.org
> > 
> > > After some investigation by aurel32 and myself, this was traced back
> > > to the commit f8d254943051e085040367d689048c00f31514c3 [2], in which
> > > the calculation of the memory that can be used, to determine the
> > > number of threads to use, was changed from half of the physical mem
> > > to be based on the memory available.
> > 
> > Ah, thanks for tracking this down! I think the problem is the usual
> > "available" memory does not really mean what people think it means. :/
> > And I unfortunately missed that (even though I was aware of it) when
> > reviewing the patch.
> > 
> > Attached is something I just quickly prepared, which I'll clean up and
> > merge for the upcoming 1.21.10. Let me know if that solves the issue
> > for you, otherwise we'd need to look for further changes.
> 
> Thanks for providing a patch. I have not been able yet to try it for the
> case where we have found the issue, i.e. building linux. However I have
> tried to setup a similar environment:

> - I took a just booted VM with 4 GB RAM, 4 GB swap and 4 GB tmpfs, and
>   very few things running on it.
> - I filled the tmpfs with 4 GB of random data, which means that after
>   moving the content of the tmpfs to the swap, 4 GB could still be used
>   without issue.
> - I ended up with the following /proc/meminfo:
> MemTotal:        3951508 kB
> MemFree:          130976 kB
> MemAvailable:      10584 kB
> Buffers:            2448 kB
> Cached:          3694676 kB
> SwapCached:        12936 kB
> Active:          3111920 kB
> Inactive:         610376 kB
> Active(anon):    3102668 kB
> Inactive(anon):   606952 kB
> Active(file):       9252 kB
> Inactive(file):     3424 kB
> Unevictable:           0 kB
> Mlocked:               0 kB
> SwapTotal:       4194300 kB
> SwapFree:        3777400 kB
> Zswap:                 0 kB
> Zswapped:              0 kB
> Dirty:                 0 kB
> Writeback:             0 kB
> AnonPages:         12960 kB
> Mapped:             6700 kB
> Shmem:           3684416 kB
> KReclaimable:      27616 kB
> Slab:              54652 kB
> SReclaimable:      27616 kB
> SUnreclaim:        27036 kB
> KernelStack:        2496 kB
> PageTables:         1516 kB
> NFS_Unstable:          0 kB
> Bounce:                0 kB
> WritebackTmp:          0 kB
> CommitLimit:     6170052 kB
> Committed_AS:    4212940 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:       16116 kB
> VmallocChunk:          0 kB
> Percpu:             2288 kB
> HardwareCorrupted:     0 kB
> AnonHugePages:         0 kB
> ShmemHugePages:        0 kB
> ShmemPmdMapped:        0 kB
> FileHugePages:         0 kB
> FilePmdMapped:         0 kB
> HugePages_Total:       0
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> Hugetlb:               0 kB
> DirectMap4k:      110452 kB
> DirectMap2M:     5132288 kB
> DirectMap1G:     5242880 kB

> With the current version of dpkg, it means it considers 10584 kB to be
> available (note however that there are 130976 kB of unused physical RAM).
> With your patch, it's a bit better, as it would be 123408 kB. Still far
> less than what the VM is capable of.

Err sorry, the patch was computing the used memory and not the truly
available one! The updated patch should do better. :)

Thanks,
Guillem
diff --git i/lib/dpkg/compress.c w/lib/dpkg/compress.c
index 8cfba80cc..9b02b48b7 100644
--- i/lib/dpkg/compress.c
+++ w/lib/dpkg/compress.c
@@ -605,8 +605,14 @@ filter_lzma_error(struct io_lzma *io, lzma_ret ret)
  * page cache may be purged, not everything will be reclaimed that might be
  * reclaimed, watermarks are considered.
  */
-static const char str_MemAvailable[] = "MemAvailable";
-static const size_t len_MemAvailable = sizeof(str_MemAvailable) - 1;
+
+struct mem_field {
+	const char *name;
+	ssize_t len;
+	int tag;
+	uint64_t *var;
+};
+#define MEM_FIELD(name, tag, var) name, sizeof(name) - 1, tag, &(var)
 
 static int
 get_avail_mem(uint64_t *val)
@@ -615,6 +621,14 @@ get_avail_mem(uint64_t *val)
 	char *str;
 	ssize_t bytes;
 	int fd;
+	uint64_t mem_free, mem_buffers, mem_cached;
+	struct mem_field fields[] = {
+		{ MEM_FIELD("MemFree", 0x1, mem_free) },
+		{ MEM_FIELD("Buffers", 0x2, mem_buffers) },
+		{ MEM_FIELD("Cached", 0x4, mem_cached) },
+	};
+	const int want_tags = 0x7;
+	int seen_tags = 0;
 
 	*val = 0;
 
@@ -632,14 +646,23 @@ get_avail_mem(uint64_t *val)
 
 	str = buf;
 	while (1) {
+		struct mem_field *field = NULL;
 		char *end;
+		size_t f;
 
 		end = strchr(str, ':');
 		if (end == 0)
 			break;
 
-		if ((end - str) == len_MemAvailable &&
-		strncmp(str, str_MemAvailable, len_MemAvailable) == 0) {
+		for (f = 
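The attached diff is truncated in the archive. As a rough, self-contained sketch of what the updated heuristic presumably computes (truly available memory as MemFree + Buffers + Cached, matching the fields declared above), here is a hypothetical illustration; the function name and the string-scanning approach are not dpkg's actual code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch (not dpkg's actual code): sum the MemFree,
 * Buffers and Cached fields from a /proc/meminfo-style buffer.
 * Values in the buffer are in kB; the result is in bytes. */
static int
sum_avail_mem(const char *buf, uint64_t *val)
{
	static const char *const names[] = { "MemFree", "Buffers", "Cached" };
	uint64_t sum = 0;
	size_t f, nfound = 0;

	for (f = 0; f < sizeof(names) / sizeof(names[0]); f++) {
		size_t len = strlen(names[f]);
		const char *p = buf;

		while ((p = strstr(p, names[f])) != NULL) {
			/* Only match a whole field name at the start of a
			 * line, so "Cached" does not match "SwapCached". */
			if ((p == buf || p[-1] == '\n') && p[len] == ':') {
				uintmax_t num;

				if (sscanf(p + len + 1, "%ju", &num) == 1) {
					sum += (uint64_t)num * 1024;
					nfound++;
				}
				break;
			}
			p += len;
		}
	}

	if (nfound != 3)
		return -1;
	*val = sum;
	return 0;
}
```

On the meminfo dump quoted above, this would yield roughly 3.8 GB instead of the 10584 kB that MemAvailable reports, which is the behaviour difference being discussed.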

Bug#1023870: dpkg: Problems in buildds due to slow compression

2022-11-12 Thread Adrian Bunk
On Sat, Nov 12, 2022 at 11:42:20PM +0100, Aurelien Jarno wrote:
>...
> On 2022-11-12 23:04, Adrian Bunk wrote:
>...
> > Sebastian, was there any real-world problem motivating your commit,
> > or did it just sound more correct?
> > 
> > With default settings there should be < 100 MB/core RAM usage,
> > and even with "xz -9"[1] RAM usage should be < 700 MB/core.
> 
> I think the use case there is the desktop: it's preferable to use only
> 2 threads on a 4-core processor rather than sending Firefox to swap.
> That said, that heuristic is not necessarily the best for the build
> daemons.
>...

On a desktop that managed to successfully build a package large enough
that multithreaded compression is even possible, why does it matter
whether compression needs 200 MB or 400 MB of RAM?

You can create a huge data package where the build just copies the data 
from the source package and then compresses it with "xz -9", but the common 
case for big packages is C++ code, where even in a single-threaded build 
g++ might need > 2 GB, and which is then compressed with the default "xz -6".

IOW, if building on a desktop that is so RAM-starved that this code 
would matter, I would expect that Firefox was already sent to swap
during compilation.

> Regards
> Aurelien

cu
Adrian



Bug#1023870: dpkg: Problems in buildds due to slow compression

2022-11-12 Thread Aurelien Jarno
Hi,

On 2022-11-12 22:28, Guillem Jover wrote:
> Hi!
> 
> On Fri, 2022-11-11 at 19:15:59 +0100, Manuel A. Fernandez Montecelo wrote:
> > Package: dpkg
> > Version: 1.21.9
> > Severity: normal
> > X-Debbugs-Cc: m...@debian.org, debian-wb-t...@lists.debian.org
> 
> > After some investigation by aurel32 and myself, this was traced back to
> > the commit f8d254943051e085040367d689048c00f31514c3 [2], in which the
> > calculation of the memory that can be used, to determine the number of
> > threads to use, was changed from half of the physical mem to be based
> > on the memory available.
> 
> Ah, thanks for tracking this down! I think the problem is the usual
> "available" memory does not really mean what people think it means. :/
> And I unfortunately missed that (even though I was aware of it) when
> reviewing the patch.
> 
> Attached is something I just quickly prepared, which I'll clean up and
> merge for the upcoming 1.21.10. Let me know if that solves the issue
> for you, otherwise we'd need to look for further changes.

Thanks for providing a patch. I have not been able yet to try it for the
case where we have found the issue, i.e. building linux. However I have
tried to setup a similar environment:
- I took a just booted VM with 4 GB RAM, 4 GB swap and 4 GB tmpfs, and very few
  things running on it.
- I filled the tmpfs with 4 GB of random data, which means that after
  moving the content of the tmpfs to the swap, 4 GB could still be used
  without issue.
- I ended up with the following /proc/meminfo:
MemTotal:        3951508 kB
MemFree:          130976 kB
MemAvailable:      10584 kB
Buffers:            2448 kB
Cached:          3694676 kB
SwapCached:        12936 kB
Active:          3111920 kB
Inactive:         610376 kB
Active(anon):    3102668 kB
Inactive(anon):   606952 kB
Active(file):       9252 kB
Inactive(file):     3424 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4194300 kB
SwapFree:        3777400 kB
Zswap:                 0 kB
Zswapped:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         12960 kB
Mapped:             6700 kB
Shmem:           3684416 kB
KReclaimable:      27616 kB
Slab:              54652 kB
SReclaimable:      27616 kB
SUnreclaim:        27036 kB
KernelStack:        2496 kB
PageTables:         1516 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6170052 kB
Committed_AS:    4212940 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       16116 kB
VmallocChunk:          0 kB
Percpu:             2288 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      110452 kB
DirectMap2M:     5132288 kB
DirectMap1G:     5242880 kB

With the current version of dpkg, it means it considers 10584 kB to be
available (note however that there are 130976 kB of unused physical RAM).
With your patch, it's a bit better, as it would be 123408 kB. Still far
less than what the VM is capable of.

For our use case, I wonder if the memory contained in Shmem (which in that case
maps to the memory used for the tmpfs) should be considered as available, as it
could be moved to the swap easily.

Cheers
Aurelien

-- 
Aurelien Jarno  GPG: 4096R/1DDD8C9B
aurel...@aurel32.net http://www.aurel32.net



Bug#1023870: dpkg: Problems in buildds due to slow compression

2022-11-12 Thread Aurelien Jarno
Hi,

On 2022-11-12 23:04, Adrian Bunk wrote:
> On Fri, Nov 11, 2022 at 07:15:59PM +0100, Manuel A. Fernandez Montecelo wrote:
> >...
> > The origins of this bug report are because there are sometimes problems
> > building packages in buildds, the compression phase is very slow and
> > sometimes the build is aborted due to inactivity:
> > 
> >   E: Build killed with signal TERM after 300 minutes of inactivity [1]
> > 
> > After some investigation by aurel32 and myself, this was traced back to
> > the commit f8d254943051e085040367d689048c00f31514c3 [2], in which the
> > calculation of the memory that can be used, to determine the number of
> > threads to use, was changed from half of the physical mem to be based
> > on the memory available.
> > 
> > For example in buildds with a not-so-large amount of RAM, using part of
> > it for tmpfs (which is how buildds are set up), the reported available
> > memory may not be very large. For example it could be MemAvailable=2GB
> > for 16GB of physical RAM in the machine. In the dpkg code, for the
> > purposes of the calculation of how much memory can be used, the result
> > appears to be half of the MemAvailable [3], so only 1GB.
> > 
> > According to the tables in the xz man page, for compression the
> > algorithm can use 674MB. So as a result, dpkg-deb uses single-threaded
> > compression, even if it could easily use 2-3 threads and still use only
> > RAM; or use the full number of threads in the machine (e.g. 4 or 8)
> > even if it means swapping out some of the content of tmpfs -- which is
> > not a problem in most cases for buildds, especially if using fast
> > disks.
> 
> I don't see src:linux changing the compression from the default level 6,
> so that should be < 100 MB per core.
> 
> I am wondering whether these buildds are running into the 128 MiB
> default at the bottom of the commit, when reading from /proc fails for
> some reason.

No, the problem they have is that the build is done in an 80GB tmpfs,
with 100GB of swap. This works because the kernel slowly moves things to
swap when there is no RAM available. However, for packages needing more
build space than the physical RAM, the kernel seems to start moving data
to the swap only when MemAvailable is on the order of 100MB to 200MB.
dpkg therefore considers that there is no memory available; that said,
after moving a bit more data from the tmpfs to the swap, plenty of
memory would be available for dpkg to do the parallel compression.

To make a comparison, if GCC needs 4GB of RAM, it just allocates it, and
the kernel takes care of moving things to swap. In the worst case the
OOM killer just kills GCC. dpkg, on the other hand, looks at how much
memory is available without moving things to swap, and only uses that.

> > As a result of all this, it is taking upwards of 600 mins of CPU time
> > to compress recent linux-image-*-dbg packages in the buildds of the
> > riscv64 architecture at the moment, so when using 2 threads or less,
> > it's guaranteed to be aborted.
> > 
> > But this also affects other arches and other packages in other ways,
> > at least by making the build needlessly slow in many cases.
> 
> There is an even more worrisome issue, from xz(1):
>   If the specified memory usage limit is exceeded when decompressing, xz
>   will display an error and decompressing the file will fail. If the
>   limit is exceeded when compressing, xz will try to scale the settings
>   down so that the limit is no longer exceeded
> 
> Different compression of packages in the archive based on what is in 
> /proc on the buildd is not desirable.

A *very* quick look at the xz source seems to show it looks at the
physical memory here, so we're good.
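As a point of reference, total physical RAM can be queried without reading /proc at all; the sketch below uses POSIX sysconf() for that (the _SC_PHYS_PAGES constant is a widely supported extension on Linux/glibc, not strict POSIX), which is the kind of total that xz's default memory limit is based on, unaffected by a low MemAvailable:

```c
#include <assert.h>
#include <stdint.h>
#include <unistd.h>

/* Illustration only: total physical RAM, queried without /proc/meminfo.
 * _SC_PHYS_PAGES is a common glibc/Linux extension, not strict POSIX. */
static uint64_t
get_phys_mem(void)
{
	long pages = sysconf(_SC_PHYS_PAGES);
	long page_size = sysconf(_SC_PAGE_SIZE);

	if (pages < 0 || page_size < 0)
		return 0;	/* unknown on this system */
	return (uint64_t)pages * (uint64_t)page_size;
}
```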

> > It's not very clear what could be the solution for this, as the
> > default could be OK for desktops and many machines in which there's
> > plenty of available RAM, but this is not the case for all of our
> > buildds. It might not be possible to detect which is the best from
> > within dpkg.
> >...
> 
> Sebastian, was there any real-world problem motivating your commit,
> or did it just sound more correct?
> 
> With default settings there should be < 100 MB/core RAM usage,
> and even with "xz -9"[1] RAM usage should be < 700 MB/core.

I think the use case there is the desktop: it's preferable to use only 2
threads on a 4-core processor rather than sending Firefox to swap. That
said, that heuristic is not necessarily the best for the build daemons.

> These numbers shouldn't be a problem on buildds that successfully
> manage to build packages large enough that multithreaded compression
> is even possible.

Yep, we always have at least 1GB per core on all our buildds.
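The sizing being discussed, threads bounded both by the number of cores and by how many per-thread encoder buffers fit in the memory budget, can be sketched as a simple clamp. This helper is a hypothetical illustration, not dpkg's implementation, and the ~100 MiB/thread figure is taken from the numbers quoted in this thread for the default compression level:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper (not dpkg's code): pick an xz thread count
 * bounded by the core count and by how many ~100 MiB per-thread
 * buffers (the default-level figure quoted in this thread) fit in
 * the memory budget. */
static unsigned int
pick_threads(uint64_t mem_limit, unsigned int ncores)
{
	const uint64_t per_thread = 100 * 1024 * 1024;	/* ~ "xz -6" per core */
	uint64_t fit = mem_limit / per_thread;

	if (fit < 1)
		return 1;		/* always compress with one thread */
	if (fit > ncores)
		return ncores;		/* no point in more threads than cores */
	return (unsigned int)fit;
}
```

With "at least 1GB per core", this never limits the buildds below their core count, which matches Adrian's point above.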

Regards
Aurelien

-- 
Aurelien Jarno  GPG: 4096R/1DDD8C9B
aurel...@aurel32.net http://www.aurel32.net



Bug#1023870: dpkg: Problems in buildds due to slow compression

2022-11-12 Thread Guillem Jover
Hi!

On Fri, 2022-11-11 at 19:15:59 +0100, Manuel A. Fernandez Montecelo wrote:
> Package: dpkg
> Version: 1.21.9
> Severity: normal
> X-Debbugs-Cc: m...@debian.org, debian-wb-t...@lists.debian.org

> After some investigation by aurel32 and myself, this was traced back to
> the commit f8d254943051e085040367d689048c00f31514c3 [2], in which the
> calculation of the memory that can be used, to determine the number of
> threads to use, was changed from half of the physical mem to be based on
> the memory available.

Ah, thanks for tracking this down! I think the problem is the usual
"available" memory does not really mean what people think it means. :/
And I unfortunately missed that (even though I was aware of it) when
reviewing the patch.

Attached is something I just quickly prepared, which I'll clean up and
merge for the upcoming 1.21.10. Let me know if that solves the issue
for you, otherwise we'd need to look for further changes.

Thanks,
Guillem
diff --git i/lib/dpkg/compress.c w/lib/dpkg/compress.c
index 8cfba80cc..7f9345186 100644
--- i/lib/dpkg/compress.c
+++ w/lib/dpkg/compress.c
@@ -605,8 +605,14 @@ filter_lzma_error(struct io_lzma *io, lzma_ret ret)
  * page cache may be purged, not everything will be reclaimed that might be
  * reclaimed, watermarks are considered.
  */
-static const char str_MemAvailable[] = "MemAvailable";
-static const size_t len_MemAvailable = sizeof(str_MemAvailable) - 1;
+
+struct mem_field {
+	const char *name;
+	ssize_t len;
+	int tag;
+	uint64_t *var;
+};
+#define MEM_FIELD(name, tag, var) name, sizeof(name) - 1, tag, &(var)
 
 static int
 get_avail_mem(uint64_t *val)
@@ -615,6 +621,15 @@ get_avail_mem(uint64_t *val)
 	char *str;
 	ssize_t bytes;
 	int fd;
+	uint64_t mem_total, mem_free, mem_buffers, mem_cached;
+	struct mem_field fields[] = {
+		{ MEM_FIELD("MemTotal", 0x1, mem_total) },
+		{ MEM_FIELD("MemFree", 0x2, mem_free) },
+		{ MEM_FIELD("Buffers", 0x4, mem_buffers) },
+		{ MEM_FIELD("Cached", 0x8, mem_cached) },
+	};
+	const int want_tags = 0xf;
+	int seen_tags = 0;
 
 	*val = 0;
 
@@ -632,14 +647,23 @@ get_avail_mem(uint64_t *val)
 
 	str = buf;
 	while (1) {
+		struct mem_field *field = NULL;
 		char *end;
+		size_t f;
 
 		end = strchr(str, ':');
 		if (end == 0)
 			break;
 
-		if ((end - str) == len_MemAvailable &&
-		strncmp(str, str_MemAvailable, len_MemAvailable) == 0) {
+		for (f = 0; f < array_count(fields); f++) {
+			if ((end - str) == fields[f].len &&
+			    strncmp(str, fields[f].name, fields[f].len) == 0) {
+				field = &fields[f];
+				break;
+			}
+		}
+
+		if (field) {
 			intmax_t num;
 
 			str = end + 1;
@@ -657,16 +681,25 @@ get_avail_mem(uint64_t *val)
 			/* This should not overflow, but just in case. */
 			if (num < (INTMAX_MAX / 1024))
 				num *= 1024;
-			*val = num;
-			return 0;
+
+			*field->var = num;
+			seen_tags |= field->tag;
 		}
 
+		if (seen_tags == want_tags)
+			break;
+
 		end = strchr(end + 1, '\n');
 		if (end == 0)
 			break;
 		str = end + 1;
 	}
-	return -1;
+
+	if (seen_tags != want_tags)
+		return -1;
+
+	*val = mem_total - (mem_free + mem_buffers + mem_cached);
+	return 0;
 }
 # else
 static int


Bug#1023870: dpkg: Problems in buildds due to slow compression

2022-11-12 Thread Adrian Bunk
On Fri, Nov 11, 2022 at 07:15:59PM +0100, Manuel A. Fernandez Montecelo wrote:
>...
> The origins of this bug report are because there are sometimes problems
> building packages in buildds, the compression phase is very slow and
> sometimes the build is aborted due to inactivity:
> 
>   E: Build killed with signal TERM after 300 minutes of inactivity [1]
> 
> After some investigation by aurel32 and myself, this was traced back to
> the commit f8d254943051e085040367d689048c00f31514c3 [2], in which the
> calculation of the memory that can be used, to determine the number of
> threads to use, was changed from half of the physical mem to be based on
> the memory available.
> 
> For example in buildds with a not-so-large amount of RAM, using part of
> it for tmpfs (which is how buildds are set up), the reported available
> memory may not be very large. For example it could be MemAvailable=2GB
> for 16GB of physical RAM in the machine. In the dpkg code, for the
> purposes of the calculation of how much memory can be used, the result
> appears to be half of the MemAvailable [3], so only 1GB.
> 
> According to the tables in the xz man page, for compression the
> algorithm can use 674MB. So as a result, dpkg-deb uses single-threaded
> compression, even if it could easily use 2-3 threads and still use only
> RAM; or use the full number of threads in the machine (e.g. 4 or 8) even
> if it means swapping out some of the content of tmpfs -- which is not a
> problem in most cases for buildds, especially if using fast disks.

I don't see src:linux changing the compression from the default level 6,
so that should be < 100 MB per core.

I am wondering whether these buildds are running into the 128 MiB default 
at the bottom of the commit, when reading from /proc fails for some 
reason.

> As a result of all this, it is taking upwards of 600 mins of CPU time to
> compress recent linux-image-*-dbg packages in the buildds of the riscv64
> architecture at the moment, so when using 2 threads or less, it's
> guaranteed to be aborted.
> 
> But this also affects other arches and other packages in other ways, at
> least by making the build needlessly slow in many cases.

There is an even more worrisome issue, from xz(1):
  If the specified memory usage limit is exceeded when decompressing, xz
  will display an error and decompressing the file will fail. If the
  limit is exceeded when compressing, xz will try to scale the settings
  down so that the limit is no longer exceeded

Different compression of packages in the archive based on what is in 
/proc on the buildd is not desirable.

> It's not very clear what could be the solution for this, as the default
> could be OK for desktops and many machines in which there's plenty of
> available RAM, but this is not the case for all of our buildds. It might
> not be possible to detect which is the best from within dpkg.
>...

Sebastian, was there any real-world problem motivating your commit,
or did it just sound more correct?

With default settings there should be < 100 MB/core RAM usage,
and even with "xz -9"[1] RAM usage should be < 700 MB/core.

These numbers shouldn't be a problem on buildds that successfully
manage to build packages large enough that multithreaded compression
is even possible.

Perhaps my understanding of xz memory usage is wrong,
or I might be missing some other problem.

> Thanks and cheers.

cu
Adrian

[1] which gives a lintian warning



Processed (with 2 errors): Re: Bug#1023922: dpkg-checkbuilddeps: please add an option to print the list of packages

2022-11-12 Thread Debian Bug Tracking System
Processing control commands:

> user d...@packages.debian.org
Unknown command or malformed arguments to command.

> usertags -1 dpkg-checkbuilddeps
Unknown command or malformed arguments to command.

> tags -1 moreinfo
Bug #1023922 [dpkg-dev] dpkg-checkbuilddeps: please add an option to print the 
list of packages
Added tag(s) moreinfo.

-- 
1023922: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1023922
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems



Bug#1023922: dpkg-checkbuilddeps: please add an option to print the list of packages

2022-11-12 Thread Guillem Jover
Control: user d...@packages.debian.org
Control: usertags -1 dpkg-checkbuilddeps
Control: tags -1 moreinfo

Hi!

On Sat, 2022-11-12 at 14:00:38 +0100, Enrico Zini wrote:
> Package: dpkg-dev
> Version: 1.21.9
> Severity: wishlist

> I often find myself[1] in need of a tool that, given a source package,
> prints a list of its build depends, given an architecture, a build
> profile, and so on.

There was a bug filed requesting adding custom output format support
(#214566) but it was closed “recently”. I think there might be some
value in that, but not for the intended use the submitters seemed
to want it.

I'd be interested to know how you'd want to use this new output/option
as from the PoC script you provide it is not obvious to me, as it
prints both build-depends and build-conflicts in an indistinguishable
way, and it includes version constraints and alternative dependencies.

> Would it be possible to add a way to print the unfiltered list?

Anything is possible, I guess my concern is whether this might create
confusion, or whether this is better solved already by some other
tool? Or if there is no workable alternative, finding the right
semantics to add this.

> Ideally, it can become a --print-depends/--print-conflicts option to
> dpkg-checkbuilddeps, instead of a separate tool. Unfortunately my
> perl-foo is too rusty to pretend I could propose a competent patch :/

I don't mind implementing it at all, but I'd like to understand what
this is needed for. :)

Thanks,
Guillem



Bug#1023922: dpkg-checkbuilddeps: please add an option to print the list of packages

2022-11-12 Thread Enrico Zini
Package: dpkg-dev
Version: 1.21.9
Severity: wishlist

Hello,

thank you for maintaining dpkg!

I often find myself[1] in need of a tool that, given a source package,
prints a list of its build depends, given an architecture, a build
profile, and so on.

dpkg-checkbuilddeps does internally generate it, but then prints only the
list of packages not currently installed.

Would it be possible to add a way to print the unfiltered list?

I've made something that does it by chopping away the filtering bits
from dpkg-checkbuilddeps (see attachment).

Ideally, it can become a --print-depends/--print-conflicts option to
dpkg-checkbuilddeps, instead of a separate tool. Unfortunately my
perl-foo is too rusty to pretend I could propose a competent patch :/


Enrico

[1] and I'm apparently in good company, considering how many times this
is reimplemented in various places in Debian

-- Package-specific info:
This system uses merged-usr-via-aliased-dirs, going behind dpkg's
back, breaking its core assumptions. This can cause silent file
overwrites and disappearances, and its general tools misbehavior.
See .

-- System Information:
Debian Release: bookworm/sid
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 6.0.0-2-amd64 (SMP w/4 CPU threads; PREEMPT)
Locale: LANG=en_IE.UTF-8, LC_CTYPE=en_IE.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_IE:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages dpkg-dev depends on:
ii  binutils  2.39-8
ii  bzip2 1.0.8-5+b1
ii  libdpkg-perl  1.21.9
ii  make  4.3-4.1
ii  patch 2.7.6-7
ii  perl  5.36.0-4
ii  tar   1.34+dfsg-1
ii  xz-utils  5.2.7-0.1

Versions of packages dpkg-dev recommends:
ii  build-essential  12.9
ii  clang-14 [c-compiler]1:14.0.6-2
ii  fakeroot 1.29-1
ii  gcc [c-compiler] 4:12.2.0-1
ii  gcc-10 [c-compiler]  10.4.0-5
ii  gcc-12 [c-compiler]  12.2.0-9
ii  gnupg2.2.40-1
ii  gpgv 2.2.40-1
ii  libalgorithm-merge-perl  0.08-5

Versions of packages dpkg-dev suggests:
ii  debian-keyring  2022.08.11

-- no debconf information
#!/usr/bin/perl
#
# dpkg-checkbuilddeps
#
# Copyright © 2001 Joey Hess 
# Copyright © 2006-2009, 2011-2015 Guillem Jover 
# Copyright © 2007-2011 Raphael Hertzog 
# Copyright © 2022 Enrico Zini 
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

use strict;
use warnings;

use Getopt::Long qw(:config posix_default bundling_values no_ignorecase);

use Dpkg ();
use Dpkg::Gettext;
use Dpkg::ErrorHandling;
use Dpkg::Arch qw(get_host_arch);
use Dpkg::Vendor qw(run_vendor_hook);
use Dpkg::BuildProfiles qw(get_build_profiles set_build_profiles);
use Dpkg::Deps;
use Dpkg::Control::Info;

textdomain('dpkg-dev');

sub version()
{
printf g_("Debian %s version %s.\n"), $Dpkg::PROGNAME, $Dpkg::PROGVERSION;
}

sub usage {
printf g_(
'Usage: %s [<option>...] [<control-file>]')
. "\n\n" . g_(
'Options:
  -A ignore Build-Depends-Arch and Build-Conflicts-Arch.
  -B ignore Build-Depends-Indep and Build-Conflicts-Indep.
  -I ignore built-in build dependencies and conflicts.
  -d build-deps  use given string as build dependencies instead of
 retrieving them from control file
  -c build-conf  use given string for build conflicts instead of
 retrieving them from control file
  -a archassume given host architecture
  -P profilesassume given build profiles (comma-separated list)
  --admindir=<directory>
 change the administrative directory.
  -?, --help show this help message.
  --version  show the version.')
. "\n\n" . g_(
'<control-file> is the control file to process (default: debian/control).')
. "\n", $Dpkg::PROGNAME;
}

my $ignore_bd_arch = 0;
my $ignore_bd_indep = 0;
my $ignore_bd_builtin = 0;
my ($bd_value, $bc_value);
my $bp_value;
my $host_arch = get_host_arch();
my $admindir = $Dpkg::ADMINDIR;
my @options_spec = (
'help|?' => sub { usage(); exit(0); },
'version' => sub { version(); exit 0; },
'A' => \$ignore_bd_arch,
'B' => \$ignore_bd_indep,
'I' => \$ignore_bd_builtin,
'd=s' => \$bd_value,
'c=s' => \$bc_value,
'a=s' => \$host_arch,
'P=s' => \$bp_value,
'admindir=s' => \$admindir,