Running pybuild tests with search path for entry_points()

2024-05-11 Thread John Paul Adrian Glaubitz
(Please CC me as I'm not subscribed to debian-devel)

Hello,

I am currently trying to update src:kiwi to its latest upstream version to fix
#1069389 [1].

The problem that I am now stuck with is that the testsuite uses the entry_points
function to test for the available kiwi.tasks. It looks something like this:

from importlib_metadata import entry_points

discovered_tasks = {}
for entry in entry_points().get('kiwi.tasks', []):
    discovered_tasks[entry.name] = entry.load()

However, that fails because the kiwi module needs to be either installed for
entry_points() to find "kiwi.tasks" or the PYTHONPATH needs to include the
build directory below ".pybuild". Otherwise, entry_points() will not include
"kiwi.tasks" and the testsuite fails.
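The failure can be reproduced outside pybuild: entry_points() only reports groups whose package metadata (.dist-info/.egg-info) is importable from sys.path. A minimal sketch, using the stdlib importlib.metadata rather than the importlib_metadata backport (the fallback branch is only needed on older interpreters):

```python
from importlib.metadata import entry_points

def discover(group):
    """Return {name: EntryPoint} for a group, handling both APIs.

    A group only shows up if the providing package's metadata is on
    sys.path -- which is exactly why the kiwi testsuite fails unless
    PYTHONPATH includes the pybuild build directory or kiwi is installed.
    """
    try:
        eps = entry_points(group=group)      # Python >= 3.10
    except TypeError:
        eps = entry_points().get(group, [])  # legacy dict-like API
    return {ep.name: ep for ep in eps}

# Empty unless kiwi's metadata is importable in this environment.
tasks = discover("kiwi.tasks")
```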

I noticed that entry_points() will also work as expected if the python interpreter
is run from within the build directory below ".pybuild", so I tried adding the
following to debian/rules according to [2]:

export PYTHONPATH = $(CURDIR)
export PYBUILD_TEST_ARGS_python3 = cd {build_dir}; python{version} -m discover

Unfortunately, that doesn't work: entry_points() still fails, and there doesn't
seem to be much pybuild documentation explaining how to adjust PYTHONPATH for
running the testsuite.
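For reference, one pattern I came across (unverified for kiwi, and the egg-info path below is a guess that depends on the package layout) uses the PYBUILD_BEFORE_TEST hook documented in pybuild(1) to copy the generated package metadata into the build directory before the tests run, so that entry_points() can resolve the group without the package being installed:

```make
# debian/rules sketch (unverified; egg-info path is an assumption):
# make the package metadata visible to entry_points() during tests,
# and clean it up again afterwards.
export PYBUILD_BEFORE_TEST=cp -r {dir}/kiwi.egg-info {build_dir}
export PYBUILD_AFTER_TEST=rm -rf {build_dir}/kiwi.egg-info
```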

Note that I tried this with kiwi 10.0.16 plus the patch from [3]. I put up a
little reproducer in [4].

Does anyone know how to make pybuild set the proper PYTHONPATH so that
entry_points() works while running the testsuite?

Thanks,
Adrian

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1069389
[2] https://wiki.debian.org/Python/Pybuild
[3] https://github.com/OSInside/kiwi/pull/2550
[4] https://github.com/OSInside/kiwi/issues/2548#issuecomment-2103993758

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Status of the t64 transition

2024-04-19 Thread John Paul Adrian Glaubitz
Hello,

On Thu, 2024-04-18 at 21:22 +0200, Sebastian Ramacher wrote:
> Finally, packages that need rebuilds but currently have open FTBFS (RC +
> ftbfs tag) bugs:
> (...)
> virtualjaguar

I already have a tentative patch and will fix the package within the next
days. I am also preparing to fix two other bugs, one being missing SDL-2
support and the other the FTBFS after rebuild from the same source unpack.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: New supply-chain security tool: backseat-signed

2024-04-06 Thread Adrian Bunk
On Sat, Apr 06, 2024 at 03:54:51PM +0200, kpcyrd wrote:
>...
> autotools pre-processed source code is clearly not "the preferred form of
> the work for making modifications", which is specifically what I'm saying
> Debian shouldn't consider a "source code input" either, to eliminate this
> vector for underhanded tampering that Jia Tan has used.

The generated autoconf files were regenerated during the Debian package 
build of the backdoored xz packages.

> If we can force a future Jia Tan to commit their backdoor into git (for
> everybody to see) I consider this a win.
>...

Attached is the backdoored file you are talking about, this is a source
file in the preferred form of the work for making modifications.

Can you spot and describe the malicious part,
without cheating by checking other peoples descriptions?

Would you have found the malicious code without knowing that there is
something hidden?

> cheers,
> kpcyrd

cu
Adrian
# build-to-host.m4 serial 30
dnl Copyright (C) 2023-2024 Free Software Foundation, Inc.
dnl This file is free software; the Free Software Foundation
dnl gives unlimited permission to copy and/or distribute it,
dnl with or without modifications, as long as this notice is preserved.

dnl Written by Bruno Haible.

dnl When the build environment ($build_os) is different from the target runtime
dnl environment ($host_os), file names may need to be converted from the build
dnl environment syntax to the target runtime environment syntax. This is
dnl because the Makefiles are executed (mostly) by build environment tools and
dnl therefore expect file names in build environment syntax, whereas the runtime
dnl expects file names in target runtime environment syntax.
dnl
dnl For example, if $build_os = cygwin and $host_os = mingw32, filenames need
dnl be converted from Cygwin syntax to native Windows syntax:
dnl   /cygdrive/c/foo/bar -> C:\foo\bar
dnl   /usr/local/share   -> C:\cygwin64\usr\local\share
dnl
dnl gl_BUILD_TO_HOST([somedir])
dnl This macro takes as input an AC_SUBSTed variable 'somedir', which must
dnl already have its final value assigned, and produces two additional
dnl AC_SUBSTed variables 'somedir_c' and 'somedir_c_make', that designate the
dnl same file name value, just in different syntax:
dnl   - somedir_c   is the file name in target runtime environment syntax,
dnl as a C string (starting and ending with a double-quote,
dnl and with escaped backslashes and double-quotes in
dnl between).
dnl   - somedir_c_make  is the same thing, escaped for use in a Makefile.

AC_DEFUN([gl_BUILD_TO_HOST],
[
  AC_REQUIRE([AC_CANONICAL_BUILD])
  AC_REQUIRE([AC_CANONICAL_HOST])
  AC_REQUIRE([gl_BUILD_TO_HOST_INIT])

  dnl Define somedir_c.
  gl_final_[$1]="$[$1]"
  gl_[$1]_prefix=`echo $gl_am_configmake | sed "s/.*\.//g"`
  dnl Translate it from build syntax to host syntax.
  case "$build_os" in
cygwin*)
  case "$host_os" in
mingw* | windows*)
  gl_final_[$1]=`cygpath -w "$gl_final_[$1]"` ;;
  esac
  ;;
  esac
  dnl Convert it to C string syntax.
  [$1]_c=`printf '%s\n' "$gl_final_[$1]" | sed -e "$gl_sed_double_backslashes" -e "$gl_sed_escape_doublequotes" | tr -d "$gl_tr_cr"`
  [$1]_c='"'"$[$1]_c"'"'
  AC_SUBST([$1_c])

  dnl Define somedir_c_make.
  [$1]_c_make=`printf '%s\n' "$[$1]_c" | sed -e "$gl_sed_escape_for_make_1" -e "$gl_sed_escape_for_make_2" | tr -d "$gl_tr_cr"`
  dnl Use the substituted somedir variable, when possible, so that the user
  dnl may adjust somedir a posteriori when there are no special characters.
  if test "$[$1]_c_make" = '\"'"${gl_final_[$1]}"'\"'; then
[$1]_c_make='\"$([$1])\"'
  fi
  if test "x$gl_am_configmake" != "x"; then
    gl_[$1]_config='sed \"r\n\" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2>/dev/null'
  else
gl_[$1]_config=''
  fi
  _LT_TAGDECL([], [gl_path_map], [2])dnl
  _LT_TAGDECL([], [gl_[$1]_prefix], [2])dnl
  _LT_TAGDECL([], [gl_am_configmake], [2])dnl
  _LT_TAGDECL([], [[$1]_c_make], [2])dnl
  _LT_TAGDECL([], [gl_[$1]_config], [2])dnl
  AC_SUBST([$1_c_make])

  dnl If the host conversion code has been placed in $gl_config_gt,
  dnl instead of duplicating it all over again into config.status,
  dnl then we will have config.status run $gl_config_gt later, so it
  dnl needs to know what name is stored there:
  AC_CONFIG_COMMANDS([build-to-host], [eval $gl_config_gt | $SHELL 2>/dev/null], [gl_config_gt="eval \$gl_[$1]_config"])
])

dnl Some initializations for gl_BUILD_TO_HOST.
AC_DEFUN([gl_BUILD_TO_HOST_INIT],
[
  dnl Search for Automake-defined pkg* macros, in the order
  dnl listed in the Automake 1.10a+ documentation.
  gl_am

Re: New supply-chain security tool: backseat-signed

2024-04-06 Thread Adrian Bunk
On Sat, Apr 06, 2024 at 07:13:22PM +0800, Sean Whitton wrote:
> Hello,
> 
> On Fri 05 Apr 2024 at 01:31am +03, Adrian Bunk wrote:
> 
> >
> > Right now the preferred form of source in Debian is an upstream-signed
> > release tarball, NOT anything from git.
> 
> The preferred form of modification is not simply up for proclamation.
> Our practices, which are focused around git, make it the case that
> salsa & dgit in some combination are the preferred form for modification
> for most packages.

You cannot simply proclaim that some git tree is the preferred form of 
modification without shipping said git tree in our ftp archive.

If your claim was true, then Debian and downstreams would be violating 
licences like the GPL by not providing the preferred form of modification
in the archive.

> Sean Whitton

cu
Adrian



Re: New supply-chain security tool: backseat-signed

2024-04-04 Thread Adrian Bunk
On Fri, Apr 05, 2024 at 01:30:51AM +0200, kpcyrd wrote:
> On 4/5/24 12:31 AM, Adrian Bunk wrote:
> > Hashes of "git archive" tarballs are anyway not stable,
> > so whatever a maintainer generates is not worse than what is on Github.
> > 
> > Any proper tooling would have to verify that the contents are equal.
> > 
> > > ...
> > > Being able to disregard the compression layer is still necessary however,
> > > because Debian (as far as I know) never takes the hash of the inner .tar
> > > file but only the compressed one. Because of this you may still need to
> > > provide `--orig ` if you want to compare with an uncompressed tar.
> > > ...
> > 
> > Right now the preferred form of source in Debian is an upstream-signed
> > release tarball, NOT anything from git.
> > 
> > An actual improvement would be to automatically and 100% reliably
> > verify that a given tarball matches the commit ID and signed git tag
> > in an upstream git tree.
> 
> I strongly disagree. I think the upstream signature is overrated.

The best we can realistically verify is that the code is from upstream.

> It's from the old mindset of code signing being the only way of securely
> getting code from upstream. Recent events have shown (instead of bothering
> upstream for signatures) it's much more important to have clarity and
> transparency what's in the code that is compiled into binaries and executed
> on our computers, instead of who we got it from.
>...

We do know that for the backdoored xz packages.

An intentional backdoor by upstream is not something we can 
realistically defend against.

The tiny part of the whole xz backdoor that was only in the tarball 
could instead also have been in git like the rest of the backdoor.

A "supply-chain security tool" that does not bring any improvement in 
this case is just snake oil.

> cheers,
> kpcyrd

cu
Adrian



Re: New supply-chain security tool: backseat-signed

2024-04-04 Thread Adrian Bunk
On Thu, Apr 04, 2024 at 09:39:51PM +0200, kpcyrd wrote:
>...
> I've checked both, upstreams github release page and their website[1], but
> couldn't find any mention of .tar.xz, so I think my claim of Debian doing
> the compression is fair.
> 
> [1]: https://www.vim.org/download.php
>...

Perhaps that's a maintainer running "git archive" manually?

Hashes of "git archive" tarballs are anyway not stable,
so whatever a maintainer generates is not worse than what is on Github.

Any proper tooling would have to verify that the contents are equal.

>...
> Being able to disregard the compression layer is still necessary however,
> because Debian (as far as I know) never takes the hash of the inner .tar
> file but only the compressed one. Because of this you may still need to
> provide `--orig ` if you want to compare with an uncompressed tar.
>...

Right now the preferred form of source in Debian is an upstream-signed 
release tarball, NOT anything from git.

An actual improvement would be to automatically and 100% reliably
verify that a given tarball matches the commit ID and signed git tag
in an upstream git tree.

But for that writing tooling would be the trivial part,
architectural topics like where to store the commit ID
and where to store the git tree would be the harder parts.

Or perhaps stop using tarballs in Debian as sole permitted
form of source.

> cheers,
> kpcyrd

cu
Adrian



Re: New supply-chain security tool: backseat-signed

2024-04-02 Thread Adrian Bunk
On Wed, Apr 03, 2024 at 02:31:11AM +0200, kpcyrd wrote:
>...
> I figured out a somewhat straight-forward way to check if a given `git
> archive` output is cryptographically claimed to be the source input of a
> given binary package in either Arch Linux or Debian (or both).

For Debian the proper approach would be to copy Checksums-Sha256 for the 
source package to the buildinfo file, and there is nothing where it would
matter whether the tarball was generated from git or otherwise.

> I believe this to be the "reproducible source tarball" thing some people
> have been asking about.
>...

The lack of a reliably reproducible checksum when using "git archive" is 
the problem, and git cannot realistically provide that.

Even when called with the same parameters, "git archive" executed in 
different environments might produce different archives for the same
commit ID.

It is documented that auto-generated Github tarballs for the same tag 
and with the same commit ID downloaded at different times might have 
different checksums.
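A content-level comparison sidesteps those unstable checksums: hash only the member names and file payloads, ignoring both the compression layer and the tar metadata. A minimal illustrative sketch (not a substitute for proper tooling, which would also need to handle symlinks, permissions and the like):

```python
import hashlib
import tarfile

def content_digest(path):
    """SHA-256 over sorted member names and regular-file contents only.

    Ignores the compression layer (tarfile decompresses transparently)
    and the tar metadata (mtime, uid/gid, member ordering) that makes
    "git archive" and GitHub tarball checksums unstable over time.
    """
    h = hashlib.sha256()
    with tarfile.open(path) as tar:
        for member in sorted(tar.getmembers(), key=lambda m: m.name):
            h.update(member.name.encode())
            if member.isfile():
                h.update(tar.extractfile(member).read())
    return h.hexdigest()
```

Two archives of the same tree then compare equal even when their compressed checksums differ.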

> This tool highlights the concept of "canonical sources", which is supposed
> to give guidance on what to code review.
>...

How does it tell the git commit ID the tarball was generated from?

Doing a code review of git sources as a tarball would be stupid,
you really want the git metadata that usually shows when, why and by
whom something was changed.

> https://github.com/kpcyrd/backseat-signed
> 
> The README
>...

"This requires some squinting since in Debian the source tarball is 
 commonly recompressed so only the inner .tar is compared"

This doesn't sound true.

> Let me know what you think. 
> 
> Happy feet,
> kpcyrd

cu
Adrian



Re: autoreconf --force not forcing (was Re: Validating tarballs against git repositories)

2024-04-02 Thread Adrian Bunk
On Tue, Apr 02, 2024 at 06:05:22PM +0100, Colin Watson wrote:
> On Tue, Apr 02, 2024 at 06:57:20PM +0300, Adrian Bunk wrote:
> > On Mon, Apr 01, 2024 at 08:07:27PM +0200, Guillem Jover wrote:
> > > On Sat, 2024-03-30 at 14:16:21 +0100, Guillem Jover wrote:
> > > > This seems like a serious bug in autoreconf, but I've not checked if
> > > > this has been brought up upstream, and whether they consider it's
> > > > working as intended. I expect the serial to be used only when not
> > > > in --force mode though. :/
> > >...
> > > We might have to perform a mass rebuild to check if there could be
> > > fallout out of a true --force behavior change I guess.
> > 
> > Does gnulib upstream support upgrading/downgrading the gnulib m4 files
> > (like the one used in the xz backdoor) without upgrading/downgrading
> > the corresponding gnulib C files?
> 
> Yes, although it takes a bit of effort.  You can use the --local-dir
> option of gnulib-tool, which allows overriding individual Gnulib files
> or modules or applying patches to Gnulib files; or you can define a
> bootstrap_post_import_hook function in bootstrap.conf and do whatever
> you want there.

I had the impression that what Guillem has in mind is more towards 
adding dependencies on packages like gnulib and autoconf-archive
to dh-autoreconf, which would then blindly overwrite all m4 files
where a copy (same or older or newer) exists on the build system.

cu
Adrian



Re: autoreconf --force not forcing (was Re: Validating tarballs against git repositories)

2024-04-02 Thread Adrian Bunk
On Mon, Apr 01, 2024 at 08:07:27PM +0200, Guillem Jover wrote:
>...
> On Sat, 2024-03-30 at 14:16:21 +0100, Guillem Jover wrote:
>...
> > This seems like a serious bug in autoreconf, but I've not checked if
> > this has been brought up upstream, and whether they consider it's
> > working as intended. I expect the serial to be used only when not
> > in --force mode though. :/
>...
> We might have to perform a mass rebuild to check if there could be
> fallout out of a true --force behavior change I guess.

Does gnulib upstream support upgrading/downgrading the gnulib m4 files
(like the one used in the xz backdoor) without upgrading/downgrading
the corresponding gnulib C files?

> Thanks,
> Guillem

cu
Adrian



Re: Validating tarballs against git repositories

2024-04-02 Thread Adrian Bunk
On Mon, Apr 01, 2024 at 11:17:21AM -0400, Theodore Ts'o wrote:
> On Sat, Mar 30, 2024 at 08:44:36AM -0700, Russ Allbery wrote:
>...
> > Yes, perhaps it's time to switch to a different build system, although one
> > of the reasons I've personally been putting this off is that I do a lot of
> > feature probing for library APIs that have changed over time, and I'm not
> > sure how one does that in the non-Autoconf build systems.  Meson's Porting
> > from Autotools [1] page, for example, doesn't seem to address this use
> > case at all.
> 
> The other problem is that many of the other build systems are much
> slower than autoconf/makefile.  (Note: I don't use libtool, because
> it's so d*mn slow.)  Or building the alternate system might require a
> major bootstrapping phase, or requires downloading a JVM, etc.

The main selling point of Meson has been that it is a lot faster
than autotools.

> > Maybe the answer is "you should give up on portability to older systems as
> > the cost of having a cleaner build system," and that's not an entirely
> > unreasonable thing to say, but that's going to be a hard sell for a lot of
> > upstreams that care immensely about this.
> 
> Yeah, that too.  There are still people building e2fsprogs on AIX,
> Solaris, and other legacy Unix systems, and I'd hate to break them, or
> require a lot of pain for people who are building on MacPorts, et. al.
>...

Everything you mention should already be supported by Meson.

>   - Ted

cu
Adrian



Re: xz backdoor

2024-04-01 Thread Adrian Bunk
On Mon, Apr 01, 2024 at 12:02:09PM +0200, Bastian Blank wrote:
> Hi
> 
> On Sun, Mar 31, 2024 at 07:48:35PM +0300, Adrian Bunk wrote:
> > > What we can do unilaterally is to disallow vendoring those files.
> > These files are supposed to be vendored in release tarballs,
> > the sane approach for getting rid of such vendored files would
> > be to discourage tarball uploads to the archive and encourage
> > git uploads instead.
> 
> I don't understand what you are trying to say.  If we add a hard check
> to lintian for m4/*, set it to auto-reject, then it is fully irrelevant
> if the upload is a tarball or git.

xz also has > 600 LOC of legit own m4 code in m4/*,
and that's not unusual for packages using autoconf.

> > > Does it help?  At least in the case of autoconf it removes one common
> > > source of hard to read files.
> > But I doubt every DD would be able to review the 2k LOC non-vendored 
> > autoconf code in xz.
> 
> But at least changes to this code are visible.  In this case the changes
> to the m4 stuff did not exist in the somewhat reviewed repo, but just in
> the unreviewed tarballs.

There are many other ways how these unreviewed tarballs could be manipulated.

The root cause of the problem you want to solve is that the ftp team 
permits uploading such unreviewed tarballs to our archive.

> Bastian

cu
Adrian



Re: Validating tarballs against git repositories

2024-03-31 Thread Adrian Bunk
On Sat, Mar 30, 2024 at 11:55:04AM +, Luca Boccassi wrote:
>...
> In the end, massaged tarballs were needed to avoid rerunning
> autoconfery on twelve thousands different proprietary and
> non-proprietary Unix variants, back in the day. In 2024, we do
> dh_autoreconf by default so it's all moot anyway.
>...

The first step of the xz exploit was in a vendored gnulib m4 file that
is not (and should not be) in git and that does not get updated by 
dh_autoreconf.

cu
Adrian



Re: xz backdoor

2024-03-31 Thread Adrian Bunk
On Sun, Mar 31, 2024 at 09:35:09AM +0200, Bastian Blank wrote:
> On Sat, Mar 30, 2024 at 08:15:10PM +, Colin Watson wrote:
> > On Sat, Mar 30, 2024 at 05:12:17PM +0100, Sirius wrote:
> > > I have seen discussion about shifting away from the whole auto(re)conf
> > > tooling to CMake or Meson with there being a reasonable drawback to CMake.
> > > Is that something being discussed within Debian as well?
> > It's not in general something that Debian can unilaterally change.  And
> > in a number of cases switching build system would be pretty non-trivial.
> 
> What we can do unilaterally is to disallow vendoring those files.

These files are supposed to be vendored in release tarballs,
the sane approach for getting rid of such vendored files would
be to discourage tarball uploads to the archive and encourage
git uploads instead.

> Does it help?  At least in the case of autoconf it removes one common
> source of hard to read files.
>...

But I doubt every DD would be able to review the 2k LOC non-vendored 
autoconf code in xz.

The experimental cmake build of xz also has 2700 LOC.

> Bastian

cu
Adrian



Re: xz backdoor

2024-03-31 Thread Adrian Bunk
On Sun, Mar 31, 2024 at 03:07:53AM +0100, Colin Watson wrote:
> On Sun, Mar 31, 2024 at 04:14:13AM +0300, Adrian Bunk wrote:
> > The timing of the 5.6.0 release might have been to make it into the 
> > upcoming Ubuntu LTS, it didn't miss it by much.
> 
> It didn't miss it at all, even.  Ubuntu has rolled it back and is
> rebuilding everything that was built using it, but it did make it into
> noble-proposed (the current unstable analogue) for some time and noble
> (the current testing analogue) briefly.

It missed being in the actual release, due to being detected by chance
before the release date of noble.

cu
Adrian



Re: xz backdoor

2024-03-30 Thread Adrian Bunk
On Sat, Mar 30, 2024 at 10:49:33AM +0200, Jonathan Carter wrote:
>...
> On 2024/03/29 23:38, Russ Allbery wrote:
> > I think the big open question we need to ask now is what exactly the
> > backdoor (or, rather, backdoors; we know there were at least two versions
> > over time) did.
> 
> Another big question for me is whether I should really still
> package/upload/etc from an unstable machine. It seems that it may be prudent
> to consider it best practice to work from stable machines where any private
> keys are involved. For me it's just been so convenient to use unstable
> because it helps track changes that affect my users by the time it hits
> stable and also find bugs early that I care about, but perhaps I just need
> to make that adjustment and find more efficient ways to track unstable
> (perhaps on additional machines / VMs / etc). Not sure how other DDs think
> about this, but I'm also curious how they will deal with this, because
> there's near to no filter between unstable and the outside world, and this
> is probably not the last time someone will try something like this.

I don't think it is such a clear case that stable is more secure than 
unstable.

The uncommon part might be that it was detected so early, and only due
to a minor visible performance side effect found by pure luck, which a
better implementation of the exploit might have been able to avoid.

The timing of the 5.6.0 release might have been to make it into the 
upcoming Ubuntu LTS, it didn't miss it by much.

And an intentional backdoor is not necessarily much different from
one caused by a bug:

Heartbleed (CVE-2014-0160) in OpenSSL made it into stable.

The Debian-specific bug that broke the OpenSSL RNG resulting in 
predictable keys (CVE-2008-0166) made it into stable.

There have even been cases where an attacker realized that
a non-security bugfix fixed something that can be exploited.
In such cases unstable might get fixed, but stable not.

Perhaps a case can be made that stable is slightly more secure,
but an intentional backdoor that gets detected early is rather
rare so far.

> -Jonathan

cu
Adrian



Re: xz backdoor

2024-03-30 Thread Adrian Bunk
On Sat, Mar 30, 2024 at 11:28:07PM +0100, Pierre-Elliott Bécue wrote:
>...
> I'd be happy to have Debian France care about buying and having yubikeys
> delivered to any DD over the world.

Including Russia?

cu
Adrian



Re: Validating tarballs against git repositories

2024-03-30 Thread Adrian Bunk
On Fri, Mar 29, 2024 at 11:29:01PM -0700, Russ Allbery wrote:
>...
> In other words, we should make sure that breaking the specific tactics
> *this* attacker used truly make the attacker's life harder, as opposed to
> making life harder for Debian packagers while only forcing a one-time,
> minor shift in attacker tactics.  I *think* I'm mostly convinced that
> forcing the attacker into Git commits is a useful partial defense, but I'm
> not sure this is obviously true.
>...

There are also other reasons why using tarballs by default is no longer 
a good option.

In many cases our upstream source is the unsigned tarball Github 
automatically provides for every tag, which invites MITM attacks.

The hash of these tarballs is expected to change over time, which makes 
it harder to reliably verify that the upstream sources we have in the 
archive match what is provided upstream.

cu
Adrian



Re: Validating tarballs against git repositories

2024-03-30 Thread Adrian Bunk
On Fri, Mar 29, 2024 at 06:21:27PM -0600, Antonio Russo wrote:
>...
> 1. Move towards allowing, and then favoring, git-tags over source tarballs
>...

git commit IDs, not tags.

Upstream moving git tags does sometimes happen.

Usually for bad-but-not-malicious reasons like "add one more last-minute fix",
but using tags would also invite manipulation similar to what happened
with xz at any point after the release.

> Best,
> Antonio Russo

cu
Adrian



Re: Requesting help with the t64 transition

2024-03-08 Thread John Paul Adrian Glaubitz
Hi,

On Tue, 2024-03-05 at 09:56 +0100, John Paul Adrian Glaubitz wrote:
> I would like to ask for help with the t64 transition for m68k, powerpc and
> sh4 because it's getting too much for me alone and I'm really exhausted.
> 
> I have build many packages for powerpc already and some for m68k and sh4,
> but I'm not there yet. The progress with powerpc is the furthest, but perl
> is still uninstallable and I don't really understand why because cudf does
> not produce any useful output.

Some update. I have managed to get powerpc back to the state where the
devscripts and build-essential packages can be installed. However, I had to
update the chroots on the buildds manually as debootstrap currently fails
due to a left-over perl-modules package.

debootstrap first downloads perl-modules-5.38_5.38.2-3_all.deb, then later
tries to install perl_5.38.2-3.2_powerpc.deb, which causes dpkg to bail out.
It can be reproduced with:
reproduced with:

# debootstrap --no-check-gpg --arch=powerpc --variant=buildd \
  --include=debian-ports-archive-keyring unstable sid-powerpc-sbuild \
  http://ftp.ports.debian.org/debian-ports

Thus, we need to get rid of perl-modules-5.38_5.38.2-3_all.deb from the
repositories in order to be able to create fresh chroots with debootstrap
again. Since packages in Debian Ports are directly synced from the main
repos for arch:all, this needs to be done by the FTP masters.

For m68k and sh4, I managed to build perl and pam so that all Perl packages
are rebuilding for now. Thorsten Glaser is kindly helping me with the
transition on m68k.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Re: Perl problem - loadable library and perl binaries are mismatched

2024-03-06 Thread John Paul Adrian Glaubitz
Hi Roderich,

On Wed, 2024-03-06 at 19:20 +0100, Roderich Schupp wrote:
> Hi,
> 
> > Parser.c: loadable library and perl binaries are mismatched (got first
> > handshake key 0xb600080, needed 0xb700080)
> 
> The upper 16 bits in these keys (i.e. 0xb60 vs 0xb70) is
> sizeof(PerlInterpreter), the one that some XS module saw when it was built
> vs the size your current perl executable was built with. From the location
> of the error message it looks as if the build process ("perl Build") has
> just created the "glue" shared library
> (blib/arch/auto/XS/Parse/Keyword/Keyword.so), next it is going to generate
> documentation (man pages). Unless there's an error warning, this doesn't
> produce any output. I ran "perl Build" under strace, this shows that doc
> generation loads Pod::Html (probably to generate HTML pages as well, though
> none were requested) and finally this loads HTML::Parser. The latter is an
> XS module (/usr/lib/x86_64-linux-gnu/perl5/5.38/auto/HTML/Parser/Parser.so)
> and seems to emit the above message.
> 
> So the reason is that your HTML/Parser/Parser.so (maybe a version not in
> the canonical path?) is built with a different struct PerlInterpreter. The
> difference in sizeof(PerlInterpreter) can probably be explained with the
> time64 transition as PerlInterpreter contains a struct stat.

Thanks a lot for the detailed analysis. In fact, libhtml-parser-perl has not
been rebuilt against the time64_t Perl package yet [1] which would align
with your explanation. I'll try to rebuild the package locally and if it
fixes the problem, I'll binNMU it for powerpc.

Your explanation will enable me to debug future occurrences as I now
understand the underlying problem.

Thanks,
Adrian

> [1] https://buildd.debian.org/status/package.php?p=libhtml-parser-perl

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Perl problem - loadable library and perl binaries are mismatched

2024-03-05 Thread John Paul Adrian Glaubitz
Hi,

On Tue, 2024-03-05 at 23:10 +0100, John Paul Adrian Glaubitz wrote:
> > 
> > oks like it's built with dpkg-dev_1.22.4 but the time64 build flags are
> > only activated with 1.22.5.
> 
> Ah, that would explain it, thank you so much!
> 
> > I think there was talk about making them the default in gcc too, not
> > sure if they got there yet.
> > 
> > I suppose Perl could/should store them in its configuration so they'd be
> > passed to all XS module builds regardless of what dpkg-buildflags says.
> > But currently things from dpkg-buildflags get explicitly filtered away [1].
> > 
> > IIRC the rationale for this was that packages could opt in/out of security
> > hardening flags independently. That doesn't seem desirable here as they
> > make the XS module ABI incompatible as you've noticed.
> > 
> > [1] see https://sources.debian.org/src/perl/5.38.2-3.1/debian/rules/#L188
> > I think -fstack-protector gets passed through there as an exception,
> > so doing the same with the relevant time64 flags should do the trick.
> > 
> 
> Thanks! You saved me a lot of headaches!

I have run into this issue again trying to rebuild libxs-parse-keyword-perl
with a src:perl that was built with dpkg_1.22.5:

Building XS-Parse-Keyword
powerpc-linux-gnu-gcc -Isrc/ -I/usr/lib/powerpc-linux-gnu/perl/5.38/CORE -fPIC -I. -Ihax -c -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/glaubitz/perl-modules/libxs-parse-keyword-perl-0.39=. -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/glaubitz/perl-modules/libxs-parse-keyword-perl-0.39=. -fstack-protector-strong -Wformat -Werror=format-security -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 -Wdate-time -D_FORTIFY_SOURCE=2 -o src/infix.o src/infix.c
powerpc-linux-gnu-gcc -Isrc/ -I/usr/lib/powerpc-linux-gnu/perl/5.38/CORE -fPIC -I. -Ihax -c -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/glaubitz/perl-modules/libxs-parse-keyword-perl-0.39=. -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/glaubitz/perl-modules/libxs-parse-keyword-perl-0.39=. -fstack-protector-strong -Wformat -Werror=format-security -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 -Wdate-time -D_FORTIFY_SOURCE=2 -o src/keyword.o src/keyword.c
src/keyword.c: In function 'probe_piece':
src/keyword.c:348:37: warning: format '%d' expects argument of type 'int', but argument 2 has type 'U32' {aka 'long unsigned int'} [-Wformat=]
  348 |   croak("TODO: probe_piece on type=%d\n", type);
      |                                   ~^      ~~~~
      |                                    |      |
      |                                    int    U32 {aka long unsigned int}
      |                                   %ld
src/keyword.c: In function 'parse_piece':
src/keyword.c:828:37: warning: format '%d' expects argument of type 'int', but 
argument 2 has type 'U32' {aka 'long unsigned int'} [-Wformat=]
  828 |   croak("TODO: parse_piece on type=%d\n", type);
  |~^ 
  | | |
  | int   U32 {aka long unsigned int}
  |%ld
powerpc-linux-gnu-gcc -Isrc/ -I/usr/lib/powerpc-linux-gnu/perl/5.38/CORE -DVERSION="0.39" -DXS_VERSION="0.39" -fPIC -I. -Ihax -c -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/glaubitz/perl-modules/libxs-parse-keyword-perl-0.39=. -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/glaubitz/perl-modules/libxs-parse-keyword-perl-0.39=. -fstack-protector-strong -Wformat -Werror=format-security -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 -Wdate-time -D_FORTIFY_SOURCE=2 -o lib/XS/Parse/Keyword.o lib/XS/Parse/Keyword.c
ExtUtils::Mkbootstrap::Mkbootstrap('blib/arch/auto/XS/Parse/Keyword/Keyword.bs')
powerpc-linux-gnu-gcc -g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/glaubitz/perl-modules/libxs-parse-keyword-perl-0.39=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -shared -L/usr/local/lib -fstack-protector-strong -o blib/arch/auto/XS/Parse/Keyword/Keyword.so lib/XS/Parse/K

Re: Perl problem - loadable library and perl binaries are mismatched

2024-03-05 Thread John Paul Adrian Glaubitz
On Wed, 2024-03-06 at 00:08 +0200, Niko Tyni wrote:
> (Oops, forgot the Cc you asked for. So resending. Apologies for the
> duplicate on the list.)

No worries.

> On Tue, Mar 05, 2024 at 09:17:17PM +0100, John Paul Adrian Glaubitz wrote:
>  
> > I am getting a strange Perl error after rebuilding Perl for the time64_t
> > transition on powerpc:
> > 
> >  loadable library and perl binaries are mismatched (got first handshake key 
> > 0xb600080, needed 0xb700080)
> > 
> > See: https://buildd.debian.org/status/fetch.php?pkg=libdevice-usb-perl&arch=powerpc&ver=0.38-3&stamp=1709663348&raw=0
> > 
> > I have already rebuilt Perl once again against the new time64_t libraries,
> > but that didn't help although the package builds fine locally.
> > 
> > Does anyone knowledgeable with Perl know what's going on?
> 
> (You're in somewhat uncharted territory unfortunately, as none of this
> was tested beforehand.)

Yikes.

> Looks like it's built with dpkg-dev_1.22.4 but the time64 build flags are
> only activated with 1.22.5.

Ah, that would explain it, thank you so much!

> I think there was talk about making them the default in gcc too, not
> sure if they got there yet.
> 
> I suppose Perl could/should store them in its configuration so they'd be
> passed to all XS module builds regardless of what dpkg-buildflags says.
> But currently things from dpkg-buildflags get explicitly filtered away [1].
> 
> IIRC the rationale for this was that packages could opt in/out of security
> hardening flags independently. That doesn't seem desirable here as they
> make the XS module ABI incompatible as you've noticed.
> 
> [1] see https://sources.debian.org/src/perl/5.38.2-3.1/debian/rules/#L188
> I think -fstack-protector gets passed through there as an exception,
> so doing the same with the relevant time64 flags should do the trick.
> 

Thanks! You saved me a lot of headaches!
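
For reference, the pass-through idea from [1] can be sketched roughly like this (illustrative only, not the actual debian/rules code; the flag list mirrors what dpkg-dev 1.22.5 emits on affected 32-bit ports):

```shell
# Hypothetical sketch: instead of filtering all dpkg-buildflags output
# away, keep the ABI-relevant time64 flags, plus -fstack-protector*,
# which perl's debian/rules already passes through as an exception.
all_flags="-g -O2 -fstack-protector-strong -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64"
keep=""
for f in $all_flags; do
  case "$f" in
    -D_TIME_BITS=*|-D_FILE_OFFSET_BITS=*|-D_LARGEFILE_SOURCE|-fstack-protector*)
      keep="$keep $f" ;;
  esac
done
echo "pass-through:$keep"
```

Since the kept flags change the size of time_t and off_t, they must end up in Perl's stored ccflags so that all XS module builds see them.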

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Perl problem - loadable library and perl binaries are mismatched

2024-03-05 Thread John Paul Adrian Glaubitz
Hello,

I am getting a strange Perl error after rebuilding Perl for the time64_t
transition on powerpc:

 loadable library and perl binaries are mismatched (got first handshake key 
0xb600080, needed 0xb700080)

See: https://buildd.debian.org/status/fetch.php?pkg=libdevice-usb-perl&arch=powerpc&ver=0.38-3&stamp=1709663348&raw=0

I have already rebuilt Perl once again against the new time64_t libraries,
but that didn't help although the package builds fine locally.

Does anyone knowledgeable with Perl know what's going on?

Thanks,
Adrian

PS: Please CC me, I am not subscribed to debian-devel.

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Requesting help with the t64 transition

2024-03-05 Thread John Paul Adrian Glaubitz
On Tue, 2024-03-05 at 09:56 +0100, John Paul Adrian Glaubitz wrote:
> For m68k, there is mitchy.debian.net and for powerpc, there is 
> perotto.debian.net.
> 
> For sh4, qemu-user can be used.
> 
> Chroots here: https://people.debian.org/~glaubitz/chroots/

I'm collecting packages for bootstrap here: 
https://people.debian.org/~glaubitz/bootstrap/

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Requesting help with the t64 transition

2024-03-05 Thread John Paul Adrian Glaubitz
Hello,

I would like to ask for help with the t64 transition for m68k, powerpc and
sh4 because it's getting too much for me alone and I'm really exhausted.

I have built many packages for powerpc already and some for m68k and sh4,
but I'm not there yet. The powerpc port is the furthest along, but perl
is still uninstallable and I don't really understand why because cudf does
not produce any useful output.

See: https://buildd.debian.org/status/fetch.php?pkg=bfs&arch=powerpc&ver=3.1.2-1&stamp=1709623862&raw=0

I am not subscribed to debian-devel, so please CC.

For m68k, there is mitchy.debian.net and for powerpc, there is 
perotto.debian.net.

For sh4, qemu-user can be used.

Chroots here: https://people.debian.org/~glaubitz/chroots/

Thank you,
Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Resuming snapshot.debian.org for Debian Ports

2024-02-12 Thread John Paul Adrian Glaubitz
(Please CC me in replies as I'm not subscribed to debian-devel)

Hello,

does anyone know whether there are any plans to resume snapshot.debian.org
for Debian Ports?

The service has been unavailable for some months, and the lack of snapshots
makes it more difficult to fix the buildd queue once it has become stuck,
because Mini-DAK as used by Debian Ports does not support cruft packages [1].

Thanks,
Adrian

> [1] https://lists.debian.org/debian-sparc/2017/12/msg00060.html

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Ability to further support 32bit architectures

2024-01-11 Thread Adrian Bunk
On Thu, Jan 11, 2024 at 11:28:19AM +0100, Bastian Blank wrote:
>...
> On Thu, Jan 11, 2024 at 09:48:34AM +, Dimitri John Ledkov wrote:
> > Disabling debug symbols, enabling debug symbol zstd compression, using
> > split debug symbols (disabled BTF usage) should help here.
> 
> Okay, maybe more workarounds exist.  But none of them look really
> promising.
>...

gcc being a memory hog for C++ code is a hard problem,
and debug symbols for C++ code can be a problem since
they might be > 1 GB for some binaries.

But gcc needing more than 4 GB for a small C kernel driver is not
a problem for the "Ability to further support 32bit architectures";
that's a gcc bug that should be reported upstream, just as you wouldn't
suggest dropping amd64 if gcc ICEd on one kernel driver on that
architecture.

> Bastian

cu
Adrian



New sparc64 porterbox available

2023-11-11 Thread John Paul Adrian Glaubitz
Hi!

After a long time since the previous sparc64 porterbox went offline, because
it had to move out of the data center at my old university, I am happy
to announce that a new sparc64 porterbox is now available.

The porterbox is a virtual machine (LDOM) hosted on a SPARC T4-1 with 96 GB
of RAM and more than 500 GB of disk space (I hope we will be able to increase
the available disk space in the near future). Hosting is kindly provided by
Conova Communications GmbH in Salzburg, Austria.

I have already verified that creating a chroot works as expected and I could
test-build a package without any issues, so I am confident it should work for
everyone else.

For questions and problem reports, please drop me an email or join #debian-ports
on the OFTC IRC network.

Thanks,
Adrian

> [1] https://db.debian.org/machines.cgi?host=stadler

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Bug#1050994: xutils-dev: Please add support for loong64

2023-09-01 Thread John Paul Adrian Glaubitz
Source: xutils-dev
Version: 1:7.7+6.1
Severity: normal
User: debian-devel@lists.debian.org
Usertags: loong64
X-Debbugs-Cc: 
debian-devel@lists.debian.org,zhangjial...@loongson.cn,zhangdan...@loongson.cn

Hi!

Multiple X packages currently fail to build from source on loong64 due
to missing architecture support in xutils-dev [1]:

 gcc: warning: LinuxMachineDefines: linker input file unused because linking not done
 gcc: error: LinuxMachineDefines: linker input file not found: No such file or directory

This should be fixed in a similar fashion for loong64 as it has been done for
riscv64 in [2]. I have CC'ed two engineers from Loongson to make them aware
of the bug so they can work on a patch to add loong64 support.

Thanks,
Adrian

> [1] https://buildd.debian.org/status/fetch.php?pkg=xaw3d&arch=loong64&ver=1.5%2BF-1.1&stamp=1693526902&raw=0
> [2] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1026002

--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Bug#1050893: gcc-13: Please disable Ada, D, Go and M2 as well as GDB support on loong64

2023-08-31 Thread John Paul Adrian Glaubitz
Source: gcc-13
Version: 13.2.0-2
Severity: normal
Tags: patch
User: debian-devel@lists.debian.org
Usertags: loong64
X-Debbugs-Cc: debian-devel@lists.debian.org

Hello!

In order to ease the bootstrap of the new loong64 port, please reduce
the build dependencies and number of enabled languages.

- Please disable Ada, D, Go and M2 for loong64 in debian/rules.def.
- Please add "!loong64" for gdb in debian/control.m4

The attached patch implements these changes.

Thanks,
Adrian

--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913
diff -Nru old/gcc-13-13.2.0/debian/control.m4 new/gcc-13-13.2.0/debian/control.m4
--- old/gcc-13-13.2.0/debian/control.m4 2023-07-11 17:40:07.0 +0200
+++ new/gcc-13-13.2.0/debian/control.m4 2023-08-30 20:17:54.515043971 +0200
@@ -81,7 +81,7 @@
   libzstd-dev, zlib1g-dev, SDT_BUILD_DEP USAGE_BUILD_DEP
   BINUTILS_BUILD_DEP,
   gperf (>= 3.0.1), bison (>= 1:2.3), flex, gettext,
-  gdb`'NT [!riscv64 !mipsel !mips64el], OFFLOAD_BUILD_DEP
+  gdb`'NT [!loong64 !riscv64 !mipsel !mips64el], OFFLOAD_BUILD_DEP
   texinfo (>= 4.3), LOCALES, sharutils,
   procps, FORTRAN_BUILD_DEP GNAT_BUILD_DEP GO_BUILD_DEP GDC_BUILD_DEP GM2_BUILD_DEP
   ISL_BUILD_DEP MPC_BUILD_DEP MPFR_BUILD_DEP GMP_BUILD_DEP PHOBOS_BUILD_DEP
diff -Nru old/gcc-13-13.2.0/debian/rules.defs new/gcc-13-13.2.0/debian/rules.defs
--- old/gcc-13-13.2.0/debian/rules.defs 2023-08-04 04:48:33.0 +0200
+++ new/gcc-13-13.2.0/debian/rules.defs 2023-08-30 20:11:21.780573351 +0200
@@ -850,6 +850,7 @@
 ada_no_cpus:= m32r sh3 sh3eb sh4eb
 ada_no_cpus+= arc
 ada_no_cpus+= ia64
+ada_no_cpus+= loong64
 ada_no_systems := 
 ada_no_cross   := no
 ada_no_snap:= no
@@ -1006,7 +1007,7 @@
   with_libcc1 :=
 endif
 
-go_no_cpus := arc avr hppa
+go_no_cpus := arc avr hppa loong64
 go_no_cpus += m68k # See PR 79281 / PR 83314
 go_no_systems := kfreebsd
 ifneq (,$(filter $(distrelease),precise))
@@ -1064,7 +1065,7 @@
 # D ---
 d_no_cross := yes
 d_no_snap :=
-d_no_cpus := alpha arc ia64 m68k sh4 s390 sparc64
+d_no_cpus := alpha arc loong64 ia64 m68k sh4 s390 sparc64
 d_no_systems := gnu kfreebsd-gnu
 
 ifneq ($(single_package),yes)
@@ -1261,7 +1262,7 @@
 with_m2 := yes
   endif
 endif
-m2_no_archs = powerpc ppc64 sh4 kfreebsd-amd64 kfreebsd-i386 hurd-amd64 hurd-i386
+m2_no_archs = loong64 powerpc ppc64 sh4 kfreebsd-amd64 kfreebsd-i386 hurd-amd64 hurd-i386
 ifneq (,$(filter $(DEB_TARGET_ARCH),$(m2_no_archs)))
 with_m2 := disabled for cpu $(DEB_TARGET_ARCH)
 endif


Re: autodep8 test for C/C++ header

2023-08-09 Thread Adrian Bunk
On Wed, Aug 09, 2023 at 02:26:17PM +0800, Paul Wise wrote:
> On Tue, 2023-08-08 at 18:32 +0300, Adrian Bunk wrote:
> 
> > Manual opt-in for our > 11k -dev packages is a significant cost 
> > that would have to be justified by the people who oppose opt-out.
> 
> You could use the Janitor to do automatic opt-in where it works,
> IIRC the Janitor runs autopkgtests before filing merge requests,
> so it could easily try the tests and enable them when they work.
> 
> Or the other way, enable them and have the Janitor submit merge
> requests to turn them off where they don't work.

The cases where they would fail are either false positives or RC bugs.

Janitor merge requests to silence the failures for all actual RC bugs
would not make sense.

The immediate benefit of such a test would be a review of all failing 
cases and filing of RC bugs (which might be > 200) for all that look 
like bugs.

After the MBF it should be clear how many RC bugs actually exist
in practice.

> bye,
> pabs

cu
Adrian



Re: autodep8 test for C/C++ header

2023-08-08 Thread Adrian Bunk
On Tue, Aug 08, 2023 at 09:19:16AM -0300, Antonio Terceiro wrote:
> On Tue, Aug 08, 2023 at 11:35:01AM +0300, Adrian Bunk wrote:
> > On Tue, Aug 08, 2023 at 06:46:38AM -, Sune Vuorela wrote:
> > > On 2023-08-07, Benjamin Drung  wrote:
> > > > while working a whole week on fixing failing C/C++ header compilations
> > > > for armhf time_t [1], I noticed a common pattern: The library -dev
> > > > packages were missing one or more dependencies on another -dev package.
> > > > Over 200 -dev packages are affected.
> > > 
> > > I don't think this is an important problem that some headers might have
> > > special conditions for use. I'd rather have our developers spend time
> > > fixing other issues than satisfying this script.
> > >...
> > 
> > There are many actual bugs it would catch, that are currently only 
> > caught later manually (sometimes through bug reports from users in 
> > stable).
> > 
> > There are special cases that might result in false positives.
> > 
> > Numbers for bugs found and false positives should help determine whether 
> > it should be opt-in or opt-out.
> 
> While providing this for packages to use is a great idea, this will have
> to be opt-in. Imposing this on maintainers has a significant technical
> and social cost, specially in the case of packages where the defaults
> don't work correctly, that I am not willing to pay.
>...

Manual opt-in for our > 11k -dev packages is a significant cost 
that would have to be justified by the people who oppose opt-out.

Are the > 200 affected -dev packages
> 200 RC bugs and a dozen false positives,
or are they > 200 false positives and a dozen RC bugs?

cu
Adrian



Re: autodep8 test for C/C++ header

2023-08-08 Thread Adrian Bunk
On Mon, Aug 07, 2023 at 06:43:36PM +, Benjamin Drung wrote:
>...
> I propose to add an autodep8 test for C/C++ header files that tries to
> compile the header file. This will catch issues like missing
> dependencies and other issues.
>...

An additional check with some overlap would be whether
   pkgconf --cflags .pc
returns 0 for every pkgconfig file in a package.

That would also catch a common class of bugs.
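
A rough sketch of that check (illustrative only; the demo .pc file and the temporary directory layout are made up for the example, not taken from any real package):

```shell
# Sketch: validate each .pc file a package ships by asking pkgconf
# (or pkg-config) for its Cflags; a non-zero exit usually points at a
# missing dependency in Requires:.
PC=$(command -v pkgconf || command -v pkg-config || true)
dir=$(mktemp -d)
cat > "$dir/demo.pc" <<'EOF'
Name: demo
Description: demo module
Version: 1.0
Cflags: -I/usr/include/demo
EOF
result=SKIP
if [ -n "$PC" ]; then
  result=OK
  for pc in "$dir"/*.pc; do
    name=$(basename "$pc" .pc)
    PKG_CONFIG_PATH="$dir" "$PC" --cflags "$name" >/dev/null 2>&1 || result=FAIL
  done
fi
echo "$result"
rm -rf "$dir"
```

In the real autodep8 case the loop would run over the .pc files installed by the -dev package instead of a temporary directory.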

cu
Adrian



Re: autodep8 test for C/C++ header

2023-08-08 Thread Adrian Bunk
On Tue, Aug 08, 2023 at 06:46:38AM -, Sune Vuorela wrote:
> On 2023-08-07, Benjamin Drung  wrote:
> > while working a whole week on fixing failing C/C++ header compilations
> > for armhf time_t [1], I noticed a common pattern: The library -dev
> > packages were missing one or more dependencies on another -dev package.
> > Over 200 -dev packages are affected.
> 
> I don't think this is an important problem that some headers might have
> special conditions for use. I'd rather have our developers spend time
> fixing other issues than satisfying this script.
>...

There are many actual bugs it would catch, that are currently only 
caught later manually (sometimes through bug reports from users in 
stable).

There are special cases that might result in false positives.

Numbers for bugs found and false positives should help determine whether 
it should be opt-in or opt-out.

> /Sune

cu
Adrian



Re: autodep8 test for C/C++ header

2023-08-08 Thread Adrian Bunk
On Mon, Aug 07, 2023 at 09:17:18PM +, Benjamin Drung wrote:
> On Mon, 2023-08-07 at 22:52 +0300, Peter Pentchev wrote:
>...
> > 1) The library has a "main" header file that must be included before
> >any of the others, and it does not come first in lexicographical
> >order. It may define typedefs and structure definitions that
> >the other header files can use, it may define preprocessor symbols
> >that reflect the availability of this or that system header file or
> >type; there are also other ways in which another header file
> >distributed by the -dev package may depend on the main one.
> 
> In this case the non-"main" header could just include the "main" header
> as first step. Alternatively, an option to specify headers that should
> be included first could be added to the check script.
>...

The opposite problem also exists, where it is documented that only 
one/few "main" headers are supposed to be included directly by users
and the other headers are internal headers only to be included by
the "main" headers (or by other internal headers).

glibc and GNOME would be examples for this.
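
The pattern can be demonstrated with a small sketch (the guard macro name here is made up; GLib uses a similar scheme with its own macro). It also shows why a naive per-header compile test would flag such internal headers as false positives:

```shell
# An internal header that refuses direct inclusion, plus the umbrella
# header that is allowed to include it.
dir=$(mktemp -d)
cat > "$dir/demo-internal.h" <<'EOF'
#if !defined(DEMO_INSIDE)
#error "Only <demo.h> can be included directly."
#endif
EOF
cat > "$dir/demo.h" <<'EOF'
#define DEMO_INSIDE
#include "demo-internal.h"
#undef DEMO_INSIDE
EOF
verdict=SKIP
if command -v cc >/dev/null 2>&1; then
  verdict=BAD
  # The umbrella header compiles, the internal header alone does not:
  # exactly the case where a blind per-header check misfires.
  if echo '#include <demo.h>' | cc -I"$dir" -xc -fsyntax-only - 2>/dev/null \
     && ! echo '#include <demo-internal.h>' | cc -I"$dir" -xc -fsyntax-only - 2>/dev/null; then
    verdict=GOOD
  fi
fi
echo "$verdict"
rm -rf "$dir"
```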

cu
Adrian



Re: The future of mipsel port

2023-08-07 Thread Adrian Bunk
On Mon, Aug 07, 2023 at 10:09:40AM +0800, Paul Wise wrote:
> On Sun, 2023-08-06 at 13:54 +0200, Florian Lohoff wrote:
> 
> > I am late to the party but as i mentioned a couple times on debian-mips
> > already i'd like to keep mipsel as a debian-port - and i'd like to
> > revert away from mips32r2 back to mips2/mips3 - That change (with
> > stretch) basically dropped all of the supported platforms formerly
> > supported without a good reason - mips32r2 cpus would have been 
> > able to run mips2 code. The now supported platforms are
> > basically non existent or available for the normal user.
> 
> That sounds like a new port would be needed,
>...

No, that's not required.

We've already had baseline lowering in ports in the past (and could do 
that even for a release architecture) by changing the default in gcc
and then binNMUing all packages.

> bye,
> pabs

cu
Adrian



Re: Potential MBF: packages failing to build twice in a row

2023-08-05 Thread Adrian Bunk
On Sat, Aug 05, 2023 at 07:40:36PM +0200, Andrey Rakhmatullin wrote:
> On Sat, Aug 05, 2023 at 08:10:35PM +0300, Adrian Bunk wrote:
> > Debian maintainers with proper git workflows are already exporting all 
> > their changes from git to debian/patches/ as one file - currently the 
> > preferred form of modification of a Debian package has to be in salsa 
> > and not in our archive when the changes cannot be represented as quilt 
> > patches against tarballs.
> Is the gbp-pq workflow improper?

With "proper git workflow" I meant a workflow where the changes to the 
upstream sources are in topic branches that get rebased to new upstream 
versions and then merged.

"topic branch workflow" might have been better wording.

cu
Adrian



Re: Potential MBF: packages failing to build twice in a row

2023-08-05 Thread Adrian Bunk
On Sat, Aug 05, 2023 at 08:55:03PM +0200, Lucas Nussbaum wrote:
> On 05/08/23 at 19:20 +0300, Adrian Bunk wrote:
> > On Sat, Aug 05, 2023 at 05:06:27PM +0200, Lucas Nussbaum wrote:
> > >...
> > > Packages tested: 29883 (I filtered out those that take a very long time 
> > > to build)
> > > .. building OK all times: 24835 (83%)
> > > .. failing somehow: 5048 (17%)
> > >...
> > > I wonder what we should do, because 5000+ failing packages is a lot...
> > 
> > I doubt these are > 5k packages that need individual fixing.
> > 
> > What packages are failing, and why?
> 
> Did you see http://qa-logs.debian.net/2023/08/twice/ ?

Yes, after sending my email...

>...
> > > Should we give up on requiring a 'clean' target that works? After all,
> > > when 17% of packages are failing, it means that many maintainers don't
> > > depend on it in their workflow.
> > 
> > You are mixing two related but not identical topics.
> > 
> > Your subject talks about "failing to build twice in a row",
> > but the contents mostly talks about dpkg-source.
> > 
> > Based on my workflows I can say that building twice in a row, defined as
> >   dpkg-buildpackage -b --no-sign && dpkg-buildpackage -b --no-sign
> > works for > 99% of all packages in the archive.
> 
> That's true. However, if the 'clean' target doesn't work correctly,
> there are chances that the second build might not happen in the same
> conditions as the first one (for example because it will re-use
> left-overs from the first build).

Your test is not sufficient to ensure that the 'clean' target works 
correctly; non-binary changes under debian/ might result in false negatives.

OTOH it is less of a problem for me if a package that does run autoconf 
during the build does not remove/restore the generated configure in the
'clean' target even though it might fail your test.

> Lucas

cu
Adrian



Re: Potential MBF: packages failing to build twice in a row

2023-08-05 Thread Adrian Bunk
On Sat, Aug 05, 2023 at 05:29:34PM +0100, Simon McVittie wrote:
>...
> One way to streamline dealing with these generated files would be
> to normalize repacking of upstream source releases to exclude them,
> and make it easier to have source packages that genuinely only contain
> what we consider to be source.

What do we actually consider to be source?

Debian maintainers with proper git workflows are already exporting all 
their changes from git to debian/patches/ as one file - currently the 
preferred form of modification of a Debian package has to be in salsa 
and not in our archive when the changes cannot be represented as quilt 
patches against tarballs.

> At the moment, devref §6.8.8.2 strongly
> discourages repacking tarballs to exclude DFSG-but-unnecessary files
> (including generated files, as well as source/build files only needed on
> Windows or macOS or whatever[1]), and Lintian strongly encourages adding
> a +dfsg or +ds suffix to any repacked tarball, which makes it less
> straightforward to track upstream's versioning. Is it time for us to
> reconsider those recommendations?
> 
> For many upstreams (for example Autotools-based projects, and any project
> like GTK that includes pre-generated documentation in source releases),
> we can get "more source-like" upstream source releases by repacking our
> own tarball based on upstream VCS tags than we would get by using their
> official source release artifacts. For other upstreams, Files-Excluded
> can be used to delete generated or unneeded files.
>...

The proper solution would be to stop pretending that we are still living 
in the last millennium and that tarballs are the main form of sources.

Not using git trees as sources for many packages is preventing a lot of 
proper and easy tooling for many things, including here.

> smcv
>...

cu
Adrian



Re: The future of mipsel port

2023-08-05 Thread Adrian Bunk
On Wed, Jul 26, 2023 at 06:24:49PM +0200, Aurelien Jarno wrote:
> Hi,
> 
> On 2023-07-24 23:07, Adrian Bunk wrote:
> > On Sun, Jul 23, 2023 at 08:36:53PM +0100, Mark Hymers wrote:
> > > On Sun, 23, Jul, 2023 at 08:36:15PM +0200, Paul Gevers spoke thus..
> > > > Speaking as a member of the Release Team, but without having consulted 
> > > > with
> > > > the others, I think we're OK with the removal.
> > > > 
> > > > I have not been involved in removal of an architecture before, I think 
> > > > it's
> > > > the Release Team configuration of britney2 that needs to change as the 
> > > > first
> > > > step or at least at the same time as the actual removal from the 
> > > > archive,
> > > > correct?
> > > 
> > > I don't want to get ahead of ourselves until we're sure that there's
> > > consensus, but the procedure would normally be:
> > > 
> > >  1. Release team: reconfigure britney2 to remove mipsel from testing
> > >  2. ftp-team remove architecture from testing and associated queues and
> > >  perform any needed cleanup
> > >  3. ftp-team remove architecture from unstable and experimental and
> > >  associated queues + cleanup
> > 
> > It might be a good idea to have a 3 year gap between 2. and 3.
> > 
> > mipsel/bookworm is (security) supported by Debian until mid-2026.
> > 
> > Currently all MIPS buildds are shared between mips64el and mipsel.
> > 
> > Separate build infrastructures with differently configured buildds 
> > running on different types of hardware between unstable/experimental
> > and oldstable/stable for the same architecture is something that
> > might not be a good idea.
> 
> Sorry but I don't see your point. The hardware currently building
> mips64el will continue building mipsel for (old)stable(-security). This
> is not an issue.

It's about trying to avoid creating differences between unstable
and *stable-security.

We do have some packages where the latest upstream version from unstable
regularly get updated into *stable-security.

In ports we even have an architecture where all builders are qemu 
running with nocheck, any build results from such a setup might 
have problems different from what will fail in *stable-security.

Even when the setup is only subtly different, packages sometimes build
on ports-maintained buildds but FTBFS on DSA-maintained buildds.[1]

If there turns out to be a reason why continuing to build 
mipsel/unstable+experimental on DSA maintained hardware
might no longer be feasible then changing the setup would
be fair enough, but the default option should be to keep
the currently working setup for mipsel until 2026.

> DSA will probably just have to reinstall the hosts running mipsel as
> mips64el so that it can continue to be used for mips64el even when
> bookworm is not supported anymore (or just get rid of it because is
> likely going to be quite old at that time).

That's something that might have to happen in 2026, but it's invariant 
to the discussion where mipsel/unstable+experimental is being built.

> Regards
> Aurelien

cu
Adrian

[1] An example from today would be
https://buildd.debian.org/status/logs.php?pkg=rust-fs-extra&ver=1.3.0-2



Re: Potential MBF: packages failing to build twice in a row

2023-08-05 Thread Adrian Bunk
On Sat, Aug 05, 2023 at 05:06:27PM +0200, Lucas Nussbaum wrote:
>...
> Packages tested: 29883 (I filtered out those that take a very long time to 
> build)
> .. building OK all times: 24835 (83%)
> .. failing somehow: 5048 (17%)
>...
> I wonder what we should do, because 5000+ failing packages is a lot...

I doubt these are > 5k packages that need individual fixing.

What packages are failing, and why?

I would expect some debhelper machinery being responsible for most of 
these, e.g. perhaps some dh-whatever helper might be creating this 
issue for all 1k packages in some language ecosystem.

> Should we give up on requiring a 'clean' target that works? After all,
> when 17% of packages are failing, it means that many maintainers don't
> depend on it in their workflow.

You are mixing two related but not identical topics.

Your subject talks about "failing to build twice in a row",
but the contents mostly talks about dpkg-source.

Based on my workflows I can say that building twice in a row, defined as
  dpkg-buildpackage -b --no-sign && dpkg-buildpackage -b --no-sign
works for > 99% of all packages in the archive.
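
The difference between the two failure modes can be illustrated with a toy example (not dpkg itself): a build step that leaves a generated file behind and a buggy 'clean' that forgets it, which is exactly what makes a second source build see unrepresentable changes even though a second binary build may still succeed.

```shell
# Simulate a package build with a broken 'clean' target.
dir=$(mktemp -d)
cd "$dir"
echo source > input.txt
build() { cp input.txt generated.out; }   # "build" creates a generated file
clean() { rm -f input.txt.bak; }          # bug: generated.out is not removed
build
clean
leftover=no
[ -e generated.out ] && leftover=yes      # dpkg-source would see this as a change
echo "leftover: $leftover"
cd /
rm -rf "$dir"
```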

> Lucas

cu
Adrian



Re: Behavior change for Python packages built with CMake

2023-07-27 Thread Adrian Bunk
On Thu, Jul 27, 2023 at 01:49:33PM +0200, Timo Röhling wrote:
>...
> However, a few Debian
> packages have also relied on the old, broken behavior, which is
> why about 30 packages have been hit by FTBFS bugs from Lucas' latest
> archive rebuild ("dh_install: error: missing files, aborting")
>...

These unproblematic cases that result in a FTBFS are usually not the 
only cases.

The real problem is the unknown number of packages that are affected
but don't FTBFS, where the change will only take effect after the next 
upload or binNMU.

> Cheers
> Timo
>...

cu
Adrian



Re: The future of mipsel port

2023-07-24 Thread Adrian Bunk
On Sun, Jul 23, 2023 at 08:36:53PM +0100, Mark Hymers wrote:
> On Sun, 23, Jul, 2023 at 08:36:15PM +0200, Paul Gevers spoke thus..
> > Speaking as a member of the Release Team, but without having consulted with
> > the others, I think we're OK with the removal.
> > 
> > I have not been involved in removal of an architecture before, I think it's
> > the Release Team configuration of britney2 that needs to change as the first
> > step or at least at the same time as the actual removal from the archive,
> > correct?
> 
> I don't want to get ahead of ourselves until we're sure that there's
> consensus, but the procedure would normally be:
> 
>  1. Release team: reconfigure britney2 to remove mipsel from testing
>  2. ftp-team remove architecture from testing and associated queues and
>  perform any needed cleanup
>  3. ftp-team remove architecture from unstable and experimental and
>  associated queues + cleanup

It might be a good idea to have a 3 year gap between 2. and 3.

mipsel/bookworm is (security) supported by Debian until mid-2026.

Currently all MIPS buildds are shared between mips64el and mipsel.

Separate build infrastructures with differently configured buildds 
running on different types of hardware between unstable/experimental
and oldstable/stable for the same architecture is something that
might not be a good idea.

> Mark

cu
Adrian



Re: Bug#1033888: ITP: usbscale -- read weight data from a USB scale

2023-04-03 Thread John Paul Adrian Glaubitz
Hi Johnny!

On Mon, 2023-04-03 at 16:50 -0300, johnny.nor...@policorp.com.br wrote:
> If you are interested in sponsoring a package that includes a scale with 
> USB, you should reach out to the seller or the manufacturer of the 
> product to discuss potential sponsorship opportunities. They may have 
> specific requirements or guidelines for sponsorships, so it's best to 
> directly communicate with them to ensure a smooth transaction.

I think you are misunderstanding something. This isn't about financial
sponsorship but package sponsoring, i.e. uploading a package as a Debian
Developer on behalf of the person who created the package but doesn't have
upload rights themselves.

See: https://wiki.debian.org/DebianMentorsFaq#Sponsored_Packages

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Bug#1033888: ITP: usbscale -- read weight data from a USB scale

2023-04-03 Thread John Paul Adrian Glaubitz
Hi John!


> Package: wnpp
> Severity: wishlist
> Owner: John Scott 
> Tags: newcomer
> X-Debbugs-Cc: debian-devel@lists.debian.org
> 
> * Package name: usbscale
>   Upstream Contact: Eric Jiang
> * URL : https://github.com/erjiang/usbscale
> * License : GPL 3.0 or any later version
>   Programming Lang: C
>   Description : read weight data from a USB scale
> 
> This package provides a utility one can use to read data from various
> USB scales, ones which are sold as postage scales in particular.

I'm actually about to buy such a scale with USB and I would therefore
be interested in sponsoring this package. Let me know if you're interested.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Reducing allowed Vcs for packaging?

2023-03-05 Thread Adrian Bunk
On Sat, Mar 04, 2023 at 07:43:37PM +, Scott Kitterman wrote:
> On March 4, 2023 5:25:35 PM UTC, Adrian Bunk  wrote:
> >On Wed, Mar 01, 2023 at 05:54:38PM -0700, Sean Whitton wrote:
> >> 
> >> This is a matter of perspective.  The fact that dak doesn't store git
> >> histories and send them out to mirrors is an implementation detail, to
> >> me.  salsa and dgit-repos are both just as significant Debian archives,
> >> even if they're not what we refer to when we write "Debian archive".
> >
> >for the contents of packages in the archive the ftp team requires that 
> >everything is in the preferred form of modification.
> >
> >It is therefore surprising that you as member of the ftp team declare 
> >that there is no requirement at all that the packages themselves that 
> >get uploaded to the archive are in the preferred form of modification
> >as long as the preferred form of modification is in salsa.
>...
> Putting something in a git repository doesn't make a particular file more or 
> less the preferred form of modification.  It's the same form.  IMO you are 
> conflating two separate concepts here.  I don't find  Sean's perspective at 
> all surprising.

In proper git workflows the metadata exists in git only and cannot be
included in what is exported for upload to the Debian archive.

Example:
https://salsa.debian.org/haskell-team/git-annex/-/blob/master/debian/patches/debian-changes

> Scott K

cu
Adrian



Re: Reducing allowed Vcs for packaging?

2023-03-04 Thread Adrian Bunk
On Wed, Mar 01, 2023 at 05:54:38PM -0700, Sean Whitton wrote:
> Hello,

Hi Sean,

> On Sun 26 Feb 2023 at 11:38PM +02, Adrian Bunk wrote:
> 
> > On Sun, Feb 26, 2023 at 09:57:34PM +0100, Diederik de Haas wrote:
> >> On Sunday, 26 February 2023 20:06:26 CET Adrian Bunk wrote:
> >>...
> >> > For anything in Debian, the package sources in Debian would not
> >> > disappear when a repository (or salsa) disappears.
> >>
> >> Question as I don't know: is that only the package change that gets
> >> uploaded to the Debian archive, or is there also a place where the
> >> (git) history of the changes leading up to a new upload gets stored?
> >>
> >> To use an analogy: I'd like that not only the 'destination' is
> >> preserved, but also the lead up to the destination.
> >
> > What goes into the Debian archive is based on tarballs and quilt patches.
> > Nothing is stored there except the files you upload.
> 
> This is a matter of perspective.  The fact that dak doesn't store git
> histories and send them out to mirrors is an implementation detail, to
> me.  salsa and dgit-repos are both just as significant Debian archives,
> even if they're not what we refer to when we write "Debian archive".

for the contents of packages in the archive the ftp team requires that 
everything is in the preferred form of modification.

It is therefore surprising that you as member of the ftp team declare 
that there is no requirement at all that the packages themselves that 
get uploaded to the archive are in the preferred form of modification
as long as the preferred form of modification is in salsa.

Maintainers are now permitted to clone the upstream git tree, take one 
commit as upstream, work on top of that, and then upload this without 
the kludge of pristine-tar or restrictions due to quilt.

Formats 1.0 or 3.0 (native) will be the natural formats generated for
the Debian archive.

Format 3.0 (quilt) will be another option, where a generated tarball is 
uploaded as upstream source (as is already required by the ftp team for 
repacks) plus one debian/patches/debian.patch containing all changes to 
the upstream sources.

Generating one quilt patch per commit that changes the upstream sources
would not always be technically possible due to limitations of quilt.
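The single-patch export could be sketched with plain git; the following is a
hedged illustration only (the repository layout, tag name, and file contents
are hypothetical, and real tooling such as git-buildpackage handles far more
than this):

```shell
set -e
# Build a toy git tree: one commit marks the imported upstream sources,
# a second commit carries a Debian change on top of it.
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name You
echo 'int main(void){return 0;}' > main.c
git add main.c
git commit -qm 'import upstream'
git tag upstream
echo '/* debian fix */' >> main.c
git commit -qam 'debian: add fix'

# Export every change to the upstream sources (excluding debian/)
# as one quilt patch, as described above for format 3.0 (quilt).
mkdir -p debian/patches
git diff upstream..HEAD -- . ':!debian' > debian/patches/debian.patch
echo debian.patch > debian/patches/series
cat debian/patches/debian.patch
```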

> Sean Whitton

cu
Adrian



Re: DEB_BUILD_OPTIONS=nowerror

2023-02-27 Thread Adrian Bunk
On Sun, Feb 26, 2023 at 08:25:25PM +0100, Helmut Grohne wrote:
> On Sun, Feb 26, 2023 at 07:15:45PM +0200, Adrian Bunk wrote:
> > What you describe is an RC bug as soon as the more recent toolchain
> > becomes default, and the correct solution is to not build with -Werror. 
> >
> > DEB_BUILD_OPTIONS=nowerror would imply that building with -Werror
> > by default would be OK, but there are already too many FTBFS due
> > to -Werror.
> 
> I would happily agree with all of this, but I do not see consensus on
> either.

My point is that an opt-out DEB_BUILD_OPTIONS=nowerror would make the
"FTBFS on buildds" problem worse, since it would result in more people
building their packages with -Werror by default.

>...
> The problem here specifically arises, because we do not have consensus
> on -Werror being a bad idea in release builds.
>...

Strictly disallowing all usage of -Werror (which might be set by the 
maintainer, but more often by upstream) would be controversial.

It would also be hard to define what exactly would be forbidden.
Individual warnings can be turned into errors, and our hardening
flags set -Werror=format-security.

There might be more consensus for language in Policy discouraging
-Werror that leaves maintainers room to diverge from the recommendation?

> Helmut

cu
Adrian



Re: Reducing allowed Vcs for packaging?

2023-02-26 Thread Adrian Bunk
On Sun, Feb 26, 2023 at 11:42:25PM +0100, Diederik de Haas wrote:
>...
> The reason that I'm such a proponent of that is that 3 weeks or 3 months from 
> now, there's a reasonable chance that you (the author/committer) does no 
> longer remember the details of that commit.
> In 3+ years that will be close to 0.
> AFAIK actual mind reading does not exist, so someone else surely wouldn't 
> know.
> 
> I've already encountered several cases where the commit was 10+ years old and 
> the commit msg was "Disable setting X" and looking at the diff, I can see
> the X was indeed disabled. But nothing more.
> But now I want to enable setting X. But I have no context to know why that 
> would be a bad idea, or actually a good idea *now*, or what will break as a 
> consequence of my enabling X. All I can do is enable X and keep an eye
> out for bug reports.
> 
> I think that's what you want to achieve with 'better' changelogs is something 
> similar. I think the git commits are a better place as it's easier to make 
> finer grained distinctions and it's more directly linked to the changes.
>...

Where applicable:
You can add comments in debian/rules.
You can write long descriptions in debian/patches/*.patch
That's even more directly linked to the changes.
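For patch descriptions, the DEP-3 patch header format gives a structured
place for that rationale; a hypothetical example (all field values are
illustrative):

```
Description: Disable setting X
 Setting X makes the test suite attempt network access, which Debian
 Policy forbids during package builds. Enabling it again would require
 mocking the remote service first.
Author: Jane Doe <jane@example.org>
Bug-Debian: https://bugs.debian.org/123456
Forwarded: not-needed
Last-Update: 2023-02-26
```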

You can also write 5 line changelog entries.

Whether and where people write verbosely are likely far less related than you hope:

People tend to be either terse or verbose,
not terse in the package but verbose in git commits.

And when trying to improve verbosity, this shouldn't be only
in the git metadata outside the package.

> Cheers,
>   Diederik

cu
Adrian



Re: Reducing allowed Vcs for packaging?

2023-02-26 Thread Adrian Bunk
On Sun, Feb 26, 2023 at 09:57:34PM +0100, Diederik de Haas wrote:
> On Sunday, 26 February 2023 20:06:26 CET Adrian Bunk wrote:
>...
> > For anything in Debian, the package sources in Debian would not
> > disappear when a repository (or salsa) disappears.
> 
> Question as I don't know: is that only the package change that gets uploaded 
> to the Debian archive, or is there also a place where the (git) history
> of the changes leading up to a new upload gets stored?
> 
> To use an analogy: I'd like that not only the 'destination' is preserved, but 
> also the lead up to the destination. 

What goes into the Debian archive is based on tarballs and quilt patches.
Nothing is stored there except the files you upload.

But what do you expect to get from the git history?

There is no requirement that any history in addition to what is in
the Debian archive exists at all, or any guarantee that what is in
some git tree somewhere is actually the same as what is in the
Debian archive. And git history might just be one commit per upload.

I would rather see people write changelog entries that will still be
useful in 25 years, instead of entries like
  * New upstream version (Closes: #1234567, #1234568, #1234569, ...
or
  * Add R³
than hope that git metadata would contain anything more useful
than that for such packages.

cu
Adrian



Re: Bug#1031548: FTBFS with ruby-jekyll-github-metadata 2.15.0

2023-02-26 Thread Adrian Bunk
On Sun, Feb 26, 2023 at 07:09:59PM +0100, Daniel Leidert wrote:
> Am Sonntag, dem 26.02.2023 um 19:45 +0200 schrieb Adrian Bunk:
> > 
> [..]
> > Debian Policy §4.9 says that *attempting* to access the internet
> > is forbidden:
> > 
> >   For packages in the main archive, required targets must not attempt
> >   network access, except, via the loopback interface, to services on the 
> >   build host that have been started by the build.
> 
> And the "test" target is not listed as a required target there.
>...

It is called from a required target.

Do you have a list of the packages maintained by the Ruby team that
are RC-buggy due to this?

> Daniel

cu
Adrian



Re: Reducing allowed Vcs for packaging?

2023-02-26 Thread Adrian Bunk
On Sun, Feb 26, 2023 at 07:25:57PM +0100, Diederik de Haas wrote:
>...
> Apart from me not liking proprietary systems in general and M$ GitHub in 
> particular, you also run the risk of things disappearing entirely without any 
> notice and without any recourse.

Perhaps tomorrow some company like Oracle decides to buy GitLab Inc.,
and then Oracle GitLab stops the current Freemium business model
effective immediately.

Would anyone be able to provide security support for the stale free 
version, or would salsa be shut down with the next CVE?

> C.I.P. https://github.com/community/community/discussions/48173
> where VundleVim (vim plugin manager) disappeared 'out of the blue'.
> (that vundlevim isn't packaged for Debian is irrelevant)

For anything in Debian, the package sources in Debian would not 
disappear when a repository (or salsa) disappears.

cu
Adrian



Re: Bug#1031548: FTBFS with ruby-jekyll-github-metadata 2.15.0

2023-02-26 Thread Adrian Bunk
On Sun, Feb 26, 2023 at 04:32:59PM +0100, Daniel Leidert wrote:
> Am Sonntag, dem 26.02.2023 um 16:57 +0200 schrieb Adrian Bunk:
> > On Sun, Feb 26, 2023 at 03:47:49PM +0100, Daniel Leidert wrote:
> > > Am Samstag, dem 25.02.2023 um 16:15 +0200 schrieb Adrian Bunk:
> > > 
> > > [..]
> > > > FYI:
> > > > 
> > > > The package in bookworm builds with jekyll-github-metadata
> > > > 2.15.0:
> > > > https://tests.reproducible-builds.org/debian/rb-pkg/bookworm/amd64/ruby-jekyll-remote-theme.html
> > > > (the buildinfo link has the complete package list)
> > > 
> > > That is due to this environments not running the failing test. The
> > > test-file checks if there is an internet connection and adds or
> > > removes
> > > tests depending on the outcome). The test in question is one that
> > > requires an internet connection.
> > > ...
> > 
> > Accessing the internet during the build is an RC bug.
> 
> It would be pretty stupid to generally disable tests for a *remote
> theme* plugin (or any other package) that by definition relies on the
> internet. This would disable the majority of tests here. We (as in "the
> Ruby team") instead handle the tests if there is no internet, and
> whenever possible, run them via autopkgtest (needs-internet
> restriction) at least.
> 
> IMHO this is a valid approach and in this case spotted a regression. To
> my understanding, builds must not fail due to access attempts and the
> build itself must not rely on downloaded resources. However, this is
> the test stage, not the build stage. But if you feel that strongly
> about that, please show me the exact ruling.
>...

Debian Policy §4.9 says that *attempting* to access the internet
is forbidden:

  For packages in the main archive, required targets must not attempt 
  network access, except, via the loopback interface, to services on the 
  build host that have been started by the build.

Your additional approach via autopkgtest with the needs-internet 
restriction is a good way to test such packages.

I am adding debian-devel to Cc, where other people have more knowledge 
on that topic than I have.

> Daniel

cu
Adrian



Re: DEB_BUILD_OPTIONS=nowerror

2023-02-26 Thread Adrian Bunk
On Fri, Feb 24, 2023 at 07:19:41AM +0100, Helmut Grohne wrote:
>...
>  * A package adds -Werror to the build. When a new toolchain version is
>uploaded, it triggers a new warning and that makes the package FTBFS.
>...
> When building affected packages with more recent toolchains, such build
> failures are annoying. In a bootstrap setting, they hide later problems.
> For that reason, I would like to have a standard way to opt out of such
> failures. I understand that some of the warnings may be pointing at real
> bugs and that ignoring them certainly is a compromise. I also understand
> that packages may fail to build for other reasons with new toolchains
> (e.g. missing #includes). However, -Werror has proven to be quite
> repetitive and seems worthwhile to address to me.
> 
> As such, I propose a generic DEB_BUILD_OPTIONS=nowerror modelled after
> the original observation,
>...
> So let me know if you think this is a bad idea.

What you describe is an RC bug as soon as the more recent toolchain
becomes default, and the correct solution is to not build with -Werror. 

DEB_BUILD_OPTIONS=nowerror would imply that building with -Werror
by default would be OK, but there are already too many FTBFS due
to -Werror.

DEB_BUILD_OPTIONS=werror as standard name for an opt-in option for CI 
builds would be a better solution.

>...
> Examples:
> * glibc adds -Werror
>...

glibc does not use the default gcc, which avoids most of the problems 
you are worried about (but is not a general solution).

> Helmut

cu
Adrian



Re: Yearless copyrights: what do people think?

2023-02-26 Thread Adrian Bunk
On Wed, Feb 22, 2023 at 07:39:09AM -0700, Sam Hartman wrote:
> 
> As Jonas mentions, including the years allows people to know when works
> enter the public domain and the license becomes more liberal.
> I think our users are better served by knowing when the Debian packaging
> would enter the public domain.

If this is the intention, then including the years is pointless.

Article 7 of the Berne Convention says:
(1) The term of protection granted by this Convention shall be the life 
of the author and fifty years after his death.
...
(6) The countries of the Union may grant a term of protection in excess 
of those provided by the preceding paragraphs.
...

> --Sam

cu
Adrian



Re: Reducing allowed Vcs for packaging?

2023-02-26 Thread Adrian Bunk
On Sun, Feb 26, 2023 at 02:24:26PM +0100, Bastian Germann wrote:
> Hi!
> 
> During the last weeks I had a look at the Vcs situation in Debian. Currently,
> there are eight possible systems allowed and one might specify several of 
> them for
> one package. No package makes use of several Vcs references and frankly I do 
> not
> see why this was supported in the first place.

Policy §5.6.26 says it is not permitted.

> For the allowed systems the situation in unstable is the following:
> arch is used by 2 packages pointing to bad URLs: #1025510, 1025511.
> bzr is used by ~50 packages, half of which point to bad URLs.
> cvs is used by 3 packages, 2 of which point to bad URLs: #1031312, #1031313.
> svn is used by ~130 packages, many of which point to bad URLs.
> darcs, mtn, and hg are not used.
> 
> We can see: The Vcs wars are over; with git there is a clear winner and in my
> opinion, we should remove the possibility to use most of them for package
> maintenance. It is one additional barrier to get into package maintenance and
> we should remove the barriers that are not necessary.

One barrier is that our work is based around tarballs and quilt,
instead of using upstream git trees and commits.

> I would like to suggest removing the possibility to specify several systems 
> and
> removing all systems except bzr, svn, and git, while deprecating bzr and 
> possibly svn.
> This means solving the four listed bugs and convincing the cvsd maintainer to
> switch or drop the Vcs-Cvs reference. Then, the Debian Developer's Reference
> should specify the changes in §6.2.5 and whatever parses Vcs-* in 
> debian/control
> should be adapted to do the specified thing.

Policy §5.6.26 would be the primary definition you want to change.

Not using any Vcs for maintaining packages in Debian stays permitted,
and I do not see what we would gain if the cvsd maintainer drops the
Vcs-Cvs reference while continuing to maintain the package in cvs.

In practice e.g. tracker.d.o seems to support Vcs-Bzr but not Vcs-Cvs,
and there is no requirement for tools to drop working support for
something that is no longer specified.

Vcs-Browser is Vcs agnostic and would stay permitted for any kind of Vcs,
including ones never listed in Policy.

> Thanks for any comments,
> Bastian

cu
Adrian



Re: OpenMPI 5.0 to be 32-bit only ?

2023-02-15 Thread Adrian Bunk
On Tue, Feb 14, 2023 at 04:16:33PM -0800, Steve Langasek wrote:
>...
> working out which of those reverse-dependencies are
> *not* scientific applications that should drop the build-dependency rather
> than being removed, and so forth.
> 
> So it's a tradeoff between the maintenance work of keeping mpi working on
> 32-bit, and the one-time work of removing it.
>...

Unfortunately your "one-time work" is not true.


Architecture-specific differences are not unlikely to cause FTBFS later,
e.g. when a dh compat bump changes --list-missing to --fail-missing.

Symbols files for shared libraries can be a real pain when
architecture-specific differences result in different symbols;
dropping the symbols files for such libraries might be the best option.


New dependencies on the packages that are removed/unavailable on 
some architectures will appear all the time, an example:

Some architectures in ports do not have the complete Haskell ecosystem.

pandoc is written in Haskell.

src:flac builds both a widely used libflac and a rarely used flac
command-line program.

Upstream of src:flac recently switched from docbook-to-man to using 
pandoc for generating the manpage for the command-line program.
This made the new libflac unavailable on several ports architectures.

Someone will have to make the build dependency on pandoc and the 
contents of debian/flac.install architecture-dependent, or create
a separate binary-all flac-common package for the manpage.
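An architecture-qualified build dependency is one hedged sketch of the
first option (the architecture list and surrounding package names are
illustrative, not a tested change to src:flac):

```
# debian/control: only pull in pandoc on architectures where the
# Haskell ecosystem is available; elsewhere the manpage is not built.
Build-Depends: debhelper-compat (= 13),
               libogg-dev,
               pandoc [!m68k !sh4]
```

debian/flac.install would then need a matching architecture-conditional
entry as well, which is the ongoing maintenance cost described above.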


cu
Adrian



Re: Consensus on closing old bugs

2023-02-13 Thread Adrian Bunk
On Mon, Feb 13, 2023 at 08:05:50AM -0800, Russ Allbery wrote:
>...
> So in short I agree with Holger: it really depends.  It's rude to ask
> someone to do a bunch of work when you have no intention of following
> through on that work, which happens a lot when new volunteers do bug
> triage without the skills required to follow up on the responses to that
> triage.  But also if you're never going to work on a bug and you don't
> think it serves any documentation purpose, it's okay to close it as
> wontfix and arguably better communication to the bug reporter than leaving
> it open and creating the illusion that you might fix it some day.

A maintainer closing a bug based on its contents is quite different from
"close bugs simply because they are old".

There is a certain stupidity if a (human or nonhuman) bot blindly asks 
the submitter whether the "typo in the manpage" bug is still reproducible,
or closes it simply because it is old.

How a maintainer deals with "systemd: Please port to Hurd" kind of bugs
is a different topic.

>...
> The more prominent the
> package and the larger the unsophisticated user base, the more aggressive
> you have to be about closing bugs if you want the open bug list to be a
> useful artifact for guiding development.
>...

I would say the typical Debian approach in such situations tends to be 
to optionally look at older bugs once when you adopt a package or join 
the packaging team, and afterwards only react to bug email.

> Or one can just use an autoclose bot, I guess, which is basically the
> equivalent of one of those email autoreplies that says "this mailbox is
> unattended and no one will ever read your email."  :)  And, to be honest,
> if that's the reality of the situation, maybe better to know than not!

I am sometimes getting an email from the BTS that this "typo in the manpage"
bug I reported 20 years ago has just been fixed in the "New maintainer"
upload of a package.

When working on orphaned packages or doing NMUs, it is also often useful 
for me to see the amount/age/contents of bugs in a package as an 
indication in what state it is.

cu
Adrian



Re: OpenMPI 5.0 to be 32-bit only ?

2023-02-13 Thread Adrian Bunk
On Mon, Feb 13, 2023 at 10:59:18AM +, Alastair McKinstry wrote:
>...
> > The case we should make is that "no one cares about 32-bit builds" from
> > the starting post in the GitHub issue is not true for Debian.
> > We do care that it *builds*, even if it might not be actually used.
> I've been making this point, mostly in the context of avoiding a future
> where no MPI is available on 32-bit
> (and by implication, essentially forking Debian into a toy 32-bit world and
> a properly-supported 64-bit one).

I don't see what important functionality would be missing on 32-bit
today without MPI; it is just more work and more breakage to configure
packages differently on different platforms in order to keep providing
the functionality that is still important.

>...
> The point of going  64-bit only is to clean up data structures and remove
> technical debt: Hence 5.x will start a cleanup and removal of 32-bit code.
> 
> The next point release may work on 32-bit by just bypassing the compilation
> flag; ongoing support starts meaning more invasive patches need to be
> carried by us.

This sounds as if the lesser evil for us will be to configure packages 
differently when one or all MPI implementations are going away on 32-bit.

For example:
ffmpeg -> codec2 -> octave -> sundials -> sundials does not build with MPICH
One of these four arrows must be broken.
That's work and not fun work, but likely the lesser evil.

cu
Adrian



Re: Consensus on closing old bugs

2023-02-13 Thread Adrian Bunk
On Mon, Feb 13, 2023 at 10:33:51AM +, Holger Levsen wrote:
> On Sat, Feb 11, 2023 at 10:45:16PM +0200, Adrian Bunk wrote:
> > On Mon, Feb 06, 2023 at 10:07:59AM -0700, Sam Hartman wrote:
> > > Most of us do not prefer to close bugs simply because they are old.
> > It creates angry users and no real benefits.
>  
> this is undoubtingly true for some bugs and users.
> 
> for other bugs (and users) there will be no reply ever and unactionable bugs
> clutter the view and harm bug fixing.
> 
> so I don't think there is a general rule and I also don't think asking
> "does this bug still apply?" is harmful,
>...

An egoistic "bugs clutter the view" developer view that ignores that 
there are humans at the other end of the bugs is harmful.

I remember being pretty pissed when in a different open source project 
some abuser asked me every 6-12 months whether I can still reproduce
the problem with the latest upstream version, each time I spent several
hours for confirming it, but this abuser never bothered to follow up on
that after I did the work that was requested from me.

Regarding your "harm bug fixing": I do not have the impression that 
there is much intersection between the people who are eager to close
as many bugs as possible without even looking at them, and the people
who are actually making Debian better by fixing bugs.

If a developer has a problem with bugs cluttering the view, it is of 
course fine to use a different (UDD) view if this is more productive.
But this does not require touching bugs, which are interactions with
our users and should be handled accordingly.

> cheers,
>   Holger

cu
Adrian



Re: OpenMPI 5.0 to be 32-bit only ?

2023-02-11 Thread Adrian Bunk
On Thu, Feb 09, 2023 at 09:53:37AM +, Alastair McKinstry wrote:
> Hi,
> 
> The push-back from upstream is that they're unconvinced anyone is actually
> using i386 for MPI.
> 
> For example, MPI is configured to use PMIx but its thought that doesn't work
> on 32-bit, but no bugs have been reported.
> 
> Either we increase our 32-bit testing regime, or realistically consider it
> marginal and dying.

I don't think lack of testing is the problem, we should have pretty good 
coverage due to buildtime and autopkgtest tests.

There are bugs like e.g. #1003020 or #1026912 that might be due to
OpenMPI failing on 32-bit with 160 cores.

Whether spending time trying to properly fix these would be worth it, 
that's a different question.

> Currently I'm favouring accepting a move to 64-bit OpenMPI as a fait
> accompli as part of code cleanups for 5.X (post Bookworm), and Debian moving
> to MPICH on at least 32-bit archs - I'd favour OpenMPI on 64-bit archs for
> better incoming-code-and-compatability support.
> 
> I'd like to hear the case otherwise.

The case we should make is that "no one cares about 32-bit builds" from 
the starting post in the GitHub issue is not true for Debian.
We do care that it *builds*, even if it might not be actually used.

[1] was about the benefits of switching the two architectures that were 
using MPICH to OpenMPI two years ago. The mentioned "makes packages like 
octave build" is due to sundials build depending on mpi-default-dev but 
requiring ompi-c.pc [2].

m68k and sh4 are building with nocheck, whether or not there might be
additional/different test failures in packages with MPICH is unknown.

Having different MPI implementations on different architectures again 
would be painful for us, especially if it would be on release architectures.

If it would be architecturally hard for upstream to continue supporting 
32-bit then that's how it is, otherwise the current status quo of 32-bit 
OpenMPI is good enough for us and a possible compromise might be if 
upstream says "32-bit patches are welcome" and requires an
  --i-know-that-32-bit-support-is-unsupported-and-might-be-broken 
configure flag when building for 32-bit archs.

> Best regards
> Alastair

cu
Adrian

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=853029#18
[2] 
https://buildd.debian.org/status/fetch.php?pkg=sundials&arch=m68k&ver=5.7.0%2Bdfsg-1~exp1&stamp=1626976038&raw=0



Re: Consensus on closing old bugs

2023-02-11 Thread Adrian Bunk
On Mon, Feb 06, 2023 at 10:07:59AM -0700, Sam Hartman wrote:
>...
> Most of us do not prefer to close bugs simply because they are old.

It creates angry users and no real benefits.

> But closing bugs with a moreinfo tag when information has not been
> provided in six months to a year is likely fine.

Only if the bug closer is aware that the BTS does not Cc the submitter
by default, and checks first whether the question was ever sent to the
submitter (quite often it was not).

> So is asking for more info and adding a moreinfo tag when appropriate.
> 
> It's even appropriate to ask if the bug still happens.
>...

I would consider it abusive behaviour if the person asking the user does 
not have the intention of trying to fix the bug if the answer is "yes".

Our user might be spending hours or days trying to give a good reply,[1]
expecting a serious attempt at fixing the bug in exchange for the effort.

It is bad enough that we are often not good at trying to resolve bugs 
where users have sometimes spent considerable effort at writing a good
bug report, but asking users to do pointless work would be horrible.

cu
Adrian

[1] especially if asked "Does the problem still happen in unstable?",
only a small minority of our users are using unstable



Re: Please, minimize your build chroots

2023-01-29 Thread Adrian Bunk
On Sun, Jan 29, 2023 at 05:00:56AM +0100, Guillem Jover wrote:
> On Sat, 2023-01-28 at 21:35:01 +0200, Adrian Bunk wrote:
> > I don't think such arguments are bringing us forward,
> > we should rather resolve the problem that these differ.
> > 
> > All/Most(?) packages where they do differ are packages that were until 
> > recently part of the essential set, and since debootstrap still installs 
> > them they are in practice still part of the build essential set.
> 
> Sure.
> 
> > Removing tzdata from the essential set made sense, and I do see your 
> > point that tzdata does not fit the "totally broken" definition of
> > "required".
> 
> > What we need are technical discussions like whether packages like tzdata 
> > should also be removed from the build essential set, or whether tzdata 
> > being a part of the build essential set should be expressed by making 
> > the build-essential package depend on tzdata.
> 
> I guess my question is, what makes tzdata build essential, besides
> that it's currently kind-of implicitly there? To me it does not look
> like it has the properties to be considered as such, besides that if
> we lower its priority (as it deserves) it makes a bunch of packages
> FTBFS.

It has historically been part of the build essential set.

It is used in the build of many packages.

So many packages using it would have invisible undeclared build
dependencies, with tzdata pulled in for them by other packages, that
random changes in the dependency tree might cause many packages to
FTBFS at any time if it does not stay build essential.

It is required to provide glibc functionality that is frequently
used during the build.

> So, for one, this is getting in the way of making our minimal
> (non build) systems smaller.

No, it is not.

There are 3 different topics:

1. making minimal (non build) systems smaller

Being able to exclude tzdata from a minimal system was achieved when it 
was removed from the essential set in stretch.
debootstrap not installing it by default would make that easier.
Whether build-essential depends on tzdata does not make any difference.

2. normal systems

In a normal (non-minimal) installation not having tzdata installed 
would be a bug harming many users, no matter what priority it will
have.

3. build essential

That's separate from 1. and 2.

> > I have so far not seen any technical arguments why removing tzdata from 
> > the build essential set would be better for Debian than keeping it there.
> > Removing tzdata reduces the size of a chroot that has the build 
> > dependencies for the hello package installed by ~ 0.5%, this size
> > decrease does not strike me as a sufficient reason for reducing the
> > build essential set.
> 
> I don't see how this can be a pure technical decision, TBH. To me this
> looks more like either a matter of trade-offs,

It is a tradeoff between less work and saving ~ 0.5% space in build
chroots.

> or IMO more importantly
> of clean definition of a specification (which seems rather more
> important than the other concerns). The work to spot these problems has
> already been done, and the changes required are trivial and minimal
> (disregarding any severity consideration here).

The work has been done for packages that do FTBFS today.

I would guess ~ 90% of the packages that had tzdata installed during
Santiago's builds did not have or need a direct build dependency
because something else pulled it in.
It is unknown how many of these would have a latent FTBFS bug due to an
undeclared direct build dependency.

Do we have any packages that build successfully but are broken without
tzdata installed during the build?

>...
> I appreciate the minimalism and simplicity of the definition. I've
> also heard from time to time complains that we even require a C/C++
> compiler as build-essential, when for many packages that's not even
> needed (although a C compiler is currently needed by dpkg-dev to
> determine the host arch).

I would also complain that dpkg-dev pulls the full perl into the build 
essential set.

The build essential set is so huge that I wouldn't even be surprised if 
at some point in the future this discussion becomes moot because some 
package in the build essential set gains a dependency on tzdata.

Perhaps some localtime related functionality could justify a dependency 
of perl or perl-modules-5.36 on tzdata, which would keep it in the build 
essential set due to dpkg-dev being build essential?

> Policy also has this to say:
> 
>   ,---
>   If build-time dependencies are specified, it must be possible to build
>   the package and produce working binaries on a system with only
>   essential and build-essential packages installed and also those
>   required to satisfy the build-ti

Re: Bug#1029911: rust-ureq FTBFS: error: unable to load build system class 'cargo': Can't locate String/ShellQuote.pm

2023-01-28 Thread Adrian Bunk
On Sun, Jan 29, 2023 at 01:33:56AM +0100, Jonas Smedegaard wrote:
> Hi,
> 
> Can someone help me understand what is going wrong in Bug#1029911?
> 
> The source package build-depends on libstring-shellquote-perl but it
> seems like that build-dependency is not getting installed on the buildd.
>...

This seems to be a regression in rust-rustls 0.20.8-1:

Package: librust-rustls-dev
Provides: ..., libstring-shellquote-perl

>  - Jonas

cu
Adrian



Re: Please, minimize your build chroots

2023-01-28 Thread Adrian Bunk
On Sat, Jan 28, 2023 at 10:23:19PM +0100, Santiago Vila wrote:
> El 28/1/23 a las 22:18, Adrian Bunk escribió:
> > On Sat, Jan 28, 2023 at 09:45:14PM +0100, Santiago Vila wrote:
> > > ...
> > > The other one: There are a bunch of packages whose unit tests rely on 
> > > tzdata. The tzdata
> > > package changes often during the lifetime of stable, and as a result, 
> > > some package might
> > > stop building from source. If we wanted to know in advance which packages 
> > > might break after
> > > a tzdata update, we could use the available information in the 
> > > build-depends fields.
> > > ...
> > 
> > No, that won't work.
> > 
> > In your builds, how many percent of the packages that did have tzdata
> > installed during the build did not have a direct build dependency?
> > 
> > Looking at the dependency trees, I'd assume the vast majority of
> > packages where tzdata was installed during the build do not have
> > a direct build dependency.
> 
> I think I see your point, but my idea was not to collect packages
> with tzdata in build-depends only, but those whose build-depends
> make tzdata to be installed (i.e. including transitive dependencies).
> 
> I don't know if there is already a tool for that, nor how much difficult
> it would be to have such a tool.

It shouldn't be hard to get this information from buildinfo files.
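As a rough illustration, such a scan could read the Installed-Build-Depends
field of .buildinfo files. A minimal sketch (the embedded sample is
abbreviated and not a real buildinfo file; an actual scan would iterate
over real .buildinfo files on disk):

```python
# Sketch: check whether tzdata was installed during a build, based on the
# Installed-Build-Depends field of a .buildinfo file.
import re

SAMPLE_BUILDINFO = """\
Format: 1.0
Source: hello
Installed-Build-Depends:
 base-files (= 12.4),
 gcc-12 (= 12.2.0-14),
 tzdata (= 2023c-5)
"""

def installed_build_depends(buildinfo_text):
    """Return the set of package names listed in Installed-Build-Depends."""
    match = re.search(
        r"^Installed-Build-Depends:\n((?: .+\n?)+)",
        buildinfo_text, re.MULTILINE)
    if not match:
        return set()
    packages = set()
    for line in match.group(1).splitlines():
        # Each continuation line looks like " pkgname (= version),"
        name = line.strip().split(" ")[0].rstrip(",")
        packages.add(name)
    return packages

print("tzdata" in installed_build_depends(SAMPLE_BUILDINFO))  # True
```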

But tzdata is not unique here.

linux-libc-dev is also part of the build essential set and it is being 
updated in every point release, and this has caused FTBFS in the past.

The proper solution for the problem you describe would be a test rebuild
of the full archive at the beginning of the week between the upload freeze
for a point release and the actual release.

> Thanks.

cu
Adrian



Re: Please, minimize your build chroots

2023-01-28 Thread Adrian Bunk
On Sat, Jan 28, 2023 at 09:45:14PM +0100, Santiago Vila wrote:
>...
> The other one: There are a bunch of packages whose unit tests rely on tzdata. 
> The tzdata
> package changes often during the lifetime of stable, and as a result, some 
> package might
> stop building from source. If we wanted to know in advance which packages 
> might break after
> a tzdata update, we could use the available information in the build-depends 
> fields.
>...

No, that won't work.

In your builds, how many percent of the packages that did have tzdata 
installed during the build did not have a direct build dependency?

Looking at the dependency trees, I'd assume the vast majority of 
packages where tzdata was installed during the build do not have
a direct build dependency.

This could easily become the nightmare situation where one changed 
dependency somewhere makes 100 packages FTBFS that do need tzdata
during the build, but previously got it installed through other
build dependencies.

> As you requested, I think the above two are technical reasons, not
> merely "because policy says so".

Thanks, I do appreciate that.

> Thanks.

cu
Adrian



Re: Please, minimize your build chroots

2023-01-28 Thread Adrian Bunk
On Sat, Jan 28, 2023 at 07:34:48PM +0100, Guillem Jover wrote:
> On Sat, 2023-01-28 at 16:32:17 +0100, Adam Borowski wrote:
> > On Sat, Jan 28, 2023 at 01:59:40PM +0100, Santiago Vila wrote:
> > > Unsupported by whom? What is supported or unsupported is explained in 
> > > policy.
> > > Policy says it must work. Therefore it should be supported (by fixing the 
> > > bugs).
> > 
> > Policy §2.5:
> > # "required"
> > #Packages which are necessary for the proper functioning of the
> > #system (usually, this means that dpkg functionality depends on
> > #these packages). Removing a "required" package may cause your
> > #system to become totally broken and you may not even be able to use
> > #"dpkg" to put things back, so only do so if you know what you are
> > #doing.
> 
> As stated several times now this passage seems wrong, or inaccurate at
> best. See #950440. And I don't see how tzdata would ever fall into
> this definition even if that paragraph was correct. As mentioned
> before, the tzdata package does not seem like a "required" package at
> all, and this should be fixed by lowering its priority. Whether
> debootstrap can be fixed to not use the Priority workaround, seem
> orthogonal to the issue at hand.
> 
> > > That's a straw man. I'm not proposing anything of the sort. Policy says
> > > packages must build when essential and build-essential packages
> > > are installed (plus build-dependencies).
> > 
> > Build-essential _packages_.  Not the "build-essential" package which very
> > clearly says its dependencies are purely informational.
> 
> It does not seem fair to argue both that the build-essential package is
> just informational (when it's in fact the canonical declaration of what
> is Build-Essential, and what every tool uses to install or check for the
> Build-Essential package set), and then also argue that whatever
> debootstrap installs (which is based both on build-essential plus a
> workaround due to lack of proper dependency resolution) is the canonical
> thing.

I don't think such arguments move us forward;
we should rather resolve the problem that these differ.

All/Most(?) packages where they do differ are packages that were until 
recently part of the essential set, and since debootstrap still installs 
them they are in practice still part of the build essential set.

Removing tzdata from the essential set made sense, and I do see your 
point that tzdata does not fit the "totally broken" definition of
"required".

What we need are technical discussions like whether packages like tzdata 
should also be removed from the build essential set, or whether tzdata 
being a part of the build essential set should be expressed by making 
the build-essential package depend on tzdata.

I have so far not seen any technical arguments why removing tzdata from 
the build essential set would be better for Debian than keeping it there.
Removing tzdata reduces the size of a chroot that has the build 
dependencies for the hello package installed by ~ 0.5%, this size
decrease does not strike me as a sufficient reason for reducing the
build essential set.

Everyone can feel free to disagree with me on the previous paragraph,
but please argue technically and not based on wording in policy.

> Regards,
> Guillem

cu
Adrian



Re: Please, minimize your build chroots

2023-01-28 Thread Adrian Bunk
On Sat, Jan 28, 2023 at 03:28:58PM +0100, Johannes Schauer Marin Rodrigues 
wrote:
>...
> My proposal is to fix debootstrap #837060 (patch is in the bug report) so that
> it only installs Essential:yes, build-essential and apt and discuss if it 
> makes
> sense to have packages like tzdata or e2fsprogs in a buildd chroot or not and
> if yes, add those packages as dependencies of the build-essential package.
>...

Note that there are at least 2 potential reasons why a package should 
stay in the build essential set:

1. many users, like tzdata

2. Important: yes
Making e2fsprogs not build essential would make it legal to do
  Build-Conflicts: e2fsprogs
It might avoid problems in the future to make such nearly-essential
packages (which apt refuses to remove) build essential; otherwise there
could be problems like dose3 sending packages to a buildd where
apt refuses to fulfill the Build-Conflicts.

> Thanks!
> 
> cheers, josch

cu
Adrian



Re: Please, minimize your build chroots

2023-01-28 Thread Adrian Bunk
On Sat, Jan 28, 2023 at 02:28:30PM +0100, Johannes Schauer Marin Rodrigues 
wrote:
>...
> Why do people call just accepting that Priority:required packages have to be
> part of the buildd chroot the easier solution? We just need to fix debootstrap
> bug #837060 and we are done, no?

This is mostly a new problem, a side effect of a different recent change:
The efforts to reduce the essential set for enabling smaller installs
of Debian.

"Priority: required" packages like e2fsprogs and tzdata used to be part 
of the essential set, tzdata is no longer (transitively) essential since
stretch and e2fsprogs no longer essential since buster.

It was not an intended goal to remove such packages from the *build* 
essential set, and they are installed in all reasonable build 
environments which makes it a non-issue in practice.

Adding them to build-essential would just enforce what was enforced 
differently in the past, and what is still true in practice today.

It should at least be discussed first whether packages like tzdata that 
have been a part of the build essential set should stay there.

> Thanks!
> 
> cheers, josch

cu
Adrian



Re: Please, minimize your build chroots

2023-01-28 Thread Adrian Bunk
On Sat, Jan 28, 2023 at 12:24:47PM +0100, Santiago Vila wrote:
>...
> * Those bugs are RC by definition and have been for a long time.
>...

Please provide a pointer where a release team member has said so 
explicitly in recent years.

In my experience they say that FTBFS bugs that do not happen
on the buildds of release architectures are usually not RC.

> Thanks.

cu
Adrian



Re: Please, minimize your build chroots

2023-01-28 Thread Adrian Bunk
On Sat, Jan 28, 2023 at 12:20:16AM +0100, Santiago Vila wrote:
> El 27/1/23 a las 22:37, Adrian Bunk escribió:
> > On Fri, Dec 16, 2022 at 02:15:13AM +0100, Santiago Vila wrote:
...
> > I am right now looking at #1027382, and the first question is how I can
> > make apt remove e2fsprogs so that I can reproduce the problem - it feels
> > like a real waste of my QA work to "fix" something that is incredibly
> > hard to break.
> 
> You don't have to fix #1027382. The maintainer has.
>...

Reality in Debian is that at any time a 3-digit number of maintainers is
(short-term or long-term or permanently) away/busy/MIA.

A large part of QA work in Debian is fixing bugs in packages maintained 
by other people, for me that's > 95% of my uploads.

I am not saying that trying to force maintainers to spend time on such 
issues by making them release critical is better, but you are also 
creating extra work and frustration for the people who are doing QA work 
in Debian.

> > It has been practice for many years that FTBFS that do not happen on the
> > buildds are usually not release critical bugs, and I would appreciate if
> > this is followed by everyone.
> 
> Well, that's also wrong, because having the build-dependencies correct has 
> been
> in the list of RC bugs for many years as well. See below.
> 
> I would appreciate if we all followed Policy 4.2, which says packages MUST 
> build
> when the build-dependencies are installed.

Policy tends to be a decade behind reality because it *follows* 
existing practices.

If existing practice is that build environments have all
"Priority: required" packages installed, then policy should
be updated to follow reality.
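For reference, the "Priority: required" set can be read out of dpkg
status data. A minimal, self-contained sketch (the embedded excerpt is
abbreviated; on a real system one would read /var/lib/dpkg/status or
simply use dpkg-query):

```python
# Sketch: list "Priority: required" packages from dpkg status data.
SAMPLE_STATUS = """\
Package: tzdata
Priority: required

Package: hello
Priority: optional

Package: e2fsprogs
Priority: required
"""

def required_packages(status_text):
    """Return package names whose stanza has Priority: required."""
    required = []
    for stanza in status_text.split("\n\n"):
        fields = dict(
            line.split(": ", 1) for line in stanza.splitlines() if ": " in line)
        if fields.get("Priority") == "required":
            required.append(fields["Package"])
    return required

print(required_packages(SAMPLE_STATUS))  # ['tzdata', 'e2fsprogs']
```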

> In general, disputing the severity because it does not happen in the buildds
> misses completely the point of what should be the goal, namely, a distribution
> which may be rebuilt by everybody following documented procedures, not
> a distribution which may only be rebuilt in our buildds.

People who follow the (incomplete) documentation in our wiki for 
creating their own buildd setup will get a reasonable setup where
builds have all "Priority: required" packages installed.

> The end user MUST be able to rebuild the packages. Otherwise our
> free software licenses are meaningless in practice.

In general this is true, but you tend to use this argument when trying 
to force pretty unimportant things on the whole project.

#932795 was your failed attempt to make the Technical Committee decide
that all build failures on single-core machines are release critical 
bugs in the year 2019.

> > It is not helpful if people try to force the few people who are doing
> > QA work to spend their scarce QA time on fixing bugs that only happen
> > when building on single-core machines or in non-UTF-8 locales or without
> > packages that are in practice installed everywhere, by making such
> > issues that are not a problem on our buildds release critical bugs.
> 
> That's the wrong approach. If the end user wants to make a modification,
> they can't use our buildd network.

In #932795 there was wide consensus that documentation could be improved 
to clarify what a supported/reasonable build environment is.

E.g. the maximum amount of disk space a package may currently use when 
building on amd64 is undocumented tribal knowledge: how much storage
is available on our amd64 buildds.

> > It also opens a gigantic can of worms, since there is the even bigger
> > opposite problem that many packages FTBFS or are built differently
> > when built in an environment that differs from our buildd setup.
> > Adding Build-Conflicts for all such cases is not feasible in practice.
> 
> This is a straw-man. I'm not opening any can of worms.

Your argument is based on treating Policy 4.2 as a holy scripture that
must be followed and never questioned.

Policy 4.2 also says
  Source packages should specify which binary packages they require to 
  be installed or not to be installed in order to build correctly.

We are not following the "not to be installed" part,
which is the can of worms you would be opening.

In the section you are referring to, Policy 4.2 also says 
  In particular, this means that version clauses should be used 
  rigorously in build-time relationships so that one cannot produce bad or 
  inconsistently configured packages when the relationships are properly 
  satisfied.

Personally I would strongly agree with that, but current practice with 
Janitor commits removing older version clauses goes in the opposite 
direction.

> > If people want to support building without tzdata [...]> but none of these 
> > are critical for our releases since
> > none of these impact how packages are built for bookworm on our buildds.
> 
> There is a l

Re: Please, minimize your build chroots

2023-01-27 Thread Adrian Bunk
On Fri, Dec 16, 2022 at 02:15:13AM +0100, Santiago Vila wrote:
> Greetings.
> 
> I'm doing archive-wide rebuilds again.
> 
> I've just filed 21 bugs with subject "Missing build-depends on tzdata"
> in bookworm (as tzdata is not build-essential).
> 
> This is of course not fun for the maintainers, but it's also not fun
> for people doing QA, because those bugs could be caught earlier in the
> chain, but they are not. This is extra work for everybody.

Speaking as someone who is doing a lot of QA work, for *build* 
environments I would rather expand build-essential instead of doing 
extra QA work that consists of manually adding build dependencies for 
packages that are in practice anyway installed in all build 
environments.

There are important real-world use cases where reducing the essential set 
brings benefits, but for *build* essential there are not really benefits 
that are worth the extra work.

>...
> Because people accept the default by debootrap "as is", chroots used
> to build packages include packages which are neither essential nor
> build-essential, like tzdata, mount or e2fsprogs.
>...

I am right now looking at #1027382, and the first question is how I can 
make apt remove e2fsprogs so that I can reproduce the problem - it feels 
like a real waste of my QA work to "fix" something that is incredibly 
hard to break.

It has been practice for many years that FTBFS that do not happen on the 
buildds are usually not release critical bugs, and I would appreciate if
this is followed by everyone.

It is not helpful if people try to force the few people who are doing
QA work to spend their scarce QA time on fixing bugs that only happen 
when building on single-core machines or in non-UTF-8 locales or without 
packages that are in practice installed everywhere, by making such 
issues that are not a problem on our buildds release critical bugs.

It also opens a gigantic can of worms, since there is the even bigger 
opposite problem that many packages FTBFS or are built differently
when built in an environment that differs from our buildd setup.
Adding Build-Conflicts for all such cases is not feasible in practice.

If people want to support building without tzdata, or cross-building, or 
building for non-release architectures, then bugs with patches are of
course welcome - but none of these are critical for our releases since
none of these impact how packages are built for bookworm on our buildds.

> Thanks.

Thanks
Adrian



Re: LibreOffice architecture support (was: Fwd: Plan to remove dead C++ UNO bridge implementations (bridges/source/cpp_uno/*))

2023-01-11 Thread John Paul Adrian Glaubitz

Hi Helge!

On 1/11/23 15:03, Helge Deller wrote:

Yes, sadly we don't have a working java right now on hppa, and it will
probably take some more time to get one. At least I won't have time
for it during the next few months.
But it would be sad to lose those bindings...


There are some efforts to bring back gcj if that helps:


https://gcc.gnu.org/pipermail/gcc-patches/2023-January/609530.html



The reason for being BD-Uninstallable is the lack of cruft in Debian Ports:


https://lists.debian.org/debian-sparc/2017/12/msg00060.html


as a result of some packages FTBFS.


That is for sparc and m68k? (Where both the needed KDE packages are 
uninstallable)


Adrian, as there seems to be various arches (and reasons) will you speak 
upstream?


Yes, I have already replied upstream. I will follow up there tonight.

One of the biggest problems for Debian Ports remains the lack of cruft,
as explained in my post to the debian-sparc mailing list above.
This causes these long BD-Uninstallable queues.

Adrian

--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: LibreOffice architecture support (was: Fwd: Plan to remove dead C++ UNO bridge implementations (bridges/source/cpp_uno/*))

2023-01-10 Thread John Paul Adrian Glaubitz

(posting this to debian-devel@ since debian-ports@ cross-posts to too many 
lists)

Hello Rene!

On 1/10/23 19:25, Rene Engelhard wrote:

(which are for many BD-Uninstallable since ages because it does not have Java 
(anymore), didn't do a long-ago transition, ...)


They all have Java support except for hppa, see:


https://buildd.debian.org/status/package.php?p=openjdk-11=sid
https://buildd.debian.org/status/package.php?p=openjdk-18=sid


The reason for being BD-Uninstallable is the lack of cruft in Debian Ports:


https://lists.debian.org/debian-sparc/2017/12/msg00060.html


as a result of some packages FTBFS.

So, it's more a Debian Ports problem than an architecture problem.

Also, both alpha and ia64 are BD-Uninstallable because you don't want to drop
clang from Build-Depends on these architectures, for whatever reason, despite
these not having had an LLVM/clang port for several years:


https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=963109


If clang could be dropped from Build-Depends for alpha and ia64,
that would already help us move a little further.


speak up at upstream or  they will be gone. And without those bridges no 
architecture support for it.


I will see what I can do.

Thanks,
Adrian

--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: packages expected to fail on some archs

2022-09-26 Thread Adrian Bunk
On Wed, Sep 14, 2022 at 01:38:01PM +0200, Guillem Jover wrote:
>...
> [ Mostly to summarize the status re dpkg. ]
>...
>   * Lack of bits/endianness arch "aliases" (#962848). The main problem
> with this one is that we cannot simply add such aliases, as then
> those would silently be considered as regular arches, and they do
> not map into the current naming convention at all. These would need
> to be added with a breaking syntax (say with some non-accepted
> char, such as % or whatever) so that they do not introduce silent
> breakage. This would then need to be supported by anything
> handling arch restrictions (field and dependencies) which can be a
> rather large surface. Then there is the problem that architectures
> are evaluated as ORed lists, and the bits/endianness might require
> to be treated as ANDed lists some times (of course the latter
> could be handled by combining them into single aliases, but meh).

If we limit the problem to avoiding build failures in cases that 
upstream does not support, there would be the trivial solution of
having a package ship Provides like:
- architecture-is-64bit
- architecture-is-32bit
- architecture-is-little-endian
- architecture-is-big-endian
- architecture-has-64bit-timet
-...

  Build-Depends: architecture-is-64bit, architecture-is-little-endian,...
would be a package that only supports 64bit little endian architectures, 
and that would never be attempted to build on 32bit or big endian
architectures.

The buildd page would then show for i386:
  mypackage build-depends on missing:
  - architecture-is-64bit

Not building a source package on one specific architecture can already
be achieved today with:
  Build-Depends: package-is-broken-on-ppc64el [ppc64el],...

This might not be the most elegant solution, but it should be sufficient 
to solve the problem in this thread and it does not require any tool changes.
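A minimal sketch of how such a provider package might generate these
Provides per architecture (the package and substvar names here are
hypothetical; DEB_HOST_ARCH_BITS and DEB_HOST_ARCH_ENDIAN are existing
dpkg-architecture variables that expand to e.g. "64" and "little"):

```
# debian/control stanza (sketch, hypothetical package name):
Package: architecture-properties
Architecture: any
Provides: ${arch:provides}

# debian/rules fragment generating the substvar for the build architecture:
BITS   := $(shell dpkg-architecture -qDEB_HOST_ARCH_BITS)
ENDIAN := $(shell dpkg-architecture -qDEB_HOST_ARCH_ENDIAN)

override_dh_gencontrol:
	echo 'arch:provides=architecture-is-$(BITS)bit, architecture-is-$(ENDIAN)-endian' \
	  >> debian/architecture-properties.substvars
	dh_gencontrol
```

On amd64 this would yield "Provides: architecture-is-64bit,
architecture-is-little-endian", matching the naming proposed above.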

> Thanks,
> Guillem

cu
Adrian



Re: packages expected to fail on some archs

2022-09-13 Thread Adrian Bunk
On Mon, Sep 12, 2022 at 04:08:08PM +0200, Tobias Frost wrote:
>...
> The problem is that if you want to exclude an arch explicitly, you have to
> list all archs you want to build it on. IOW,  I'm missing an easy way to say
> "not on THIS architecture", somthing like "[!armel]"
>...
> I don't actually believe people are using them on that arch, because if they
> would, I would get bug reports about them… Upstream agrees with that: Although
> off-topic, I would be eager to know if there is anybody using $PACKAGE on an
> s390/System Z?"

A relevant question would be whether this is [!s390x] or [littleendian].

There are 5 additional big endian architectures in ports,
plus people elsewhere apparently working on ports like arm64eb.

And while it is unclear whether s390x has any users at all on Debian,
ports architectures like hppa or powerpc do have users.

cu
Adrian



Re: packages expected to fail on some archs

2022-09-11 Thread Adrian Bunk
On Sun, Sep 11, 2022 at 05:08:57PM +0200, Samuel Thibault wrote:
>...
> The issue we see is that some DDs end up setting a hardcoded list in
> the "Architecture" field, rather than just letting builds keep failing
> on these archs (and then possibly succeeding after some time whenever
> somebody contributes a fix upstream that gets propagated to Debian).
>...

I'd say it is the best solution when a package needs non-trivial
architecture-specific porting for every architecture it supports.

With "non-trivial" I mean not just adding a new architecture to a few 
#ifdefs, but serious upstream porting efforts. E.g. valgrind does not
support riscv64, and if it ever gains that support in a new upstream
version I'd expect the maintainer to add it to the Architecture field
when upstream announces support for the new architecture.

But Architecture lists for expressing e.g. "64bit" or "little endian"
are a real pain for everyone working on bringup of a new port.

Which happens far more often than most people realize.

There is not only riscv64 (64bit, little endian).

Ports is about to start building for arc (32bit, little endian).

There are people working on ports like arm64be (64bit, big endian),
loongarch64 (64bit, little endian) and many other ports that might
never end up being built in the Debian infrastructure (but some of
them might get built by derivatives).

Architecture lists containing all 64bit ports or all little endian
ports create much extra work for anyone adding support for a new 64bit 
little endian architecture.

> Samuel

cu
Adrian



Re: packages expected to fail on some archs

2022-09-11 Thread Adrian Bunk
On Sun, Sep 11, 2022 at 09:25:40PM +0200, Samuel Thibault wrote:
> Paul Gevers, le dim. 11 sept. 2022 21:16:08 +0200, a ecrit:
> > 
> > - color packages that "never" had a successful built on an architecture
> > different. That information is already available because that's what marks
> > the package as "uncompiled" vs "out-of-date".
>...
> That doesn't cover the case when a package stopped building on an arch,
> though.

In practice it does: when Paul wrote "never", that actually means
"no older version is in the archive on that architecture".

On release architectures people are usually fast with getting stale 
versions of no longer buildable packages removed since it prevents
testing migration.

> Samuel

cu
Adrian



Converting Debian OpenStack images to btrfs

2022-08-26 Thread John Paul Adrian Glaubitz

(I'm not subscribed to the list, please CC me. Thanks!)

Hello!

I'm using Debian's OpenStack images to deploy buildd hosts for Debian
Ports. [1]

To workaround a longstanding qemu/glibc compatibility issue [2], I need
the images to use btrfs instead of ext4 and I was wondering whether anyone
can give me some hints on how to convert the images provided at [1] from
ext4 to btrfs.

Thanks,
Adrian


[1] https://cloud.debian.org/cdimage/cloud/OpenStack/
[2] https://sourceware.org/bugzilla/show_bug.cgi?id=23960


--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Bug#1016563: debhelper: Should dh_dwz be dropped?

2022-08-02 Thread Adrian Bunk
Package: debhelper
Version: 13.8
Severity: serious
X-Debbugs-Cc: debian-devel@lists.debian.org

[ debian-devel is in Cc for getting further input. ]

dh_dwz is part of the standard sequence in dh since debhelper compat 12.

dwz offers small optimizations of debug info, the typical benefit
seems to be ~ 3% size reduction.

These optimizations only affect the -dbgsym packages, which nearly
no one installs and nearly no one uses.

Debug info is super useful when needed, but it is not installed by
default and dwz optimizations have little practical relevance in the
cases when it is used.

OTOH, the cases where dh_dwz has created additional work for maintainers
are many.[1]

dwz is processing debug info from several producers on many architectures,
and this sometimes breaks in various ways.

On arm64, gcc in stable sometime produces debug info that dwz in
stable cannot handle.[2]

C++ code can result in huge debug info, causing dwz to exhaust
the address space on 32bit architectures and requiring workarounds
from maintainers.[3]

clang 14, which is now the default in unstable, defaults to DWARF 5,
and dwz in unstable has some problems with that.[4,5]

Scroll through [1] for more workarounds for problems in dwz.

The track record of such bugs in dwz getting fixed swiftly is not good.
This is a real problem if dwz is a core toolchain tool most packages
are using by default, and usually maintainers are forced to use
override_dh_dwz whenever dwz chokes on what it is supposed to process.
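For reference, the usual per-package workaround is an empty override
target in debian/rules, which skips the step entirely (a sketch; applies
to dh compat 12 and later, where dh_dwz is in the standard sequence):

```make
# debian/rules fragment: skip dwz entirely for this package
override_dh_dwz:
```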

IMHO the small benefits of dh_dwz are not worth the constant extra work
it causes, and it should be dropped from the standard sequence in dh.

Dropping dh_dwz from the standard sequence in dh could also be done
in the compat levels 12 and 13, since such a change should not cause
user-visible changes (except for a slight size increase of -dbgsym).

Ignoring errors from dh_dwz by default might work if we can trust
that dwz does not touch the file in case of errors, but it feels
wrong to ignore errors.


[1] https://codesearch.debian.net/search?q=override_dh_dwz=1
[2] https://sources.debian.org/src/qt6-base/6.2.4%2Bdfsg-10~bpo11%2B1/debian/rules/#L97
[3] https://sources.debian.org/src/qtcreator/8.0.0-2/debian/rules/#L66
[4] https://bugs.debian.org/1016329
[5] https://bugs.debian.org/1016330



Re: Bug#1014908: ITP: gender-guesser -- Guess the gender from first name

2022-07-16 Thread Adrian Bunk
On Fri, Jul 15, 2022 at 07:10:55PM +, Andrew M.A. Cater wrote:
> On Fri, Jul 15, 2022 at 07:05:09PM +0300, Adrian Bunk wrote:
>...
> > Debian is not a project that fights for trans people or fights for
> > denazification or fights for whatever other non-technical topics
> > individual contributors might consider worth fighting for elsewhere.
> 
> It does fight for under-represented / disadvantaged groups within Debian in a
> Debian context.

What data do you have to prove or disprove whether a group is actually 
under-represented or disadvantaged within Debian?

What tools did you use to generate this data?

The irony is that your "fight" requires exactly the tools you want to 
condemn, and data Debian should better not collect at all.

>...
> > The exact opposite of diversity is to call everything one dislikes or 
> > disagrees with "harassment" or *phobic.
> 
> I wonder how it would be if you wanted to use a similar script to test
> familiarity with English in our developers / a test for neurodiversity
> and high functioning autism / a test for colour vision or dexterity to
> single out anybody who's visually impaired or blind or a guess for
> background religion/beliefs/no belief - I don't think any of these
> (hypothetical, straw man) scripts would be useful or constructive or
> contribute well to our Debian community.
>...

Most software can be used for many purposes, good or bad. Looking at 
the vast amount of packages maintained by the Debian Med team, I am 
quite astonished that you consider it not a constructive contribution 
to Debian when people package software that can be used to diagnose 
diseases.

I would rather wonder for how many of your "hypothetical" examples we 
already ship software.

I wouldn't be surprised if we already ship software that can tell the 
familiarity with English of a person based on a few emails.

Steve highlighted the problems of trying to guess gender based on names, 
determining the biological gender based on voice can be far more 
reliable than using the name. Debian does publish videos with audio that 
can be used for the mentioned usecase of determining the gender of 
Debconf speakers. I would expect that speech recognition tools for deaf 
people either already or in the future will be able to output gender and 
accent of the speaker in an audio recording.

In the Debian Med or Deep Learning teams we might some day have software 
that can test for high functioning autism of the speakers in the videos 
of Debconf talks.

Trying to restrict tools is not a new idea.
An EU directive from 2013 makes it mandatory that production or
distribution of tools primarily for the purpose of committing hacking
offences must carry a maximum sentence of at least 2 years in prison
in all EU countries.
Debian ships many such tools, in practice prosecution faces the
technical reality that the same tools are used for testing the
security of systems against attacks.

Prosecuting people caught using these tools for offences works.

What can realistically work for your examples is not restricting tools,
but restricting what can be done with data.

One thing we can and should do to protect members of our Debian
community is a robust legal response of prosecutions under civil and
criminal law if people are guilty of privacy abuse through policies or
practices when handling personal data that are not compliant with
applicable legislation like the GDPR.

Even in cases where such prosecution is not happening, it should be 
clear that privacy abusers are not welcome in our Debian community.

What is the defined maximum retention time for sensitive personal data
like sexual orientation, race, ethnicity, religion or political beliefs
in the Debian Community Team?
If there is none or if it is too long, how can this be fixed swiftly?
If it is not fixed swiftly, how should Debian act against the abusers?

>...
> Andy Cater 

cu
Adrian



Re: Bug#1014908: ITP: gender-guesser -- Guess the gender from first name

2022-07-15 Thread Adrian Bunk
On Thu, Jul 14, 2022 at 04:05:35PM +0200, Jeremy Bicha wrote:
>...
> Debian has a Diversity Statement [1] which says that Debian welcomes
> people regardless of how they identify themselves. Trans people and
> non-binary people face a lot of discrimination, harrassment and
> bullying around the world.

Our Diversity Statement says that Debian "welcomes and encourages 
participation by everyone".

People who express how they identify themselves by having a swastika 
tattoo on their forehead also face a lot of discrimination, harassment 
and bullying around the world. Our Diversity Statement makes it clear 
that we are welcoming and encouraging their participation and are not 
ourselves discriminating against them.

> That bad treatment of these people is
> against Debian's core values.
>...

Our Diversity Statement says that we "welcome contributions from 
everyone as long as they interact constructively with our community".

Debian does not have core values regarding how people are treated 
outside Debian.

Debian is not a project that fights for trans people or fights for
denazification or fights for whatever other non-technical topics
individual contributors might consider worth fighting for elsewhere.

Diversity means that in any kinds of conflicts people on all sides
are encouraged to contribute to Debian as long as they interact 
constructively with our community.

> Therefore, the Debian Project wouldn't
> want to distribute software that appears to facilitate that kind of
> harassment, regardless of the software license it is released under.
> We might not want to distribute such software even if it also has 
> non-harmful uses.
>...

The exact opposite of diversity is to call everything one dislikes or 
disagrees with "harassment" or *phobic.

> Thank you,
> Jeremy Bicha

cu
Adrian



Re: enabling link time optimizations in package builds

2022-07-01 Thread Adrian Bunk
On Fri, Jun 17, 2022 at 10:18:43AM +0200, Matthias Klose wrote:
>...
> The proposal is to turn on LTO by default on most 64bit release
> architectures.
>...

By what factor does -ffat-lto-objects increase disk space usage during 
package builds?

Please coordinate with DSA to ensure that the buildds on these 
architectures have sufficient diskspace.

amd64 buildds have/had(?) only 74 GB of diskspace, which has even 
without LTO already forced some packages to do manual cleanup steps 
during the build to stay within the limited disk space.

>...
> Link time
> optimizations are also at least turned on in other distros like Fedora,
> OpenSuse (two years) and Ubuntu (one year).
>...
> The idea is to file wishlist bug reports for those 373 packages and then see
> how far we get, and if it's feasible to already turn on LTO for bookworm.
> If not, it should be turned on by default for the following release.

I assume these 373 packages have already been fixed or worked around in 
Ubuntu? Submitting 373 bugs with patches should settle the feasibility 
question.
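For what it's worth, individual packages can already opt into LTO per-package via dpkg's build-flags machinery; a hedged sketch of a debian/rules fragment, assuming a dpkg-dev version whose "optimize" feature area knows the "lto" flag:

```make
# debian/rules fragment (sketch): opt this single package into LTO.
# Requires a dpkg-dev whose "optimize" feature area supports "lto";
# check the resulting flags with `dpkg-buildflags`.
export DEB_BUILD_MAINT_OPTIONS = optimize=+lto
```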

A bigger worry is the schedule of such a change.
A major toolchain change shortly before the freeze means the vast 
majority of packages will be shipped with non-LTO builds in the release, 
with security updates or point release updates triggering a change to
an LTO built package.
This would mean few packages actually benefit from LTO, but a higher
regression risk when fixing bugs in stable.
The best timing for such a change would be immediately after the release 
of bookworm.

> Matthias

cu
Adrian



Re: needs suggestion on LuaJit's IBM architecture dilemma

2022-05-11 Thread John Paul Adrian Glaubitz
Hi!

On 5/12/22 03:29, M. Zhou wrote:
> I learned in disappointment after becoming LuaJit uploader that
> the LuaJit upstream behaves uncooperatively especially for IBM
> architectures [1]. IIUC, the upstream has no intention to care
> about IBM architectures (ppc64el, s390x).
> 
> The current ppc64el support on stable is done through cherry-picked
> out-of-tree patch. And I learned that the patch is no longer
> functional[2] for newer snapshots if we step away from that
> ancient 2.1.0~beta3 release.
> 
> However, architectures like amd64 needs relatively newer version[3],
> while IBM architecture still has demand luajit[4] (only the
> ancient version will possibly work on IBM archs).

I saw that Matej Cepl, a colleague of mine and the maintainer of the
luajit package in openSUSE/SLE, was replying in the thread.

Since SUSE has a commercial interest in working POWER/S390 support, he
takes care of the package and makes sure it keeps working on these
architectures.

My suggestion would be to just pick the packages from openSUSE [1]
since they are kept up-to-date.

Adrian

> [1] https://build.opensuse.org/package/show/devel:languages:lua/luajit

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: RFC: pam: dropping support for NIS/NIS+?

2022-05-08 Thread Adrian Bunk
On Fri, Apr 22, 2022 at 01:41:50PM -0700, Steve Langasek wrote:
>...
> On Fri, Apr 22, 2022 at 10:07:52PM +0200, la...@debian.org wrote:
> > I'm using NIS since 20+ years in a small network with about 60 computers.
> > Since I manage all computers and the physical network can be seen as secure
> > (I know it's not perfect secure) I do not need the additional crypto
> > features of NIS+ or LDAP, which would be overkill. All my users use
> > yppasswd on the NIS master for changing their password. I guess I
> > still need pam support for this because I set things like this in
> > pam.d/common-password:
> 
> > passwordrequisite   pam_cracklib.so retry=3 
> > difok=3 minlen=14
> 
> > Yes, I surely would miss the NIS support.
> 
> If your users are using yppasswd on the NIS master for changing passwords,
> then evidently you are not relying on support for NIS in PAM.  (yppasswd
> doesn't even link against libpam.)

Could you add this information to NEWS.Debian and/or the release notes?

People administering networks tend to be the people who actually read 
the release notes before planning an upgrade to a new release.

Thanks
Adrian



Re: e17 is marked for autoremoval from testing

2022-04-25 Thread Adrian Bunk
On Mon, Apr 25, 2022 at 02:41:50PM -0700, Ross Vandegrift wrote:
> Hello,

Hi Ross,

> The autoremoval below has me stumped.  I couldn't find any
> (build-)dependency on freeradius packages.

e17 -> connman -> openconnect -> ocserv -> freeradius
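
Such chains can be found by following (build-)dependencies hop by hop. A minimal toy sketch, with the dependency table hard-coded from the chain above; on a real system you would query each hop with something like `apt-cache depends` instead:

```shell
#!/bin/sh
# Toy walk of a dependency chain. deps() is a hard-coded stand-in for
# a real query such as `apt-cache depends --important <pkg>`.
deps() {
    case "$1" in
        e17)         echo connman ;;
        connman)     echo openconnect ;;
        openconnect) echo ocserv ;;
        ocserv)      echo freeradius ;;
        *)           echo "" ;;
    esac
}

# Print the chain from $1 until $2 (or a dead end) is reached.
chain() {
    cur=$1
    printf '%s' "$cur"
    while [ "$cur" != "$2" ]; do
        cur=$(deps "$cur")
        [ -n "$cur" ] || break
        printf ' -> %s' "$cur"
    done
    printf '\n'
}

chain e17 freeradius   # e17 -> connman -> openconnect -> ocserv -> freeradius
```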

> Thanks in advance for any hints,
> Ross

cu
Adrian

> On Mon, Apr 18, 2022 at 04:39:16AM +, Debian testing autoremoval watch 
> wrote:
> > e17 0.25.3-1 is marked for autoremoval from testing on 2022-05-17
> > 
> > It (build-)depends on packages with these RC bugs:
> > 1008832: freeradius-python3: Module not linked with libpython when built 
> > with Python 3.10
> >  https://bugs.debian.org/1008832
> > 
> > 
> > 
> > This mail is generated by:
> > https://salsa.debian.org/release-team/release-tools/-/blob/master/mailer/mail_autoremovals.pl
> > 
> > Autoremoval data is generated by:
> > https://salsa.debian.org/qa/udd/-/blob/master/udd/testing_autoremovals_gatherer.pl
> > 



Re: isa-support -- exit strategy?

2022-04-14 Thread Adrian Bunk
On Wed, Apr 06, 2022 at 01:38:09PM +0200, Adam Borowski wrote:
> On Sun, Apr 03, 2022 at 02:17:15PM +0300, Adrian Bunk wrote:
> > On Fri, Mar 25, 2022 at 11:34:17PM +0100, Adam Borowski wrote:
> > > * while a hard Depends: works for leafy packages, on a library it
> > >   disallows having alternate implementations that don't need the
> > >   library in question.  Eg, libvectorscan5 blocks a program that
> > >   uses it from just checking the regexes one by one.
> 
> > glibc 2.33 added a modernized version of the old hwcaps.
> > If a package builds a library several times with different optimizations 
> > and installs them into the correct directories in the binary package, 
> > the dynamic linker will automatically select the fastest one supported 
> > by the hardware.
> > 
> > SIMDe (or similar approaches) could be used to build variant(s) of the 
> > library that have compile-time emulation of SIMD instructions in the 
> > lower baseline builds of vectorscan.
> 
> In this particular case, it'd probably be faster to use non-SIMD ways
> instead of emulating them.  This means two code paths, which particular
> users may or may not want to do the effort to implement.

For supporting older baselines my priority would be functionality with 
minimal effort both for upstreams and Debian maintainers, not optimal
performance on old hardware.

> > For binaries, I have seen packages in the Debian Med (?) team that build 
> > several variants of a program and have a tiny wrapper program that chooses
> > the correct one at startup.
> 
> This may take substantial work to implement, which for typical Debian Med
> packages is an utter waste of time.
>...

The proper approach would be to have the implementation in debhelper,
so that the maintainer only has to declare which n different variants
of the program to build on $architecture, and then everything including
the wrapper is built by debhelper.

I am not saying that I plan to implement it, but that's how I would
design it to avoid the per-package work you are worried about.

cu
Adrian



Re: isa-support -- exit strategy?

2022-04-05 Thread Adrian Bunk
On Sun, Apr 03, 2022 at 02:42:18PM +0200, Bastian Blank wrote:
> On Sun, Apr 03, 2022 at 02:17:15PM +0300, Adrian Bunk wrote:
> > SIMDe (or similar approaches) could be used to build variant(s) of the 
> > library that have compile-time emulation of SIMD instructions in the 
> > lower baseline builds of vectorscan.
> 
> But why?  Who in their right mind would ever try to use those aweful
> slow implementations?

There are often use cases where speed is critical and use cases where 
speed is not that important.
E.g. one of the versions of the regex library Adam mentioned as an 
example is being used by rspamd.
On a busy mail server the performance of the regex library might
be critical; for filtering your personal emails, an awfully slow 
implementation might still be fast enough that you don't care.

In areas like multimedia it is common that you end up with gazillions 
of libraries linked and loaded that you might never use.
E.g. a program that uses FFmpeg for mp3 decoding is also indirectly
linked with several libraries for video encoding.
If the whole library is compiled with some -msse or -march, then 
starting the program might fail due to unsupported instructions the 
compiler generated in the init function of a library you wouldn't
have used.

> Bastian

cu
Adrian



Re: isa-support -- exit strategy?

2022-04-03 Thread Adrian Bunk
On Fri, Mar 25, 2022 at 11:34:17PM +0100, Adam Borowski wrote:
> Hi!

Hi Adam!

>...
> * while a hard Depends: works for leafy packages, on a library it
>   disallows having alternate implementations that don't need the
>   library in question.  Eg, libvectorscan5 blocks a program that
>   uses it from just checking the regexes one by one.
> 
> Suggestions?

glibc 2.33 added a modernized version of the old hwcaps.
If a package builds a library several times with different optimizations 
and installs them into the correct directories in the binary package, 
the dynamic linker will automatically select the fastest one supported 
by the hardware.

SIMDe (or similar approaches) could be used to build variant(s) of the 
library that have compile-time emulation of SIMD instructions in the 
lower baseline builds of vectorscan.

People using libvectorscan5 on modern hardware with SSE 4.2 would then 
get the properly optimized fast version, while people on older hardware 
would get a version that is slow but works.
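
As a concrete sketch of that layout on amd64 (paths follow Debian's multiarch convention, `libvectorscan.so.5` is only a placeholder soname, and the selection logic below merely emulates what ld.so does from glibc 2.33 on):

```shell
#!/bin/sh
# glibc-hwcaps sketch: ld.so (glibc >= 2.33) prefers the most capable
# matching subdirectory. This function only emulates that choice;
# the real selection is done automatically by the dynamic linker.
libdir=/usr/lib/x86_64-linux-gnu

pick_variant() {
    cpu_level=$1   # highest x86-64-vN level the CPU supports (1 = baseline)
    for v in 4 3 2; do
        if [ "$v" -le "$cpu_level" ]; then
            echo "$libdir/glibc-hwcaps/x86-64-v$v/libvectorscan.so.5"
            return
        fi
    done
    echo "$libdir/libvectorscan.so.5"   # baseline build
}

pick_variant 3   # capable CPU: gets the x86-64-v3 build
pick_variant 1   # old CPU: falls back to the baseline build
```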

For binaries, I have seen packages in the Debian Med (?) team that build 
several variants of a program and have a tiny wrapper program that chooses
the correct one at startup.
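
Such a wrapper can be very small; a hedged sketch, where the variant paths `/usr/lib/foo/foo-sse42` and `/usr/lib/foo/foo-baseline` are hypothetical and the flag test is reduced to a single feature for brevity:

```shell
#!/bin/sh
# Sketch of a startup wrapper selecting a binary variant by CPU flags.
# The variant paths are hypothetical; a real wrapper would end with
#   exec "$(choose "$flags")" "$@"
choose() {
    case " $1 " in
        *" sse4_2 "*) echo /usr/lib/foo/foo-sse42 ;;
        *)            echo /usr/lib/foo/foo-baseline ;;
    esac
}

flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || true)
choose "$flags"
```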

> Meow!

cu
Adrian



Bug#1008145: ITP: partman-hfs -- Add to partman support for hfs and hfsplus

2022-03-23 Thread John Paul Adrian Glaubitz
Package: wnpp
Severity: wishlist
Owner: John Paul Adrian Glaubitz 
X-Debbugs-Cc: debian-devel@lists.debian.org,debian-powe...@lists.debian.org

* Package name: partman-hfs
  Version : 1
  Upstream Author : John Paul Adrian Glaubitz 
* URL : https://salsa.debian.org/installer-team/partman-btrfs
* License : GPL-2.0+
  Programming Lang: Shell
  Description : Add to partman support for hfs and hfsplus

This package contains partman support for creating HFS and HFS+ filesystems
in debian-installer. HFS and HFS+ are primarily useful on Apple Macintosh
computers. In particular, support for HFS/HFS+ filesystems is required in
debian-installer to create boot partitions for installing GRUB on Apple
Power Macintosh systems.



Re: How to get rid of unused packages (Was: proposed MBF: packages still using source format 1.0)

2022-03-16 Thread Adrian Bunk
On Wed, Mar 16, 2022 at 02:11:09PM +0100, Andreas Tille wrote:
>...
> I'm not sure whether there are any PalmPilot devices out there.  At
> least the actual *votes* in popcon[1] is down to zero now.

This is less convincing than it sounds, since popcon data is based only 
on a tiny and non-representative fraction of our users.

You cannot claim a package is unused solely based on popcon data.

Debian Med also has packages with zero popcon votes; users of software 
for exotic/ancient hardware or uncommon use cases (like Debian Med) are
not generating high popcon numbers.

> The package
> was not uploaded by its maintainer for >10 years.  It received an NMU by
> Adrian Bunk (in CC as well):
> 
> [2022-01-02] imgvtopgm 2.0-9.1 MIGRATED to testing (Debian testing watch)
> [2021-12-27] Accepted imgvtopgm 2.0-9.1 (source) into unstable (Adrian Bunk)
> [2011-02-23] imgvtopgm 2.0-9 MIGRATED to testing (Debian testing watch)
> [2011-02-13] Accepted imgvtopgm 2.0-9 (source i386) (signed by: Erik Schanze) 
> 
> The changelog of that NMU was:
> 
>* Non-maintainer upload.
>* debian/rules: Add build-{arch,indep}. (Closes: #999003)
> 
> 
> >From my naive perspective this package caused some work from a quite
> busy maintainer for no obvious user base.  May be I'm wrong in this
> specific case but this observation raises my question:  Do we have any
> means to get rid of packages that should be rather removed from the
> distribution than draining resources.

You are mistaken about what was draining the resources.

It was not the package that was draining the resources,
it was the MBF that was draining the resources.

And these MBFs usually fail to make a convincing case that the benefits
are worth all the resources that are drained by the MBF.

> If the answer is no should we possibly use the list of packages that are
> not topic of the heated debate around the source format 1.0 (where
> maintainers are obviously are caring about their packages just disagree
> with format 3.0 format) to pick some packages that should be rather
> removed than fixed?

How do you define "rather removed"?

According to the BTS there was and is no known user-visible problem in 
the package that needed or needs fixing in the package you are using
as example.

I am still a regular user of my 15-year-old iPod, and I was pretty 
annoyed when I had to do an emergency adoption (changing nothing but the 
maintainer field) of a package I use for it after seeing that someone 
thought it would be a good idea to do "RM: RoQA; Upstream not active, orphaned".

As a DD I can do that if I notice; the average user cannot do anything 
and won't even notice until the next release in 1.5 years.

I do consider it a regression when we no longer ship a package in a 
release that was in the previous Debian release.
It is not a problem for us to continue shipping imgvtopgm.
And that's why I'd like to see a case made why it is better for our 
users when a package is no longer shipped.

It might or might not be possible to make the case for removal of this 
specific package, but "low popcon" or "abandoned upstream" alone are not
convincing points.

> Kind regards
> 
>   Andreas.
>...

cu
Adrian



Re: proposed MBF: packages still using source format 1.0 [revised proposal]

2022-03-10 Thread Adrian Bunk
On Thu, Mar 10, 2022 at 09:49:50PM +0100, Lucas Nussbaum wrote:
>...
> For packages in (1.1) and (1.2), I propose to file Severity: wishlist
> bugs using the following template:
> 
> -->8
> Subject: please consider upgrading to 3.0 source format
> Severity: wishlist
> Usertags: format1.0
> 
> Dear maintainer,
> 
> This package is among the few (1.9%) that still use source format 1.0 in
> bookworm.  Please upgrade it to source format 3.0, as (1) this format has many
> advantages, as documented in https://wiki.debian.org/Projects/DebSrc3.0 ; (2)
> this contributes to standardization of packaging practices.
> 
> Please note that this is also a sign that the packaging of this software
> could maybe benefit from a refresh. It might be a good opportunity to
> look at other aspects as well.
> 
> This mass bug filing was discussed on debian-devel@:
> https://lists.debian.org/debian-devel/2022/03/msg00074.html
>...

josch already has tested patches for more than half of the packages, 
starting by submitting bugs for these packages with these patches will 
avoid work for maintainers and result in faster fixing of the bugs.

> Lucas

cu
Adrian



Re: Seeking consensus for some changes in adduser

2022-03-08 Thread Adrian Bunk
On Tue, Mar 08, 2022 at 05:49:04PM +0100, Marc Haber wrote:
>...
> (2)
> #774046 #520037
> Which special characters should we allow for account names?
> 
> People demand being able to use a dot (which might break scripts using
> chown) and non-ASCII national characters in account names. The regex
> used to verify non-system accounts is configurable, so the policy can be
> locally relaxed at run-time.
> 
> For system-accounts, I'd like to stick to ASCII letters, numbers,
> underscores.
>...

There is a DD with the login 93sam, and this is already outside of 
what systemd accepts.[1]

Non-ASCII characters in account names sound like a lot of breakage
and CVEs to me.
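
For illustration, systemd's rule is roughly "start with a letter or underscore, then letters, digits, '-' or '_'", which is why a login starting with a digit falls outside it. The regex below is a hedged approximation, not the exact check from systemd's source:

```shell
#!/bin/sh
# Approximation of systemd's user name validation (length limits and
# other corner cases omitted): the first character must be a letter
# or an underscore, so "93sam" is rejected.
valid_name() {
    printf '%s' "$1" | grep -Eq '^[a-zA-Z_][a-zA-Z0-9_-]*$'
}

valid_name 93sam  && echo "93sam accepted"  || echo "93sam rejected"
valid_name adrian && echo "adrian accepted" || echo "adrian rejected"
```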

> Greetings
> Marc

cu
Adrian

[1] https://github.com/systemd/systemd/issues/6237



Re: proposed MBF: packages still using source format 1.0

2022-03-08 Thread Adrian Bunk
On Tue, Mar 08, 2022 at 04:45:48PM +0100, Lucas Nussbaum wrote:
>...
> 1/ the arguments about using patches to track changes to upstream code.
> Among the ~600 packages in that potential MBF, there are still many that
> make changes to upstream code, which are not properly documented. I
> believe that it is widely accepted that seperate patches in 
> debian/patches/ are the recommended way to manage changes to upstream code 
> (good way to help with those changes getting reviewed, getting merged 
> upstream, etc.)

This is a reason *against* using RC bugs for forcing people to change it 
this year:

The sane way to minimize the regression risk when NMUing such an RC bug 
would be to dump the diff into a patch without touching it.
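
For concreteness, that minimal conversion can be sketched as follows (assuming the upstream-code portion of the old 1.0 diff has been extracted into a single patch; changes under debian/ stay in the packaging itself and must not go into a quilt patch; file names are illustrative):

```shell
#!/bin/sh
# Sketch of the minimal-risk 1.0 -> 3.0 (quilt) conversion: the existing
# upstream changes go into one patch, unmodified.
set -e
cd "$(mktemp -d)"

# stand-in for the upstream-code part of the old 1.0 diff.gz:
printf 'placeholder for the old upstream diff\n' > big-diff.patch

mkdir -p debian/patches debian/source
cp big-diff.patch debian/patches/
echo big-diff.patch > debian/patches/series
echo "3.0 (quilt)" > debian/source/format
cat debian/source/format
```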

>...
> 3/ the arguments about standardization/simplication of packaging 
> practices, that make it easier (1) for contributors to contribute to any
> package (think security support, NMUers, but also derivatives);
>...

Actual work and actual breakage for the benefit of hypothetical 
contributors: that does not sound very convincing to me.

Every year I do QA/NMU/*stable uploads for a three-digit number of 
packages I do not maintain, and this is not something I recall being 
a real problem.

> You argue that it's fine to wait 10 years for a transition such as the 
> switch to 3.0 (quilt). Actually, it has already been 11 years, since 
> 3.0 (quilt) was introduced around 2011
>...

Time before an issue is a lintian warning doesn't count.

If it isn't a problem for anyone and lintian does not warn,
how would anyone even notice?

> What we are talking about here is the "end game": there are less than 2%
> of packages in testing that are still using 1.0,
>...

There are lies, damn lies, and statistics.
600 (?) packages is a more realistic depiction than 2%.

If done carefully, how many hours of work do you estimate it would take 
for all 600 packages, including ones that might for some reason be hard
to convert?

If you want to force work upon many people in the project, the burden of
proof is on you to show that the time is better spent on this than on
other bookworm work that could be done instead.

> Lucas

cu
Adrian



Re: proposed MBF: packages still using source format 1.0

2022-03-08 Thread Adrian Bunk
On Tue, Mar 08, 2022 at 05:10:44PM +0100, Johannes Schauer Marin Rodrigues wrote:
>...
> So now we have 364 source packages for which we have a patch and for which we
> can show that this patch does not change the build output. Do you agree that
> with those two properties, the advantages of the 3.0 (quilt) format are
> sufficient such that the change shall be implemented at least for those 364?

It sounds like a good idea to submit patches.

Some might be tagged wontfix, e.g. the Debian X Strike Force has a
workflow that would not work the same way with 3.0.

> Thanks!
> 
> cheers, josch

cu
Adrian



Re: proposed MBF: packages still using source format 1.0

2022-03-08 Thread Adrian Bunk
On Tue, Mar 08, 2022 at 11:39:04AM +0100, Andreas Tille wrote:
> Hi Adrian,

Hi Andreas,

> Am Mon, Mar 07, 2022 at 11:42:42PM +0200 schrieb Adrian Bunk:
>...
> > lintian already warns or has info tags that should be upgraded to warning,
> 
> I absolutely agree here.
> 
> > and then there will be slow migrations usually happening when someone
> > anyway does (and tests!) larger packaging changes.
> 
> This "someone anyway does larger packaging changes" did not seem very
> probable for the packages I've touched (see my other mail in this
> thread).
> 
> > Ensuring that all relevant lintian tags are warnings would be the 
> > appropriate action (which is not yet true[1]), but there is no urgency 
> > on getting everything "fixed" immediately.
> 
> I agree that there is no real urgency for immediate action - but this
> seemed to be the case for other bugs on the packages I've touched the
> case as well.

what time frame do you have in mind when you write "no real urgency"
and "did not seem very probable"?

For me a reasonable time frame for changes that are neither urgent nor
supposed to create user-visible changes in the binary packages would be
to ensure it is a lintian warning now, and then wait 10 years.

Many maintainers touch their packages at least once per release cycle 
and also check lintian warnings, so many packages will get fixed within 
the next 1-2 years.

Most packages will get a new maintainer or a new team member in an 
existing maintenance team within the next 10 years, and with the
help of a lintian warning this is the natural time for doing such
changes.

> Kind regards
> 
>  Andreas.
>...

cu
Adrian



Re: proposed MBF: packages still using source format 1.0

2022-03-07 Thread Adrian Bunk
On Sun, Mar 06, 2022 at 09:25:45PM +0100, Lucas Nussbaum wrote:
>...
> I think that we should reduce the number of packages using the 1.0 format, as
> (1) format 3.0 has many advantages, as documented in
> https://wiki.debian.org/Projects/DebSrc3.0 ; (2) this contributes to
> standardization of packaging practices, lowering the bar for contributors to
> contribute to those packages.
>...

You are not making a compelling case that these benefits clearly 
outweigh the substantial costs.

Such an MBF also:
(1) causes a lot of extra work, and
(2) causes a lot of breakage, because such larger packaging changes
are rarely done as carefully as would be necessary

When people are making invasive packaging changes like a dh compat bump 
or change the packaging due to such a MBF we often end up with bug 
reports like #1000229 where something broke due to that (empty binary 
packages are among the more typical breakages).

Unless a compelling case is made that the benefits of an MBF clearly 
outweigh these drawbacks, such MBFs usually have a negative net benefit.

lintian already warns or has info tags that should be upgraded to warning,
and then there will be slow migrations usually happening when someone
anyway does (and tests!) larger packaging changes.

Ensuring that all relevant lintian tags are warnings would be the 
appropriate action (which is not yet true[1]), but there is no urgency 
on getting everything "fixed" immediately.

cu
Adrian

[1] https://lintian.debian.org/tags/older-source-format



Re: Use of License-Reference in debian/copyright allowed?

2022-01-16 Thread John Paul Adrian Glaubitz
Hi Jonas!

On 1/16/22 20:06, Jonas Smedegaard wrote:
> Quoting Jonas Smedegaard (2022-01-16 19:53:48)
>> Quoting John Paul Adrian Glaubitz (2022-01-16 19:38:25)
>>> I have updated debian/copyright of both fs-uae-* packages to use the 
>>> "License-Reference" keyword, however lintian now complains about the 
>>> missing license texts so I'm wondering whether this approach - which 
>>> I like - is actually compliant with the Debian Policy?
> [...]
>> I firmly believe that it is Policy-compliant to reference files 
>> included with package base-files and installed below 
>> /usr/share/common-licenses.  All other license texts must be included 
>> verbatim in the debian/copyright file
> 
> Maybe more interesting than what I personally believe might be, that I 
> use that writing style generally for the about 600 packages that I am 
> involved in maintaining, and evidently ftpmasters agree with me.

That's a very positive sign that the FTP team will accept the writing style
as well.

> For anyone considering to adopt this pattern, it is quite some time ago 
> that I helped Vasudev package Roboto fonts, and I have simplified and 
> extended my writing style to use the shorter field "Reference" and also 
> use it to reference sources of copyright holders and license grants when 
> not contained in licensed file itself (with a little special twist of 
> self-referencing canonical statements in debian/copyright).

I agree it's a great idea as it saves a lot of time. Writing an acceptable
debian/copyright file can be quite frustrating, so this is a very welcome
improvement.

> I use the package ghostscript as my sort-of reference package.  Look at 
> that for my newest inventions on copyright file writing and checking.
> 
> See also https://wiki.debian.org/CopyrightReviewTools

Thanks, I'll have a look!

Thanks a lot for the quick and detailed response!

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaub...@debian.org
`. `'   Freie Universitaet Berlin - glaub...@physik.fu-berlin.de
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Use of License-Reference in debian/copyright allowed?

2022-01-16 Thread John Paul Adrian Glaubitz
(Please CC me, I'm not subscribed to debian-devel)

Hi!

I'm currently updating the debian/copyright of my two packages fs-uae-arcade 
[1] and
fs-uae-launcher [2] as both packages got rejected by the FTP team due to an 
incomplete
debian/copyright.

Since the packages contain a lot of different licenses, the debian/copyright 
would be
very long when copying the different license texts verbatim.

However, I stumbled over the fonts-roboto package which resolves this issue by 
using just
references to the full license texts which are present on any Debian system 
anyway [3].
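
The fonts-roboto pattern boils down to stanzas roughly like this (a hedged sketch; License-Reference is the convention used there rather than a field defined by the machine-readable copyright format, and the license and years are purely illustrative):

```
Files: *
Copyright: 2011-2022 Example Upstream Author
License: Apache-2.0
License-Reference: /usr/share/common-licenses/Apache-2.0
```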

I have updated debian/copyright of both fs-uae-* packages to use the 
"License-Reference"
keyword, however lintian now complains about the missing license texts so I'm 
wondering
whether this approach - which I like - is actually compliant with the Debian 
Policy?

Thanks,
Adrian

> [1] https://github.com/glaubitz/fs-uae-arcade-debian
> [2] https://github.com/glaubitz/fs-uae-launcher-debian
> [3] 
> https://salsa.debian.org/fonts-team/fonts-roboto/-/blob/master/debian/copyright

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaub...@debian.org
`. `'   Freie Universitaet Berlin - glaub...@physik.fu-berlin.de
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Bug#1000000: fixed in phast 1.6+dfsg-2

2021-11-18 Thread Adrian Bunk
On Thu, Nov 18, 2021 at 05:12:10PM +0100, Sebastiaan Couwenberg wrote:
>...
> For the Debian package you could drop use_debian_packaged_libpcre.patch and
> use the embedded copy to not block the prce3 removal in Debian.

As a general comment, this would be a lot worse than keeping pcre3.

If any copy of this library should be used at all in bookworm,
it should be provided by src:pcre3.

Switching from src:pcre3 to an older vendored copy would likely create 
additional security vulnerabilities for our users.[1] Even with only one 
user in bookworm, shipping it security-supportable in src:pcre3 would be 
better than hiding vulnerabilities through vendoring.

> Kind Regards,
> 
> Bas

cu
Adrian

[1] https://security-tracker.debian.org/tracker/source-package/pcre3


