Re: "debian/main" support or ticket open?

2024-03-18 Thread Simon McVittie
On Mon, 18 Mar 2024 at 10:23:23 +0100, Agathe Porte wrote:
> 2024-03-15 10:16 CET, Simon McVittie:
> > When the GNOME team switched from debian/master to debian/latest, it
> > was a coordinated change applied to every package maintained by the team.
> 
> Do we know if this was automated by a tool/script, or if this was a
> manual effort by multiple people? I would be happy to help update our
> current DPT policy to use DEP-14 and perform the migration.

It was mostly done by Amin Bandali using a script:
https://lists.debian.org/debian-gtk-gnome/2023/08/msg5.html

A few packages needed manual checking afterwards because they were not
consistent with the team's conventions (either already using debian/latest,
or still using master, or some other branch name):
https://lists.debian.org/debian-gtk-gnome/2023/09/msg1.html

smcv



Re: "debian/main" support or ticket open?

2024-03-15 Thread Simon McVittie
On Fri, 15 Mar 2024 at 08:10:55 +, c.bu...@posteo.jp wrote:
> To my knowledge in context of DPT and Salsa the branch name "debian/master"
> is used. When creating a new package are there any technical reasons not
> renaming that to "debian/main"?

Naming is a social thing, not a technical thing, so there is unlikely to be
any technical reason for or against any naming that fits the syntax rules.
One important non-technical reason not to choose a different branch name
for new packages is to keep all the team-maintained packages consistent.

If there is going to be any change to this branch name, then I think
it should be to debian/latest as per DEP-14 (which is the name used
in various other teams like GNOME), not debian/main.

Other teams don't use debian/main because that name would be confusing:
in Debian, "main" normally refers to the archive area that is not contrib,
non-free or non-free-firmware (or in Ubuntu, the archive area that is
not universe etc.).

There are basically two models in DEP-14:

1. The latest development happens on debian/latest, and might be uploaded
   to either unstable or experimental, whichever is more appropriate. If
   experimental contains a version that is not ready for unstable,
   and a new upload to unstable is needed, then create a temporary
   debian/unstable or debian/trixie branch for it.

2. Uploads to unstable are done from debian/unstable. Uploads to
   experimental are done from debian/experimental, when needed. There is
   no debian/latest branch.

(1.) probably makes more sense for large teams like this one (and it's
what the GNOME team does). (2.) can be useful if your upstream has a
long-lived development branch, but that's not going to be the case for
most DPT packages.

When the GNOME team switched from debian/master to debian/latest, it
was a coordinated change applied to every package maintained by the team.

smcv



Re: Did Python 3.12 developers honestly broke special regexp sequences? (Was: hatop fails its autopkg tests with Python 3.12)

2024-02-13 Thread Simon McVittie
On Tue, 13 Feb 2024 at 18:21:17 +0100, Andreas Tille wrote:
> SyntaxWarning: invalid escape sequence '\.'
> 573s   CLI_INPUT_RE = re.compile('[a-zA-Z0-9_:\.\-\+; /#%]')

This should be:

re.compile(r'[a-zA-Z0-9_:\.\-\+; /#%]')
   ^

a raw string, where the backslashes are not interpreted by the
Python parser, allowing them to be passed through to the re module for
parsing; or alternatively

re.compile('[a-zA-Z0-9_:\\.\\-\\+; /#%]')
^^ ^^ ^^

like you would have to write in the C equivalent.

Reference:

"""
Regular expressions use the backslash character ('\') to indicate
special forms or to allow special characters to be used without
invoking their special meaning. This collides with Python’s usage
of the same character for the same purpose in string literals;
for example, to match a literal backslash, one might have to write
'\\\\' as the pattern string, because the regular expression must
be \\, and each backslash must be expressed as \\ inside a regular
Python string literal. Also, please note that any invalid escape
sequences in Python’s usage of the backslash in string literals
now generate a SyntaxWarning and in the future this will become a
SyntaxError. This behaviour will happen even if it is a valid escape
sequence for a regular expression.

The solution is to use Python’s raw string notation for regular
expression patterns; backslashes are not handled in any special way
in a string literal prefixed with 'r'. So r"\n" is a two-character
string containing '\' and 'n', while "\n" is a one-character string
containing a newline. Usually patterns will be expressed in Python
code using this raw string notation.
"""
—re module docs

> which makes me scratching my head what else we should write
> for "any kind of space" now in Python3.12.

\s continues to be correct for "any kind of space", but Python now
complains if you do the backslash-escapes in the wrong layer of syntax.
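To make the two layers of syntax concrete, here is a small sketch (the
character class is taken from the hatop example above) showing that the
raw-string and doubled-backslash spellings produce the same pattern,
and that \s still works when written in either form:

```python
import re

# Both spellings describe the same regex: the raw string leaves the
# backslashes alone for the re module to interpret, while the plain
# string needs each backslash doubled.
raw = re.compile(r'[a-zA-Z0-9_:\.\-\+; /#%]')
doubled = re.compile('[a-zA-Z0-9_:\\.\\-\\+; /#%]')
assert raw.pattern == doubled.pattern

# \s ("any kind of space") is unchanged, as long as the backslash
# reaches the re module intact:
assert re.match(r'\s', ' ')
assert re.match('\\s', '\t')
```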

smcv



Re: Naming of python binary packages

2023-08-11 Thread Simon McVittie
On Fri, 11 Aug 2023 at 14:49:00 +, Stefano Rivera wrote:
> > > According to the Debian Python Policy Section 4.3, binary package
> > > names should be named after the *import* name of the module, not the
> > > PyPI distribution name.
> 
> > Unfortunately, I do not agree at all with this policy. The import name has
> > no importance, and IMO, we should change that policy so that the package
> > name matches the egg-name rather than the import name.
> 
> I wouldn't quite say it has no importance. It describes which part of
> the filesystem the package owns.

More important than that, it describes the interface that the package
provides to its reverse-dependencies: changing the name changes the
interface, and vice versa. Having the package that lets you "import
dbus" systematically be installable as "python3-dbus" is the same design
principle as having the C library with SONAME libgtk-4.so.1 installable
as libgtk-4-1 (and not gtk4-libs as it would be in some distributions),
or having the Perl library that lets you "use File::chdir" installable
as libfile-chdir-perl.

This has been the policy for a while, and I think it's a good policy.
In particular, it forces the necessary conflict resolution to happen
at the distro level if two unrelated upstream projects (perhaps
pyfoo-1.egg-info and Foo-2.egg-info) are both trying to be our
implementation of "import foo".

(disclosure: I wrote some of the text in Python Policy describing the
naming convention under discussion here, but I was clarifying an existing
convention and filling in the details of what to do in corner cases,
rather than originating new policy. See also the thread starting at
https://lists.debian.org/debian-python/2019/11/msg00125.html.)

smcv



Re: pybuild now supports meson

2023-08-02 Thread Simon McVittie
On Wed, 02 Aug 2023 at 17:44:24 +, Stefano Rivera wrote:
> The latest upload of dh-python to unstable (6.20230802) includes a
> meson plugin, so pybuild can easily build a package multiple times for
> all supported Python modules.

I don't think this is necessarily appropriate for a lot of the packages
in the dd-list: many of them don't install any public Python modules,
only private Python modules for internal use (often only for their
tests). It seems better for those to keep using Meson directly, to match
upstream expectations and give their maintainers full control over their
build options.

One that I was surprised not to see on the list is dbus-python, which
currently uses Meson directly. I'd vaguely planned to try building it
using pybuild and meson-python, but pybuild invoking Meson directly
might be better.

(meson-python itself is a false positive: it's already using pybuild.)

smcv



Re: [backintime] Switch the maintainer to "Debian Python Team (DPT)"

2023-07-28 Thread Simon McVittie
On Fri, 28 Jul 2023 at 11:53:29 +0200, Carsten Schoenert wrote:
> To quote from the BTS:
> ---%<---
> > In 1.2 upstream added a test suite. We should run it during build
> > (cd common && $(MAKE) test) but it needs to be able to write to the home
> > directory, which is disabled on Debian auto-builders. Need to find
> > a solution to that.
> --->%---
> 
> To me it's clear what the problem is. The test requires a $HOME folder, but
> the build environment doesn't provide something like this.

If backintime uses $HOME, and doesn't rely on $HOME being the same as
$(getent passwd $(id -u)|cut -d: -f6), then it might actually be possible
to run its test suite with a dependency on a suitably new debhelper.

In debhelper compatibility level 13, dh_auto_test sets $HOME to a temporary
directory (#942111) which might well be enough to run the test suite
non-destructively. If that's sufficient, I'm sure the maintainer of the
backintime package would appreciate a tested patch sent to #940319.
The way to test this would be to build backintime in sbuild, with a uid
whose "official" home directory in /etc/passwd doesn't exist in the chroot.

The other angle this could be approached from is as an upstream developer:
as an upstream, would you really want running the backintime test suite
to make potentially destructive changes to your real home directory? As
an upstream developer of other packages, I wouldn't want that: if I have
made an implementation mistake, I want to be able to find out about that
by running the test suite, knowing that the test suite won't damage my
real home directory.

Making the test suite write to a mock home directory instead of to the
real home directory, and changing or unsetting environment variables that
point to the real home directory (again, see #942111) during automated
testing, would make the test suite safer and more predictable.
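As a sketch of that idea (the function and test names here are purely
illustrative, not backintime's actual code), a test runner can point
$HOME at a throwaway directory for the duration of the tests:

```python
import os
import tempfile

# Run a callable with $HOME pointing at a temporary directory, so that
# anything it writes "to the home directory" is isolated and discarded.
def run_with_mock_home(test_func):
    old_home = os.environ.get('HOME')
    with tempfile.TemporaryDirectory() as mock_home:
        os.environ['HOME'] = mock_home
        try:
            return test_func()
        finally:
            if old_home is None:
                os.environ.pop('HOME', None)
            else:
                os.environ['HOME'] = old_home

def writes_to_cache():
    # ~ now expands to the mock home, so this is non-destructive
    cache = os.path.join(os.path.expanduser('~'), '.cache')
    os.makedirs(cache, exist_ok=True)
    return cache

cache_dir = run_with_mock_home(writes_to_cache)
print(cache_dir)
```

The mock directory (and everything written into it) is removed when
the callable returns, and the real $HOME is restored afterwards.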

smcv



Re: [backintime] Switch the maintainer to "Debian Python Team (DPT)"

2023-07-28 Thread Simon McVittie
On Fri, 28 Jul 2023 at 11:08:38 +, c.bu...@posteo.jp wrote:
> Am 28.07.2023 11:53 schrieb Carsten Schoenert:
> > I don't see any workaround and there is non needed. The bug issue is
> > about the not usable upstream test suite that would need to be called
> > through d/rules.
> 
> Maybe this is again about my expectations and wrong assumptions.
> 
> So it is possible to have packages in the debian repo that don't run any
> tests? I wasn't expecting this.
> So Back In Time is in Debian for many years and never run tests on the
> Debian build system? I'm shocked.

Some packages can sensibly run tests at build-time, and for those packages,
we usually try to do so. Some packages can't sensibly run tests at
build-time (for instance if they need access to a GPU) so we don't; and
for some packages we run tests at build-time, but we need to skip or ignore
specific tests, or even ignore failure entirely, because the tests are
known to be unreliable on some or all architectures.

The only requirements on testing are:

* the package's uploader has done whatever amount of automated or manual
  testing they feel is appropriate;
* if automated tests *are* run, then they must succeed (or the failures
  must be explicitly ignored, if they're not believed to reflect a serious
  problem)
* if the package runs build-time tests, they must meet the requirements in
  Policy (not writing to the user's home directory, not accessing the
  internet, etc.)

If the uploader of backintime has tested it manually, either by running
the test suite themselves or in real-world use of the updated package, then
that's fine for an upload.

smcv



Re: how to properly split up into python3-foo and foo-util package?

2023-07-17 Thread Simon McVittie
On Mon, 17 Jul 2023 at 23:16:03 +0200, Christoph Anton Mitterer wrote:
> How does one know (I guess it must be written somewhere and I just
> missed it - or was to lazy to read the relevant section O:-) ) which
> one the "current directory" is in which stage of the build?
> Or is it simply always ./debian/?

All the targets in debian/rules are invoked from the top-level source
directory, so relative paths like ./debian/rules will exist.

> > I would personally be inclined
> > to use something like
> > 
> > usr/lib/python3*/dist-packages/foo
> > usr/lib/python3*/dist-packages/Foo-*.egg-info
>
> This I don't however understand fully. I thought at least the dist-
> packages/ intermediate dir would come from pybuild?
> 
> Or is your aim rather at the foo and Foo-*.egg-info? Well for those I
> had hoped pybuild would do all possibly necessary checks.

I meant the part inside dist-packages. If version 1.0 had the paths I
quoted above, but my upstream changes the package so that in version 1.5,
it now installs:

usr/lib/python3*/dist-packages/foo
usr/lib/python3*/dist-packages/bar
usr/lib/python3*/dist-packages/Foobar-*.egg-info

then those are changes which affect compatibility with other software that
depends on this library, and I will need to react to them appropriately in
the packaging (and I can update the .install file at the same time).

smcv



Re: how to properly split up into python3-foo and foo-util package?

2023-07-12 Thread Simon McVittie
On Wed, 12 Jul 2023 at 11:19:07 +0200, Andrey Rakhmatullin wrote:
> I don't think "usr/bin stuff should likely go
> in the other". Many Python module packages ship executables, especially
> now that you no longer have Python 2 subpackages.

I would personally say that if the executables are significant, and
particularly if we expect that users will use them without knowing or
caring whether they are implemented in Python, then they should be in
a package with a name and Section that make it look like an executable
program and not a Python library: if nothing else, that will make them
a lot more discoverable. So I think Christoph is probably correct to be
thinking about shipping them as foo-util or similar.

If nothing else, making executables part of the interface of the
python3-foo package is going to come back to bite us when Python 4 happens
(hopefully not soon, but if there have been 3 major versions then there
will probably be a 4th eventually).

If the Python library is considered to be a public API, then it should
be in a python3-foo library. src:binwalk and src:tap.py are examples
of separating out executable + library like this.

If the Python library is considered to be a private implementation detail
of the executables, then it doesn't need to be packaged separately
(for example bmap-tools, dput, meson and offlineimap all contain
private Python libraries that are not a stable API), and ideally it
would be in a location that is not on the default import search path,
like /usr/share/foo or /usr/lib/foo (dput and offlineimap do this,
although bmap-tools and meson don't).

smcv



Re: how to properly split up into python3-foo and foo-util package?

2023-07-12 Thread Simon McVittie
On Wed, 12 Jul 2023 at 02:21:48 +0200, Christoph Anton Mitterer wrote:
> 2) I then tried with such package.install files like those:
>foo-util.install:
>  usr/bin
> 
>python3-foo.install:
>  usr/lib
> 
>a) Why does it work to use just usr/... and not debian/tmp/usr/... ?
>   Actually, both seems to work, which confuses me even more ^^

   From debhelper compatibility level 7 on, dh_install will fall back to
   looking in debian/tmp for files, if it does not find them in the
   current directory (or wherever you've told it to look using
   --sourcedir).
   — dh_install(1)

So what dh_install -pfoo-util does for the usr/bin line is:

- is there a ./usr/bin? - no
- is there a ./debian/tmp/usr/bin? - yes, so package that

I think the short form with just usr/... is the more obvious one in simple
cases like this. Writing it out the long way is only necessary if you're
doing multiple builds (like dbus, which builds and installs the same
source code with different options into debian/tmp and debian/tmp-udeb),
or if you have to disambiguate because your source code contains a
./usr directory.

But if you put a greater value on "explicit is better than implicit"
than I do, then you might prefer to prefix everything with debian/tmp/.

>b) What - if any - is the proper way here? Like I did, with one
>   argument?
>   Or should one use the two arguments per line version?

If the upstream package installs files into their correct places, then
one argument per line is fine, and provides "don't repeat yourself".

More than one argument per line is for when you want to change upstream's
installation location for whatever reason, for example:

usr/bin/this-is-a-game usr/games

or when you are taking a file from the source tree that is not installed
by the upstream build system, and installing it directly:

contrib/utils/some-extra-utility usr/bin

>   Or perhaps (for the 2nd file) rather usr/lib/python* ?

IMO it's often good to be relatively specific in .install files, so that
if your upstream makes an incompatible change, attempting to build an
updated package without acknowledging the change will FTBFS and alert you
that something unusual is happening. So I would personally be inclined
to use something like

usr/lib/python3*/dist-packages/foo
usr/lib/python3*/dist-packages/Foo-*.egg-info

on the basis that if those no longer match, then upstream has made a
significant change that will affect compatibility for third-party code,
in which case I want to know about it (and perhaps do an experimental
upload and check that dependent packages are ready for it before going
to unstable).

> 3) In debian/tmp, the subdir was /python3.11/ but in the final .deb
>file it's actually /python3/ (as I think it should be).
>Is it expected, that first it's /python3.11/ or am I doing anything
>wrong?

I think this is expected, dh_python3 moves it around automatically.

> 4) Are there way to have the Dependencies in control even more
>autodetected?
>a) That foo-util's dependency on python3-foo is somehow auto-filled
>   by dh_python?

Even if it was auto-detected, dependencies within a single source
package should usually be relatively strict, because within the same
source package it's common to make use of internal interfaces that are
not considered to be public API - so you probably want to override it
so it will depend on python3-foo (= ${binary:Version}) anyway.
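For instance, a hypothetical foo-util stanza in debian/control (package
names illustrative) would carry the strict dependency explicitly:

```
Package: foo-util
Architecture: all
Depends:
 python3-foo (= ${binary:Version}),
 ${misc:Depends},
 ${python3:Depends},
```

The (= ${binary:Version}) keeps both binary packages from the same
source in lockstep, so internal interfaces can never mismatch.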

smcv



Re: Unittests writing to HOME (backintime)

2023-03-29 Thread Simon McVittie
On Wed, 29 Mar 2023 at 07:52:35 +, c.bu...@posteo.jp wrote:
> My question now is why newer version of this package are uploaded then? I
> couldn't find that the tests where deactivated. Maybe this "disabled on
> Debian auto-builders" is outdated and today it is possible to write to HOME
> during build?

In compat level 13, debhelper will set $HOME to a temporary directory
during the build (see #942111).

The user's "official" home directory in the system user database
(getent passwd $(id -u) | cut -d: -f6) is still set to a directory that
doesn't exist (/nonexistent) on the official buildds.

From an upstream point of view it's still bad for unit tests to do
anything destructive in $HOME, because upstream developers who run the
unit tests won't want their data destroyed, but it's harmless to do
non-destructive things like writing to ~/.cache.

smcv



Re: What is this about the metainfo-file?

2023-01-20 Thread Simon McVittie
This is really a question about packaging applications, not a question
about packaging Python.

On Fri, 20 Jan 2023 at 09:48:17 +, c.bu...@posteo.jp wrote:
> What is the advantage for Debian users of such a file? Debian doesn't offer
> a "software center".

Yes it does. GNOME Software and KDE Discover are examples of applications
in Debian that use appstream metadata to show a more user-facing view
of the archive than apt does (showing only the interesting end-user-visible
applications, and not showing implementation details like libraries).

> Python projects to offer meta data in form of pyproject.toml or setup.cfg.
> So why should I add another (redundant) meta data file?

One reason is that if software-installation applications like GNOME
Software and KDE Discover were expected to parse Python-specific metadata
for apps that happen to be written in Python, Perl-specific metadata
for apps that happen to be written in Perl, and so on, then that would
scale really badly across all the various languages that exist.

Another reason is that all Python packages, whether they are user-facing
applications (like backintime-qt) or libraries (like dbus-python),
have the Python metadata; but apps like GNOME Software and KDE Discover
mostly only want to show user-facing applications that might appear
in your desktop environment's menus. Having dbus-python appear in a
"software center" app would be pointless and confusing.

> Where is the location of that file? Should it be in the root of the repo or
> is it part of the "/debian" (with control file, etc) folder in that repo?

It should be installed into /usr/share/metainfo in the .deb.

Exactly how it gets there is up to you: the upstream developer of the
package could provide a static XML file in the source package (often
at the top level or in ./data), or the Debian packaging could provide a
static XML file in ./debian, or it could even be generated dynamically at
build-time from metadata in some other representation such as setup.cfg
or pyproject.toml.

This is similar to the way the requirement for a .desktop file is that
it ends up in /usr/share/applications *somehow*, but exactly how it gets
there is up to you, and generating it from a template is one possible
implementation.

If this information is a Debian-specific addition, then please talk to your
upstream about choosing an app ID (in reversed-DNS style, like D-Bus names),
because the app ID should be the same for the Debian package, the Fedora
package, a Flatpak package on Flathub (if it exists), a Snap package (if it
exists), and so on.

In the case of backintime-qt, it seems to use "net.launchpad.backintime"
for D-Bus and polkit, so that would perhaps be a good choice for the
app ID as well.
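A minimal metainfo file along these lines (tag contents here are only
a sketch; check the AppStream specification for the required fields)
would then be installed as
/usr/share/metainfo/net.launchpad.backintime.metainfo.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop-application">
  <!-- The app ID should match the one used for D-Bus, polkit,
       Flatpak, etc., so that all distributors agree on it -->
  <id>net.launchpad.backintime</id>
  <metadata_license>CC0-1.0</metadata_license>
  <name>Back In Time</name>
  <summary>Backup tool</summary>
</component>
```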

smcv



Re: Python 3.11 for bookworm?

2023-01-07 Thread Simon McVittie
On Sat, 07 Jan 2023 at 10:23:19 +0200, Andrius Merkys wrote:
> If I may, I would as well be grateful if someone could give a look at:
> 
> #1023972 [src:python-ase] FTBFS with Python 3.11 due to
> pathlib.Path.__enter__() deprecation
> 
> I have no idea how to fix this and the upstream is silent.

My first thought on seeing this was: why were Path objects a context
manager in the first place? What would that mean?

Looking at the code in python3.10 and python3.11 pathlib, it seems the
reason this is deprecated is indeed that it didn't make sense:

def __enter__(self):
# In previous versions of pathlib, __exit__() marked this path as
# closed; subsequent attempts to perform I/O would raise an IOError.
# This functionality was never documented, and had the effect of
# making Path objects mutable, contrary to PEP 428.
# In Python 3.9 __exit__() was made a no-op.
# In Python 3.11 __enter__() began emitting DeprecationWarning.
# In Python 3.13 __enter__() and __exit__() should be removed.
warnings.warn("pathlib.Path.__enter__() is deprecated and scheduled "
  "for removal in Python 3.13; Path objects as a context "
  "manager is a no-op",
  DeprecationWarning, stacklevel=2)
return self

def __exit__(self, t, v, tb):
pass

So the solution seems to be that if your package contains this:

with some_path_object:
do_stuff(some_path_object)

replace it with:

do_stuff(some_path_object)

and if it contains:

with Path(...) as my_path:
do_stuff(my_path)

replace with:

my_path = Path(...)
do_stuff(my_path)

I hope that should be a relatively straightforward change.

smcv



Re: Bug#1026312: Setuptools 65.5.0-1.1 breaks installing Python modules/extensions via meson

2022-12-20 Thread Simon McVittie
Control: severity -1 serious
Control: block 1026526 by -1
Control: block 1026751 by -1
Control: block 1026732 by -1
Control: affects -1 + meson python3-distutils src:dbus-python src:libgit2-glib src:gi-docgen

On Sun, 18 Dec 2022 at 11:11:46 +0100, Enrico Zini wrote:
> After the 65.5.0-1.1 update, installing Python modules and extensions
> via meson makes them end up in /usr/local instead of /usr.
> 
> More details are in this debian-devel thread:
> 
> https://lists.debian.org/debian-devel/2022/12/msg00152.html
> 
> This currently breaks wreport and dballe, and xraylib when I try to
> build its Python extensions.

In Lucas Nussbaum's latest round of mass-rebuilds, multiple packages
including dbus-python, gi-docgen and libgit2-glib FTBFS because their
Python modules are installed into /usr/local instead of the expected /usr.

gi-docgen_2022.2+ds-1 is a nice simple example. When it was built on the
buildds, this happened:

> cd obj-x86_64-linux-gnu && LC_ALL=C.UTF-8 meson .. --wrap-mode=nodownload 
> --buildtype=plain --prefix=/usr --sysconfdir=/etc --localstatedir=/var 
> --libdir=lib/x86_64-linux-gnu -Ddevelopment_tests=false
...
>dh_auto_install -i -O--buildsystem=meson
>   cd obj-x86_64-linux-gnu && 
> DESTDIR=/<>/gi-docgen-2022.2\+ds/debian/tmp LC_ALL=C.UTF-8 ninja 
> install
> [0/1] Installing files.
> Installing subdir /<>/gidocgen to 
> /<>/debian/tmp/usr/lib/python3/dist-packages/gidocgen
> Installing /<>/gidocgen/utils.py to 
> /<>/debian/tmp/usr/lib/python3/dist-packages/gidocgen
— 
https://buildd.debian.org/status/fetch.php?pkg=gi-docgen=all=2022.2%2Bds-1=1668280443=0

but when it was rebuilt by Lucas' infrastructure more recently, this
happened:

> cd obj-x86_64-linux-gnu && LC_ALL=C.UTF-8 meson setup .. 
> --wrap-mode=nodownload --buildtype=plain --prefix=/usr --sysconfdir=/etc 
> --localstatedir=/var --libdir=lib/x86_64-linux-gnu -Ddevelopment_tests=false
...
>dh_auto_install -O--buildsystem=meson
>   cd obj-x86_64-linux-gnu && 
> DESTDIR=/<>/gi-docgen-2022.2\+ds/debian/tmp LC_ALL=C.UTF-8 ninja 
> install
> [0/1] Installing files.
> Installing subdir /<>/gidocgen to 
> /<>/debian/tmp/usr/local/lib/python3.10/dist-packages/gidocgen
> Installing /<>/gidocgen/__init__.py to 
> /<>/debian/tmp/usr/local/lib/python3.10/dist-packages/gidocgen
— http://qa-logs.debian.net/2022/12/20/gi-docgen_2022.2+ds-1_unstable.log

and as a result, dh_install failed to find the expected files.

This appears to be a behaviour change in the build system, and I think it's
triggered by the new setuptools. Meson's python module has special cases
for distutils.command.install containing deb_system, and the "real"
distutils.command.install in python3-distutils has that scheme patched
into it; but the new python3-setuptools overrides parts of distutils:

> >>> import distutils.command.install
> >>> distutils.command.install.__file__
> '/usr/lib/python3/dist-packages/setuptools/_distutils/command/install.py'
> >>> sorted(distutils.command.install.INSTALL_SCHEMES)
> ['nt', 'nt_user', 'osx_framework_library', 'posix_home', 'posix_prefix', 
> 'posix_user', 'pypy', 'pypy_nt']

... which notably does not contain the deb_system that Meson relies on
for the expected behaviour on Debian systems.

(See file:///usr/lib/python3/dist-packages/mesonbuild/modules/python.py
for full details)

> Fun fact: unless I missed something in sources.debian.net, there seems
> to be nobody else but me maintaining Debian packages which install
> Python modules via meson.

You must have missed something in sources.debian.net, because you're not
alone here: dbus-python, libgit2-glib and gi-docgen show similar symptoms.

smcv



Re: Bug#1018689: override: python3:python/standard

2022-09-04 Thread Simon McVittie
On Sat, 03 Sep 2022 at 22:41:36 -0700, Sean Whitton wrote:
> On Sun 28 Aug 2022 at 10:33PM -05, Daniel Lewart wrote:
> > Currently, python3 is Priority: optional.
> >
> > The following Buster packages have Priority: standard:
> >   * python
> >   * python-minimal
> >   * python2.7
> >   * python3-reportbug
> >
> > Now the following Priority:standard packages depend on python3
> > (directly or indirectly):
> >   * apt-listchanges
> >   * python3-reportbug
> >   * reportbug
> >
> > Therefore, I think that python3 should change from:
> > Priority: optional
> > to:
> > Priority: standard
> 
> I don't think these dependency relationships bear directly on the issue?

I agree. In Debian Policy versions earlier than 4.0.1, we had the rule
that if a Depends on b, then the Priority of a <= the Priority of b.

However, that rule was removed in Policy 4.0.1 (2017), for good reasons:
briefly, it resulted in supporting packages like obsolete/no-longer-used
shared libraries and older versions of gcc-*-base remaining installed
when there was no longer any real reason for them to be. The rule in
Policy §2.5 is now:

The priority of a package is determined solely by the functionality it
provides directly to the user. The priority of a package should not be
increased merely because another higher-priority package depends on it

So python3 should not be elevated to standard priority merely because
tools like reportbug depend on it: the dependency system will already
ensure that reportbug's dependencies are present.

Instead, the decision should be made on this basis: imagine that
apt-listchanges and reportbug had been written or rewritten in some other
language (perhaps Perl or C). Would we still want the python3 interpreter
to be available in a standard Debian installation on its own merits,
as an interpreter for user-written scripts and/or an interactive REPL,
as part of "a reasonably small but not too limited character-mode system"
that "doesn’t include many large applications"?

On one hand, it's probably a positive thing for a "standard" Debian
installation to include an interpreter for an easy-to-learn programming
language with fewer sharp edges than shell script, both as something we
can suggest people use to learn more about programming and as something
we can encourage people to prefer over writing shell scripts.

On the other hand, Python upstream would likely prefer for someone who
is interested in learning Python to be expected to install python3-full,
and it's difficult to say "python3 should be Priority: standard" without
either making subjective value-judgements about the relative quality
of various programming languages, or using reasoning that would apply
equally to promoting (for example) ruby, nodejs or lua to standard
(which seems like an approach that would scale poorly).

Looking at other interpreters, we already have bash and perl-base in
Essential, awk transitively Essential, and perl in Priority: standard. I
would personally lean more towards demoting perl to optional than
promoting python3 to standard.

smcv



Re: Bug#1017959: RFP: meson-python -- Meson PEP 517 Python build backend

2022-09-03 Thread Simon McVittie
Control: retitle -1 ITP: meson-python -- Meson PEP 517 Python build backend
Control: owner -1 !

On Tue, 23 Aug 2022 at 01:25:49 +0200, Drew Parsons wrote:
> * Package name: meson-python
>   Description : Meson PEP 517 Python build backend

I started looking at this because I've wondered whether to use it for
dbus-python. Work in progress:
https://salsa.debian.org/python-team/packages/meson-python
(not really tested yet, I don't yet have an upstream project that
needs it).

Co-maintainers welcome; Drew, would you be interested?

smcv



Re: pybuild-autopkgtest (was: Notes from the DC22 Python Team BoF)

2022-07-27 Thread Simon McVittie
On Wed, 27 Jul 2022 at 09:18:42 +0100, Julian Gilbey wrote:
> There seems to be little point running both pybuild-autopkgtest and a
> manually written debian/tests/* test suite.

I think it can make sense to have both. d/tests is the right place for
an integration test that checks things like "any user-facing executables
are correctly in the PATH", which an upstream-oriented test probably
cannot assert because you might be installing into ~/.local or similar.

Using src:tap.py as an example, pybuild-autopkgtest should presumably
replace d/tests/python3 and/or d/tests/python3-with-recommends (which
are tests for the library you get from "import tap"), but wouldn't test
the tappy(1) CLI entry point, which is tested by d/tests/tappy.

smcv



Re: Build and run-time triplets

2022-06-09 Thread Simon McVittie
On Thu, 09 Jun 2022 at 09:56:42 +0100, Julian Gilbey wrote:
> OK (and yes, it does require the full path at runtime).  What triplet
> do I use in d/rules?  dpkg-architecture offers 6 different ones:
> DEB_{BUILD,HOST,TARGET}_{GNU_TYPE,MULTIARCH}?  I'm guessing
> DEB_TARGET_MULTIARCH, but I'm really not certain, so it would be great
> to confirm that.

You'd want DEB_HOST_MULTIARCH here (or use ${LIB} as I mentioned in a
previous message to this thread).

DEB_BUILD_* is the architecture you are building *on*. For example,
if you cross-build ARM packages on an x86 machine, DEB_BUILD_* is x86.
Only use this rarely, for instance if you are compiling a program that
you will run during the build but not install into the package. If you
don't know that this is the correct choice then it probably isn't.

DEB_HOST_* is the architecture you will run the built code on. For example,
if you cross-build ARM packages on an x86 machine, DEB_HOST_* is ARM.
This is usually the interesting one in practice.

DEB_TARGET_* is almost never relevant: it only matters if you are
compiling a compiler.

*_MULTIARCH is the normalized multiarch tuple, for example i386-linux-gnu
on the i386 architecture.

*_GNU_TYPE is the GNU tuple to be used by Autotools build systems and
to choose a cross-compiler, for example i686-linux-gnu on the i386
architecture.

> About the location, though: why do compiled Python libraries live in
> /usr/lib/python3/dist-packages/ and not
> /usr/lib/<triplet>/?

The Python jargon for a native C/C++ library that can be loaded to
provide a Python module is an *extension*.

If a program will load a shared object as a plugin, then the shared object
needs to be installed into whatever directory the program has chosen as
the location where it looks for plugins, otherwise the program will not
find it: it is the program's choice, not the plugin's choice.

Python extensions are like plugins for Python, and Python has chosen
/usr/lib/python3/dist-packages as one of several locations where it looks
for extensions (specifically, extensions that come from a package other
than Python itself), so that is where you have to put them. If the extension
that implements the "foo" module is somewhere else, then Python code that
does "import foo" won't find it.

(Analogous: if a package wants to provide a plugin for GStreamer, it has
to put it in /usr/lib/<triplet>/gstreamer-1.0. If it puts the plugin
anywhere else, GStreamer will not find it and the plugin will be useless.)

The shared object also needs to follow the naming convention and API that
the program that will load it has chosen. The naming convention that Python
has chosen is that if you will load the module with "import foo", then
the filename is something like "foo.cpython-310-x86_64-linux-gnu.so" or
"foo.abi3.so", and it must export a symbol named "PyInit_foo" with a
particular signature.
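You can ask the interpreter itself what suffix it expects; a minimal sketch
(the exact value varies by Python version and architecture):

```python
import sysconfig

# The suffix Python appends to extension filenames on this platform,
# e.g. ".cpython-312-x86_64-linux-gnu.so" on a 64-bit Linux build.
suffix = sysconfig.get_config_var("EXT_SUFFIX")
print("foo" + suffix)  # one of the filenames "import foo" would look for
```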

Something that might be confusing you is that for a lot of Python modules
that are implemented in C, the thing that API users are meant to import
directly is Python code, and the Python code internally imports an
extension with a different name. For instance, to use python3-dbus,
you "import dbus", but the lower-level C code that implements that is
actually in a private extension named "_dbus_bindings" (so the filename is
_dbus_bindings.cpython-310-x86_64-linux-gnu.so). As an API user, you are
not meant to "import _dbus_bindings", but the Python code in python3-dbus
needs to be able to do that in order to do its job.

> And is there a good reason not to do
> the same with this Python-package-specific library?

If it isn't a Python extension (cannot be loaded into Python with
"import foo" to provide a module API to Python code) then it would seem
inappropriate to put it in the directory that is reserved for Python
extensions.

> It's not for
> general use, so I can't see why I shouldn't put it in the python3
> directory with the other compiled Python module libraries.

Because it isn't a Python extension (what you are calling a "compiled
Python module library"), but instead some other sort of library with a
different purpose (from your other messages, it seems like it's being
injected into a program that is being debugged with gdb).

smcv



Re: Build and run-time triplets

2022-06-09 Thread Simon McVittie
On Thu, 09 Jun 2022 at 13:03:25 +0500, Andrey Rahmatullin wrote:
> The normal way for this is putting it into
> /usr/lib/<triplet>/pkgname/foo.so, and according to the code below you'll
> need the full path at the run time so you indeed need the triplet at both
> build and run time.

You can do something like

 handle = dlopen("/usr/${LIB}/pkgname/foo.so", flags);

(that's a literal string passed to dlopen!) and ld.so/libdl will expand
${LIB} at runtime to the token that was configured into our glibc, which
in Debian's case is "lib/x86_64-linux-gnu" or similar. On non-Debian
distributions, ${LIB} typically expands to "lib64" or "lib32" or "lib"
instead, whichever one is most appropriate for the architecture and the
distro's layout.

Then you'd install the private library into what Autotools would refer to
as ${libdir}/pkgname/foo.so (adjust as necessary for other build systems)
and it will usually end up in the correct place. This assumes that
${libdir} is configured to something like
${exec_prefix}/lib/x86_64-linux-gnu or ${exec_prefix}/lib64 as appropriate
for the distribution, but that's normally true anyway, and in particular
should be true in debhelper.

Replace /usr with the ${exec_prefix} determined at compile-time if you
want to send code upstream that supports custom installation prefixes.

(Historical note: this is broken on ancient/EOL versions of Debian and
maybe some of the Extended Security Maintenance versions of Ubuntu,
where ${LIB} mistakenly expanded to the multiarch tuple without the
"lib/" prefix. Non-FHS distributions like Exherbo and NixOS might also
need to adjust it a bit, but they have to adjust everything else anyway,
so they should be used to that...)

smcv



Re: Python C-library import paths

2022-04-02 Thread Simon McVittie
On Sat, 02 Apr 2022 at 12:55:37 +0100, Wookey wrote:
> On 2022-04-01 00:30 -0400, M. Zhou wrote:
> > They have written
> > their own ffi loader, so I think it is an upstream bug. The upstream
> > should detect and add multiarch directory to the paths.
>
> A correct implementation really should use the full ldconfig set of search
> paths.

I think what they should actually be doing on Linux (at least in distro
packages) is taking a step back from all these attempts to reproduce
the system's search path for public shared libraries, and instead doing
this in https://github.com/apache/tvm/blob/main/python/tvm/_ffi/base.py:

ctypes.CDLL('libtvm.so.0')

which will (ask glibc to) do the correct path search, in something like
99% less code.
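A minimal illustration, with libm standing in for libtvm (the soname is the
only thing the code needs to know; glibc does the search):

```python
import ctypes

# Pass a bare soname and let the dynamic linker search ld.so.cache,
# RPATH/RUNPATH and LD_LIBRARY_PATH — no path reconstruction in Python.
libm = ctypes.CDLL("libm.so.6")
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(9.0))  # 3.0
```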

Maybe all this complexity is needed on Windows or in a "relocatable"
package, but for a distro package it's completely unnecessary and
sometimes harmful.

> I also don't think it should use the $PATH paths for finding
> libraries (but maybe upstream have some reason for doing that)

I suspect the reason is: on Windows, PATH is the equivalent of Linux PATH,
but it also has a dual role as the equivalent of Linux LD_LIBRARY_PATH.

smcv



Re: sysconfig default scheme change in Python 3.10

2022-03-28 Thread Simon McVittie
On Mon, 28 Mar 2022 at 21:17:40 +, Stefano Rivera wrote:
> We've fixed this issue in pybind11 and automake-1.16, so far.

Does this mean that packages that build using Automake, but use the
pregenerated configure/Makefile.in provided by the upstream maintainer
(often on an older or non-Debian distro) instead of regenerating the
build system with dh_autoreconf or equivalent, are going to be broken now?

If true, this will hopefully not affect too many packages, because using
dh_autoreconf is the debhelper default and has been considered a Debian
best-practice for a while; but it seems worth being aware of, because I'm
reasonably sure there are still some packages that deliberately do not
regenerate the build system, and some of those probably build CPython
modules.

smcv



Re: DPT repositories checks/"violations" report

2021-11-27 Thread Simon McVittie
On Sat, 27 Nov 2021 at 09:38:41 +, Scott Kitterman wrote:
> I don't think the pypi tarball "issue" should be presumed to be a
> problem at all.  I wasn't paying attention to Debian when that discussion
> happened, but in my experience there was a lot wrong with the idea.
> A properly constructed sdist is exactly what we want to build a package
> from.  That's almost never found on GitHub.

I think the closest we got to a conclusion was "it depends": if your
upstream reliably produces a properly constructed sdist (or at least is
happy to accept pull requests to make their sdist properly constructed)
then it makes an ideal source package, but if your upstream treats sdists
more like the way a C programmer would treat a prebuilt binary
release (omitting source and including content generated from that source
instead), then a git clone is probably more appropriate.

To me, at least, it makes sense for this to be a case-by-case decision
made by someone who is familiar with this specific upstream - and wanting
to have someone familiar with this specific upstream is why we have named
maintainers, rather than having everything collectively-maintained like
some distributions do.

(For what it's worth, the GNOME team uses a mixture of `meson dist` and
git clones, and that's with an upstream that is a single project that is
in principle meant to be team-maintained with a single cohesive policy -
so if we can't standardize on one source format being "always the right
one" for GNOME, I would be very surprised if the Python team was able to
standardize on one source format for a large number of separate upstreams
linked only by their implementation language.)

smcv



Re: [RFC] DPT Policy: Canonise recommendation against PyPi-provided upstream source tarballs

2021-06-26 Thread Simon McVittie
On Fri, 25 Jun 2021 at 18:29:19 -0400, Nicholas D Steeves wrote:
> Take for example the
> case where upstream exclusively supports a Flatpak and/or Snap
> package...

Flatpak and Snap aren't source package formats (like Autotools "make dist"
or Meson "meson dist" or Python sdist), they're binary package formats
(like .deb or Python wheels).

I don't know Snap infrastructure well, but Flatpak apps are built from
a manifest that lists one or more source projects, referenced as either
a VCS commit with a known-good commit identifier (usually git) or an
archive with a known-good hash (usually tar and sha256). The manifest
format and the upstream-recommended Flathub "app store" infrastructure
try to push authors towards building from source, although as with
.deb, technically it's possible to release an archive containing binary
blobs and use it as the "source" (which is how proprietary apps like
com.valvesoftware.SteamLink work, similar to many packages in the non-free
archive area).

If the upstream only provides source via their VCS, then obviously we
have to use `git archive` or equivalent because we have no other way to
get a flat-file version, and the experimental dpkg-source format
"3.0 (git)" isn't currently allowed in the Debian archive. If the upstream
releases tarball artifacts and builds their Flatpak app from those, we can
use those too.

I think the problem case here is when the upstream releases something that
has the name and format we would associate with a source release, but
has contents that are somewhere between a pure source release and a binary
release. Autotools "make dist" has always been a bit like this (it contains
a pre-generated build system so that people can build on platforms where
m4 and perl aren't available, and it's common to include pre-generated
convenience copies of things like gtk-doc documentation); Python sdist
archives are sometimes similar. In both Autotools and setuptools, it's
also far too easy to have files in the VCS but accidentally omit them from
the source distribution, by not listing them in Autotools EXTRA_DIST or in
setuptools MANIFEST.in.

What I have generally done to resolve this problem is to use the upstream's
official source releases ("make dist" or sdist), and if they are missing
files that we want, send merge requests to add them to the next release
(for example https://gitlab.gnome.org/GNOME/gi-docgen/-/commit/5fcaba6f
and https://github.com/containers/bubblewrap/commit/1c775f43),
and if necessary work around missing files by shipping them in debian/
(for example https://salsa.debian.org/gnome-team/gi-docgen/-/commit/f16845d9).

Several upstreams of projects I work on, notably GNOME, have been
switching from Autotools to Meson, and one of the reasons I'm in favour
of this tendency is that the Meson "meson dist" archive is a lightly
filtered version of `git archive` (it excludes `.gitignore` and other
highly git-specific files, but includes everything else), making it
harder for upstreams to accidentally omit necessary source code from
their source releases.

smcv



Re: [RFC] DPT Policy: Canonise recommendation against PyPi-provided upstream source tarballs

2021-06-25 Thread Simon McVittie
On Fri, 25 Jun 2021 at 16:42:42 -0400, Nicholas D Steeves wrote:
> I feel like there is probably consensus against the use of PyPi-provided
> upstream source tarballs in preference for what will usually be a GitHub
> release tarball

This is not really consistent with what devref says:

The defining characteristic of a pristine source tarball is that the
.orig.tar.{gz,bz2,xz} file is byte-for-byte identical to a tarball
officially distributed by the upstream author

— 
https://www.debian.org/doc/manuals/developers-reference/best-pkging-practices.en.html#best-practices-for-orig-tar-gz-bz2-xz-files

Sites like Github and Gitlab that generate tarballs from git contents
don't (can't?) guarantee that the exported tarball will never change -
I'm fairly sure `git archive` doesn't try to make that guarantee - so it
seems hard to say that the official source code release artifact is always
the one that appears as a side-effect of the upstream project's git hosting
platform.

That doesn't *necessarily* mean that the equivalent of a `git archive`
is always the wrong thing (and indeed there are a lot of packages where
it's the only reasonably easily-obtained thing that is suitable for our
requirements), but I don't think it's as simple or clear-cut as you
are implying.

devref also says:

A repackaged .orig.tar.{gz,bz2,xz} ... should, except where impossible
for legal reasons, preserve the entire building and portability
infrastructure provided by the upstream author. For example, it is
not a sufficient reason for omitting a file that it is used only
when building on MS-DOS. Similarly, a Makefile provided by upstream
should not be omitted even if the first thing your debian/rules does
is to overwrite it by running a configure script.

I think devref goes too far on this - for projects where the official
upstream release artifact contains a significant amount of content we
don't want (convenience copies, portability glue, generated files, etc.),
checking the legal status of everything can end up being more work than
the actual packaging, and that's work that isn't improving the quality of
our operating system (which is, after all, the point).

However, PyPI sdist archives are (at least in some cases) upstream's
official source code release artifact, so I think a blanket recommendation
that we ignore them probably goes too far in the other direction.

I'd prefer to mention both options and have "use your best judgement,
like you have to do for every other aspect of the packaging" as a
recommendation :-)

smcv



Re: upstream python concerns, python3-full package for bullseye

2021-02-12 Thread Simon McVittie
On Fri, 12 Feb 2021 at 10:40:48 +0100, Valentin Vidic wrote:
> Perhaps python3-core would be more appropriate, and python3-full can be
> left for something even bigger.

We have a python3 package already. If I saw a python3 package and a
python3-core package, I would expect that either they're the same thing,
or python3-core is a smaller and less fully-featured version of python3.

Conversely, we already have a python3-minimal package, and I would expect
python3-core to be larger and more fully-featured than python3-minimal
(or maybe the same), because by definition if it's minimal then it's
the least Python you can have. So:

python3-minimal ≤ python3-core ≤ python3 ≤ python3-full

Changing the meaning of the python3 name is not an option right now,
because that would be a disruptive change, and we're already in the
Debian 11 freeze.

If we want to have a metapackage that is "larger" than our current python3,
then the only option that's really feasible for Debian 11 is for that
larger metapackage to have a new name that is chosen to imply "this is
larger than python3", like python3-full.

> collectd-core - statistics collection and monitoring daemon (core system)
> gnome-core - GNOME Desktop Environment -- essential components

collectd-core is smaller than collectd, gnome-core is smaller than gnome,
and so on.

smcv



Re: Should Binaries provided by python-babel have a "python3-" prefix?

2020-11-27 Thread Simon McVittie
On Thu, 26 Nov 2020 at 22:33:19 +0100, Steffen Möller wrote:
> On 26.11.20 13:16, Nilesh Patra wrote:
> > Currently src:python-babel provides 3 binaries:
> >
> > * python3-babel
> > * python-babel-doc
> > * python-babel-localedata
> >
> > of which python3-babel is the main binary, -babel-doc is for the
> > documentation and -babel-localedata is for storing locale data files
> > used by python3-babel.
> >
> > Should this be renamed to a "python3-" prefix for both binaries? They
> > do not contain any actual code though
>
> I propose to have the "3" only for packages that depend on python3. The
> source package name, documentation and data package names should not be
> versioned.

For the documentation,
https://www.debian.org/doc/packaging-manuals/python-policy/module_packages.html
says python-babel-doc is correct (I wrote this wording, but the
python3-defaults maintainers merged it and I think there's consensus
that it's right):

If the documentation for a module foo provided in python3-foo is
large enough that a separate binary package for documentation is
desired, then the documentation package should preferably be named
python-foo-doc (and in particular, not python3-foo-doc).

For the locale data, the policy doesn't say either way (Python libraries
with separate version-independent data are somewhat rare), but I agree that
python- is likely to be the most appropriate choice here too.

A good way to decide this is to think about what we would do if we had a
Python 4 that is incompatible with Python 3 (which I assume will happen
eventually, although hopefully not for a few years). If these packages
would be shared between python3-babel and python4-babel, then they should
be named with an unversioned python- prefix. That's the reasoning for why
the documentation gets a python- prefix.

The unversioned python- namespace is shared between "Python 2 specifically"
and "not specific to a Python version" for historical reasons: Python 1.x
and 2.x were sufficiently compatible that there was no need to distinguish
between python1-foo and python2-foo.

smcv



Re: Package naming advice: python3-pyls-jsonrpc or python3-jsonrpc-server?

2020-11-01 Thread Simon McVittie
On Sun, 01 Nov 2020 at 19:36:52 +0200, Otto Kekäläinen wrote:
> I am currently reviewing the Debian packaging at
> https://salsa.debian.org/python-team/packages/python-jsonrpc-server of
> the upstream project https://github.com/palantir/python-jsonrpc-server
> 
> Upstream uses 'python-jsonrpc-server' as the repository and also the
> pip package name. Should we follow that in Debian or perhaps use the
> alternative name 'python3-pyls-jsonrpc'?
> 
> Is there some existing naming convention/policy about Python modules
> of this sort?

When you say "follow that", do you mean for the Debian source package
name (.dsc, like dbus-python), or for the Debian binary package name
(.deb, like python3-dbus)?

The binary package name should be mechanically derived from what you
import. If you 'import pyls_jsonrpc', then python3-pyls-jsonrpc is right.
If you 'import pyls.jsonrpc', then python3-pyls.jsonrpc, and so on.

The source package name is less important, and could either resemble the
binary package name or match what upstream calls it.

smcv



Re: SETUPTOOLS_SCM_PRETEND_VERSION trick : how to use it in autopkgtest?

2020-06-09 Thread Simon McVittie
On Tue, 09 Jun 2020 at 11:50:02 +0200, Julien Puydt wrote:
> (1) during the autopkgtest run, we're not in the package source tree,
> are we? So there should be no access do d/changelog?

The cwd of each test is guaranteed to be the root of the source
package, which will have been unpacked but not built. However note
that the tests must test the installed version of the package,
as opposed to programs or any other file from the built tree.
— 
https://salsa.debian.org/ci-team/autopkgtest/blob/master/doc/README.package-tests.rst

I've found that a lot of the time, what makes most sense is to gather
the information you need from the source tree, copy the files you
need from the source tree into a temporary directory, cd into the
temporary directory, and do the rest of your testing there. This avoids
things like Python's default "add the script's directory to sys.path"
behaviour accidentally picking up the version of the library that's in
the source package, which would result in not testing the installed copy
as required. For example, src:python-mpd does this.
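A hedged sketch of such a test script ("import-smoke" is a hypothetical
name, and the stdlib json module stands in for the package under test):

```shell
#!/bin/sh
# Hypothetical debian/tests/import-smoke: run from an empty temporary
# directory so "import" cannot pick up modules from the source tree.
set -e
tmp="${AUTOPKGTEST_TMP:-$(mktemp -d)}"
cd "$tmp"
python3 -c 'import json; print("imported from:", json.__file__)'
```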

Another way to get a similar result is to install the tests as part of the
binary packages, cd into an empty temporary directory, and run them from the
installed location. src:tap.py does this (the tests are small, so they're
just included in the main binary package) and so does src:dbus-python (the
tests are larger and have non-trivial dependencies, so they're a separate
binary package).

autopkgtest guarantees that $AUTOPKGTEST_TMP is an empty temporary
directory, or you can make your own with mktemp -d or similar, or you
can use a tool like ginsttest-runner (aka gnome-desktop-testing-runner,
in the gnome-desktop-testing package) that does it for you.

smcv



Re: python 3.7 for Debian 9

2020-05-05 Thread Simon McVittie
On Tue, 05 May 2020 at 07:45:55 +0200, Vimanyu Jain wrote:
> I have my python application written in version 3.7 and would like to run the
> application on Debian. I would like to know of there is a plan to upgrade
> python to version 3.7 from 3.5 in Debian 9. 

The version of Python in Debian 9 will not change. However, Debian 10 is
the current stable release and has Python 3.7 as its default.

smcv



Re: pkg-config of python3

2020-04-24 Thread Simon McVittie
On Fri, 24 Apr 2020 at 18:32:06 +, PICCA Frederic-Emmanuel wrote:
> > If you want to embed python in an application, you need to use 
> > python3-embed.pc
> > Or python3-config --embed
> 
> then it links the program with -lpython3.8
> 
> so what is the purpose of python3.pc ?

You use python3.pc if you're building extensions that will be loaded by a
Python interpreter, like python3-apt or python3-gi.
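Concretely, the two .pc files differ in whether they link libpython
(transcript is illustrative; the version number and exact flags vary):

```console
$ pkg-config --cflags python3          # building an extension: headers only
-I/usr/include/python3.11
$ pkg-config --libs python3            # note: no -lpython3.11
$ pkg-config --libs python3-embed      # embedding: links the interpreter
-lpython3.11
```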

smcv



Re: Python3 modules not built for all supported Python versions

2020-03-30 Thread Simon McVittie
On Mon, 30 Mar 2020 at 15:30:01 +0200, Johannes Schauer wrote:
> does this mean that build-depending on python3-dev is wrong in general and
> should instead be replaced by build-depending on python3-all-dev?

It is only wrong for packages that build Python 3 extensions (binary
modules) that are intended to be loadable by all supported Python
3 versions (roughly: `find /usr/lib/python3/dist-packages -name '*.so'`).

For packages that embed Python 3, like the versions of vim that
have Python scripting support, or packages that use a Python 3
extension as an internal implementation detail of some tool, like
gobject-introspection, my understanding is that build-depending
on python3-dev continues to be appropriate. These extensions would
ideally be installed in a private directory, like gobject-introspection's
/usr/lib/x86_64-linux-gnu/gobject-introspection/giscanner/_giscanner.cpython-38-x86_64-linux-gnu.so
- but I know some upstreams and some downstream maintainers (arguably
incorrectly) package private extensions as though they were public
extensions, because the mechanics of doing so are much simpler.

> For example the package src:ros-geometry2 has a super simple
> dh-style rules file, basically just doing:
> 
> %:
>   dh $@ --buildsystem=cmake --with python3
> 
> What would I have to change to successfully fix this problem?

The general answer is that you would have to build it repeatedly in a
loop, with each supported version of Python 3 in turn. I am not aware
of a way to do this in a similarly simple rules file.
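To make the shape of that loop concrete, a rough and untested sketch (the
CMake variable name and per-version build directories are assumptions, not
a known-working recipe; py3versions -vs lists the supported versions):

```make
# Hypothetical debian/rules fragment: one out-of-tree cmake configure per
# supported Python 3 version, instead of a single dh_auto_configure pass.
override_dh_auto_configure:
	set -e; for v in $$(py3versions -vs); do \
		dh_auto_configure --builddirectory=build-$$v -- \
			-DPYTHON_EXECUTABLE=/usr/bin/python$$v; \
	done
# dh_auto_build and dh_auto_install would loop over build-$$v similarly.
```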

smcv



Re: New packages: -doc package with python or python3 prefix?

2020-03-28 Thread Simon McVittie
On Sat, 28 Mar 2020 at 11:44:35 +0100, ghisv...@gmail.com wrote:
> Le samedi 28 mars 2020 à 11:22 +0100, Christian Kastner a écrit :
> > The Python 2 removal page [1] states that existing python-$foo-doc
> > packages should not be renamed to python3-$foo-doc.
> > 
> > But what about new packages? I have a package in NEW that provides
> > python3-tpot, but should the doc package have a python- or python-3
> > prefix?
> > 
> > [1] https://wiki.debian.org/Python/2Removal
> 
> I believe it should remain python- (as the programming language),
> instead of python3- (the major version targeted).

In cases where the documentation is large enough to justify a separate
binary package, this matches my understanding of the policy. If/when there
is eventually a Python 4, I think we would want the module that is used
via "import tpot" to ship python3-tpot, python4-tpot and python-tpot-doc
binary packages; it would seem odd for the documentation for python4-tpot
to be in python3-tpot-doc.

If the documentation is small (in particular if there's just a README),
skip the -doc package and include the documentation in python3-tpot.

Unfortunately https://www.debian.org/doc/packaging-manuals/python-policy/
doesn't say anything either way on this. I think it should, and I'll open
a bug with a reference to the proposed patch in
.

smcv



Re: Python3.8 as default final status

2020-03-27 Thread Simon McVittie
On Fri, 27 Mar 2020 at 11:12:35 -0400, Scott Kitterman wrote:
> meson/0.52.1-1: #952610

This is fixed in experimental. The version in experimental is an unrelated
new upstream release candidate, but the relevant packaging change seems
readily backportable.

smcv



Re: Bug#949187: transition: python3.8

2020-02-05 Thread Simon McVittie
On Wed, 05 Feb 2020 at 08:18:41 +0100, rene.engelh...@mailbox.org wrote:
> Thanks, yes, that prevents the install of the "old"
> gobject-introspection with the new python3 from experimental.

Sorry, I wasn't thinking straight (I blame post-FOSDEM illness). That
isn't actually what you need if you want to port to python3.8 - it won't
support python3.8 until the binNMUs for this transition happen. If you
need a python3.8 version of gobject-introspection before then, you could
maybe NMU it into experimental?

I *think* Build-Depends: python3-dev (>= 3.8) should do what you'd need,
but don't necessarily trust my technical judgement right now!

smcv



Re: Bug#949187: transition: python3.8

2020-02-04 Thread Simon McVittie
On Tue, 04 Feb 2020 at 21:20:07 +0100, Rene Engelhard wrote:
> root@frodo:/# g-ir-scanner 
...
> ModuleNotFoundError: No module named 'giscanner._giscanner'

This is fixed in 1.62.0-5 (#950267). Upload was delayed by FOSDEM, needing
a glib2.0 upload to be built first (to have the right Breaks for the libffi7
transition to avoid autopkgtest regressions), and me being unwell.

smcv



Re: Bug#949187: transition: python3.8

2020-02-03 Thread Simon McVittie
On Sun, 02 Feb 2020 at 09:35:04 +0100, Matthias Klose wrote:
> I think this is now in shape to be started.

Please can this wait until the remaining bits of the libffi7 transition
and the restructuring of the libgcc_s packaging have settled down?

I'm still trying to sort out the missing Breaks around
gobject-introspection, as highlighted by autopkgtest failures: this has
been delayed by needing coordinated action between multiple packages,
some of them quite big (glib2.0), and by Paul and I being at FOSDEM.
This is entangled with python3.8 via pygobject (which will fail tests
with python3.8 as default - an upload is pending to fix that).

Meanwhile, multiple packages seem to FTBFS on s390x with the new libgcc_s
(I've just opened the bug for that, so no bug number known yet), which is
going to limit the ability to get things into testing.

Thanks,
smcv



Re: py2removal RC severity updates - 2019-12-22 17:36:38.269399+00:00

2020-01-03 Thread Simon McVittie
Control: severity 942941 normal
Control: user debian-python@lists.debian.org
Control: usertags 942941 + py2keep

On Sun, 22 Dec 2019 at 12:36:38 -0500, Sandro Tosi wrote:
> # python-dbus-tests is a module and has 0 external rdeps or not in testing
> severity 942941 serious

I do not consider this to justify a release-critical bug at this stage.

src:dbus-python cannot drop its Python 2 support until all of the reverse
dependencies of python-dbus have done so (or been removed from testing, but
that's unlikely to happen while they include key packages like avahi,
jackd2 and pyqt5).

As long as python-dbus exists, there is little or no cost to having
python-dbus-tests continue to be built by the same source package, and
I would be reluctant to have python-dbus exist in the archive without
its automated tests also being present.

I also don't think it's appropriate to escalate the severity of the Python
2 removal bug for an entire source package just because *one* of its
binary packages is a leaf: the criterion should probably be *all* of
its Python 2 binary packages being leaf packages. (See also
libpython2.7-testsuite in #937569, which seems to have a similar issue.)

Thanks,
smcv



Re: name change: python-lark-parser -> python-lark

2019-12-30 Thread Simon McVittie
On Mon, 30 Dec 2019 at 17:15:54 +0100, Peter Wienemann wrote:
> https://bugs.debian.org/945823
> 
> which says:
> 
> "use the name you import in preference to the name from the PKG-INFO".
> 
> That is why I decided to change the name to python-lark. But given the
> PyPI name clash this is certainly not optimal either. So this seems to
> be a particular unfortunate case.

If there are two modules on PyPI, both of which you use via
"import lark", then they cannot both be installed correctly into the
system-wide module search path on the same Debian system - if they
were, even if they happen to avoid having directly conflicting files
(because one is /usr/lib/python3/dist-packages/lark.py and the other is
/usr/lib/python3/dist-packages/lark/__init__.py, or similar), installing
both and using "import lark" would not consistently import the one you
intended to use, leading to broken programs.

So the rule has served its purpose: it has detected a conflict that needs
to be avoided somehow.

For users of virtualenv there is perhaps no problem, because you can
install the lark you wanted in a particular virtualenv and avoid installing
the other lark, but Debian packages are a flat global namespace of modules.

There are two options:

* If "lark" on PyPI is a dead project, or otherwise something that is never
  going to be useful to package in Debian for some reason, then perhaps it's
  safe for the lark parser to claim the python3-lark name.

* Otherwise, if its PyPI name is lark-parser, then I would personally
  recommend asking the upstream developer to rename the module you import
  to lark_parser (or maybe larkparser if that's preferred), and packaging
  it as python3-lark-parser (or python3-larkparser), optionally with
  compatibility glue to make "import lark" continue to work (which might not
  get packaged in Debian).

(I'm talking about binary package names python3-foo because those are the
most important thing for avoiding conflicts, but if the binary package
name is python3-foo then it probably makes most sense for the source
package to be python-foo.)

smcv



Re: Proposal on how to proceed with Python 2 removal from bullseye

2019-12-22 Thread Simon McVittie
On Wed, 18 Dec 2019 at 01:08:11 -0500, Sandro Tosi wrote:
> let me know if this makes sense or additional changes are required.

#942941 in src:dbus-python was bumped to serious because:
> python-dbus-tests is a module and has 0 external rdeps or not in testing

Please could you give python-dbus-tests or *-tests an exception to the
RC severity bumps, or only bump the severity if *every* Python 2 binary
package in a source package is eligible for removal, or something?
python-dbus still has a significant number of rdeps, and I don't want to
support python-dbus without keeping its automated tests available.

For now I've downgraded it back to normal.

Thanks,
smcv



Re: Future of Trac in Debian

2019-11-30 Thread Simon McVittie
On Fri, 29 Nov 2019 at 18:13:02 -0500, Nicholas D Steeves wrote:
> At that upstream issue gwync writes that he might have to drop Trac in
> Fedora if there isn't a py3 test release "before Fedora 32 is GA".  I'm
> not sure what "GA" means

Presumably "general availability", i.e. properly released (as opposed
to a beta or other prerelease).

smcv



Re: autopkgtest-pkg-python fails if package name is python-pyMODULENAME (Was: Bug#945768: python-pypubsub: autopkgtest failure: No module named 'pypubsub')

2019-11-29 Thread Simon McVittie
On Thu, 28 Nov 2019 at 17:27:53 +, Simon McVittie wrote:
> On Thu, 28 Nov 2019 at 11:15:31 -0500, Sandro Tosi wrote:
> > if you install `pubsub` as top-level module, your package must be
> > named pythonN-pubsub, if not it violates the policy and it's RC buggy.
> 
> That's what I had thought, but I've also seen people asserting that the
> Debian package name ought to reflect the egg name in cases where it
> differs from the top-level Python module name.

I've opened
https://salsa.debian.org/cpython-team/python3-defaults/merge_requests/2
to clarify this. Comments welcome, particularly if you don't think my
proposed change reflects consensus.

(I tried to open a wishlist bug in python3-defaults, but the BTS hasn't
responded so far.)

On Fri, 29 Nov 2019 at 08:29:34 +, Simon McVittie wrote:
On Fri, 29 Nov 2019 at 08:30:16 +0800, Yao Wei (魏銘廷) wrote:
> > If the module name has upper case in it, it would actually break Policy 
> > §5.6.1
> 
> I'd assumed the "foo" here was shorthand for
> module_name.lower().replace('_', '-'), although maybe it would be better
> for this to be explicit in the policy.

Documenting this is also part of
https://salsa.debian.org/cpython-team/python3-defaults/merge_requests/2

> autopkgtest-pkg-python would ideally transform - back to _ to decide
> what to import, which would fix the more_itertools case?

In fact it already does.
https://salsa.debian.org/ci-team/autodep8/merge_requests/17 adds test
coverage.

> It would need
> an override mechanism for the Xlib case anyway

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=884181 and
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=929957 are requests
for such a mechanism.

https://salsa.debian.org/ci-team/autodep8/merge_requests/6 and
https://salsa.debian.org/ci-team/autodep8/merge_requests/17 are two
proposals for implementations of this.

smcv



Re: autopkgtest-pkg-python fails if package name is python-pyMODULENAME (Was: Bug#945768: python-pypubsub: autopkgtest failure: No module named 'pypubsub')

2019-11-29 Thread Simon McVittie
On Fri, 29 Nov 2019 at 08:30:16 +0800, Yao Wei (魏銘廷) wrote:
> The binary package for module foo should preferably be named
> python3-foo, if the module name allows
>
> If the module name has upper case in it, it would actually break Policy §5.6.1

I'd assumed the "foo" here was shorthand for
module_name.lower().replace('_', '-'), although maybe it would be better
for this to be explicit in the policy. For example, the modules with
which you can 'import Xlib' and 'import more_itertools' are packaged as
python3-xlib and python3-more-itertools.

autopkgtest-pkg-python would ideally transform - back to _ to decide
what to import, which would fix the more_itertools case? It would need
an override mechanism for the Xlib case anyway, because lower() isn't
a reversible transformation.
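
The naming convention and its partial reversibility can be sketched directly (helper names here are illustrative, not autodep8's actual code):

```python
# Module name -> Debian binary package name, per the convention discussed,
# and the best-effort reverse mapping a tool like autopkgtest-pkg-python
# could attempt.
def module_to_package(module: str, python: str = "python3") -> str:
    return f"{python}-{module.lower().replace('_', '-')}"

def package_to_module_guess(package: str) -> str:
    # '-' -> '_' can be undone; lower() cannot, hence Xlib needs an override
    return package.split("-", 1)[1].replace("-", "_")

assert module_to_package("more_itertools") == "python3-more-itertools"
assert module_to_package("Xlib") == "python3-xlib"
assert package_to_module_guess("python3-more-itertools") == "more_itertools"
# The round trip fails for Xlib: the guess is 'xlib', not 'Xlib'
assert package_to_module_guess("python3-xlib") != "Xlib"
```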

smcv



Re: autopkgtest-pkg-python fails if package name is python-pyMODULENAME (Was: Bug#945768: python-pypubsub: autopkgtest failure: No module named 'pypubsub')

2019-11-28 Thread Simon McVittie
On Thu, 28 Nov 2019 at 11:15:31 -0500, Sandro Tosi wrote:
> if you install `pubsub` as top-level module, your package must be
> named pythonN-pubsub, if not it violates the policy and it's RC buggy.

That's what I had thought, but I've also seen people asserting that the
Debian package name ought to reflect the egg name in cases where it
differs from the top-level Python module name.

Some examples of where the difference between egg name and module name
matters:

- this one:
  - module: pubsub (-> python3-pubsub)
  - egg: pypubsub-*.egg-info (-> python3-pypubsub)
  - is actually python3-pypubsub (named for the egg)

- src:dbus-python:
  - module: dbus (-> python3-dbus)
  - egg: dbus_python-1.2.14.egg-info (-> python3-dbus-python)
  - is actually python3-dbus (named for the module)

- src:pygobject:
  - module: gi (-> python3-gi) and pygtkcompat
  - egg: PyGObject-3.34.0.egg-info (-> python3-pygobject)
  - is actually python3-gi (named for the module)

(Maybe python3-gi should also have Provides: python3-pygtkcompat?)

Is there consensus that the top-level module name is what matters, and not
following the recommendation is a bug?
https://www.debian.org/doc/packaging-manuals/python-policy/module_packages.html
says "The binary package for module foo should preferably be named
python3-foo, if the module name allows" and "import foo should import
the module", which suggests that it is indeed the name of the top-level
importable module, and not the name of the egg, that matters (which would
imply that -dbus and -gi are correct, and -pypubsub is not).
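
The import-name/egg-name divergence can be inspected on a live system with importlib.metadata (packages_distributions() needs Python 3.10+; the output depends entirely on what is installed — on a system with python3-dbus it would include 'dbus' mapping to 'dbus-python', for example):

```python
# List installed modules whose import name differs from their
# distribution ("egg") name.
import sys

if sys.version_info >= (3, 10):
    from importlib.metadata import packages_distributions
    mapping = packages_distributions()  # import name -> [distribution names]
else:
    mapping = {}  # packages_distributions() is not available before 3.10

for module, dists in sorted(mapping.items()):
    if module not in dists:  # names diverge, like dbus vs dbus-python
        print(f"import {module} comes from distribution(s) {dists}")
```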

Is there consensus that not following this recommendation is a *RC* bug?
The bits I quoted above say "should" rather than "must".

Thanks,
smcv



Re: Discussing next steps for the Python2 removal

2019-10-28 Thread Simon McVittie
On Fri, 25 Oct 2019 at 12:38:11 +0200, Ondrej Novy wrote:
> this is not how autorm works. You can't remove from testing only one of two
> binary package from the same source package. You are removing the package as a whole.
> 
> But maintainer can anytime fix that bug by removing py2 binary from source
> package, which is few minutes work.

The maintainer of a leaf package can do this in a few minutes at any
time, but library packages with many reverse-dependencies (for example
dbus-python) don't really have that option, so I hope using autorm for
library packages is not on the table at this stage?

smcv



Re: 2Removal: handling circular dependencies

2019-10-24 Thread Simon McVittie
On Wed, 23 Oct 2019 at 23:05:39 +0100, Rebecca N. Palmer wrote:
> - One big tangle (159 packages).  This probably needs breaking up:
> --- Some of it involves documentation tools (e.g. sphinx).  These cycles can
> be broken by using the Python 3 version of the tool.

You've listed dbus-python as being part of this cycle, but since version
1.2.10-1 it only Build-Depends on sphinx-common, python3-sphinx and
python3-sphinx-rtd-theme. Is that enough to remove it from this cycle?

I removed the block when I made that change, but someone('s script?)
seems to have added it back? I can't remove python-dbus and close the
2removal bug yet, because lots of packages still depend on python-dbus.

I also can't see how -sphinx would depend directly or indirectly on
-dbus. Perhaps some of the blocks have been added the wrong way round?

I wonder whether parts of the 159-package cycle are actually smaller
cycles with non-trivial intersection, such as:

 python-sphinx --D--> sphinx-common
   |  ^        <--S--
   D  S
   v  |
 python-alabaster

I'd tend to think of that as two intersecting 2-package cycles, rather
than one larger cycle.

> (Assuming we're still using "broken Suggests are not
> allowed": this has previously been discussed, I forget where.)

I think that's much too strong, particularly during a
transition. Transitions from unstable to testing only look at
(Pre-)Depends and Build-Depends(|-Arch|-Indep), although in general we
also consider broken Recommends to be a serious bug if anyone notices
them (due to Policy §2.2.1 "must not require or recommend a package
outside of main for compilation or execution").

Suggests are specifically not considered by Policy §2.2.1 (packages in
main are allowed to suggest packages that are in contrib, in non-free,
or not in the archive at all) and the pattern of cyclic Depends in one
direction, and Recommends or Suggests in the other, is extremely common.

smcv



Re: Help needed for issue in test suite for Python3 (Was: Bug#937698: python-dendropy: Python2 removal in sid/bullseye)

2019-10-08 Thread Simon McVittie
tl;dr: The issue in the test suite is that there is no test suite.

On Tue, 08 Oct 2019 at 09:28:45 +0200, Andreas Tille wrote:
>   File "/usr/lib/python3/dist-packages/setuptools/command/test.py", line 229, 
> in run
> self.run_tests()

setuptools is trying to find the tests declared in setup.py:

EXTRA_KWARGS = dict(
    install_requires = ['setuptools'],
    include_package_data = True,
    test_suite = "tests",    <-- here
    zip_safe = True,
)

According to the setuptools documentation,
setting test_suite like this means: you can import the 'tests' module and
the result is a package or module containing unittest.TestCase subclasses.

However, the tests/ directory (as of python-dendropy_4.4.0-1) does not
contain any Python code at all, only some test data in tests/data/,
so this assertion doesn't seem to be true.
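
For test_suite = "tests" to work, the tests/ directory would need to be an importable package whose modules contain unittest.TestCase subclasses. A minimal, hypothetical tests/__init__.py satisfying that contract would look like:

```python
# Minimal content that setuptools' "setup.py test" could discover and run.
import unittest

class TestSmoke(unittest.TestCase):
    def test_import(self):
        # a real suite would exercise the packaged module, e.g. dendropy
        self.assertTrue(True)
```

With only data files and no Python code under tests/, there is nothing for the test loader to find.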

If you look at the buildd logs you'll see that dh_auto_test fails in
Python 2 as well, but the failure is ignored:

https://buildd.debian.org/status/fetch.php?pkg=python-dendropy&arch=all&ver=4.4.0-1&stamp=1527005052&raw=0
> LC_ALL=en_US.utf-8 dh_auto_test || true
> perl: warning: Setting locale failed.
> perl: warning: Please check that your locale settings:
>   LANGUAGE = (unset),
>   LC_ALL = "en_US.utf-8",
>   LANG = (unset)
> are supported and installed on your system.
> perl: warning: Falling back to the standard locale ("C").
> I: pybuild base:217: python2.7 setup.py test 
...
> running build_ext
> Traceback (most recent call last):
>   File "setup.py", line 192, in 
> **EXTRA_KWARGS
>   File "/usr/lib/python2.7/dist-packages/setuptools/__init__.py", line 129, 
> in setup
> return distutils.core.setup(**attrs)
>   File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
> dist.run_commands()
>   File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
> self.run_command(cmd)
>   File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
> cmd_obj.run()
>   File "/usr/lib/python2.7/dist-packages/setuptools/command/test.py", line 
> 226, in run
> self.run_tests()
>   File "/usr/lib/python2.7/dist-packages/setuptools/command/test.py", line 
> 248, in run_tests
> exit=False,
>   File "/usr/lib/python2.7/unittest/main.py", line 94, in __init__
> self.parseArgs(argv)
>   File "/usr/lib/python2.7/unittest/main.py", line 149, in parseArgs
> self.createTests()
>   File "/usr/lib/python2.7/unittest/main.py", line 158, in createTests
> self.module)
>   File "/usr/lib/python2.7/unittest/loader.py", line 130, in 
> loadTestsFromNames
> suites = [self.loadTestsFromName(name, module) for name in names]
>   File "/usr/lib/python2.7/unittest/loader.py", line 91, in loadTestsFromName
> module = __import__('.'.join(parts_copy))
> ImportError: No module named tests
> E: pybuild pybuild:336: test: plugin distutils failed with: exit code=1: 
> python2.7 setup.py test
> dh_auto_test: pybuild --test -i python{version} -p 2.7 returned exit code 13
> # need to add true since for Python 3.4 5 tests are failing due some
> # strange encoding problem.  upstream is unable to verify this and
> # there is no better idea for the moment

The failure mode is different in Python 3 because in Python 2, a directory
that does not contain __init__.py cannot be imported as a package (hence
"No module named tests"), but in Python 3, it can. From the backtrace
you gave, presumably the resulting module object has
tests.__file__ == None, which breaks assumptions made by setuptools.

I would suggest removing the test_suite parameter for now, and asking
your upstream to include the test suite in future source code releases.

smcv



Bug#935395: RFP: python3-anytree -- Tree data library

2019-08-22 Thread Simon McVittie
Package: wnpp
Severity: wishlist

* Package name: python3-anytree
  Version : 2.6.0
  Upstream Author : "c0fec0de"
* URL : https://github.com/c0fec0de/anytree
https://pypi.org/project/anytree/
* License : Apache 2.0
  Programming Lang: Python
  Description : Tree data library

Newer versions of gtk-doc-tools require anytree for gtkdoc-mkhtml2, an
experimental replacement for gtkdoc-mkhtml and gtkdoc-fixxref which speeds
up processing by transforming Docbook into HTML in Python code instead of
using XSLT.

For now I've replaced its use in gtk-doc-tools with a simple
reimplementation (it's a tree data structure, it isn't rocket science),
but in the long term either someone should package anytree, or someone
needs to ask the upstream maintainer of gtk-doc to use a different tree
implementation instead of depending on anytree (in which case this bug
can be closed as wontfix).



Re: Bug#916428: autopkgtest-virt-qemu: Fails to set up test environment when run with python3.7

2018-12-14 Thread Simon McVittie
On Fri, 14 Dec 2018 at 20:19:00 +0100, Matthias Klose wrote:
> On 14.12.18 12:48, Simon McVittie wrote:
> > On Fri, 14 Dec 2018 at 11:31:02 +0000, Simon McVittie wrote:
> >> tl;dr: autopkgtest-virt-qemu doesn't work with python3.7.
> > 
> > This seems to be caused by using socket.send() (and assuming the entire
> > buffer will be sent in one transaction) instead of socket.sendall().
> > This was always a bug, at least in theory. I don't know why Python 3.7
> > makes it significant in practice when it wasn't previously.
> 
> if you already ran autopkg using 3.7, then that might point out to the recent
> 3.7.2 release candidate 1. changes. At least the timing of the report 
> suggests this.

Well spotted, you are correct. Looking at apt/history.log and the timing
of my uploads, I must have run autopkgtest successfully with virt-qemu
and python3.7 3.7.1-1 while I was preparing flatpak 1.1.1-1.

The correlation with 3.7.2~rc1-1 seems very reliable, but I don't see
anything in the Python 3.7 news that looks like a likely trigger.

To be clear, I think this was always an autopkgtest-virt-qemu bug,
and I don't know why autopkgtest-virt-qemu worked so reliably in the past,
or why it still works with python3.6.

smcv



Re: Bug#916428: autopkgtest-virt-qemu: Fails to set up test environment when run with python3.7

2018-12-14 Thread Simon McVittie
Control: forwarded -1 https://salsa.debian.org/ci-team/autopkgtest/merge_requests/42
Control: tags -1 + patch

On Fri, 14 Dec 2018 at 11:31:02 +, Simon McVittie wrote:
> tl;dr: autopkgtest-virt-qemu doesn't work with python3.7.

This seems to be caused by using socket.send() (and assuming the entire
buffer will be sent in one transaction) instead of socket.sendall().
This was always a bug, at least in theory. I don't know why Python 3.7
makes it significant in practice when it wasn't previously.
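
The general pattern (not the actual autopkgtest patch) can be sketched like this: send() may perform a short write and merely returns how many bytes went out, while sendall() loops internally until the whole buffer is written or an error is raised.

```python
# send() vs sendall(): the short-write bug and two correct alternatives.
import socket

def unreliable_write(sock: socket.socket, buf: bytes) -> None:
    sock.send(buf)       # BUG: may silently truncate on a short write

def reliable_write(sock: socket.socket, buf: bytes) -> None:
    sock.sendall(buf)    # retries internally until every byte is sent

def sendall_by_hand(sock: socket.socket, buf: bytes) -> None:
    # equivalent explicit loop, for illustration
    view = memoryview(buf)
    while view:
        n = sock.send(view)   # number of bytes actually written
        view = view[n:]
```

Whether a given send() actually performs a short write depends on buffer sizes and timing, which would explain why the bug stayed latent for so long.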

smcv



Bug#916428: autopkgtest-virt-qemu: Fails to set up test environment when run with python3.7

2018-12-14 Thread Simon McVittie
Package: autopkgtest
Version: 5.6
Severity: important
File: /usr/bin/autopkgtest-virt-qemu
Control: found -1 5.7
Control: user debian-python@lists.debian.org
Control: usertags -1 + python3.7
X-Debbugs-Cc: debian-python@lists.debian.org

tl;dr: autopkgtest-virt-qemu doesn't work with python3.7. A short-term
workaround is to run it with python3.6, for example by using
"autopkgtest ... -- /usr/bin/python3.6 /usr/bin/autopkgtest-virt-qemu ...",
or changing its first line.

Steps to reproduce (minimal reproducer with just autopkgtest-virt-qemu,
not autopkgtest itself):

* Have an autopkgtest-virt-qemu VM image that previously worked (or see
  the man page for instructions to make one)
* Have python3.6 and python3.7, with python3.7 as default
* python3.7 /usr/bin/autopkgtest-virt-qemu --debug .../autopkgtest.qcow2
* wait for "ok"
* enter the command "open"
* wait for timeout or "ok /tmp/autopkgtest.XX" response
* if still running, enter the command "close", wait for "ok" response,
  and enter the command "quit"
* python3.6 /usr/bin/autopkgtest-virt-qemu --debug .../autopkgtest.qcow2
* wait for "ok"
* type "open"
* wait for timeout or "ok /tmp/autopkgtest.XX" response
* if still running, enter the command "close", wait for "ok" response,
  and enter the command "quit"

Expected result:

* In both cases you eventually get "ok /tmp/autopkgtest.XX" where
  XX is a random string

Actual result:

* With python3.6, we get the expected result
* With python3.7, setup_shared() times out (symptoms are similar to #892023):

$ autopkgtest-virt-qemu --debug 
~/.cache/vectis/amd64/debian/sid/autopkgtest.qcow2
ok
open
autopkgtest-virt-qemu: DBG: executing open
autopkgtest-virt-qemu: DBG: Creating temporary overlay image in 
/tmp/autopkgtest-qemu.r20th32n/overlay.img
autopkgtest-virt-qemu: DBG: execute-timeout: qemu-img create -f qcow2 -b 
/home/smcv/.cache/vectis/amd64/debian/sid/autopkgtest.qcow2 
/tmp/autopkgtest-qemu.r20th32n/overlay.img
autopkgtest-virt-qemu: DBG: find_free_port: trying 10022
autopkgtest-virt-qemu: DBG: find_free_port: 10022 is free
autopkgtest-virt-qemu: DBG: Forwarding local port 10022 to VM ssh port 22
autopkgtest-virt-qemu: DBG: Detected KVM capable Intel host CPU, enabling 
nested KVM
autopkgtest-virt-qemu: DBG: expect: " login: "
qemu-system-x86_64: warning: host doesn't support requested feature: 
CPUID.01H:ECX.vmx [bit 5]
autopkgtest-virt-qemu: DBG: expect: found ""login prompt on ttyS0""
autopkgtest-virt-qemu: DBG: expect: "ok"
autopkgtest-virt-qemu: DBG: expect: found ""b'ok'""
autopkgtest-virt-qemu: DBG: setup_shell(): there already is a shell on ttyS1
qemu-system-x86_64: terminating on signal 15 from pid 10760 (/usr/bin/python3)
autopkgtest-virt-qemu: DBG: cleanup...
: failure: timed out on client shared directory setup

Hacking in some extra debug shows that the thing that is timing out,
similar to #892023, is that the done_shared flag file in the 9p shared
file system is never created.

I tried applying the patches from
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=892023#10 to make our
shell screen-scraping more robust, but they didn't help.

When I added echo=True to all the expect() calls, I didn't see any output
from the commands that set up the shared directory (starting with
"mkdir -p -m 1777 /run/autopkgtest/shared").

I initially thought this was a regression with the new qemu 3, but this
was not the case: qemu 2 from testing exhibits the same symptoms.

smcv

-- System Information:
Debian Release: buster/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'buildd-unstable'), (500, 
'unstable'), (500, 'testing'), (500, 'stable'), (1, 'experimental-debug'), (1, 
'buildd-experimental'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.18.0-3-amd64 (SMP w/2 CPU cores)
Locale: LANG=en_GB.utf8, LC_CTYPE=en_GB.utf8 (charmap=UTF-8), LANGUAGE=en_GB:en 
(charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages autopkgtest depends on:
ii  apt-utils   1.8.0~alpha2
ii  libdpkg-perl1.19.2
ii  procps  2:3.3.15-2
ii  python3 3.7.1-2
ii  python3-debian  0.1.33

Versions of packages autopkgtest recommends:
ii  autodep8  0.17

Versions of packages autopkgtest suggests:
pn  lxc   
pn  lxd   
ii  ovmf  0~20181115.85588389-2
ii  qemu-efi-aarch64  0~20181115.85588389-2
pn  qemu-efi-arm  
ii  qemu-system   1:3.1+dfsg-1
ii  qemu-utils1:3.1+dfsg-1
ii  schroot   1.6.10-6+b1
pn  vmdb2 

-- no debconf information



Re: Upstreams dropping Python 2 support

2018-09-27 Thread Simon McVittie
On Thu, 27 Sep 2018 at 11:58:28 +0200, Ole Streicher wrote:
> Is there a reason why one would use Python2-sphinx instead of the Python
> 3 version?

src:dbus-python has more Python 2 API than Python 3 API (some objects
cease to exist in Python 3 builds). As long as python-dbus.deb exists,
it's somewhat valuable for the Python-version-independent documentation
in python-dbus-doc.deb to have been built with the Python 2 version
of sphinx.

smcv



Re: Bug#886291: Debian package transition: Rename package and reuse old name with new content

2018-08-19 Thread Simon McVittie
On Sat, 18 Aug 2018 at 16:31:37 +0200, Alexis Murzeau wrote:
> To fix #886291, we should:
> - Rename python3-pycryptodome to python3-pycryptodomex
> - Reuse python3-pycryptodome package name to package a non compatible
> python3 module.
> 
> The rationale of this rename + reuse is that currently,
> python3-pycryptodome contains, in fact, the pycryptodomex module. So
> renaming that one + introduce the new package for the pycryptodome module.

According to apt-file(1), python3-pycryptodome contains
/usr/lib/python3/dist-packages/Cryptodome, which you use via "import
Cryptodome". If you're renaming packages anyway, would it be better for
the package containing /usr/lib/python3/dist-packages/Cryptodome to be
the python3-cryptodome package?

(My reasoning is that the name you import is the name of the "ABI",
the same way the ABI of, for example, libdbus is represented by its
SONAME libdbus-1.so.3, which we translate into libdbus-1-3 as a Debian
package name.)

> I already though of a solution on 886...@bugs.debian.org use multiple
> dependencies with "|" but the package must still be buildable with the
> first dependency (sbuild ignore dependencies after "|" for example)

It's OK for packages in unstable to be uninstallable or unbuildable for
a short time, as long as Depends/Breaks/Conflicts or RC bugs ensure that
the brokenness doesn't propagate into testing.

For instance, if you are going ahead with your renaming plan, you could
give the new packages a versioned Breaks on python3-httpsig (<< H) and
python3-pysnmp4 (<< S), where H is the first version of python3-httpsig
that has been modified to use/expect the new (py)cryptodome(x) package
layout, and S is the corresponding version of python3-pysnmp4.
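
As a sketch, the debian/control stanza for the renamed package could carry that Breaks field like this (H and S stand for the placeholder versions above; the stanza is otherwise hypothetical):

```
Package: python3-cryptodome
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}
Breaks: python3-httpsig (<< H), python3-pysnmp4 (<< S)
Description: cryptographic library for Python 3 (Cryptodome namespace)
```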

smcv



Re: git-dpm -> gbp conversion (mass-change)

2018-08-09 Thread Simon McVittie
On Thu, 09 Aug 2018 at 10:16:31 +0200, Thomas Goirand wrote:
> Now, if all goes well, and if the above cases are fixed, them I'm fine
> using "gbp pq", but it's not any better than fixing by hand using quilt.

One advantage of both quilt and gbp-pq over git-dpm (and probably
git-debrebase) is that they interoperate: quilt and gbp-pq both work
well in a "patches-unapplied" git repository. If one team member (for
instance me) prefers to use gbp-pq, and another team member (for instance
Thomas) prefers to use plain quilt, then we can share one git repository
without significant problems: the worst that normally happens is a bit
of unnecessary diffstat for the lines matching /^(git )?diff/, /^Index:/,
etc. if one of us re-exports the whole patch series.

I personally think gbp-pq is a better tool than quilt because it lets us
use 'git rebase' on the patch series, but part of the point of choosing
the gbp-pq-compatible repository layout is that it isn't either/or:
we can use both.

smcv



Re: git-dpm -> gbp conversion (mass-change)

2018-08-03 Thread Simon McVittie
On Fri, 03 Aug 2018 at 08:21:28 +0200, W. Martin Borgert wrote:
> In fact, I thought that "upstream/master" were DEP-14-ish, but
> only "upstream/latest" (for the newest release) is.

Yes. The simple case for DEP-14 is that you are only following one
upstream branch, which is upstream/latest; the more complex case
is that you are also following an upstream stable-branch or something,
for which naming like upstream/1.2.x is suggested for the older (stable)
branch.

dbus and glib2.0 are examples
of DEP-14 packages that track more than one upstream branch.

There is no upstream/master, upstream/unstable, upstream/stretch or
similar in DEP-14, because:

* if uploads to a particular Debian suite are tracking upstream versions
  older than the latest, they presumably meet some sort of criteria
  for which versions are acceptable, most commonly "is from the same
  stable-branch as the previous release"; the branch is named after
  those criteria rather than the target suite, because multiple suites
  might be sharing an upstream release series

* if there is no upstream version that is acceptable for e.g. stable,
  then new upstream versions won't get imported for that suite, so
  there's no need to maintain a branch that they could be imported into
  (if using pristine-tar, it uses commits, not branches, to check out
  upstream source)

smcv



Re: Python 3.7 in testing/experimental

2018-06-30 Thread Simon McVittie
On Fri, 29 Jun 2018 at 23:29:37 +0200, Vincent Danjean wrote:
> Will the python3-numpy pacakge be fixed by an automatic rebuild ?
> (ie I just have to wait for a few days)

It should be. It's in Needs-Build state at the moment:
https://buildd.debian.org/status/package.php?p=python-numpy

378 packages need to be rebuilt, so it might take a few days:
https://release.debian.org/transitions/html/python3.7.html

> Do I need to fill a bug report on python3-numpy ?

No, that would not be helpful at this stage. The maintainer of
python3-numpy does not need to take action unless the rebuild fails.

Regards,
smcv



Re: Questions about salsa and Git

2018-04-10 Thread Simon McVittie
On Tue, 10 Apr 2018 at 10:41:41 +, Scott Kitterman wrote:
> On April 10, 2018 7:24:18 AM UTC, "Guðjón Guðjónsson" wrote:
> >Following the advice on
> >https://wiki.debian.org/Python/GitPackaging
> 
> Use this instead:
> https://wiki.debian.org/Python/GitPackagingPQ

Is this now official policy for new/updated/converted Python modules
maintained by DPMT?

Is it official policy for new/updated/converted Python applications
maintained by PAPT?

At the moment each of those pages says that the Python teams have chosen
the relevant packaging system and that the other packaging system is
forbidden, which is confusing at best.

If there is consensus that gbp pq + DEP 14 is recommended for Python
packages and the older git-dpm setup is deprecated, then anyone with a
wiki account can fix those pages, but I for one don't want to make those
edits until someone who can speak for the team specifically says it.

(I'd like to convert tap.py to gbp pq as soon as it isn't considered to
be hostile to do so.)

Thanks,
smcv



Re: packages that use dh_python{2,3} but don't depend on dh-python

2018-03-26 Thread Simon McVittie
On Mon, 26 Mar 2018 at 13:32:10 +0200, Piotr Ożarowski wrote:
> Here's a list of packages that will FTBFS soon if dh-python will not be
> added to Build-Depends (it's time to drop dh-python from python3's
> Depends and old version of dh_python2 from python package).

Is there a Lintian tag for this? That's sometimes a relatively efficient
way to get packages fixed.

https://lintian.debian.org/tags/missing-build-dependency-for-dh_-command.html
is a large part of it (that tag seems to be 99% dh-python already).

https://lintian.debian.org/tags/missing-build-dependency-for-dh-addon.html
suggests that data/debhelper/dh_addons-manual needs changing from

python2||python:any | python-all:any | python-dev:any | python-all-dev:any
python3||python3:any | python3-all:any | python3-dev:any | python3-all-dev:any

to maybe something like

python2||dh-python, python:any | python-all:any | python-dev:any | python-all-dev:any
python3||dh-python, python3:any | python3-all:any | python3-dev:any | python3-all-dev:any

(assuming multiple dependencies work in that context).

> The plan is to report bugs first and follow up with changes in -defaults
> packages in April or May.

It might help to announce the intention to do a MBF on -devel now, in
the hope that some packages will get fixed before the bug-filing starts?

smcv



Bug#893924: python3-distutils: Please describe road map/recommendations for users of distutils

2018-03-23 Thread Simon McVittie
Package: python3-distutils
Version: 3.6.5~rc1-2
Severity: wishlist
X-Debbugs-Cc: debian-python@lists.debian.org

I'm confused about the current status of distutils, and what should be
done by packages that depend on it to be as future-proof as possible. I
don't think I'm the only one confused by this, so it would be very
helpful if a maintainer could clarify what the intention is so that
other maintainers can do the right things.

When structural changes like this are needed, I think it would
be useful for them to be represented by a bug (perhaps of the form
"libpython3.6-stdlib: should not contain distutils" or something similar)
that gives the reasons for the structural change and describes the
action that should be taken by maintainers of dependent packages. This
bug could be referenced in the changelog and would be an obvious central
coordination point for whatever changes are needed, including follow-ups
if unforeseen fallout means the plan has to change. The release team
would probably also appreciate it being treated as a transition so that
they can plan around it.

Since that didn't happen in this case, I'm opening this bug in the hope
that it can fulfil a similar role.

So far, the sequence of events goes something like this:

* 13 December 2017: distutils moves from -stdlib into its own package
* 20 March 2018: -stdlib stops depending on distutils, packages start
  to fail to build from source
* 22-23 March 2018: A small subset of distutils (__init__.py and version.py)
  moves back to -stdlib

I assume there is a reason (size on disk? dependencies? update
frequency?) why most of distutils shouldn't be in -stdlib, but in the
absence of a reference in the changelog, I can only guess at why that is.

When a small subset of distutils moved back, I assume that the
intention was to un-break the relatively common(?) case of users of
distutils.version that do not need the rest of the module, such as
the gdbus-codegen tool in libglib2.0-dev-bin. However, it isn't clear
whether the Python maintainers consider this to be a workaround to keep
those packages working in the short term (in which case they need to
pick up a new dependency on python3-distutils for the longer term), or
whether distutils.version is going to remain part of the API of -stdlib
in the long term (in which case packages like libglib2.0-dev-bin should
not depend on the full -distutils package because that would negate the
benefit of splitting it out).
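
The subset in question is essentially version comparison, which consumers like gdbus-codegen use without touching the rest of distutils (note that distutils was later removed from the stdlib entirely in Python 3.12, so this sketch guards the import):

```python
# distutils.version provided LooseVersion, the piece that kept working
# when only __init__.py and version.py moved back into -stdlib.
try:
    from distutils.version import LooseVersion
except ImportError:  # Python 3.12+: distutils is gone from the stdlib
    LooseVersion = None

if LooseVersion is not None:
    # numeric-aware comparison, unlike a plain string compare
    assert LooseVersion("1.2") < LooseVersion("1.10")
    assert LooseVersion("2.56.0") >= LooseVersion("2.49.1")
```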

I'm aware that structural changes that break dependencies are sometimes
necessary in pursuit of a goal (I've done them myself, most recently
moving glib-compile-resources to libglib2.0-dev-bin for #885019), but
when making them, having a plan visible to everyone is beneficial.
Please could you clarify the situation?

Related bug reports include:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=893755
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=893847

Thanks,
smcv



Re: RFS: mwic 0.7.4-1

2018-03-20 Thread Simon McVittie
On Tue, 20 Mar 2018 at 12:18:39 +0100, Gregor Riepl wrote:
> > In case I've misunderstood you, and you're referring to unit tests
> > shipped debian/tests/*, than yes, I agree. :)
> 
> As far as I understand, these tests are executed by the package builder after
> the upstream build script has finished. They're meant as a sort of integration
> test, i.e. "does this package run on Debian".
> 
> There's even a Lintian tag for them:
> https://lintian.debian.org/tags/testsuite-autopkgtest-missing.html (which, I
> think, is a bit overzealous)

I think you're slightly confusing build-time tests with as-installed
autopkgtests, but you have the right idea.

Build-time tests (dh_auto_test or similar) run the upstream "make check"
or equivalent, in the same environment as the build itself (typically an
autobuilder). For simple libraries, build-time tests are enough. However,
for more complex packages, the autobuilder environment is too artificial
or too restricted for build-time tests to give full coverage:

- can't run tests as root (system-level packages like dbus often need this
  for good coverage)
- can't contact the Internet (even if in practice you usually can, Policy
  says you must not, for good reasons)
- can't rely on a reasonable/realistic environment, like system services
  running, being in a systemd-logind login session on systems that have
  it, or even having a home directory
- can't rely on the packages being "properly installed" so that
  hard-coded paths can work, and have to rely on overrides like PYTHONPATH
  to make newly-built code visible

autopkgtest (debian/tests/) is a form of as-installed testing, which takes
the packages that were built, installs them in a relatively complete and
realistic environment (typically a lxc container or a qemu/kvm virtual
machine) and runs further tests there. Sometimes these tests just repeat
the build-time test coverage, but often they can make use of the ability
to do things that wouldn't work at build-time, like contacting Internet
services, running things as root, or relying on system services. This
often gives them better test coverage.
https://wiki.gnome.org/Initiatives/GnomeGoals/InstalledTests is an example
of an upstream project that is doing similar things.

Because the autopkgtest container or VM is thrown away after running one
package's tests in it, the tests can do things that would be unacceptable
in an autobuilder environment, which again increases coverage.
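As a hedged sketch of what such an as-installed test can look like (the module name is a placeholder, and this is not any particular package's real test):

```python
#!/usr/bin/python3
# Hypothetical minimal as-installed smoke test of the kind that could
# live in debian/tests/.  Unlike a build-time test, it imports the
# *installed* copy from the system path, with no PYTHONPATH override
# pointing into a build tree.
import importlib
import os

MODULE = "json"  # placeholder for the packaged module being tested

mod = importlib.import_module(MODULE)
origin = os.path.dirname(mod.__file__)

# An installed module should resolve from an absolute system location,
# not from the directory the test script happens to run in:
assert os.path.isabs(origin), origin
print("loaded %s from %s" % (MODULE, origin))
```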

In Debian, autopkgtests are run separately, by different infrastructure
(ci.debian.net), long after the package was built. The same package
will usually be tested multiple times against newer versions of its
dependencies, to look for regressions caused by a dependency change.

The normal package upload/autobuild workflow does not typically run
autopkgtests, although it could. Vectis, which is a personal package
building tool that I'm working on, builds the package in sbuild in
a virtual machine then immediately puts it through autopkgtests and
piuparts testing in separate VMs, so that test failures can be checked
(and either used as a reason not to upload, or ignored, based on the
maintainer's judgement about the severity of the failure and the urgency
of the upload - the same way the maintainer would triage Lintian errors).

smcv



Re: Where to put docs of a -doc package for python 2 + 3 modules?

2018-03-12 Thread Simon McVittie
On Mon, 12 Mar 2018 at 11:16:16 +0100, W. Martin Borgert wrote:
> policy (12.3) says, that putting the contents of package-doc
> into /usr/share/doc/package/ (main package) is preferred over
> /usr/share/doc/package-doc/. debhelper detects the Python 2
> package as main package. One can override this to the path for
> the Python 3 package, but both feels wrong to me. Even if we
> drop Python 2 at some point, maybe then there is Python 4 or
> PyPy.

In python-mpd-doc and python-dbus-doc, I installed the real documentation
files in /u/s/d/python-*-doc, but placed symlinks to them in both
/u/s/d/python-* and /u/s/d/python3-*. Perhaps that's a reasonable way
to achieve the spirit of the Policy §12.3 recommendation while not
privileging one of Python 2, Python 3, PyPy, etc. over the others?

smcv



Re: Next python-mote pre-condition - issue with pybuild: python-backports.tempfile conflicting python-backports.weakref

2018-01-25 Thread Simon McVittie
On Thu, 25 Jan 2018 at 17:45:33 +, peter green wrote:
> > However, in Debian case, I do not know how this can be handled as
> > 2 packages cannot hold the same file (even if __init__ is only an empty
> > file), and at least one must be present (if you install only one).

The Python jargon is that the "backports" shared by backports.tempfile
and backports.weakref is a "namespace package".

For Python 2, dh_python2 handles this: python-lazr.restfulclient and
python-lazr.uri are an example of cooperating packages that share a
namespace package.

For Python >= 3.3, the __init__.py is unnecessary due to PEP 420 (implicit
namespace packages): https://www.python.org/dev/peps/pep-0420/
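As a rough illustration (all names here are invented), the shared-namespace behaviour can be demonstrated with two independent directories on sys.path, neither of which ships an __init__.py:

```python
import os
import sys
import tempfile

# Two throwaway directories, each providing part of a "demo_ns"
# namespace.  Neither contains demo_ns/__init__.py, which is what
# makes PEP 420 treat them as portions of one namespace package.
root = tempfile.mkdtemp()
for portion, submodule in [("a", "tempfile_like"), ("b", "weakref_like")]:
    pkg_dir = os.path.join(root, portion, "demo_ns")
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, submodule + ".py"), "w") as f:
        f.write("WHO = %r\n" % portion)
    sys.path.insert(0, os.path.join(root, portion))

# Both portions are importable under the single demo_ns namespace:
import demo_ns.tempfile_like
import demo_ns.weakref_like
print(demo_ns.tempfile_like.WHO, demo_ns.weakref_like.WHO)  # → a b
```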

> I'm not a python expert but I expect the least-horrible way to do this
> would be to ship a package that only contained the __init__. Then have
> all the python-backports.* packages depend on it.

This is not necessary, and would probably (hopefully?) lead to rejection
from the NEW queue.

smcv



Re: DPMT and git workflows

2018-01-19 Thread Simon McVittie
On Fri, 19 Jan 2018 at 14:25:57 +0300, Dmitry Shachnev wrote:
> I think for new packages it is better to use gbp-pq based workflow:
> https://wiki.debian.org/Python/GitPackagingPQ

Is there consensus that the gbp-pq workflow is now allowed? I only
maintain one package in DPMT (tap.py) and every time I upload it I have
to remind myself how git-dpm works, so I'd like to switch it over to
gbp-pq as soon as I can.

Relatedly, Alioth is going to be shut down at some point, with git
repositories frozen and made read-only, so it would seem a good idea to
start migrating git packaging to salsa.debian.org before that happens.
python-modules-team and python-apps-team groups, perhaps? I can create
a python-modules-team group and migrate tap.py as a sample if people
would like to see an example package.

smcv



Re: Please help with test suite error and installation problem of python-aws-xray-sdk (Was: If there is no response in debian-python then debian-science might be the right team)

2018-01-15 Thread Simon McVittie
On Mon, 15 Jan 2018 at 12:59:29 +0100, Andreas Tille wrote:
> E   File "/build/python-aws-xray-sdk-0.95/.pybuild/pythonX.Y_2.7/build/tests/test_async_local_storage.py", line 10
> E   async def _test():
> E   ^
> E   SyntaxError: invalid syntax

Looks like it needs python3 >= 3.5. https://www.google.com/search?q=async%20def
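To make the failure mode concrete (a small sketch, not taken from the package): "async def" is new syntax, so on older interpreters it is rejected by the compiler, which is why the build fails at byte-compilation/import time rather than at runtime.

```python
import sys

# "async def" arrived in Python 3.5 (PEP 492); older interpreters,
# including Python 2.7, reject it at *compile* time.
src = "async def _test():\n    return 42\n"

try:
    compile(src, "<demo>", "exec")
    print("compiles on Python %d.%d" % sys.version_info[:2])
except SyntaxError:
    print("SyntaxError: this interpreter predates PEP 492")
```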

> Setting up python-aws-xray-sdk (0.95-1) ...
>   File "/usr/lib/python2.7/dist-packages/aws_xray_sdk/core/async_context.py", line 14
> def __init__(self, *args, loop=None, use_task_factory=True, **kwargs):
>  ^
> SyntaxError: invalid syntax

That's Python 3 syntax too. https://www.python.org/dev/peps/pep-3102/

smcv



Re: python-markdown and mkdocs circular build-dep

2018-01-11 Thread Simon McVittie
On Thu, 11 Jan 2018 at 13:35:13 +0300, Dmitry Shachnev wrote:
> The new release of python-markdown has switched docs building from its own
> custom build system to mkdocs. However python-mkdocs itself build-depends on
> python3-markdown for tests, which results in a circular build-dependency.

It would probably be best for python-mkdocs to build-depend on
python3-markdown <!nocheck>, after making sure that building with
"DEB_BUILD_PROFILES=nocheck DEB_BUILD_OPTIONS=nocheck" and without
python3-markdown installed does work (it looks as though it should). That
way the cycle can be broken from either end (and nocheck never changes
package contents, whereas nodoc does, so nocheck is probably a better
way to break it).

> Will it be fine if I just mark the build-dependency in python-markdown as
> <!nodoc>?

If the resulting package builds successfully with DEB_BUILD_PROFILES=nodoc
then that seems a good idea anyway. In some packages you might need to use
dh-exec to skip documentation files in debian/*.install, but it looks as
though that won't be necessary for python-markdown.

smcv



Re: Bug#883246: ITP: python-enum-compat -- Python enum/enum34 compatibility package

2017-12-01 Thread Simon McVittie
On Fri, 01 Dec 2017 at 12:13:26 +0100, Ondrej Novy wrote:
> 2017-12-01 11:25 GMT+01:00 Simon McVittie <s...@debian.org>:
> Within Debian, wouldn't this be better achieved by having Python 2 
> packages
> that require enum34 depend on python-enum34 directly, as they already do?
> 
> I already tried this solution.

That's unfortunate - the solution you tried first does seem better.

Would it perhaps work to bundle enum_compat's egg-info with
python-enum34? It doesn't seem great to be introducing a whole new
source package just to express a conditional dependency (that will itself
become obsolete when Python 2 does), and it's not as if enum_compat has
any actual code, so hopefully it will never have bugs or new versions.
dpkg-source and gbp can do multi-tarball upstreams these days (yquake2
is one example).

That doesn't work either if a Python 3 package depends on enum_compat,
but it would seem a little absurd to have a python3-enum-compat package,
given that we no longer support any version of Python 3 that doesn't
have enum. (And in the worst case, python3-dev could provide egg-info
for enum_compat, if dependencies on it become widespread.)

I can't help wondering whether it would be a better solution for
upstreams that need enum to depend on
(['enum34'] if sys.version_info < (3, 4) else []), or on Python 3.4
(either way, short-term it's still a Debian patch, but long-term it's
hopefully upstreamable).
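As a sketch of that suggestion, here is how the conditional dependency might be computed in a setup.py (the surrounding project metadata is omitted):

```python
import sys

# Pull in the enum34 backport only where the stdlib enum module
# (added in Python 3.4) is missing.
install_requires = ["enum34"] if sys.version_info < (3, 4) else []
print(install_requires)  # [] on any Python >= 3.4
```

Modern setuptools/pip can also express this declaratively with a PEP 508 environment marker, `enum34; python_version < "3.4"`, which avoids running version checks in setup.py at all.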

smcv



Re: doc-central

2017-10-13 Thread Simon McVittie
On Fri, 13 Oct 2017 at 08:13:08 +0800, Paul Wise wrote:
> On Fri, Oct 13, 2017 at 3:27 AM, Diane Trout wrote:
> 
> > Being able to find all your documentation in one place would really be
> > convenient.
> 
> I don't think doc-base/doc-central will ever be the answer to this as
> it is very specific to Debian and thus not available on other distros.
> Eventually the Freedesktop folks will come up with something
> cross-distro and cross-desktop and we will have to replace doc-base
> with it, just like we had to do with the Debian-specific menu system.

... unless someone from Debian with an interest in documentation goes
upstream and comes up with something cross-distro, cross-desktop and
suspiciously similar to doc-base. Relevant people to talk to would
include the maintainers of GNOME's yelp (user-facing, usually-topic-based
help in Docbook or Mallard format, processed into HTML for viewing) and
devhelp (developer reference documentation in any format, processed into
HTML at build-time), and their equivalents in other upstream projects.

As far as I understand it, yelp and devhelp are separate apps as a
deliberate design choice, because they have different audiences and
requirements. Whether you agree with it or not, understanding the
reasoning behind that design choice seems likely to be valuable.

smcv



Re: pycharm package in debian

2017-10-04 Thread Simon McVittie
On Wed, 04 Oct 2017 at 14:11:02 -0700, Diane Trout wrote:
> I do wish that these third party app systems like conda, snappy or
> flatpak would include metadata like AppStream or DOAP.

Flatpak already does.

Flatpak apps nearly always include AppStream metadata, which Flatpak's
repository maintenance tool aggregates into AppStream distro XML in
special branches in the repository for easier access (analogous to how dak
aggregates dpkg metadata into dists/**/Packages, and AppStream metadata
into DEP-11 YAML). Any Flatpak package that doesn't have AppStream
metadata won't appear in GNOME Software (and probably other GUIs), and so
will only be installable via the CLI or a web link to a .flatpakref file.

DOAP isn't really for this: it provides developer-facing information
about projects (source packages), whereas AppStream provides user-facing
information about apps (binary packages). If AppStream is like dpkg
binary package metadata (control.tar.*), then DOAP is more like the
first paragraph of debian/control in a dpkg source package.

Regards,
smcv



Re: providing sphinx3-* binaries

2017-09-28 Thread Simon McVittie
On Thu, 28 Sep 2017 at 01:03:27 +0300, Dmitry Shachnev wrote:
> On Tue, Sep 26, 2017 at 06:29:05PM -0400, Antoine Beaupré wrote:
> >  1. there should be a more easy way to override SPHINXBUILD in
> > quickstart-generated Makefiles. IMHO, that means SPHINXBUILD?=
> > instead of SPHINXBUILD=
> 
> Good suggestion, I have submitted a pull request upstream:
> https://github.com/sphinx-doc/sphinx/pull/4092

I'm not sure this is necessary. If a Makefile has

SPHINXBUILD = sphinx-build
# or := or ::=

some-target:
	$(SPHINXBUILD) some-arguments

then you can override it on the command-line:

make SPHINXBUILD="python3 -m sphinx"

without needing the ?= syntax. The ?= syntax is only necessary if you want
to pick up a value from the environment:

export SPHINXBUILD="python3 -m sphinx"; make

or if your Makefile has complicated logic that may or may not have set it
already:

ifeq(...)
SPHINXBUILD = $(PYTHON) -m sphinx
endif

SPHINXBUILD ?= sphinx-build

The Makefiles generated by GNU Automake, which are very verbose but
generally also very good about doing the right thing in the face of
strange make(1) quirks, use plain "=".

(I've said the same on the upstream PR.)

Regards,
smcv



Re: PAPT git migration

2017-06-01 Thread Simon McVittie
On Thu, 01 Jun 2017 at 15:06:07 +1000, Brian May wrote:
> So to me it looks like the required changes are:
>  
> * Rename Author field to From. Ensure it is first field.

It doesn't *have* to be the first, but if it isn't, gbp pq export will
re-order it.

> * Add Date field. Set to what?

The date the change was made, by whatever definition seems most appropriate
(conveniently, the format is the same as in debian/changelog). This would
be used as the author date if the patch gets sent upstream (to a git user).

Strictly speaking this isn't mandatory, but if it's missing, gbp pq import
will assume the current date/time.

> The Applied-upstream field looks nice to have, but maybe not essential.

Nothing except From and Subject is mandatory. I only mentioned the
other fields to illustrate that gbp pq wants to see them in a
pseudo-header at the end (next to Signed-off-by if you use that),
rather than in the email header.

S



Re: PAPT git migration

2017-05-31 Thread Simon McVittie
On Thu, 01 Jun 2017 at 00:16:45 +0200, Stefano Rivera wrote:
> Hi Barry (2017.05.31_23:32:20_+0200)
> > $ gbp pq export
> > - This doesn't work until you at least do a first pq import, but now I see 
> > the
> >   d/p/changlog-docs patch gets changed in ways that lose information:
> 
> Sounds like a limitation of pq import. I'm suprised it doesn't support
> DEP3.

DEP3 describes some metadata fields, and a family of incompatible formats
(which do not necessarily seem to be designed for machine-readability) that
use those metadata fields.

gbp pq import consumes patches in `git format-patch` format. One of the
possible formats in the DEP3 family (with RFC2822-style From/Date/Subject,
unindented long description, and all non-email fields in a trailing
pseudo-header similar to common practice for Signed-off-by) is compatible
with that:

From: ...
Date: ...
Subject: First line of description

More description
more description
yet more description

Bug-Debian: ...
Applied-upstream: ...
More-DEP3-fields-in-pseudo-header: ...
---
 optional diffstat here

diff --git ...
...

but this style (which is a DEP3 invention, and not used outside Debian and its
derivatives) is not:

Author: ...
Description: First line of description
 More description
 more description
 yet more description
Bug-Debian: ...

Converting the latter into the former seems like a valid gbp pq feature
request, but might not be practically feasible (detecting that style
mechanically so that it can be parsed is probably not trivial).

For packages that are maintained in git, either upstream or downstream,
preferring the former format makes a lot of sense IMO. Anything that relies
on round-tripping patches through git like gbp pq does is going to want
git-format-patch-compatible patches.

> So, our options are:
> 1. fix pq
> 2. modify all the patches to a format that pq understands
> 3. leave this to the maintainer to resolve (I think we expect all pq use
>to be entirely local, so pq use isn't something we're imposing on
>anyone)

I think it would make sense to leave the patches as-is during initial
conversion, and expect maintainers who are interested in using gbp pq
to resolve this when they import and re-export the patch series. Hopefully
a lot of current patches will become unnecessary with newer upstream
software versions, so preferring git format-patch style for new patches
might be a good "90%" solution.

S



Re: python-parse-type

2017-05-17 Thread Simon McVittie
On Wed, 17 May 2017 at 11:03:40 +0200, Thomas Goirand wrote:
> On 05/16/2017 02:30 PM, Simon McVittie wrote:
> > PyPI packages correspond to Debian source packages, not binary packages.
> 
> I don't think there ever was a source package name policy, neither in
> Debian nor in this group.

I meant conceptually rather than literally for this one - there is indeed
no hard requirement for source package names (because there does not need
to be a hard requirement, because they are not functionally significant
in the same way binary package names are). As far as I'm aware, there
is a loose common-sense policy that the source package name should either
be what upstream call it, or what upstream call it plus some disambiguation
where required (like the way some Python packages reuse the binary package
name python-foo for software that upstream just calls foo).

S



Re: python-parse-type

2017-05-16 Thread Simon McVittie
On Tue, 16 May 2017 at 08:00:43 -0400, Barry Warsaw wrote:
> On May 16, 2017, at 11:51 AM, Piotr Ożarowski wrote:
> >packaged as python-enum34 (correct name is python-enum, that's why you
> >didn't find it most probably)
> 
> Why is that wrong?  Agreed it's perhaps less discoverable in this case, but if
> you were looking for the PyPI enum34 package in Debian, you'd find
> python-enum34 first, and it would make sense.

Debian Python policy is that the package that lets you "import foo"
into /usr/bin/python is named python-foo, because names are APIs and
APIs are names.

If you import this backported module with "import enum" then it should
in principle be python-enum.

That policy does break down if there are two libraries with the same name
and different APIs; it looks as though that might have been the case here.
If that's true, then the python-enum34 name is a hack for encoding "should
be called python-enum according to Policy, but is incompatible with a
previous enum module", in much the same way that ABI-transition suffix names
like libpcrecpp0v5 are a hack for encoding "should be called libpcrecpp0
according to Policy, but is incompatible with a previous libpcrecpp.so.0".

PyPI packages correspond to Debian source packages, not binary packages.
dbus-python (upstream and) on PyPI is the source package dbus-python but
the binary package python-dbus in Debian, because you have to "import dbus"
to use it.

S



Re: Salvaging python-cassandra for Stretch

2017-04-06 Thread Simon McVittie
On Thu, 06 Apr 2017 at 17:49:15 +0200, Thomas Goirand wrote:
> Attached is the debdiff. As you can see, I'm attempting to use the new
> system that creates -dbgsym, and transitioning to it.

Sorry, I don't think this is a correct solution.

For non-Python packages, foo-dbg traditionally contained detached debug
symbols for the "production" version of foo (for example libglib2.0-0-dbg
contained debug symbols that were stripped from the libraries and binaries
in libglib2.0-0 and libglib2.0-bin). This can easily be superseded by
-dbgsym packages.

However, for Python packages, python[3]-foo-dbg has traditionally contained
two distinct types of content:

* Detached debug symbols as above

* A version of the same Python libraries as python[3]-foo, but recompiled
  with different options such that they can be imported into the debug
  interpreter python[3]-dbg (whose ABI is not the same as python[3])

You're keeping the first but losing the second. Is this intentional? Is
this correct?

It is certainly not correct to keep the -dbg packages and make them
transitional. I'm not sure whether this is considered to be a Policy
violation (-dbgsym packages are not in the main archive), but it's
certainly unconventional; and in this case, the -dbgsym package does
not correctly provide (all the functionality of) the -dbg package,
because the -dbg package contained libraries for the debug interpreter
and the -dbgsym package does not.

With hindsight, Python packages should probably not have ended with
-dbg, because that misleads developers like you into thinking they
are basically the same thing as libglib2.0-0-dbg - they aren't.
Perhaps they should have been like python[3]-dbg-cassandra instead,
which would make it a little clearer that they are a plugin for
python[3]-dbg.

Normally, dropping -dbg packages looks like this:
https://anonscm.debian.org/cgit/pkg-games/ioquake3.git/commit/?id=87594a58b03b850569357543b3823954b4fb0e73

> Also, does #857298 really deserves severity "grave"? Are others sharing
> the view that it could be downgraded to "important"?

It is grave for the binary package, because on the affected architectures,
python-cassandra-dbg is useless: it fails to meet its intended purpose
(letting users of python-dbg "import cassandra").

Unfortunately, autoremovals act on source packages, not binary packages,
because we don't want to remove individual binary packages from testing.

Perhaps removing the binary package is the best resolution - I don't
know. It's certainly the easiest. However, you need to be aware that
this is what you're doing: deliberately removing functionality.

S



Re: Transition away from git-dpm was: Re: Adopting OpenStack packages

2017-03-08 Thread Simon McVittie
On Wed, 08 Mar 2017 at 17:47:40 +1100, Brian May wrote:
> At the moment - since there were no objections yet - I have revised the
> wiki documentation (link already provided) to include DEP-14 and
> debian/master (as per DEP-14).

I think there's value in using debian/master for the focus of development
rather than arguing debian/master vs. debian/unstable vs. debian/sid,
on the basis that it's essentially an arbitrary choice, and debian/master
is what other packages are already using.

In a thread about moving from a less-widely-used tool-specific git repo
layout (git-dpm) to a layout that is used by a lot of teams and doesn't
even strictly require a particular tool (a gbp-pq-style patches-unapplied
branch), it would seem odd to introduce another DPMT-specific point of
divergence :-)

S



Re: Adopting OpenStack packages

2017-03-06 Thread Simon McVittie
On Mon, 06 Mar 2017 at 10:32:17 -0500, Scott Kitterman wrote:
> I think it's reasonable to try this out on a branch

Here's a maybe-stupid idea: use http://dep.debian.net/deps/dep14/ branch
naming (debian/master, debian/experimental) for that branch, and switch to
it as the default branch (edit foo.git/HEAD on alioth) when unfreezing
and "officially" switching to gbp-pq?

(You would have to stick to either upstream or upstream/latest but not
mix them, though, because file vs. directory duality applies here.)

I can offer https://anonscm.debian.org/cgit/pkg-utopia/dbus.git as an
example of a repository with semi-complex history, that uses gbp-pq and
DEP-14. ioquake3, flatpak, ostree, openjk, iortcw are all simpler examples
if you want one of those.

S



Re: Moving off of git-dpm (Re: git-dpm breakage src:faker)

2017-02-14 Thread Simon McVittie
On Tue, 14 Feb 2017 at 11:44:33 -0500, Barry Warsaw wrote:
> So how do I drop a patch with gbp-pq?

rm debian/patches/this-got-fixed-upstream.patch, vi debian/patches/series,
commit? :-)

Or more generally, to do it the git way, if the rest of the patch series
might need non-trivial adjustment:

git checkout debian/master  # old version, patches-unapplied
gbp pq import   # moves to patch-queue/debian/master
git checkout debian/master  # or gbp pq switch
gbp import-orig ../whatever.tar.gz
dch
git commit -m "New upstream version"
git checkout patch-queue/debian/master  # or gbp pq switch
git rebase -i debian/master
gbp pq export   # back to debian/master
git add debian/patches
git commit -m "Refresh patches or whatever"

(Substitute master for debian/master if DPMT doesn't use DEP-14,
but moving to gbp pq might be a good flag day to do that too. Then
you'll never have to get your local version of Debian's master branch
mixed up with your local version of upstream's master branch.)

S



Re: Re: Moving off of git-dpm (Re: git-dpm breakage src:faker)

2017-02-07 Thread Simon McVittie
On Tue, 07 Feb 2017 at 10:47:00 +, Ghislain Vaillant wrote:
> I know the discussion is leaning towards replacing usage of git-dpm
> with gbp-pq. I have nothing against it but, since we are talking about
> solutions for a git-centric workflow, has anyone considered the dgit-
> maint-merge workflow?

The dgit-maint-merge man page starts with some axioms:

    The workflow makes the following opinionated assumptions:

    ·   Git histories should be the non-linear histories produced by
        git-merge(1), preserving all information about divergent
        development that was later brought together.

I don't think that is actually a useful model of distro development.
I'm sure the rest of dgit-maint-merge(7) is a perfectly reasonable
workflow when you start from its assumptions, but I think they're
the wrong assumptions.

I think the downstream maintainer job in vendors like Debian (and Red
Hat, etc.) is essentially a rebasing patch series, whether we represent
it like that in git or not:

* we track an upstream version, which we treat as somehow canonical[1]

* we track what the downstream does as divergence from that upstream
  version, and only secondarily as a product in its own right (this is
  a core assumption in the design of all the non-native dpkg-source
  formats, and also SRPMs, so this is clearly something that has been
  considered important to downstreams)

* when we import a new upstream version, we adjust our divergence to
  apply on top of that new version

* some of the divergence is vendor-specific and we will never upstream it
  (for example adjusting paths/defaults to meet Debian Policy)

* some of the divergence is intended to go upstream, although our
  upstreams don't always apply in-principle-upstreamable changes
  as fast as we'd like them to; when it does get applied, it is often
  applied to their current development branch (like a git-cherry-pick)
  rather than being merged, and even if we send Github pull requests
  or similar, the upstream will want them to be based on some upstream
  branch and not on Debian's

* in Debian's case, even if they want to apply all the patches we propose
  to their upstream source, our upstreams will never want to `git merge`
  the contents of our VCS, because they usually don't want to merge
  debian/ (and in fact we actively recommend that they don't)

That's why, although I find dgit an interesting idea, I think
dgit-maint-merge(7) is a trap. If I use dgit at all, it'll be
dgit-maint-gbp(7) or similar.

[Conflict-of-interest disclaimer: I am a happy user of gbp-pq for every
patched package I maintain, except for tap.py and pkg-gnome svn. I would
love to see gbp-pq adopted by DPMT so I can use it for tap.py, and
when pkg-gnome finally moves from svn to git, given the overlap among
active maintainers between pkg-{systemd,utopia,gnome}, I suspect they
are going to use gbp-pq like pkg-systemd and pkg-utopia do.
I also seriously considered maintaining tap.py outside DPMT as a way
to avoid git-dpm.]

Regards,
S

[1] but in most cases not Canonical :-)



Re: Best way to package a python module which is "private" with exposed calling script

2017-02-06 Thread Simon McVittie
On Mon, 06 Feb 2017 at 16:43:32 -0500, Thomas Nyberg wrote:
> What I would ideally like is for the module
> code to be put somewhere off the regular system path and then have the
> binary "know" how to find it.

If you do this:

 /usr/
 ├── bin/
 │   └── script → ../share/mypackage/script (symlink)
 └── share/
     └── mypackage/
         ├── module/
         │   └── __init__.py
         └── script

then the script will automatically put the directory containing the script's
real file, in this case /usr/share/mypackage, at the beginning of sys.path.

The offlineimap package is a good example of relying on this
technique: /usr/bin/offlineimap is actually a symlink to
/usr/share/offlineimap/run to avoid colliding with its module, which is
also called offlineimap. I think a script named "run" is quite a common
convention for doing this? Or if you have a convention of using
dash-separated-words for the script and underscore_separated_words for
the library module, they won't collide anyway.

game-data-packager-runtime (the launcher/frontend part of
game-data-packager) works similarly, although for historical reasons
game-data-packager itself is a shell script that sets PYTHONPATH.

This is not portable to platforms that don't have symlinks (hello Windows)
or platforms where argv[0] isn't always absolute (obscure Unixes?) but for
the purposes of Linux, and probably also *BSD and Hurd, it's fine.
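The behaviour is easy to demonstrate with a throwaway re-creation of the layout above (all names invented, nothing taken from a real package):

```python
import os
import subprocess
import sys
import tempfile

# bin/script is a symlink to share/mypackage/script, and the "module"
# package sits next to the script's real file.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "share", "mypackage")
os.makedirs(os.path.join(root, "bin"))
os.makedirs(os.path.join(pkg, "module"))
with open(os.path.join(pkg, "module", "__init__.py"), "w") as f:
    f.write("MSG = 'imported from real location'\n")
with open(os.path.join(pkg, "script"), "w") as f:
    f.write("import module\nprint(module.MSG)\n")
os.symlink(os.path.join(pkg, "script"), os.path.join(root, "bin", "script"))

# The interpreter resolves the symlink when computing sys.path[0],
# so running the symlinked path still finds the module:
out = subprocess.check_output(
    [sys.executable, os.path.join(root, "bin", "script")])
print(out.decode().strip())
```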

S



Re: [Python-modules-team] Bug#849652: faker: FTBFS on 32-bit: ValueError: timestamp out of range for platform time_t

2017-01-30 Thread Simon McVittie
On Mon, 30 Jan 2017 at 20:31:06 +1100, Brian May wrote:
> >   File "faker/providers/date_time/__init__.py", line 403, in date_time_this_century
> > return cls.date_time_between_dates(now, next_century_start, tzinfo)
> >   File "faker/providers/date_time/__init__.py", line 381, in date_time_between_dates
> > datetime_to_timestamp(datetime_end),
> >   File "faker/providers/date_time/__init__.py", line 21, in datetime_to_timestamp
> > dt = dt.astimezone(tzlocal())
> >   File "/usr/lib/python2.7/dist-packages/dateutil/tz/tz.py", line 99, in utcoffset
> > if self._isdst(dt):
> >   File "/usr/lib/python2.7/dist-packages/dateutil/tz/tz.py", line 143, in _isdst
> > return time.localtime(timestamp+time.timezone).tm_isdst
> > ValueError: timestamp out of range for platform time_t

It looks as though this module is doing date/time computations with libc
time_t (signed integer seconds since 1970-01-01 00:00 UTC).

On older 32-bit ABIs like i386, time_t is just not large enough to
represent dates after 2038 - there are not enough bits. Anything needing
to compute dates outside the range 1970 to 2038 in a portable way needs
to use something that is not time_t. There is no way around that.

next_century_start sounds suspiciously like it might be 2100-01-01,
which is too late to be representable in 32-bit time_t.

If this particular functionality is not release-critically important,
then its test might need to be skipped on architectures where
time.localtime(2**32) would raise an exception, with a note in its
documentation that it only works on platforms with 64-bit time_t
(that's 64-bit platforms, plus newer 32-bit ABIs like x32).
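A minimal sketch of that guard (assuming nothing about faker's internals): probe whether this platform's time_t reaches past 2038 before exercising such dates through libc.

```python
import time
from datetime import datetime

def has_64bit_time_t():
    try:
        time.localtime(2 ** 32)  # year 2106, beyond 32-bit time_t
        return True
    except (OverflowError, OSError, ValueError):
        return False

# datetime itself is unaffected either way; only time_t round-trips
# (mktime/localtime, as used via dateutil above) can overflow:
next_century_start = datetime(2100, 1, 1)
print(next_century_start.isoformat(), has_64bit_time_t())
```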

> (not sure but suspect it might be too late for stretch)

For what it's worth, I had a brief look at this package during the bug
squashing party this weekend, and de-prioritized it on the basis that it
has never been in a stable release, removing faker and factory-boy from
testing would not break any other dependencies or build-dependencies,
and they appear to be development tools rather than user-facing
functionality. Obviously you're welcome to fix it if it's useful to
you, but I don't think it is necessarily a top priority to have in the
stretch release, so I spent my time fixing different RC bugs instead.

S



Re: git-dpm breakage src:faker

2017-01-29 Thread Simon McVittie
On Sun, 29 Jan 2017 at 20:50:48 +0100, Raphael Hertzog wrote:
> On Sun, 29 Jan 2017, Brian May wrote:
> >   3. Update debian/source/options with "unapply-patches" (anything else?).
> 
> You don't need that. dpkg-buildpackage unapplies them automatically after
> the build if it has applied them. If they were already applied before the
> build, then it leaves them applied.

Not only do you not need that, it isn't even allowed:

   --no-unapply-patches, --unapply-patches
          [...] Those options are only allowed in
          debian/source/local-options so that all generated source
          packages have the same behavior by default.

See flatpak, dbus, ioquake3, or basically anything else I maintain
if you need typical examples of gbp-pq packages that sometimes or always
have patches.

(I would be delighted to switch src:tap.py to gbp-pq if you want an example
of migration.)

S



Re: using git-dpm or plain git-buildpackage in PAPT and DPMT

2016-08-11 Thread Simon McVittie
On Thu, 11 Aug 2016 at 09:29:13 -0400, Barry Warsaw wrote:
> On Aug 11, 2016, at 12:12 AM, Simon McVittie wrote:
> >where all Debian-specific pseudo-headers appear at the end of the diff
> >(next to the Signed-off-by if any),
> 
> Did you mean to say "end of the diff headers"?

Yes, that's what I should have written. It needs to look like the
Signed-off-by on a conventional git patch submission.

> the DEP-3 headers are *before* the actual diff, separated from the diff
> headers by a blank line.

Separated by a line containing exactly "---" (which is mandated by DEP-3
and is one of several syntaxes accepted by git-am), but yes. Here's a
typical real-world example:
https://anonscm.debian.org/cgit/collab-maint/flatpak.git/tree/debian/patches/debian/Try-gtk-3.0-version-of-the-icon-cache-utility-first.patch

S



Re: using git-dpm or plain git-buildpackage in PAPT and DPMT

2016-08-10 Thread Simon McVittie
On Wed, 10 Aug 2016 at 16:41:40 -0400, Barry Warsaw wrote:
> * With git-dpm we *had* to enforce the tool choice because git-dpm's artifacts
>   had to be preserved. If we ditch git-dpm, is that still the case?  IOW, if
>   you choose to use gbp-pq, am I forced to do so when I modify the same repo?

You do not have to choose gbp-pq. You do have to use some tool that
copes with:

* a git repository with patches unapplied but present in debian/patches/
* no other special metadata present in git (you can optionally commit a
  debian/gbp.conf, and I would recommend it, but it isn't required)

In particular this rules out dgit (which wants a patches-applied tree)
and git-dpm (which wants a patches-applied tree with its own metadata).

In practice this means you can build with either gbp buildpackage, or plain
dpkg-buildpackage/debuild; and you can manage the patches either with gbp pq,
with quilt, or (in simple cases) by running git format-patch in an
upstream-tracking repository, dropping the results into debian/patches/
and modifying debian/patches/series with a text editor.

gbp pq works best if all repository users stick to the dialect of DEP-3
where all Debian-specific pseudo-headers appear at the end of the diff
(next to the Signed-off-by if any), so that it looks a lot like git
format-patch output (canonically with the leading From_ line and the
trailing signature omitted, although if they're present in input it
will of course cope). This is basically also what git-dpm generates,
so it should be familiar to DPMT people already. Good for gbp-pq:

From: Donald Duck 
Date: Fri, 01 Apr 2016 12:34:00 +0100
Subject: Reticulate splines correctly

This regressed in 2.0.

In particular, this broke embiggening.

Origin: vendor, Debian
Forwarded: http://bugs.example.org/123
---
[diff goes here]

Not good for gbp-pq (it works OK, but an import/export round-trip will
mangle the metadata if you don't take steps to preserve it):

Author: Donald Duck 
Description: Reticulate splines correctly
 This regressed in 2.0.
 .
 In particular, this broke embiggening.
Last-update: Fri, 01 Apr 2016 12:34:00 +0100
Origin: vendor, Debian
Forwarded: http://bugs.example.org/123
---
[diff goes here]
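The two dialects can be told apart mechanically by their leading pseudo-header; here is a rough heuristic in Python (an illustrative sketch, not the logic of gbp-pq or any other real tool):

```python
def dep3_dialect(patch_text):
    """Guess which DEP-3 dialect a patch file uses.

    'git-style' patches lead with RFC-2822-style From:/Subject:
    headers, as git format-patch emits; 'description-style' patches
    lead with Author:/Description: pseudo-headers, which a gbp-pq
    import/export round-trip handles less faithfully.
    """
    lines = patch_text.lstrip().splitlines()
    if not lines:
        return "unknown"
    first = lines[0]
    if first.startswith(("From:", "From ", "Subject:")):
        return "git-style"
    if first.startswith(("Author:", "Description:")):
        return "description-style"
    return "unknown"
```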

Regards,
S



Re: Package name for github.com/miguelgrinberg/python-socketio

2016-08-02 Thread Simon McVittie
On Wed, 03 Aug 2016 at 09:12:22 +1000, Ben Finney wrote:
> Yes, this is a problem with the current Debian Python policy: it assumes
> distribution authors will not collide on package names.
> 
> I don't have an answer, though I will point out that whatever the
> solution is, it will be incompatible with the current Debian Python
> policy for at least one of those packages.

Debian cannot provide more than one Python module named socketio, have them
co-installable, and make them both work in Python via "import socketio"
without taking some sort of special steps to choose one. It's exactly the
same problem as having two C libraries both trying to be named libfoo.so.0.

The policy has been useful here, because you have found that there is
a problem now, not later.

(I don't have a general solution to the problem of upstreams naming things
as generically as this either.)

S



Re: transition: python3-defaults (python3.5 as default python3) - status update

2016-01-13 Thread Simon McVittie
On 13/01/16 14:02, Scott Kitterman wrote:
> On Wednesday, January 06, 2016 03:39:15 PM you wrote:
>> b.  Remove pygpgme from Testing.  It has rdepends so it would kill off a few
>> other packages as well:
...
>> bmap-tools: bmap-tools

It turns out I can drop pygpgme to a Recommends on this one: it's only
conditionally imported, and if you obtained the bmap image from a
trusted source (or have verified its signature manually, or it doesn't
have one at all), you don't need pygpgme. I'll upload that later today.

S



Re: Rebuild for packages with entry points?

2015-12-08 Thread Simon McVittie
On 08/12/15 16:50, Nikolaus Rath wrote:
> On Dec 07 2015, Simon McVittie <s...@debian.org> wrote:
>> This looks like a job for Lintian, assuming setuptools entry points are
>> easy to detect with a regex.
> 
> Well, yes, but what's the point? New uploads will not be affected by
> this bug anyway, and if you just want the warnings for old packages,
> wouldn't the time consumed by writing a Lintian check be better spent
> writing a script that triggers rebuilds directly?

If you already know a complete set of packages that have entry points,
sure, rebuild away. For binNMUs of arch:any packages the thing to do is
to ask the release team; for arch:all packages, it needs to be done as
individual sourceful uploads, unfortunately.

If you don't already know a complete set of affected packages, a Lintian
check + a bit of waiting would tell you, with progress tracking. I've
found that very useful for various issues with dbus policy files.

S



Re: Rebuild for packages with entry points?

2015-12-07 Thread Simon McVittie
On 07/12/15 19:00, Barry Warsaw wrote:
> On Dec 07, 2015, at 10:22 AM, Nikolaus Rath wrote:
>> It'd be nice to have https://bitbucket.org/pypa/setuptools/issues/443/
>> fixed in stretch.
> 
> I'm also not sure how many packages it affects in practice.  We could also let
> rebuilds be bug-driven.

This looks like a job for Lintian, assuming setuptools entry points are
easy to detect with a regex. Conveniently, Python's re module uses
Perl-style regular expression syntax, so many Python developers are
probably already familiar with it.
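For illustration, a check of this sort might scan the entry_points.txt that setuptools ships in *.egg-info directories for console_scripts. This is a hypothetical sketch, not Lintian's implementation:

```python
import re

# Matches lines like "faker = faker.cli:main" inside a section
ENTRY_POINT_RE = re.compile(
    r"^(?P<name>[\w.-]+)\s*=\s*(?P<module>[\w.]+)(:(?P<attr>[\w.]+))?")


def console_scripts(entry_points_txt):
    """Return entry point names from the [console_scripts] section."""
    in_section = False
    found = []
    for line in entry_points_txt.splitlines():
        line = line.strip()
        if line.startswith("["):
            in_section = line == "[console_scripts]"
        elif in_section and line:
            match = ENTRY_POINT_RE.match(line)
            if match:
                found.append(match.group("name"))
    return found
```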

S



Re: so you want to migrate DPMT/PAPT to git? look at what pgk-perl did!

2015-08-07 Thread Simon McVittie
On 07/08/15 11:30, Sandro Tosi wrote:
> On Fri, Aug 7, 2015 at 12:18 AM, Barry Warsaw ba...@debian.org wrote:
>> But, is that a good thing?  quilt itself is a PITA to use IMHO
>
> a lot of people seems to appreciate quilt (I know that 3.0 (quilt)
> doesnt necessarily reflect in using quilt). It's not perfect but I
> find it usable and in line with the style of other packaging tools.

I agree with Sandro about repository contents while disagreeing about
the quilt(1) command-line tool, which is perhaps an interesting perspective.

I avoid quilt(1) wherever possible, and whenever I use it to resolve
some weird patch-queue corner case, I have to look up how it works.
However, the patch-queue format, and patches-unapplied git repository
contents, make a lot of sense to me: the git history contains exactly
the parts that don't get rebased.

To avoid quilt(1), I use gbp pq instead. What I commit to git as a
result interoperates with quilt(1), in the sense that someone like
Sandro could clone one of my repositories, manipulate the patch queue
with quilt(1), and not have to know or care that I used gbp pq; and I
could work with one of Sandro's repositories with gbp pq, without having
to deal with quilt. That seems like a nice property to have.

(Example repositories: dbus, ioquake3)

S


-- 
To UNSUBSCRIBE, email to debian-python-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/55c495bb.4010...@debian.org



Re: Bug #751908, tox, and bin-only Python packages

2015-08-06 Thread Simon McVittie
On 06/08/15 15:50, Barry Warsaw wrote:
> The example that sparked issue #751908 was tox, which when I initially
> packaged it, I called the binary package python-tox.  I did this because,
> while the package does not provide any publicly importable modules, I felt it
> was presumptuous to claim a rather generic name like 'tox' as the binary
> package.

If it's pollution in the flat namespace of packages, then it's pollution
in the flat namespace of what's in $PATH. If it isn't, it isn't. Pick
one? :-)

> Should there be a naming convention for Python packages which only provide an
> executable?

Policy has this to say on the subject of a different flat global namespace:

    When scripts are installed into a directory in the system PATH, the
    script name should not include an extension such as .sh or .pl that
    denotes the scripting language currently used to implement it.

Does similar reasoning make sense for package names - the user of the
package is looking for the functionality of the package, not the
implementation language?

If disambiguation is needed due to a naming conflict, a descriptive
prefix/suffix might make more sense: tox-tester or tox-python-tester
would be in the same spirit as chromium-browser (now chromium) vs. the
game Chromium B.S.U. (now chromium-bsu), and nodejs vs. ax25-node
fighting over node. (Note the subtle distinction that nodejs is *for
use with* JavaScript, not *written in* JavaScript.)

S





Re: pybuild (Re: image-file-in-usr-lib)

2015-05-11 Thread Simon McVittie
On 11/05/15 08:03, Ole Streicher wrote:
> What is the rationale between having all this in /usr/lib?

Conversely, it might be informative to consider the rationale for
/usr/lib and /usr/share being separate:


    This hierarchy is intended to be shareable among all architecture
    platforms of a given OS; thus, for example, a site with i386, Alpha, and
    PPC platforms might maintain a single /usr/share directory that is
    centrally-mounted. Note, however, that /usr/share is generally not
    intended to be shared by different OSes or by different releases of the
    same OS.


... but does anyone actually do this? It's not as if dpkg really
supports it. If I was trying to provide NFS-mounted root filesystems or
/usr for multiple architectures, my process would go more like this:

* stop doing that, it's 2015;
* failing that, have a chroot per architecture;
* if necessary, deduplicate with btrfs reflinks;
* if not on btrfs and disk space is short, deduplicate with hardlinks

S





Re: Keeping upstream commits separate from Debian packaging commits

2014-10-16 Thread Simon McVittie
On 16/10/14 18:01, Tristan Seligmann wrote:
> The purpose of pristine-tar is the same whether you base it on a
> revision fetched from upstream, or a revision created by
> git-import-orig or a similar tool

... or a revision created by git-import-orig
--upstream-vcs-tag=v1.2.3, which has the contents of the tarball as its
tree, and two parent commits (a pseudo-merge): the upstream VCS tag
v1.2.3, and the previous tarball. This seems like the best of both
worlds, assuming IRC/email commit bots filter out the upstream-only
commits in its ancestry.

> Alternatively, if you will never generate the upstream source from the
> git repository, then you avoid this problem, but then building a
> particular package version may require manually fetching the correct
> tarball from the archive / snapshot.debian.org if they are no longer
> available from the original source

That's assuming the correct tarball is even in the archive. For
un-uploaded packages for which a sponsored upload was requested, you
need to obtain a compatible tarball in some out-of-band way. For
packages in NEW, it's worse: you need to obtain precisely the same
tarball that's already in NEW in some out-of-band way.

S





Re: git-dpm vs gbp-pq: new upstream and patch refresh (long)

2014-09-05 Thread Simon McVittie
On 04/09/14 20:40, Barry Warsaw wrote:
> The file is patched, but now I have an d/p/0005- file instead of a modified
> 0003- patch file.  Sigh.

The systemd maintainers configured git-buildpackage (in their
debian/gbp.conf) to not use patch numbers. I'm starting to think that's
The Right Thing in general.
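If I remember the spelling correctly, that configuration is a one-liner in debian/gbp.conf (check gbp.conf(5) before copying):

```ini
# debian/gbp.conf: export patch files without numeric prefixes
[pq]
patch-numbers = False
```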

S






Re: git-dpm vs gbp-pq: new upstream and patch refresh (long)

2014-09-05 Thread Simon McVittie
On 05/09/14 13:10, Simon McVittie wrote:
> On 04/09/14 20:40, Barry Warsaw wrote:
>> The file is patched, but now I have an d/p/0005- file instead of a modified
>> 0003- patch file.  Sigh.
>
> The systemd maintainers [...]

It might also be worth noting that the systemd maintainers switched from
git-dpm to gbp-pq recently (between 204 and 208, I think), so they
obviously didn't think git-dpm was the better option.

The systemd package is an interesting stress-test for patch systems,
because:

* upstream don't do formal micro releases (there is no v208.1 and
  probably never will be) but they do cherry-pick a lot of bugfixes to
  a stable-branch (e.g. v208-stable), so the Debian maintainers apply
  patches from the upstream v208-stable branch in bulk;

* the Debian maintainers also apply a significant number of local
  patches to preserve historical functionality of Debian's udev and
  sysvinit, some of which are never going to go upstream

so managing its patch-set is non-trivial. This might mean that the right
decision for systemd is not the same as the right decision as for a
package that will hopefully only have a couple of Debian patches; I
don't know.

S





Re: git-dpm vs gbp-pq: new upstream and patch refresh (long)

2014-09-05 Thread Simon McVittie
On 05/09/14 15:53, Barry Warsaw wrote:
> On Sep 05, 2014, at 01:21 PM, Simon McVittie wrote:
>
>> It might also be worth noting that the systemd maintainers switched from
>> git-dpm to gbp-pq recently (between 204 and 208, I think), so they
>> obviously didn't think git-dpm was the better option.
>
> Are there any artifacts of this switch, e.g. mailing list archives, wiki
> pages, etc.?  I'd love to read some background on why they switched.

Sorry, I'm just a bystander (in both Python and systemd).

systemd maintainers, for context: the Python modules/apps packaging
teams are discussing pros and cons of the git-dpm or gbp-pq repo layout
for packages in git, and Barry has so far been advocating git-dpm. Since
systemd switched from git-dpm to gbp-pq recently, do you have any input
on why you decided against it?

(Probably best to reply to d-python only, since I realise this is way
outside the scope of pkg-systemd-maintainers - Reply-To set.)

S





Re: git-dpm vs gbp-pq: new upstream and patch refresh (long)

2014-09-05 Thread Simon McVittie
On 05/09/14 16:18, Martin Pitt wrote:
> I don't think anyone in pkg-systemd@ has looked at git-dpm yet. In
> fact we switched from gitpkg to standard git-buildpackage.

Ugh, sorry.

> So I'm not sure where switched from git-dpm came from?

smcv mis-remembering the situation, evidently.

S





Re: multiple deb packages from python program.

2014-08-30 Thread Simon McVittie
On 30/08/14 10:50, Cornelius Kölbel wrote:
> But now my originial program package is empty and does not contain the
> python code.
> It looks like only the .install scripts are run, but obviously python
> setup.py install is not run anymore - so I guess something does not work
> right with the simple rules file anymore...

You now need a .install file for each binary package, including your
original one.

If there is only one binary package in debian/control (e.g.
python-privacyidea, like in your original situation) then debhelper
defaults to telling setup.py to install into debian/python-privacyidea/
which ends up as the python-privacyidea package.

If there is more than one binary package in debian/control, debhelper
defaults to telling setup.py to install into debian/tmp/ (which does not
automatically install to any binary packages), then you have to arrange
for the files in debian/tmp/ to be copied into
debian/python-privacyidea/ and debian/python-privacyidea-data/ (or
whatever your other package is called). Writing debian/*.install is the
usual way to achieve that.
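As a hypothetical illustration (the paths are guesses and must be matched against what setup.py actually installs under debian/tmp/), the two .install files might look like:

```
# debian/python-privacyidea.install
usr/lib/python2*/

# debian/python-privacyidea-data.install
usr/share/privacyidea/
```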

S





Re: How does team maintenace of python module works?

2013-02-20 Thread Simon McVittie
On 20/02/13 14:14, Thomas Goirand wrote:
> Now, do you know if it is possible to use git-buildpackage
> without storing the full upstream source in a branch?

Yes, most conveniently done via 'overlay = True' in debian/gbp.conf. You
have to supply a copy of the upstream tarball as you would for plain
debuild or svn-buildpackage, typically in .. or ../tarballs (also
configurable in gbp.conf).
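A minimal sketch of that setup, assuming ../tarballs as the tarball location (both values are the ones mentioned above, not required defaults):

```ini
# debian/gbp.conf for a debian-directory-only repository
[DEFAULT]
overlay = True
tarball-dir = ../tarballs
```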

I do this for openarena-data and its various related packages, because
the full upstream source is *huge* (mostly audio, graphics etc.),
particularly once I included the actual source files, so keeping a copy
in git would be a significant problem; and the upstream source is
unlikely to ever be patched in Debian (not that binary files patch well
anyway), so there's no point.

I don't think debian-directory-only maintenance in git is a good idea
for typical packages containing mostly code - you lose the ability to
use gbp-pq to manage patches, for instance. openarena and ioquake3 do
have an 'upstream' branch, using the typical git-buildpackage workflow.

S





Re: python-xlrd

2013-02-06 Thread Simon McVittie
On 06/02/13 16:12, Thomas Kluyver wrote:
> Of course, it's good to exercise due diligence, but the flip
> side is that technical changes which I hope would be uncontroversial
> have now taken a back seat to bureaucracy, because one man a few years
> ago declared himself 'the maintainer'.

If the request for a new version has been open for 2 years, waiting
another couple of months to confirm that the maintainer doesn't object
isn't really going to make much difference - particularly if that's 2
months of release-freeze time.

Debian is currently in a freeze, so any uploads you make now are not
going to reach the next release anyway, unless they meet the freeze
policy http://release.debian.org/wheezy/freeze_policy.html.

If you have changes that *do* meet the freeze policy (roughly:
non-invasive fixes for bugs with severity = important, with a small
enough diffstat for the release team to be able to review it sensibly),
they can be made via a NMU or a team upload.

If you have changes that *don't* meet the freeze policy, I would suggest
that now is not the time: they won't migrate from unstable to testing
anyway, and if important bugs are subsequently reported in python-xlrd,
having a newer version in unstable will make it more difficult to get
those bugs fixed in testing.

New upstream versions are not usually eligible for freeze exceptions
(unless they're targeted, bugfix-only releases from an upstream with a
relatively strict stable-branch policy).

Major packaging changes, like moving from dpatch to 3.0 (quilt) (or cdbs
to dh, or anything similar) are specifically mentioned in the freeze
policy as something that is not eligible.

S





Re: bug in backports that affects wheezy

2013-01-31 Thread Simon McVittie
On 31/01/13 09:00, Javi Merino wrote:
> Assuming that it does affect wheezy, should I upgrade it to
> important

If you think it has a major effect on the usability of the package,
without rendering it completely unusable to everyone[1] then yes.

> fix it in wheezy and then backport it to squeeze-backports?

If you think the fix is eligible for release in Wheezy under the
freeze policy[2], then yes.

Regards,
S

[1] http://www.debian.org/Bugs/Developer
[2] http://release.debian.org/wheezy/freeze_policy.html





Re: Advise on packaging a new Python module

2012-11-07 Thread Simon McVittie
On 07/11/12 16:06, Tomás Di Domenico wrote:
> On 07/11/12 16:43, Jakub Wilk wrote:
>> * Tomás Di Domenico td...@tdido.com.ar, 2012-11-07, 12:30:
>>> About the different versions in the git repository and the upstream
>>> package, that is actually my fault. I checked out the code from the
>>> upstream Mercurial repository and built the tarball myself, hence
>>> using a more recent version than the one in the tarball.

If you find yourself needing to do that, you should indicate it in the
version number (e.g. see ioquake3_1.36+svn2287.orig.tar.gz) rather than
claiming that your orig.tar.gz is the upstream release (e.g. 1.36 here).
For svn, commit numbers are useful; for git, the number of commits since
the tag is a useful thing to use in version numbers (as done by, e.g.,
git describe).

> I believe I have seen Debian packages that include revision
> numbers in their version numbers. I was wondering what would be a
> scenario where you'd actually build the Debian package with a upstream
> revision that's newer than an official release, and if this happens often.

I use snapshots of ioquake3 to have a codebase that's somewhere close to
the one that OpenArena's fork is based on (we use a shared ioquake3
engine, not the forked version, in Debian). 1.36 was released in April
2009, so it's far too old for current OpenArena.

It also means we have some sort of hope for security support - upstream
don't make security or bugfix releases, only infrequent feature
releases, but they do fix security bugs in svn and announce which
commits are necessary for security. Trying to backport security fixes
past 3.5 years of development isn't ideal; supporting one recent-ish
snapshot per major Debian release limits how far back we need to go.

I do not recommend this approach, but if your upstream makes it
necessary, there might be no alternative.

S





Re: RFR: python-secretstorage

2012-06-22 Thread Simon McVittie
On 22/06/12 11:27, Thomas Kluyver wrote:
> I recently did a wrapper for the dbus desktop notifications API
>
> 5 # This is needed on buildd so that dbus can use ~/.dbus
> 6 export HOME = $(CURDIR)

FYI this shouldn't be necessary on Linux if you're either under X or
running dbus-daemon manually, but it's still needed on kFreeBSD and
probably Hurd, even if you run dbus-daemon manually.

(It's needed on kFreeBSD because dbus supports Linux
credentials-passing, but not FreeBSD credentials-passing; so it falls
back to proving its uid by writing a cookie to a well-known location
in $HOME.)

> PYTHONS=$(PYTHON2) $(PYTHON3) xvfb-run -a debian/runtests.sh

You don't need X for D-Bus if you run a dbus-daemon yourself.
tools/with-session-bus.sh in src:telepathy-glib and other Telepathy
packages is a relatively simple way to achieve that: run it like this:

with-session-bus.sh --session -- debian/runtests.sh

I want to add a new dbus-run-session tool to dbus itself[1], which is
basically with-session-bus.sh but simpler and in C. I'm waiting for code
review on that; it probably won't be there in time for wheezy.

S

[1] https://bugs.freedesktop.org/show_bug.cgi?id=39196




