Re: PyPI wheels (was Re: Python Policy)

2015-10-21 Thread Jeremy Stanley
On 2015-10-21 09:31:04 -0500 (-0500), Ian Cordasco wrote:
> On Wed, Oct 21, 2015 at 8:58 AM, Barry Warsaw <ba...@debian.org> wrote:
> > On Oct 21, 2015, at 08:47 PM, Brian May wrote:
> >
> >>in one case this is because upstream have only supplied a *.whl
> >>file on Pypi.
> >
> > I'm *really* hoping that the PyPA will prohibit binary wheel-only uploads.
> 
> I'm not sure why they should prohibit binary wheel-only uploads. A
> company may wish to publish a binary wheel of a tool and only that (a
> wheel for Windows, OS X, different supported linux distributions,
> etc.). If they do, that's their prerogative. I don't think there's
> anything that says Debian (or Ubuntu) would then have to package that.
> 
> PyPI is not just there for downstream, it's there for users too
> (although the usability of PyPI is not exactly ideal).

Yep, I'm as much a fan of free software as the next person, but
PyPI doesn't _require_ that what you upload be free software. It
only requires that you grant the right to redistribute what you're
uploading. While having source code to go along with things uploaded
there (which, mind you, aren't even actually required to be usable
Python packages; they could be just about anything) would be nice, I
don't have any expectation that PyPI will ever make it mandatory.
-- 
Jeremy Stanley



Re: python-networkx_1.10-1_amd64.changes ACCEPTED into experimental

2015-10-05 Thread Jeremy Stanley
On 2015-10-05 23:45:57 +0200 (+0200), Thomas Goirand wrote:
[...]
> Upstream will *not* fix the issue, because you know, they "fixed" it in
> their CI by adding an upper version bound in the pip requirements, which
> is fine for them in the gate. It is fixed in OpenStack Liberty though,
> which I will soon upload to Sid.
[...]

It's a bit of a mischaracterization to say that "upstream will not
fix the issue." In fact, as you indicate, it was fixed within a
couple of days in the master branches of affected projects. The mock
pin in
stable/kilo branches is a temporary measure and can be removed if
all the broken tests are either removed or corrected (the assumption
being that distro package maintainers who have an interest in that
branch may volunteer to backport those patches from master if this
is important to them).
-- 
Jeremy Stanley



Re: mock 1.2 breaking tests (was: python-networkx_1.10-1_amd64.changes ACCEPTED into experimental)

2015-10-06 Thread Jeremy Stanley
On 2015-10-06 09:28:56 +0200 (+0200), Thomas Goirand wrote:
> Master != kilo. It still means that I have to do all of the backport
> work by myself.
[...]
> I know that it's the common assumption that, as the package maintainer
> in Debian, I should volunteer to fix any issue in the 6+ million lines
> of code in OpenStack ! :)
> 
> I do try to fix things when I can. But unfortunately, this doesn't scale
> well enough... In this particular case, it was really too much work.

That is the trade-off you make by choosing to maintain as many
packages as you do. You can obviously either spend time contributing
stable backports upstream or time packaging software. Just accept
that, as with Debian itself, "stable" means OpenStack upstream makes
the bare minimum alterations necessary. This includes, in some
cases, continuing to test the software in those branches with
dependencies which were contemporary to the corresponding releases
rather than chasing ever-changing behavior in them. Sometimes it is
done for expediency due to lack of interested volunteer effort, and
sometimes out of necessity because dependencies may simply conflict
in unresolvable ways.
-- 
Jeremy Stanley



Re: static analysis and other tools for checking Python code

2016-03-02 Thread Jeremy Stanley
On 2016-03-02 11:22:52 +0800 (+0800), Paul Wise wrote:
[...]
> One of the things it has checks for is Python. So far it runs pyflakes
> and pep8 and a few hacky greps for some things that shouldn't be done
> in Python in my experience.
[...]

The "flake8" framework basically incorporates the pyflakes and pep8
analyzers along with a code complexity checker, and provides a
useful mechanism for controlling their behavior in a consistent
manner as well as pluggability to add your own:

https://packages.debian.org/flake8
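
As a hedged illustration of that configuration mechanism (the
section name is flake8's documented one, but the specific codes and
values below are just examples, not recommendations), per-project
behavior is typically controlled from tox.ini, setup.cfg or a
.flake8 file in the project root:

```ini
# Example flake8 configuration; the codes and paths here are
# illustrative only. flake8 reads the [flake8] section from tox.ini,
# setup.cfg or .flake8.
[flake8]
max-line-length = 79
# Enables the bundled mccabe code complexity checker:
max-complexity = 10
# E501 and W503 are merely examples of checks one might disable:
ignore = E501, W503
exclude = .git, .tox, build
```

Plugins installed alongside flake8 (such as the "hacking" package
mentioned below) register additional checks that can then be
enabled or ignored through this same file.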

One flake8 plug-in which came out of the OpenStack developer
community is "hacking" (obviously not for every project, but an
interesting reference example of layering in your own style checks):

https://packages.debian.org/python-hacking

Another output of the OpenStack community is "bandit," a security
analyzer for Python code:

https://packages.debian.org/bandit

Some other interesting analyzers not yet packaged for Debian as far
as I can tell include "pep257" (a Python docstring checker) and
"clonedigger" (a DRYness checker).

https://pypi.python.org/pypi/pep257
https://pypi.python.org/pypi/clonedigger

I can probably think up more that I've used, but the above rise to
the top of my list.
-- 
Jeremy Stanley



Re: static analysis and other tools for checking Python code

2016-03-02 Thread Jeremy Stanley
On 2016-03-03 08:38:40 +0800 (+0800), Paul Wise wrote:
[...]
> FYI pep257 is definitely packaged:
> 
> https://packages.debian.org/search?keywords=pep257
[...]

Whoops! Thanks--I almost certainly fat-fingered my package search on
that one.
-- 
Jeremy Stanley



Re: Test suite in github but missing from pypi tarballs

2016-04-21 Thread Jeremy Stanley
On 2016-04-21 11:23:20 -0400 (-0400), Fred Drake wrote:
> On Thu, Apr 21, 2016 at 10:54 AM, Tristan Seligmann
[...]
> > For distribution packaging purposes, the GitHub tags are generally
> > preferrable. GitHub makes archives of tagged releases available as tarballs,
> > so this is generally a simple tweak to debian/watch.
> 
> I'd generally be worried if the source package doesn't closely match a
> tag in whatever VCS a project is using, but I don't think that's
> essential, release processes being what they are.
[...]

Agreed, as long as "closely" is interpreted in ways consistent with,
say, tarballs for C-based projects. Consider `setup.py sdist`
similar to `make dist`, where the dist target of some projects may
still run additional commands to generate metadata or other files
not tracked in revision control prior to invoking tar/gzip.
-- 
Jeremy Stanley



Re: pip for stretch

2016-11-21 Thread Jeremy Stanley
On 2016-11-21 18:33:48 -0500 (-0500), Barry Warsaw wrote:
[...]
> I have not started to look at what if anything needs to be done to
> transition to pip 9, but if you have a strong opinion one way or
> the other, please weigh in.

The fix to uninstall properly when replacing a package with an
editable install of the same package is a pretty huge one in my
opinion. I ran into it quite a bit: I'd do an install from
unreleased source (in editable mode, because I was hacking on it) of
some library which was also a transitive dependency of something in
its own requirements list, so it had already been installed from an
sdist/wheel without my realizing. That leads to confusingly testing
the released version of the source code, because it shows up first
in the path when you import what you think is the code you're
editing. Not a fun way to spend your time.

Granted, I'm mostly running pip on unstable when developing, and I
run it from a bootstrapped virtualenv anyway so don't actually use
the Debian package of it other than to bootstrap my initial venv.
-- 
Jeremy Stanley



Re: Binary naming for Django Related Packages

2016-12-03 Thread Jeremy Stanley
On 2016-12-03 17:01:45 +0100 (+0100), Thomas Goirand wrote:
[...]
> Because of problems when doing imports in Python3 (in a venv, the system
> module wont be loaded if it's there and there's already something in the
> venv), we should attempt to discourage upstream to use namespaced
> modules. This indeed could prevent from running unit tests. That's what
> has been discovered in the OpenStack world, and now all the oslo libs
> aren't using namespace (though we've kept the dot for the egg-names).

To clarify, the main issue encountered there was a conflict over
namespace-level init when some modules were editable installs.
Historical details of the decision are outlined at:

https://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html#problem-description

-- 
Jeremy Stanley



Re: PyPI source or github source?

2017-03-13 Thread Jeremy Stanley
On 2017-03-13 17:55:32 +0100 (+0100), Thomas Goirand wrote:
[...]
> IMO, upstream are right that the PyPi releases should be minimal. They
> are, from my view point, a binary release, not a source release.
> 
> It makes a lot of sense to therefore use the git repository, which is
> what I've been doing as much as possible.

Yes, as much as the name "sdist" indicates it's a source
distribution, in many cases it's not exactly pristine source: it may
be missing files deemed unimportant for end users, or could include
some autogenerated files the upstream authors would rather not check
into their revision control systems. So sdists, while tarballs under
the hood (and by filename extension), are still really an
installable packaging format more than they are a source
distribution format.
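
One concrete source of that divergence, for setuptools-based
projects at least, is the MANIFEST.in file, which lets upstream add
or prune files when the sdist is built (the entries below are purely
illustrative, not from any particular project):

```
# Hypothetical MANIFEST.in showing why an sdist can differ from the
# Git tree: files are explicitly added or pruned at sdist time.
include README.rst LICENSE
recursive-include mypackage *.py
# Upstream may deliberately leave tests out of the installable artifact:
prune tests
```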
-- 
Jeremy Stanley



Re: GnuPG signatures on PyPI: why so few?

2017-03-12 Thread Jeremy Stanley
On 2017-03-12 11:46:31 +1100 (+1100), Ben Finney wrote:
[...]
> In response to polite requests for signed releases, some upstream
> maintainers are now pointing to that thread and closing bug reports as
> “won't fix”.
> 
> What prospect is there in the Python community to get signed upstream
> releases become the obvious norm?

Speaking for OpenStack's tarballs at least, our sdists are built by
release automation which also generates detached OpenPGP signatures
to provide proof of provenance... but we don't upload them to PyPI,
since the authors of Warehouse (the coming replacement for the
current CheeseShop PyPI codebase) have already indicated that they
intend to drop support for signatures entirely. We consider
https://releases.openstack.org/ the official source for our release
information and host our signatures there instead (well, really on
https://tarballs.openstack.org/ with direct links from the former).

The same key used to sign our tarballs (and wheels) also signs our
Git tags, for added consistency:

https://releases.openstack.org/#cryptographic-signatures

Of possible further interest: we modeled a fair amount of our key
management after what's employed for Debian's archive keys.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature


Re: a few quick questions on gbp pq workflow

2017-08-06 Thread Jeremy Stanley
On 2017-08-06 20:00:59 +0100 (+0100), Ghislain Vaillant wrote:
[...]
> You'd still have to clean the pre-built files, since they would be
> overwritten by the build system and therefore dpkg-buildpackage
> would complain if you run the build twice.
> 
> So, you might as well just exclude them from the source straight
> away, no?

Repacking an upstream tarball just to avoid needing to tell
dh_install not to copy files from a particular path into the binary
package seems the wrong way around to me, but maybe I'm missing
something which makes that particularly complicated? This comes up
on debian-mentors all the time, and the general advice is to avoid
repacking tarballs unless there's a policy violation or you can get
substantial (like in the >50% range) reduction in size on especially
huge upstream tarballs. Otherwise the ability to compare the
upstream tarball from the source package to upstream release
announcements/checksums/signatures is a pretty large benefit you're
robbing from downstream recipients who might wish to take advantage
of it.
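
For instance, rather than repacking, the unwanted files can simply
be dropped during the package build. A hedged debian/rules sketch
(the path and package layout are hypothetical):

```make
#!/usr/bin/make -f
%:
	dh $@

# Keep the upstream tarball pristine; just remove the pre-built files
# after installation so they never reach the binary package (the path
# below is made up for illustration):
override_dh_auto_install:
	dh_auto_install
	rm -rf debian/tmp/usr/share/example/bundled-docs
```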
-- 
Jeremy Stanley




Re: a few quick questions on gbp pq workflow

2017-08-06 Thread Jeremy Stanley
On 2017-08-06 10:44:36 -0400 (-0400), Allison Randal wrote:
> The OpenStack packaging team has been sprinting at DebCamp, and
> we're finally ready to move all general Python dependencies for
> OpenStack over to DPMT. (We'll keep maintaining them, just within
> DPMT using the DPMT workflow.)
> 
> After chatting with tumbleweed, the current suggestion is that we
> should migrate the packages straight into gbp pq instead of making
> an intermediate stop with git-dpm.
[...]

More a personal curiosity on my part (I'm now a little disappointed
that I didn't make time to attend), but are you planning to leverage
pristine tarballs as part of this workflow shift so you can take
advantage of the version details set in the sdist metadata and the
detached OpenPGP signatures provided upstream? Or are you sticking
with operating on a local fork of upstream Git repositories (and
generating intermediate sdists on the fly or supplying version data
directly from the environment via debian/rules)?

I'm eager to see what upstream release management features you're
taking advantage of so we can better know which of those efforts are
valuable to distro package maintainers.
-- 
Jeremy Stanley




Re: a few quick questions on gbp pq workflow

2017-08-06 Thread Jeremy Stanley
On 2017-08-06 14:11:13 -0400 (-0400), Ondrej Novy wrote:
> It's not always possible/simple/nice to use sdist, because it contains
> prebuild docs. And I don't like to do +dfsg rebuild just for removing docs.
> Sometimes sdists doesn't contain tests.
> 
> So my preference is:
> 
>- use sdist if it's possible (have tests, don't have prebuilds, ...)
>- use git tag tarballs (https://github.com///tags)
> 
> I already migrated few packages OS->DPMT so far.

Why would you need to repack a tarball just because it contains
prebuilt docs (non-DFSG-free licensed documentation aside)? I'm all
for rebuilding those at deb build time, just to be sure you have the
right deps packaged too, but if the ones in the tarball are built
from DFSG-compliant upstream source (which is itself included in the
archive, for that matter), then leaving the tarball pristine
shouldn't be a policy violation, right? That's like repacking a
tarball for an autotools-using project because upstream is shipping
a configure script built from an included configure.in file.

Pretty sure OpenStack at least would consider any content which
requires Debian package maintainers to alter tarballs prior to
including them in the archive a fairly serious bug in its software.
-- 
Jeremy Stanley




Re: Ad-hoc Debian Python BoF at PyCon US 2017

2017-06-20 Thread Jeremy Stanley
On 2017-06-20 16:40:26 +0200 (+0200), Matthias Klose wrote:
[...]
> another one many openstack packages.
[...]

Spot checking the source packages in the archive currently, it looks
like Thomas already has most of these done.

By way of background there, a coordinated effort has been underway
for the last several years to get all OpenStack software working
with recent Python 3 interpreters. The slowest part of that work
involved reaching out to the upstreams of (hundreds of) dependencies
not maintained within the OpenStack community and either helping
them get working Py3K support, adopting defunct libraries so
OpenStack contributors could fix them directly, or in some cases
abandoning/replacing dependencies with better-maintained
alternatives. This really is an ecosystem-wide effort, as complex
Python software doesn't generally run in isolation. I expect the
story for other large Python-based applications is very similar to
this.

Most OpenStack services and libraries are integration-tested
upstream to work under Python 3.5 today, but there are still many
Python-2.7-only testsuites for them (especially unit testing and
some functional tests) which need heavy refitting before the
community feels its Py3K support efforts are truly complete.
-- 
Jeremy Stanley




Re: Backport of Python 3.6 for Debian Stretch?

2018-04-24 Thread Jeremy Stanley
On 2018-04-24 23:42:30 +0700 (+0700), Nguyễn Hồng Quân wrote:
[...]
> Then why Debian project invent *.deb file, not just pack binary as
> tar file and let user to untar it? I favor building deb file,
> rather than copying "make altinstall" result, because of the same
> reason.

I completely understand the reason for using packages, but Debian is
a volunteer project which does not exist solely to solve your
problems so you can either do something you already know how to do
(build Python 3.6 from source) and move on, or learn to make the
packages you want (including fixing any backporting issues you find
when doing so). Asking others to tell you how to do that is not the
sort of self-directed research expected of participants in a
volunteer project; it's at best begging and at worst disrespectful
of those who have invested the time to learn the things you're not
willing to.

https://backports.debian.org/Contribute/

-- 
Jeremy Stanley




Re: Backport of Python 3.6 for Debian Stretch?

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 10:06:47 +0700 (+0700), Nguyễn Hồng Quân wrote:
[...]
> I spent much time to research on it, so that I can tell what
> difference between 3.6.1 and 3.6.4 packaging.
[...]

http://metadata.ftp-master.debian.org/changelogs/main/p/python3.6/python3.6_3.6.5-3_changelog

https://manpages.debian.org/debdiff

http://snapshot.debian.org/package/python3.6/

[also, please don't Cc me, I do already read the mailing list]
-- 
Jeremy Stanley




Re: Backport of Python 3.6 for Debian Stretch?

2018-04-24 Thread Jeremy Stanley
On 2018-04-24 22:07:03 +0700 (+0700), Nguyễn Hồng Quân wrote:
[...]
> I don't need to "not disturb" system.
> If have to use conda, pyenv, I would rather build Python3.6 from source
> tarball, not to bring more overhead (conda body, pyenv body), and
> "Python3.6 from source" still not disturb my system, because it is
> installed to "/usr/local".
> 
> But I don't want any method that requires to build Python from source
> (tarball, pythonz, conda or alike), because I really need *pre-built
> binaries*.
[...]

Unless I'm missing something, there's no substantial difference
between building a package of Python 3.6 and copying it to the
system, and performing a `make altinstall` and copying the resulting
files (via rsync, tar and scp, or whatever) to the target system. If
you're okay with the idea of building packages remotely, then why
not build from source remotely?
-- 
Jeremy Stanley




Re: python-urllib3 1.25.6 uploaded to experimental (closes CVE-2019-11236) but fails build tests

2019-10-29 Thread Jeremy Stanley
On 2019-10-29 13:29:02 +0100 (+0100), Michael Kesper wrote:
> On 27.10.19 17:27, Drew Parsons wrote:
> > On 2019-10-27 23:13, Daniele Tricoli wrote:
[...]
> > > Not an expert here, but I think fallback is not done on
> > > purpose due downgrade attacks:
> > > https://en.wikipedia.org/wiki/Downgrade_attack
> > 
> > I see. Still an odd kind of protection though.  The attacker can
> > just downgrade themselves.
> 
> No. A sensible server will not talk to you if your requested SSL
> version is too low. pub.orcid.org seems to use absolutely outdated
> and insecure software versions.

Well, downgrade attacks aren't usually a two-party scenario. The
risk with a downgrade attack is when a victim client attempts
communication with some server, and a third-party attacker tampers
with the communication between the client and server sufficiently to
cause protocol negotiation to fall back to an old enough version
that the attacker can then exploit known flaws to decrypt and/or
proxy ("man in the middle") that communication. Having both the
client and the server be unwilling to use susceptible older protocol
versions helps thwart this attack vector.
-- 
Jeremy Stanley




Re: Where can I find packages that need a maintainer?

2020-02-13 Thread Jeremy Stanley
There's also this wonderful utility:

https://packages.debian.org/sid/how-can-i-help

You can use it to easily find packages installed on your system
which are orphaned or in other similar states of help wanted, which
at least helps focus your efforts on packages you're more likely
using and relying on, rather than wading through a large list of
packages which are mostly orphaned because nobody's using them
anyway.
-- 
Jeremy Stanley




Re: Automatically removing "badges" pictures from README.rst files

2020-04-09 Thread Jeremy Stanley
On 2020-04-10 00:25:41 +0200 (+0200), Thomas Goirand wrote:
> On 4/9/20 10:05 PM, PICCA Frederic-Emmanuel wrote:
> > what about lintian brush ?
> 
> What's that?

This:

automatically fix lintian problems

This package contains a set of scripts that can automatically
fix more than 80 common lintian issues in Debian packages.

It comes with a wrapper script that invokes the scripts, updates
the changelog (if desired) and commits each change to version
control.

(from https://packages.debian.org/lintian-brush )
-- 
Jeremy Stanley




Re: Build Python 2.7 version >= 2.7.15 on Debian 9

2020-04-03 Thread Jeremy Stanley
On 2020-04-03 23:21:25 +0300 (+0300), ellis.mag...@pp.inet.fi wrote:
[...]
> What is the correct way to build a clean version of python2.7 on
> Debian9 that will be compatible with already packaged python2.7
> modules?

The Python modules with C extensions packaged in Debian are built
against the Python development library headers for the version of
the Python interpreter which is packaged in Debian. If you replace
the interpreter with a different version I expect you'll at least
have to relink, if not entirely recompile, those extensions against
newer headers. I don't personally know a way to go about that short
of rebuilding those additional modules from source. You might be
better off switching to a newer version of Debian which provides a
newer Python 2.7 release and has the other packages you need already
built against it, or using some other Python package management
solution like conda or virtualenv.
-- 
Jeremy Stanley




Re: Example package using python3-pbr and Sphinx documentation with manual page

2020-05-04 Thread Jeremy Stanley
On 2020-05-04 19:13:38 +0200 (+0200), Florian Weimer wrote:
> I'm trying to package pwclient, which depends on python3-pbr and has a
> rudimentary manual page generated from Sphinx documentation.  Is there
> a similar example package which I can look at, to see how to trigger
> the manual page generation?
> 
> I currently get this:
> 
> dh_sphinxdoc: warning: Sphinx documentation not found
[...]

Since PBR originated in OpenStack, the python3-openstackclient
package may serve as a good example. It does a dh_sphinxdoc override
for manpage building here:

https://salsa.debian.org/openstack-team/clients/python-openstackclient/-/blob/88bdecc66a30b4e3d5aec9cdae4cc529c33690e6/debian/rules#L27

Then there's a similar dh_installman override a few lines later.
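
For a rough idea of the shape of such an override (the builder,
paths and package name below are assumed for illustration, not
copied from that rules file), it amounts to running sphinx-build
with the man builder before handing control back to debhelper:

```make
# Hedged sketch: generate the manual page from the Sphinx sources,
# then let dh_sphinxdoc do its usual processing. Paths are
# hypothetical.
override_dh_sphinxdoc:
	sphinx-build -b man doc/source debian/tmp/man
	dh_sphinxdoc
```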
-- 
Jeremy Stanley




Re: Example package using python3-pbr and Sphinx documentation with manual page

2020-05-04 Thread Jeremy Stanley
On 2020-05-04 19:07:00 + (+), Jeremy Stanley wrote:
> On 2020-05-04 19:13:38 +0200 (+0200), Florian Weimer wrote:
> > I'm trying to package pwclient, which depends on python3-pbr and has a
> > rudimentary manual page generated from Sphinx documentation.  Is there
> > a similar example package which I can look at, to see how to trigger
> > the manual page generation?
> > 
> > I currently get this:
> > 
> > dh_sphinxdoc: warning: Sphinx documentation not found
> [...]
> 
> Since PBR originated in OpenStack, the python3-openstackclient
> package may serve as a good example. It does a dh_sphinxdoc override
> for manpage building here:
> 
>  https://salsa.debian.org/openstack-team/clients/python-openstackclient/-/blob/88bdecc66a30b4e3d5aec9cdae4cc529c33690e6/debian/rules#L27
> 
> Then there's a similar dh_installman override a few lines later.

Oh, and since you mentioned the conf.py contents, here's how it's
being done in the upstream source for that repo:

https://opendev.org/openstack/python-openstackclient/src/commit/fdefe5558b7237757d788ee000382f913772bffc/doc/source/conf.py#L225-L233

-- 
Jeremy Stanley




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-30 Thread Jeremy Stanley
On 2020-06-30 09:15:47 +0200 (+0200), Thomas Goirand wrote:
[...]
> If there's some nasty NPM job behind, then I probably will just
> skip the dashboard, and expect deployment to get the dashboard not
> from packages. What is included in the dashboard? Things like
> https://zuul.openstack.org/ ?

That's a white-labeled tenant of https://zuul.opendev.org/ but yes,
basically an interface for querying the REST API for in-progress
activity, configuration errors, build results, log browsing, config
exploration and so on. The result URLs it posts on tested changes
and pull/merge requests are also normally to a build result detail
page provided by the dashboard, though you should be able to
configure it to link directly to the job logs instead.
-- 
Jeremy Stanley




Re: The python command in Debian

2020-07-09 Thread Jeremy Stanley
On 2020-07-09 15:26:47 +0200 (+0200), Matthias Klose wrote:
> As written in [1], bullseye will not see unversioned python
> packages and the unversioned python command being built from the
> python-defaults package.
> 
> It seems to be a little bit more controversial what should happen
> to the python command in the long term.  Some people argue that
> python should never point to python3, because it's incompatible,
> however Debian will have difficulties to explain that decision to
> users who start with Python3 and are not aware of the 2 to 3
> transition.  So yes, in the long term, Debian should have a python
> command again.
[...]

I don't follow your logic there. Why is it hard to explain? Python
was a programming language, and its last interpreter (2.7) is no
longer developed or supported. Python3 (formerly Python3000) is also
a programming language, similar to Python and developed by the same
community, but not directly compatible with Python. Debian provides
an interpreter for Python3, but has (or will have by then) ceased
distributing a Python interpreter.
-- 
Jeremy Stanley




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Jeremy Stanley
On 2020-06-29 23:55:49 +0200 (+0200), Thomas Goirand wrote:
[...]
> nodepool from OpenStack,

Well, *formerly* from OpenStack, these days Nodepool is a component
of the Zuul project gating system, which is developed by an
independent project/community (still represented by the OSF):

https://zuul-ci.org/
https://opendev.org/zuul/nodepool/

You could probably run a Nodepool launcher daemon stand-alone
(without a Zuul scheduler), but it's going to expect to be able to
service node requests queued in a running Apache Zookeeper instance
and usually the easiest way to generate those is with Zuul's
scheduler. You might be better off just trying to run Nodepool along
with Zuul, maybe even set up a GitLab connection to Salsa:

https://zuul-ci.org/docs/zuul/reference/drivers/gitlab.html

> and use instances donated by generous cloud providers (that's not
> hard to find, really, I'm convinced that all the providers that
> are donating to the OpenStack are likely to also donate compute
> time to Debian).
[...]

They probably would, I've approached some of them in the past when
it sounded like the Salsa admins were willing to entertain other
backend storage options than GCS for GitLab CI/CD artifacts. One of
those resource donors (VEXXHOST) also has a Managed Zuul offering of
their own, which they might be willing to hook you up with instead
if you decide packaging all of Zuul is daunting (it looks like both
you and hashar from WMF started work on that at various times in
https://bugs.debian.org/705844 but more recently there are some
JavaScript deps for its Web dashboard which could get gnarly to
unwind in a Debian context).
-- 
Jeremy Stanley




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-28 Thread Jeremy Stanley
On 2020-06-28 16:48:02 +0200 (+0200), Thomas Goirand wrote:
[...]
> I don't want this to happen again. So I am hereby asking to take
> over the maintenance of these packages which aren't in the
> OpenStack team. They will be updated regularly, each 6 months,
> with the rest of OpenStack, following the upstream
> global-requirement pace. I'm confident it's going to work well for
> me and the OpenStack team, but as well for the rest of Debian.
> 
> Is anyone from the team opposing to this? If so, please explain
> the drawbacks if the OpenStack team takes over.

While I don't agree with Thomas's harsh tone in the bits of the
message I snipped (please Thomas, I'm sure everyone's trying their
best, there's no need to attack a fellow contributor personally over
technical issues), I did want to point out that the proposal makes
some sense. The Testing Cabal folk were heavily involved in
OpenStack and influential in shaping its quality assurance efforts;
so OpenStack relies much more heavily on these libraries than other
ecosystems of similar size, and OpenStack community members, present
and past, continue to collaborate upstream on their development.
-- 
Jeremy Stanley




Re: [Python-modules-team] Bug#954381: marked as done (python3-kubernetes: New upstream version available)

2020-11-21 Thread Jeremy Stanley
On 2020-11-22 00:01:03 +0100 (+0100), Thomas Goirand wrote:
> On 11/21/20 3:36 AM, Sandro Tosi wrote:
> >>* Use git to generate upstream tarball, as the PyPi module doesn't 
> >> include
> >>  the test folder. Using the gen-orig-xz in debian/rules, as using the
> >>  repack function of debian/watch doesn't make sense (why downloading a
> >>  tarball that would be later on discarded? I'm open to a better 
> >> solution
> >>  which would be uscan compatible though...). Switch d/watch to the 
> >> github
> >>  tag therefore.
> > 
> > you can track the github project instead of pypi (man uscan has the
> > details); this is was i'm doing recently, as most of the time PyPI
> > releases dont have all the files we need (tests, or test data, or
> > documentation, or a combination of that)
> 
> Hi.
> 
> Thanks, I know that. However, that's not my problem. The issue is that
> uscan --download will download the tarball from github, and I'd like to
> replace that by what I'm doing in debian/rules, which is using git and
> git submodule, to fetch things using git, and create a tarball. Sure, I
> could use a repack script in debian/watch, but then uscan will continue
> to first download the archive from github, and *then* only, I can
> discard what's been downloaded, and fetch stuff from github with git.
> 
> Is there a solution here, so that uscan uses a repack script directly
> without attempting to download first?

Maybe I'm missing something obvious, but can't you just use
mode=git? (See the uscan manpage for details on this feature.) I
assume this is what was being suggested.
-- 
Jeremy Stanley




Re: How to watch pypi.org

2020-10-31 Thread Jeremy Stanley
On 2020-10-31 12:03:50 +0100 (+0100), Thomas Goirand wrote:
[...]
> On 10/31/20 3:07 AM, Jeremy Stanley wrote:
> > I have to agree, though in the upstream projects with which I'm
> > involved, those generated files are basically a lossy re-encoding of
> > metadata from the Git repositories themselves: AUTHORS file
> > generated from committer headers, ChangeLog files from commit
> > subjects, version information from tag names, and so on. Some of
> > this information may be referenced from copyright licenses, so it's
> > important in those cases for package maintainers to generate it when
> > making their source packages if not using the sdist tarballs
> > published by the project.
> 
> Unfortunately, the FTP masters do not agree with you. I've been told
> that the OpenStack changelog is way too big, and it's preferable to
> not have it in the binary packages.

PBR started creating much smaller changelogs years ago, after you
asked ftpmaster. I get that you see no value in changelog files, but
it seems like it would be worth revisiting.

> Also, there's nothing in the Apache license that mandates having
> an AUTHORS list as per what PBR builds. If we are to care that
> much in OpenStack, then the license must be changed.
[...]

I agree, it's not commonplace in OpenStack other than this possible
exception:

https://opendev.org/openstack/python-openstackclient/src/branch/master/doc/source/cli/man/openstack.rst#user-content-copyright

You do tend to find it in other Python projects however, for
example:

https://github.com/pygments/pygments/blob/master/LICENSE#L1

My point was that, in general, some Python projects do autogenerate
an AUTHORS file from commit metadata at dist time rather than
storing it directly in a file within their Git repositories, and
some projects (including Python projects) refer to AUTHORS from
copyright statements, so it's a good idea to build/keep it.
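
As a hedged sketch of that dist-time generation (the function name and
input format here are mine, not PBR's actual API): given committer lines
as emitted by `git log --format='%aN <%aE>'`, an AUTHORS file body is
essentially a deduplicated, sorted rendering of them.

```python
def build_authors(log_lines):
    """Collapse raw committer lines into sorted, unique AUTHORS entries."""
    entries = {line.strip() for line in log_lines if line.strip()}
    return "\n".join(sorted(entries)) + "\n"


# Example input, as produced by `git log --format='%aN <%aE>'`
log = [
    "Alice Example <alice@example.org>",
    "Bob Example <bob@example.org>",
    "Alice Example <alice@example.org>",
]
print(build_authors(log), end="")
```

The point being: none of this content lives in a file in the work tree,
so a flat export of the tree cannot reproduce it without the Git
metadata.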
-- 
Jeremy Stanley




Re: How to watch pypi.org

2020-11-01 Thread Jeremy Stanley
On 2020-11-01 20:23:20 +0100 (+0100), Thomas Goirand wrote:
[...]
> However, if I am to put more efforts on stuff like that, my priority
> would be first on getting the reno release notes published in the Debian
> package. I've been thinking about this for a long time, and haven't
> figured out yet what would be the best way, with a reasonable workflow.
> 
> From the Debian perspective, I'd have to:
> - generate the release notes from the last version in Debian Stable, up
> to what's in Sid. For example, I would include all the OpenStack release
> notes for Stein, Train, Ussuri and Victoria in all packages uploaded for
> Debian Bullseye, as this would concern anyone upgrading from Buster.
> - make it so that release notes would be generated from Git, and maybe
> stored in a debian/release-notes folder, so that it wouldn't generate a
> diff with the original upstream tag.
> 
> The drawback would be that, on each upload of a new Debian revision, the
> debian.tar.xz will contain the release notes, which could be of
> significant size (especially if they include static objects like CSS,
> JS, and all what makes a theme).
> 
> If you have any other suggestion concerning how to handle these release
> notes, please let me know.

It's likely I'm missing some subtle reason for the complexities you
outline above, but if you install python3-reno and then run `reno
report` in the upstream Git repository for any project with a
releasenotes tree (or pass the path to said Git repository in the
command line) it will generate a reStructuredText compilation of the
release notes contained therein. Very lightweight, no need for extra
files or anything. I'd think you could just dump that output into a
NEWS file or similar at binary package build time. This is basically
the same thing reno's Sphinx extension does under the covers.

Check out `reno report --help` for a number of flags you might want
to pass it to make the results more readable like omitting the
source filename comments, skipping notes earlier than a certain
version, collapsing pre-release notes, and so on. A quick test of
Nova's release notes indicates that even if you don't truncate them
though and include everything back to when the project started using
reno 5 years ago, that NEWS file would only increase the compressed
size of the nova-doc package by 1%.
-- 
Jeremy Stanley




Re: How to watch pypi.org

2020-10-30 Thread Jeremy Stanley
On 2020-10-31 01:33:36 +0000 (+0000), Paul Wise wrote:
> On Fri, Oct 30, 2020 at 2:19 PM Fioddor Superconcentrado wrote:
> > As I said I'm very new to this and all (python) packages I'm
> > using lately use the usual python tools (pipy, setup.py, etc)
> > and my first approach has been to stick as close as possible to
> > the upstream procedures. But I might very likely be taking a
> > wrong decision. What are the reasons to go for git instead of
> > pypi? I see that it is 'more upstream' but it seems that
> > everyone else is pointing to pypi as a distro-agnostic solution.
> 
> As Andrey says, missing files is one issue, another is that tarballs
> often contain extra generated files that should be built from source,
> but if you use the tarball then they quite likely will not be built
> from source.

I have to agree, though in the upstream projects with which I'm
involved, those generated files are basically a lossy re-encoding of
metadata from the Git repositories themselves: AUTHORS file
generated from committer headers, ChangeLog files from commit
subjects, version information from tag names, and so on. Some of
this information may be referenced from copyright licenses, so it's
important in those cases for package maintainers to generate it when
making their source packages if not using the sdist tarballs
published by the project.
-- 
Jeremy Stanley




Re: Re: Challenges packaging Python for a Linux distro - at Python Language Summit

2021-05-17 Thread Jeremy Stanley
On 2021-05-17 07:10:39 +0100 (+0100), Luke Kenneth Casson Leighton wrote:
> (apologies i forgot to say, please do cc me
[...]

Done.

> a dist-upgrade to debian / testing - a way to obtain the latest
> variants of critical software - frequently resulted in massive
> breakage.
> 
> i quickly learned never to attempt that again given the massive
> disruption it caused to me being able to earn money as a software
> engineer.
[...]

You're probably just going to see this as further confirmation of
your opinion, or yet another person telling you that you're doing it
wrong, but as someone who also writes rather a lot of Python
programs for a living I learned long ago to not develop against the
"system" on any platform. I use sid/unstable for my development
systems, but I consider the python3 package in it to have two uses:
Testing new packages of software which are targeting a future Debian
stable release, and running other packaged things which are already
part of Debian.

For software development work, I compile my own Python interpreters
and libraries, because I need to develop against a variety of
different versions of these, sometimes in chroots, to be able to
target other distros and releases thereof. I keep all these in my
homedir or otherwise outside of normal system-wide installation
paths. Bear in mind that this isn't just a Debian problem, for
decades Red Hat has also advised programmers not to rely on their
/usr/bin/python for custom development because it is patched and
tuned specifically for running system applications packaged for
their distro and not intended as a general-purpose Python
distribution.
-- 
Jeremy Stanley




Re: [RFC] DPT Policy: Canonise recommendation against PyPi-provided upstream source tarballs

2021-06-25 Thread Jeremy Stanley
On 2021-06-25 19:01:39 -0400 (-0400), Nicholas D Steeves wrote:
[...]
> And yes, I agree moderate is better, but I must sadly confess
> ignorance to the technical reasons why PyPI is sometimes more
> appropriate. Without technical reasons it seems like a case of
> ideological compromise (based on the standards I've been mentored
> to and the feedback I've received over the years).

Hopefully my other replies here and in Salsa have provided some
fairly large counterexamples for you. If those still aren't entirely
clear, I'm happy to go into deeper detail or broaden to related
examples elsewhere in the ecosystem.
-- 
Jeremy Stanley




Re: [RFC] DPT Policy: Canonise recommendation against PyPi-provided upstream source tarballs

2021-06-25 Thread Jeremy Stanley
On 2021-06-25 18:29:19 -0400 (-0400), Nicholas D Steeves wrote:
> A recommendation is non-binding, and the intent of this proposal is to
> say that the most "sourceful" form of source is the *most* suitable for
> Debian packages.  The inverse of this is that `make dist` is less
> suitable for Debian packages.  Neither formulation of this premise
> applies to a scope outside of Debian.  In other words, just because a
> particular form of source packaging and distribution is not considered
> ideal in Debian does not in any comment on its suitability for other
> purposes.  Would you prefer to see a note like "PyPi is a good thing for
> the Python ecosystem, but sdists are not the preferred form of Debian
> source tarballs"?

To reset this discussion, take the case of an upstream like the one
I'm involved with. For each project, two forms of source release are
made available:

1. Cryptographically signed tags in a Git repository, with
   versioning, revision history, release notes and authorship either
   embedded within or tied to the Git metadata.

2. Cryptographically signed tarballs of the file tree corresponding
   to a tag in the Git repository, with versioning, revision
   history, release notes and authorship extracted into files
   included directly within the tarball.

If some alternative mechanism is used to grab only the work tree
from a checkout of the Git repository, critical information about
the software is lost, making it uninstallable in some cases (can't
figure out its own version), or even illegal to redistribute
(missing authors list referenced from the copyright license).

So in this case you have a few options: package from upstream's Git
repository, package from upstream's "release tarball" (which happens
to be in Python sdist format because the egg-info is used to hold
information extracted from their Git metadata), or use something
which is neither of those and then have to rely on one of them
anyway to supply the missing bits.

> It's also worth mentioning that upstream's "official release"
> preference is not necessarily relevant to a Debian context.  Take
> for example the case where upstream exclusively supports a Flatpak
> and/or Snap package...
[...]

The problem is that you seem to want to talk in absolutes. Sure some
(I'll wager many) Python projects can be reasonably packaged from a
flat dump of the file content in their revision control. There are
many which can't. Sure some upstreams may only want to release
Flatpaks or Snaps, or may even be openly hostile to getting packaged
in distributions at all. There are also quite a few which don't host
their revision control in platforms which provide raw tarball
exports generated on the fly. Some sdist tarballs leave out files, I
agree, but they don't have to (ours don't, we only add more in order
to supply the exported revision control metadata).

Saying that a raw dump of the file content from a revision control
system is recommended over using upstream's sdists presumes all
upstreams are the same. They're not, and which is preferable (or
doable, or even legal) differs from one to another. Just because
some sdists, or even many, are not suitable as a basis for packaging
doesn't mean that sdists are a bad idea to base packages on. Yes,
basing packages on bad sdists is bad, it's hard to disagree with
that.

> Thinking about an ideal solution, and the interesting PBR case, I
> remember that gbp is supposed to be able to associate gbp tags with
> upstream commits (or possibly tags), so maybe it's also possible to do
> this:
> 
> 1. When gbp import-orig finds a new release
> 2. Fetch upstream remote as well
> 3. Run PBR against the upstream release tag
> 4. Stage this[ese] file[s]
> 5. Either append them to the upstream tarball before committing to the
>pristine-tar branch, or generate the upstream tarball from the
>upstream branch (intent being that the upstream branch's HEAD should
>be identical to the contents of that tarball)
> 6. Gbp creates upstream/x.y tag
> 7. Gbp merges to Debian packaging branch.

You'll either need a copy of the upstream Git repository or at least
some of the files generated from that repository's metadata which
has been embedded in the release tarball. I understand the desire to
not put files into Debian source packages which can be generated at
package build time from other files in Debian, but when those files
can't be generated without the presence of the Git repository itself
which *isn't* files in Debian, using the generated copies supplied
(and signed!) by upstream seems no different than many other sorts
of data which get shipped in Debian source packages.
-- 
Jeremy Stanley




Re: [RFC] DPT Policy: Canonise recommendation against PyPi-provided upstream source tarballs

2021-06-25 Thread Jeremy Stanley
On 2021-06-25 16:42:42 -0400 (-0400), Nicholas D Steeves wrote:
> I feel like there is probably consensus against the use of PyPi-provided
> upstream source tarballs in preference for what will usually be a GitHub
> release tarball, so I made an MR to this effect (moderate recommendation
> rather than a "must" directive):
> 
>   
> https://salsa.debian.org/python-team/tools/python-modules/-/merge_requests/16
> 
> Comments, corrections, requests for additional information, and
> objections welcome :-)  I'm also curious if there isn't consensus by
> this point and if it requires further discussion

I work on a vast ecosystem of Python-based projects which consider
the sdist tarballs they upload to PyPI to be their official release
tarballs, because they encode information otherwise only available
in revision control metadata (version information, change history,
copyright holders). The proposal is somewhat akin to saying that a
tarball created via `make dist` is unsuitable for packaging.

"GitHub tarballs" (aside from striking me as a blatant endorsement
of a wholly non-free software platform) lack this metadata, being
only a copy of the file contents from source control while missing
other relevant context Git would normally provide.
-- 
Jeremy Stanley




Re: [RFC] DPT Policy: Canonise recommendation against PyPi-provided upstream source tarballs

2021-06-25 Thread Jeremy Stanley
On 2021-06-26 02:04:40 +0000 (+0000), Paul Wise wrote:
> On Fri, Jun 25, 2021 at 11:42 PM Jeremy Stanley wrote:
[...]
> > 2. Cryptographically signed tarballs of the file tree corresponding
> >to a tag in the Git repository, with versioning, revision
> >history, release notes and authorship extracted into files
> >included directly within the tarball.
> 
> I would like to see #2 split into two separate tarballs, one for the
> exact copy of the git tree and one containing the data about the other
> tarball. Then use dpkg-source v3 secondary tarballs to add the data
> about the git repo to the Debian source package.
[...]

You might like to see them split, but why is the exact copy of the
work tree the only legitimate way to export data from a Git
repository? Adding egg-info to the tarball creates a *Python Source
Distribution* which is a long-standing standard method for
distributing source code of Python software. Those files could even
be checked directly into the repository, so that the work tree was
itself also a valid sdist. The only reason the projects I work on
don't do that is because some of it would be redundant with the
metadata from the revision control system.

You could of course create your own split tarballs of the work tree
and the additional metadata files, but to what end? If upstream is
already delivering them together in a release tarball, how is making
your own beneficial when it still has to be done by the package
maintainer before assembling the source package? Users of Debian
don't benefit, because they still can't recreate your split tarball
if they wanted without also having a copy of the upstream Git
repository anyway. It just seems like make-work.

> Probably we should start systematically comparing upstream VCS repos
> with upstream sdists and reacting to the differences. So far, I've
> reacted by ignoring the sdists completely.

I highly recommend it. We explicitly test that our sdists don't omit
files from the Git worktree (sans .git* files like .gitignore and
.gitreview which make no sense outside the context of a Git
repository). On the other hand, I've found at least one case where a
copyright statement in a Debian package refers to an AUTHORS file
shipped as part of the sdist, but since the maintainer chose to
package it from Git instead and did not generate that file when
doing so, it's not included in the packaged version distributed in
Debian. (Not linking the bug report here as I don't want it to seem
like I'm picking on the maintainer.)
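
A minimal sketch of such a completeness check (names are illustrative,
not our actual test code): compare the Git-tracked file list, minus
.git* metadata files, against the sdist's member list.

```python
def missing_from_sdist(tracked_files, sdist_files):
    """Return Git-tracked files (excluding .git* metadata) absent from the sdist."""
    expected = {
        f for f in tracked_files
        if not f.split("/")[-1].startswith(".git")
    }
    return sorted(expected - set(sdist_files))


# Example: .gitignore is intentionally ignored, but a missing test is flagged
print(missing_from_sdist(
    ["setup.py", ".gitignore", "tests/test_x.py"],
    ["setup.py"],
))
# → ['tests/test_x.py']
```

In practice the two lists would come from `git ls-files` and from
reading the generated tarball's members with the tarfile module.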

Just to reiterate, as an upstream we don't consider the work trees
of our Git repos to be complete source distributions. They can be
used along with the versioning and history tracked as part of the
repository to generate a complete source distribution, and that's
what we officially release. Downstream distributions are encouraged
to either use our release tarballs or clones of our Git repositories
to recreate the same files we would release, but if you choose to do
neither of those you're likely to miss something.
-- 
Jeremy Stanley




Re: [RFC] DPT Policy: Canonise recommendation against PyPi-provided upstream source tarballs

2021-06-28 Thread Jeremy Stanley
On 2021-06-27 23:49:18 -0300 (-0300), Emmanuel Arias wrote:
[...]
> if we package from PyPI and the release doesn't contain the test
> suite, that results in packages without any tests, and that isn't good.
> 
> Also, I'm not sure, but the docs aren't on PyPI, are they?
[...]

This depends entirely on how upstream is creating their sdists. They
might certainly choose to omit tests or even documentation, but I
think that's becoming less popular now that wheels exist. It is
expected for a wheel to omit basically everything except the
application, licensing information and some metadata. This has
reduced the pressure on upstreams with massive suites of tests or
volumes of documentation to strip them out of sdists, making it more
likely they'll ship full source distributions that way.
-- 
Jeremy Stanley




Re: [RFC] DPT Policy: Canonise recommendation against PyPi-provided upstream source tarballs

2021-06-26 Thread Jeremy Stanley
On 2021-06-26 18:51:26 -0400 (-0400), Louis-Philippe Véronneau wrote:
[...]
> To me, the most important thing is that all packages must at least
> run the upstream testsuite when it exists (I'm planning on writing
> a policy proposal saying this after the freeze). If PyPi releases
> include them, I think it's fine (but they often don't).

When you do write that, you'll of course want to clarify what "the
upstream testsuite" really means too. Lots of projects have vast
testing which is simply not feasible to replicate within Debian for
a number of reasons. Running some battery of upstream tests makes
sense, but testsuites which require root access outside a chroot,
integration tests orchestrated across multiple machines, access to
unusual sorts of accelerator or network hardware, and so on can
easily comprise part of "the upstream testsuite."
-- 
Jeremy Stanley




Re: Need a Python 3.8 virtual environment

2021-03-03 Thread Jeremy Stanley
On 2021-03-03 10:45:46 + (+), Julien Palard wrote:
[...]
> I'm using a bash function [1] to easily recompile needed Pythons (I test
> some projects with Python 3.5 to 3.9), but it's not that hard without my
> bash function:
[...]

This is pretty much identical to how I tackle it too, though I build
from tags checked out of the cpython Git repository.

If you're looking for prebuilt debs, I know some folks use this,
from what I understand the packages should be compatible with
reasonably contemporary Debian releases:

https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa

-- 
Jeremy Stanley




Re: python3.5 + oldstable dilemma

2021-03-01 Thread Jeremy Stanley
On 2021-03-01 16:15:38 +1300 (+1300), Bailey, Josh wrote:
> I'm a maintainer for a python based SDN network controller, FAUCET. One of
> the platforms we've been supporting to-date is python3.5/oldstable.
> 
> Of course, now, python3.5 is EOL. To some degree we can keep building our
> package under python3.5, but now not all of our dependencies (like pyyaml)
> build or are even released for 3.5 anymore. That's an issue as there are
> security vulnerabilities that are now difficult to address.
> 
> Given that oldstable will be around until 2022, does that mean python3 as
> python3.5 will live on in oldstable until then? I can understand the case
> for not adding a newer python3 version, but also OTOH addressing security
> vulnerabilities over the LTS window will probably only get harder.
> 
> Any advice appreciated,

If you're going to use the python3 packaged in oldstable, then can't
you use the libraries (e.g. python3-yaml) packaged in oldstable as
well and take advantage of whatever security fixes are backported by
the package maintainers/security team?
-- 
Jeremy Stanley




Re: upstream python concerns, python3-full package for bullseye

2021-02-12 Thread Jeremy Stanley
On 2021-02-12 01:11:07 +0100 (+0100), Thomas Goirand wrote:
[...]
> Please do not add distutils, venv and lib2to3 in this python3-full
> metapackage. IMO that's falling into a design that isn't Debian. This
> would probably be best in a "python3-dev-full" or something similar, as
> from the distribution perspective, we see them as developer use only.
> Don't confuse our users so that they install something they don't need.
[...]

I'm failing to see the distinction here. Who are the direct "users"
of the current python3 package if not developers (i.e. those who
would explicitly `apt install python3` on their systems if it
weren't already present)? Any "python3 users" who aren't developers
are getting the python3 package as a dependency of some Python-based
software they're using, they're not going out and installing the
python3 package on their own.

The proposal already indicated that no other packages should declare
dependencies on python3-full anyway, so its only consumers will be
people manually installing it... that is, developers of Python-based
software, or people wanting to run software which isn't packaged in
Debian (which you seem to consider synonymous with being a software
developer for some reason, but I'll go along with it for the sake of
argument).

So it seems like you're saying the people who manually install
python3 will be confused by the presence of python3-full and install
it instead, and accidentally get "software developer tools" when
they do so. But who else is specifically choosing to install a
Python interpreter if not people writing and running non-packaged
Python source?
-- 
Jeremy Stanley




Re: upstream python concerns, python3-full package for bullseye

2021-02-16 Thread Jeremy Stanley
On 2021-02-16 18:24:20 +0000 (+0000), Stefano Rivera wrote:
> Hi Bastian (2021.02.16_09:17:18_+)
> > heck, even PIP is outdated in a way that you actually have to `pip
> > install -U pip` in order to use it properly due to the recent
> > manylinux change.
> 
> Hrm, we probably should be backporting support for manylinux2014. Care
> to file a bug against pip?

Unfortunately, ABI 3 support falls into the same category as well.
Lots of projects are now publishing abi3 wheels to cover multiple
interpreter versions instead of separate cp36/cp37/cp38... and newer
pip is needed to be able to deal with that too (at least if you
don't want to have to preinstall an entire build toolchain so you
can install sdists instead).
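
For context, a wheel filename encodes its compatibility tags, and an
abi3 wheel advertises itself in the ABI field. A rough parser (my own
simplification, ignoring the optional build-number segment) shows where
that tag lives:

```python
def wheel_tags(filename):
    """Extract (python, abi, platform) tags from a wheel filename.

    Simplified sketch: assumes no optional build-number segment.
    """
    stem = filename[:-len(".whl")]
    parts = stem.split("-")
    return tuple(parts[-3:])


print(wheel_tags("cryptography-41.0.0-cp37-abi3-manylinux2014_x86_64.whl"))
# → ('cp37', 'abi3', 'manylinux2014_x86_64')
```

An installer that doesn't understand the abi3 tag will simply skip such
wheels and fall back to the sdist, hence the need for a build toolchain.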
-- 
Jeremy Stanley




Re: RFS: python-click-default-group: Extension for Python click adding default subcommand to group

2021-09-29 Thread Jeremy Stanley
On 2021-09-29 16:32:05 -0400 (-0400), Sandro Tosi wrote:
> > One note: I'd consider watching for PyPI instead of GitHub.
> 
> there was actually a recent discussion on this list, discouraging
> from using PyPI in favor of github, since GH tarball usually
> contains docs, tests, and other files useful when building from
> source, usually not included in tarball released to users, ie pypi

And as was also pointed out in that discussion, this depends a lot
on the upstream maintainers and their workflow. Some upstreams are
careful to always include all files from the Git worktree within
their sdist tarballs, but may also include required files which
aren't contained in their Git worktree (such as version information,
copyright holders, or release notes extracted from Git tags,
revision history, Git "notes" refs, and so on)... in which cases you
either need their sdist or the full Git repository, since a "GitHub
tarball" of the worktree alone is insufficient to reproduce this
information.

Also since the advent of "wheels" a lot of maintainers are more
willing to make their sdists full archives of their projects (as was
the original intent for a "source distribution" package), since most
users installing directly from PyPI are going to pull a wheel
instead of an sdist when available, and wheels are expected to be
much more pared down anyway.

Like many things in the packaging realm, there is no
one-size-fits-all answer.
-- 
Jeremy Stanley




Re: Bug#997758: nose: FTBFS: There is a syntax error in your configuration file: invalid syntax (conf.py, line 220)

2021-10-24 Thread Jeremy Stanley
On 2021-10-24 16:24:31 +0300 (+0300), Dmitry Shachnev wrote:
[...]
> If anyone is still using nose (1.x), please port your packages to
> nose2, pure unittest or pytest. I am attaching a dd-list and I
> intend to do a MBF in a few weeks when I have more time.

Further alternatives include
https://packages.debian.org/python3-testrepository or
https://packages.debian.org/python3-stestr (both are
subunit-emitting test runners), which pretty much all of the
OpenStack projects moved to years ago as replacements for nose.
-- 
Jeremy Stanley




Re: mass bug filling for nose removal (was: Bug#997758: nose: FTBFS: There is a syntax error in your configuration file: invalid syntax (conf.py, line 220))

2021-11-11 Thread Jeremy Stanley
On 2021-11-11 11:04:10 +0100 (+0100), Thomas Goirand wrote:
> On 10/24/21 3:24 PM, Dmitry Shachnev wrote:
[...]
> > I intend to do a MBF in a few weeks when I have more time.
> 
> I wonder if we could do a mass bug filling for this.
[...]

He does say right there, "I intend to do a MBF," so I assumed that's
been the plan all along? Or are you asking why it hasn't been
started now that it's been a few weeks?
-- 
Jeremy Stanley




Re: Uncleaned egg-info directory giving lots of bugs about failing to build after successful build

2023-09-06 Thread Jeremy Stanley
On 2023-09-05 14:16:55 +0200 (+0200), Thomas Goirand wrote:
[...]
> Yes, we can have dh-python to do the work, but IMO, the only thing
> it should be doing, is rm -rf *.egg-info, and error out if the
> egg-info is within the orig tarball, as this should not happen,
> IMO.
[...]

See
https://salsa.debian.org/python-team/tools/dh-python/-/commit/31eff8f
which merged last week.
-- 
Jeremy Stanley




Re: pyyaml 6

2022-10-07 Thread Jeremy Stanley
On 2022-10-07 00:10:21 +0200 (+0200), Gordon Ball wrote:
[...]
> The only bug requesting it actually be upgraded is
> https://bugs.debian.org/1008262 (for openstack). I don't know if
> that has proved a hard blocker - I _think_ anything designed to
> work with 6.x should also work with 5.4.

I have a feeling 5.4 would also work for the latest versions of
OpenStack Horizon. The change which increased the PyYAML minimum for
it to >=6.0 didn't really say why it picked that minimum, other than
for the sake of consistency:

https://review.opendev.org/834053

This can be tested upstream if folks think that would be a helpful
data point, but it's not entirely trivial since I'll have to do some
extra work to override the requirement constraints (otherwise I
would have just done it before replying).
-- 
Jeremy Stanley




Re: pyyaml 6

2022-10-09 Thread Jeremy Stanley
On 2022-10-09 21:39:56 +0200 (+0200), Gordon Ball wrote:
[...]
> gnocchi # confirm, in gnocchi/gendoc

Looks like it was fixed in gnocchi 4.4.2 earlier this year (unstable
still has 4.4.0).

> jeepyb # confirm, in cmd/notify_impact

I'm honestly surprised this is packaged for Debian, since it's just
a pile of random scripts and hacks we use in OpenDev for things like
Gerrit hooks, and that notify_impact tool has been unused for a long
time. I could easily fix the yaml.load() call upstream, but I'm
tempted to just go on a cleaning spree and remove a bunch of unused
stuff like that from the repository.
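
For anyone hitting the same class of breakage: PyYAML 6 dropped the
implicit default Loader for yaml.load(), so the usual fix is either
safe_load() or an explicit Loader argument. A generic sketch (not
jeepyb's actual code; the document content is made up):

```python
import yaml

text = "project: jeepyb\nenabled: true\n"

# Old style, broken under PyYAML 6:
#     data = yaml.load(text)  # TypeError: load() missing required Loader
data = yaml.safe_load(text)                      # preferred for untrusted input
same = yaml.load(text, Loader=yaml.SafeLoader)   # equivalent explicit form
print(data)
# → {'project': 'jeepyb', 'enabled': True}
```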

> refstack-client # confirm, in refstack_client

Fixed in 0.1.0 from February (unstable carries an unreleased
snapshot from last year).
-- 
Jeremy Stanley




Re: #!/usr/bin/python3 vs virtualenv

2023-03-03 Thread Jeremy Stanley
On 2023-03-03 16:22:11 -0500 (-0500), Jorge Moraleda wrote:
> I did not know about `sudo pip install --break-system-packages
> foo` or `sudo rm /usr/lib/python3.11/EXTERNALLY-MANAGED` (Frankly
> I only knew about this issue what I have read on this discussion).
[...]

They come from a reading of the pip documentation and associated
PEP-668 specification respectively. I do hope power users read
documentation before they choose to break their systems.

Be aware that the second approach I mentioned is removing a file
which gets installed by the libpython3.11-stdlib package, so will
end up getting replaced each time you upgrade that package as it's
not a conffile. It's a reasonable approach for things like container
base images where subsequent image build steps use pip to install
packages into the container, but it would be annoying for a "normal"
system where you upgrade packages without building entirely new
images.
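
The marker in question is just a file in the interpreter's stdlib
directory, per PEP 668, so it's easy to check whether the running
interpreter is marked (helper name is mine):

```python
import pathlib
import sysconfig


def externally_managed():
    """True if this interpreter carries the PEP 668 EXTERNALLY-MANAGED marker."""
    stdlib = pathlib.Path(sysconfig.get_path("stdlib"))
    return (stdlib / "EXTERNALLY-MANAGED").exists()


print(externally_managed())
```

On a stock Debian bookworm system this prints True; on a self-compiled
interpreter or inside a venv's base it will typically print False.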

The pip install command-line option is more of a user-facing escape
hatch, since it's clearly worded to let you know that it can quite
easily render existing Python applications inoperable if you choose
to install things into user or system site directories with pip.
-- 
Jeremy Stanley




Re: #!/usr/bin/python3 vs virtualenv

2023-03-03 Thread Jeremy Stanley
On 2023-03-03 15:29:09 -0500 (-0500), Jorge Moraleda wrote:
> Would it be hard to support both philosophies?
> 
> I would like to suggest a couple of configuration options that by default
> disallow using pip outside a virtual environment but that users with root
> privilege can modify by editing a config file (probably somewhere in /etc)
> and that would enable using pip outside a virtual environment, both as root
> and as regular user respectively.
> 
> I feel this would satisfy the needs of regular users to be protected
> against accidentally breaking their system while enabling power users to
> have full control of their computer and enjoy the simplicity of a single
> environment. Clearly this discussion suggests that debian has both types of
> users and we should support them both if we can.
[...]

"Power users" who like to break their systems can simply `sudo pip
install --break-system-packages foo` or even just `sudo rm
/usr/lib/python3.11/EXTERNALLY-MANAGED` and then do whatever they
want anyway. It doesn't seem to me like there's much need for a
config option that does that for you, and adding one would imply
that Debian will help you fix things once you've done it. This
feature is simply a guard rail for users who otherwise wouldn't know
where the edge of that cliff is located.

There are already solutions for your power users, but as is often
said in such situations: If it breaks you get to keep the pieces.
Have fun!
-- 
Jeremy Stanley




Re: #!/usr/bin/python3 vs virtualenv

2023-03-03 Thread Jeremy Stanley
On 2023-03-03 18:44:19 +0000 (+0000), Danial Behzadi دانیال بهزادی wrote:
> You just want to install sphinx via pip in the virtual environment
> too. Each venv should be atomic and isolated, which means not
> dependent on system packages.

However a venv can be made to use system packages if you use the
--system-site-packages option when creating it.
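A minimal sketch combining the two approaches (the ~/venvs/docs path
is only an example):

```shell
# Create a venv that can also import Debian-packaged Python libraries:
python3 -m venv --system-site-packages ~/venvs/docs

# The choice is recorded in the venv's pyvenv.cfg:
grep include-system-site-packages ~/venvs/docs/pyvenv.cfg

# Anything Debian doesn't package can still come from PyPI:
~/venvs/docs/bin/pip install sphinx
```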
-- 
Jeremy Stanley




Re: can pip be made using local Debian packages for any dependencies

2023-02-16 Thread Jeremy Stanley
On 2023-02-16 01:12:49 +0000 (+0000), Ian Norton wrote:
> I agree that is "easiest" but what I was after was the ability to
> restrict myself to the curated and signed packages from debian,
> pypi is just as bad as old CPAN when it comes to packages
> disappearing or being broken or depending on totally random
> versions
[...]

I think you missed my point, which was to explicitly create a venv
and install your project there instead of relying on pip's --user
default (which seemed to be resulting in errors for you). If you
create the venv with --system-site-packages enabled then it will
still use any Debian-packaged Python libraries you've installed.
-- 
Jeremy Stanley




Re: How should upstream document/communicate none-Python dependencies?

2023-02-02 Thread Jeremy Stanley
In the OpenStack ecosystem we mostly solved this around 8 years ago
with https://pypi.org/project/bindep which is effectively a manifest
of distribution package names and a grammar for indicating
differences in package naming or requirements between different
distributions and versions thereof. It's used quite heavily by
hundreds of projects in our ecosystem, and has been picked up by a
lot of projects outside OpenStack as well. Where it mostly shines is
when you want to provide some means of confirming you have the
correct packages in your development system to be able to
install/run a project and its tests, but we also rely on it heavily
in automated CI jobs to determine what non-Python packages should be
installed for tests to execute correctly.
-- 
Jeremy Stanley



Re: How should upstream document/communicate none-Python dependencies?

2023-02-02 Thread Jeremy Stanley
On 2023-02-02 14:16:11 +0000 (+0000), c.bu...@posteo.jp wrote:
[...]
> The upstream maintainers have to create a bindep.txt file.

Yes, it would look something like this:

https://opendev.org/openstack/nova/src/branch/master/bindep.txt

And then projects relying on it document using the bindep tool like
this:

https://docs.openstack.org/nova/latest/contributor/development-environment.html#linux-systems

Another goal of that tool is for it to use a file format which is
reasonably human-readable, so that someone can work out dependencies
fairly quickly from it even if they can't or don't want to install
another tool to parse it.
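For a sense of what that human-readable format looks like, a minimal
hand-written bindep.txt might contain something like this (a sketch
based on the grammar described above, not an excerpt from nova):

```
# Default profile: packages needed to build/install the project
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]

# Only needed when running the test suite
graphviz [test]
```

Running `bindep` in the project directory then reports any missing
packages for the default profile on the current distro, while
`bindep test` adds the test profile on top.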

> And the distro maintainers then need to use bindep to parse that file
> and "translate" it into the package names of their distro.
[...]

Yes, I don't know whether the OpenStack packaging team in Debian
relies on it at all, but the intent is for it to be usable by
prospective package maintainers for projects, at least as a means of
double-checking against their declared dependencies. Its grammar
allows projects to distinguish between arbitrary kinds of
dependencies through the use of free-form "profile" names and many
projects have settled on some common ones (particularly "test") to
separate hard dependencies from optional or testing-related
dependencies, but these concepts vary widely enough between
different distros that I wouldn't expect a package maintainer to
take the filtered bindep results directly without some manual
vetting.
-- 
Jeremy Stanley




Re: can pip be made using local Debian packages for any dependencies

2023-02-15 Thread Jeremy Stanley
As someone who does Python software development on Debian constantly
for their $dayjob, my best advice is to just install things from
PyPI into and run them from venvs/virtualenvs. The default "--user"
install mode pip offers is fragile and leaves you with potential
conflicts anyway if you need different versions of dependencies for
different things.

To your original question, if you really want to use some
Debian-packaged libraries mixed with things installed from source or
from PyPI, make your venv with the --system-site-packages option.
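As an illustration of why the venv route is more robust than --user
(the package name and version pins here are purely illustrative):

```shell
# Two venvs can hold mutually incompatible dependency sets, which a
# single shared --user site under ~/.local cannot:
python3 -m venv ~/venvs/app1
python3 -m venv ~/venvs/app2
~/venvs/app1/bin/pip install 'requests==2.28.2'
~/venvs/app2/bin/pip install 'requests==2.31.0'
```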
-- 
Jeremy Stanley

