Re: [Python-modules-team] Bug#954381: marked as done (python3-kubernetes: New upstream version available)

2020-11-21 Thread Jeremy Stanley
On 2020-11-22 00:01:03 +0100 (+0100), Thomas Goirand wrote:
> On 11/21/20 3:36 AM, Sandro Tosi wrote:
> >> * Use git to generate upstream tarball, as the PyPI module doesn't include
> >>   the test folder. Using the gen-orig-xz in debian/rules, as using the
> >>   repack function of debian/watch doesn't make sense (why downloading a
> >>   tarball that would be later on discarded? I'm open to a better solution
> >>   which would be uscan compatible though...). Switch d/watch to the github
> >>   tag therefore.
> > 
> > you can track the github project instead of pypi (man uscan has the
> > details); this is what I'm doing lately, as most of the time PyPI
> > releases don't have all the files we need (tests, or test data, or
> > documentation, or a combination of those)
> 
> Hi.
> 
> Thanks, I know that. However, that's not my problem. The issue is that
> uscan --download will download the tarball from github, and I'd like to
> replace that by what I'm doing in debian/rules, which is using git and
> git submodule, to fetch things using git, and create a tarball. Sure, I
> could use a repack script in debian/watch, but then uscan will continue
> to first download the archive from github, and *then* only, I can
> discard what's been downloaded, and fetch stuff from github with git.
> 
> Is there a solution here, so that uscan uses a repack script directly
> without attempting to download first?

Maybe I'm missing something obvious, but can't you just use mode=git?
(See the uscan manpage for details on this feature.) I assumed this
is what was being suggested.
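
For reference, a minimal debian/watch sketch using uscan's git mode
(URL and version pattern are illustrative, modeled on the examples in
uscan(1), not taken from the actual package):

```
version=4
opts="mode=git" https://github.com/kubernetes-client/python.git \
  refs/tags/v([\d.]+) debian
```

With something like this in place, `uscan --download` clones the tag and
generates the orig tarball itself, so no GitHub-generated archive has to
be fetched and then discarded.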
-- 
Jeremy Stanley


signature.asc
Description: PGP signature


Re: How to watch pypi.org

2020-11-01 Thread Jeremy Stanley
On 2020-11-01 20:23:20 +0100 (+0100), Thomas Goirand wrote:
[...]
> However, if I am to put more efforts on stuff like that, my priority
> would be first on getting the reno release notes published in the Debian
> package. I've been thinking about this for a long time, and haven't
> figured out yet what would be the best way, with a reasonable workflow.
> 
> From the Debian perspective, I'd have to:
> - generate the release notes from the last version in Debian Stable, up
> to what's in Sid. For example, I would include all the OpenStack release
> notes for Stein, Train, Ussuri and Victoria in all packages uploaded for
> Debian Bullseye, as this would concern anyone upgrading from Buster.
> - make it so that release notes would be generated from Git, and maybe
> stored in a debian/release-notes folder, so that it wouldn't generate a
> diff with the original upstream tag.
> 
> The drawback would be that, on each upload of a new Debian revision, the
> debian.tar.xz will contain the release notes, which could be of
> significant size (especially if they include static objects like CSS,
> JS, and all what makes a theme).
> 
> If you have any other suggestion concerning how to handle these release
> notes, please let me know.

It's likely I'm missing some subtle reason for the complexities you
outline above, but if you install python3-reno and then run `reno
report` in the upstream Git repository for any project with a
releasenotes tree (or pass the path to said Git repository in the
command line) it will generate a reStructuredText compilation of the
release notes contained therein. Very lightweight, no need for extra
files or anything. I'd think you could just dump that output into a
NEWS file or similar at binary package build time. This is basically
the same thing reno's Sphinx extension does under the covers.

Check out `reno report --help` for a number of flags you might want
to pass to make the results more readable, like omitting the
source filename comments, skipping notes earlier than a certain
version, collapsing pre-release notes, and so on. A quick test of
Nova's release notes indicates that even if you don't truncate them
and include everything back to when the project started using
reno 5 years ago, that NEWS file would only increase the compressed
size of the nova-doc package by 1%.
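
For instance, one hedged sketch of generating such a NEWS file at
binary package build time (the output path is illustrative; check
`reno report --help` for the exact flags available in the packaged
version):

```shell
# Generate a consolidated reStructuredText release-notes report from a
# reno-using upstream Git tree and ship it as a NEWS-style file.
reno report /path/to/upstream/git > debian/NEWS.rst
```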
-- 
Jeremy Stanley




Re: How to watch pypi.org

2020-10-31 Thread Jeremy Stanley
On 2020-10-31 19:48:29 +0500 (+0500), Andrey Rahmatullin wrote:
> On Sat, Oct 31, 2020 at 12:03:50PM +0100, Thomas Goirand wrote:
> > PyPI is often thought of as a Python module source repository. It
> > is *NOT*. It is a repository for binaries to be consumed by pip.
> 
> Oooh, that's a very interesting thought I never considered.

It's not entirely accurate, however. These days there are two
remaining package formats commonly distributed through PyPI: sdists
and wheels. The sdist format is intended to be a
platform-independent "source distribution" (hence its name), perhaps
more analogous to Debian's own source package formats, and the
traditional setup.py in many sdists is akin to debian/rules and the
$PACKAGE.egg-info/ tree similar to other sorts of metadata you would
expect under the debian/ directory. Python package installation
tools like pip call libraries to do things like compile and link
included C extensions from the unpacked sdist before installing the
results into a usable location in the system (or more recently,
putting the results into a wheel package, then caching that and
installing its contents into the system).

When it comes to "binaries" this is definitely the domain of wheels.
A wheel is (usually, with the exception of toolchains like flit)
built from an sdist and may be platform-dependent, especially if it
contains compiled extensions. The wheel is much more akin to
Debian's binary package format.
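
As a concrete illustration of the distinction (the package name is just
an example), pip can be asked for one format or the other explicitly:

```shell
# Fetch only the sdist (a .tar.gz "source distribution"):
pip download --no-binary :all: --no-deps requests

# Fetch only the wheel (a .whl binary-style package):
pip download --only-binary :all: --no-deps requests
```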

The main operating system distribution package maintainer argument
against relying on sdists is that they may omit files from the
upstream revision control system which that upstream did not want
included in their official source distributions, or may include
extra generated files which upstream did want included but don't
exist (or at least don't exist as files in that form) within the
upstream revision control. This is perhaps not entirely dissimilar
from C/autotools based projects having a `make dist` target which
they use to prepare their source distribution tarballs. Whether it
actually represents a problem for downstream packaging likely varies
a bit from project to project.
-- 
Jeremy Stanley




Re: How to watch pypi.org

2020-10-31 Thread Jeremy Stanley
On 2020-10-31 12:03:50 +0100 (+0100), Thomas Goirand wrote:
[...]
> On 10/31/20 3:07 AM, Jeremy Stanley wrote:
> > I have to agree, though in the upstream projects with which I'm
> > involved, those generated files are basically a lossy re-encoding of
> > metadata from the Git repositories themselves: AUTHORS file
> > generated from committer headers, ChangeLog files from commit
> > subjects, version information from tag names, and so on. Some of
> > this information may be referenced from copyright licenses, so it's
> > important in those cases for package maintainers to generate it when
> > making their source packages if not using the sdist tarballs
> > published by the project.
> 
> Unfortunately, the FTP masters do not agree with you. I've been told
> that the OpenStack changelog is way too big, and it's preferable to
> not have it in the binary packages.

PBR started creating much smaller changelogs years ago, after you
asked ftpmaster. I get that you see no value in changelog files, but
it seems like it would be worth revisiting.

> Also, there's nothing in the Apache license that mandates having
> an AUTHORS list as per what PBR builds. If we are to care that
> much in OpenStack, then the license must be changed.
[...]

I agree, it's not commonplace in OpenStack other than this possible
exception:

https://opendev.org/openstack/python-openstackclient/src/branch/master/doc/source/cli/man/openstack.rst#user-content-copyright

You do tend to find it in other Python projects however, for
example:

https://github.com/pygments/pygments/blob/master/LICENSE#L1

My point was that, in general, some Python projects do autogenerate
an AUTHORS file from commit metadata at dist time rather than
storing it directly in a file within their Git repositories, and
some projects (including Python projects) refer to AUTHORS from
copyright statements, so it's a good idea to build/keep it.
-- 
Jeremy Stanley




Re: How to watch pypi.org

2020-10-30 Thread Jeremy Stanley
On 2020-10-31 01:33:36 +0000 (+0000), Paul Wise wrote:
> On Fri, Oct 30, 2020 at 2:19 PM Fioddor Superconcentrado wrote:
> > As I said I'm very new to this and all (python) packages I'm
> > using lately use the usual python tools (pipy, setup.py, etc)
> > and my first approach has been to stick as close as possible to
> > the upstream procedures. But I might very likely be taking a
> > wrong decision. What are the reasons to go for git instead of
> > pypi? I see that it is 'more upstream' but it seems that
> > everyone else is pointing to pypi as a distro-agnostic solution.
> 
> As Andrey says, missing files is one issue, another is that tarballs
> often contain extra generated files that should be built from source,
> but if you use the tarball then they quite likely will not be built
> from source.

I have to agree, though in the upstream projects with which I'm
involved, those generated files are basically a lossy re-encoding of
metadata from the Git repositories themselves: AUTHORS file
generated from committer headers, ChangeLog files from commit
subjects, version information from tag names, and so on. Some of
this information may be referenced from copyright licenses, so it's
important in those cases for package maintainers to generate it when
making their source packages if not using the sdist tarballs
published by the project.
-- 
Jeremy Stanley




Re: The python command in Debian

2020-07-09 Thread Jeremy Stanley
On 2020-07-09 15:26:47 +0200 (+0200), Matthias Klose wrote:
> As written in [1], bullseye will not see unversioned python
> packages and the unversioned python command being built from the
> python-defaults package.
> 
> It seems to be a little bit more controversial what should happen
> to the python command in the long term.  Some people argue that
> python should never point to python3, because it's incompatible,
> however Debian will have difficulty explaining that decision to
> users who start with Python3 and are not aware of the 2 to 3
> transition.  So yes, in the long term, Debian should have a python
> command again.
[...]

I don't follow your logic there. Why is it hard to explain? Python
was a programming language, and its last interpreter (2.7) is no
longer developed or supported. Python3 (formerly Python3000) is also
a programming language, similar to Python and developed by the same
community, but not directly compatible with Python. Debian provides
an interpreter for Python3, but has (or will have by then) ceased
distributing a Python interpreter.
-- 
Jeremy Stanley




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-30 Thread Jeremy Stanley
On 2020-06-30 09:15:47 +0200 (+0200), Thomas Goirand wrote:
[...]
> If there's some nasty NPM job behind, then I probably will just
> skip the dashboard, and expect deployment to get the dashboard not
> from packages. What is included in the dashboard? Things like
> https://zuul.openstack.org/ ?

That's a white-labeled tenant of https://zuul.opendev.org/ but yes,
basically an interface for querying the REST API for in-progress
activity, configuration errors, build results, log browsing, config
exploration and so on. The result URLs it posts on tested changes
and pull/merge requests are also normally to a build result detail
page provided by the dashboard, though you should be able to
configure it to link directly to the job logs instead.
-- 
Jeremy Stanley




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-29 Thread Jeremy Stanley
On 2020-06-29 23:55:49 +0200 (+0200), Thomas Goirand wrote:
[...]
> nodepool from OpenStack,

Well, *formerly* from OpenStack, these days Nodepool is a component
of the Zuul project gating system, which is developed by an
independent project/community (still represented by the OSF):

https://zuul-ci.org/
https://opendev.org/zuul/nodepool/

You could probably run a Nodepool launcher daemon stand-alone
(without a Zuul scheduler), but it's going to expect to be able to
service node requests queued in a running Apache Zookeeper instance
and usually the easiest way to generate those is with Zuul's
scheduler. You might be better off just trying to run Nodepool along
with Zuul, maybe even set up a GitLab connection to Salsa:

https://zuul-ci.org/docs/zuul/reference/drivers/gitlab.html

> and use instances donated by generous cloud providers (that's not
> hard to find, really, I'm convinced that all the providers that
> are donating to the OpenStack are likely to also donate compute
> time to Debian).
[...]

They probably would; I've approached some of them in the past when
it sounded like the Salsa admins were willing to entertain other
backend storage options than GCS for GitLab CI/CD artifacts. One of
those resource donors (VEXXHOST) also has a Managed Zuul offering of
their own, which they might be willing to hook you up with instead
if you decide packaging all of Zuul is daunting (it looks like both
you and hashar from WMF started work on that at various times in
https://bugs.debian.org/705844 but more recently there are some
JavaScript deps for its Web dashboard which could get gnarly to
unwind in a Debian context).
-- 
Jeremy Stanley




Re: Maintaining all of the testing-cabal packages under the OpenStack team

2020-06-28 Thread Jeremy Stanley
On 2020-06-28 16:48:02 +0200 (+0200), Thomas Goirand wrote:
[...]
> I don't want this to happen again. So I am hereby asking to take
> over the maintenance of these packages which aren't in the
> OpenStack team. They will be updated regularly, each 6 months,
> with the rest of OpenStack, following the upstream
> global-requirement pace. I'm confident it's going to work well for
> me and the OpenStack team, but as well for the rest of Debian.
> 
> Is anyone from the team opposed to this? If so, please explain
> the drawbacks if the OpenStack team takes over.

While I don't agree with Thomas's harsh tone in the bits of the
message I snipped (please Thomas, I'm sure everyone's trying their
best, there's no need to attack a fellow contributor personally over
technical issues), I did want to point out that the proposal makes
some sense. The Testing Cabal folk were heavily involved in
OpenStack and influential in shaping its quality assurance efforts;
so OpenStack relies much more heavily on these libraries than other
ecosystems of similar size, and OpenStack community members, present
and past, continue to collaborate upstream on their development.
-- 
Jeremy Stanley




Re: Example package using python3-pbr and Sphinx documentation with manual page

2020-05-04 Thread Jeremy Stanley
On 2020-05-04 19:07:00 +0000 (+0000), Jeremy Stanley wrote:
> On 2020-05-04 19:13:38 +0200 (+0200), Florian Weimer wrote:
> > I'm trying to package pwclient, which depends on python3-pbr and has a
> > rudimentary manual page generated from Sphinx documentation.  Is there
> > a similar example package which I can look at, to see how to trigger
> > the manual page generation?
> > 
> > I currently get this:
> > 
> > dh_sphinxdoc: warning: Sphinx documentation not found
> [...]
> 
> Since PBR originated in OpenStack, the python3-openstackclient
> package may serve as a good example. It does a dh_sphinxdoc override
> for manpage building here:
> 
>  https://salsa.debian.org/openstack-team/clients/python-openstackclient/-/blob/88bdecc66a30b4e3d5aec9cdae4cc529c33690e6/debian/rules#L27
> 
> Then there's a similar dh_installman override a few lines later.

Oh, and since you mentioned the conf.py contents, here's how it's
being done in the upstream source for that repo:

https://opendev.org/openstack/python-openstackclient/src/commit/fdefe5558b7237757d788ee000382f913772bffc/doc/source/conf.py#L225-L233

-- 
Jeremy Stanley




Re: Example package using python3-pbr and Sphinx documentation with manual page

2020-05-04 Thread Jeremy Stanley
On 2020-05-04 19:13:38 +0200 (+0200), Florian Weimer wrote:
> I'm trying to package pwclient, which depends on python3-pbr and has a
> rudimentary manual page generated from Sphinx documentation.  Is there
> a similar example package which I can look at, to see how to trigger
> the manual page generation?
> 
> I currently get this:
> 
> dh_sphinxdoc: warning: Sphinx documentation not found
[...]

Since PBR originated in OpenStack, the python3-openstackclient
package may serve as a good example. It does a dh_sphinxdoc override
for manpage building here:

https://salsa.debian.org/openstack-team/clients/python-openstackclient/-/blob/88bdecc66a30b4e3d5aec9cdae4cc529c33690e6/debian/rules#L27

Then there's a similar dh_installman override a few lines later.
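
For a rough idea of the shape of such overrides, here is a hypothetical
debian/rules excerpt (targets and paths are illustrative, not copied
from the python-openstackclient package):

```make
# Build the manpage with Sphinx's man builder, then let dh_sphinxdoc
# and dh_installman pick up the results.
override_dh_sphinxdoc:
	python3 -m sphinx -b man doc/source build/man
	dh_sphinxdoc

override_dh_installman:
	dh_installman build/man/*.1
```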
-- 
Jeremy Stanley




Re: Automatically removing "badges" pictures from README.rst files

2020-04-09 Thread Jeremy Stanley
On 2020-04-10 00:25:41 +0200 (+0200), Thomas Goirand wrote:
> On 4/9/20 10:05 PM, PICCA Frederic-Emmanuel wrote:
> > what about lintian brush ?
> 
> What's that?

This:

automatically fix lintian problems

This package contains a set of scripts that can automatically
fix more than 80 common lintian issues in Debian packages.

It comes with a wrapper script that invokes the scripts, updates
the changelog (if desired) and commits each change to version
control.

(from https://packages.debian.org/lintian-brush )
-- 
Jeremy Stanley




Re: Build Python 2.7 version >= 2.7.15 on Debian 9

2020-04-03 Thread Jeremy Stanley
On 2020-04-03 23:21:25 +0300 (+0300), ellis.mag...@pp.inet.fi wrote:
[...]
> What is the correct way to build a clean version of python2.7 on
> Debian9 that will be compatible with already packaged python2.7
> modules?

The Python modules with C extensions packaged in Debian are built
against the Python development library headers for the version of
the Python interpreter which is packaged in Debian. If you replace
the interpreter with a different version I expect you'll at least
have to relink, if not entirely recompile, those extensions against
newer headers. I don't personally know a way to go about that short
of rebuilding those additional modules from source. You might be
better off switching to a newer version of Debian which provides a
newer Python 2.7 release and has the other packages you need already
built against it, or using some other Python package management
solution like conda or virtualenv.
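
If you do go the rebuild route, it usually looks something like this
(package name illustrative; requires deb-src entries in sources.list
and the package's build dependencies):

```shell
# Rebuild a module with C extensions against the replacement
# interpreter's headers.
apt-get source python-lxml
sudo apt-get build-dep python-lxml
cd lxml-*/
dpkg-buildpackage -us -uc -b
```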
-- 
Jeremy Stanley




Re: Where can I find packages that need a maintainer?

2020-02-13 Thread Jeremy Stanley
There's also this wonderful utility:

https://packages.debian.org/sid/how-can-i-help

You can use it to easily find packages installed on your system
which are orphaned or in other similar states of help wanted, which
at least helps focus your efforts on packages you're more likely
using and relying on, rather than wading through a large list of
packages which are mostly orphaned because nobody's using them
anyway.
-- 
Jeremy Stanley




Re: python-urllib3 1.25.6 uploaded to experimental (closes CVE-2019-11236) but fails build tests

2019-10-29 Thread Jeremy Stanley
On 2019-10-29 13:29:02 +0100 (+0100), Michael Kesper wrote:
> On 27.10.19 17:27, Drew Parsons wrote:
> > On 2019-10-27 23:13, Daniele Tricoli wrote:
[...]
> > > Not an expert here, but I think fallback is not done on
> > > purpose due downgrade attacks:
> > > https://en.wikipedia.org/wiki/Downgrade_attack
> > 
> > I see. Still an odd kind of protection though.  The attacker can
> > just downgrade themselves.
> 
> No. A sensible server will not talk to you if your requested SSL
> version is too low. pub.orcid.org seems to use absolutely outdated
> and insecure software versions.

Well, downgrade attacks aren't usually a two-party scenario. The
risk with a downgrade attack is when a victim client attempts
communication with some server, and a third-party attacker tampers
with the communication between the client and server sufficiently to
cause protocol negotiation to fall back to an old enough version
that the attacker can then exploit known flaws to decrypt and/or
proxy ("man in the middle") that communication. Having both the
client and the server be unwilling to use susceptible older protocol
versions helps thwart this attack vector.
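
That client-side refusal can be expressed directly with Python's
standard library; a minimal sketch (stdlib only):

```python
import ssl

# A client-side context that refuses to negotiate anything older than
# TLS 1.2: even if an attacker tampers with the handshake, the
# connection fails rather than falling back to a version with known
# exploitable flaws.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)  # -> TLSv1_2
```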
-- 
Jeremy Stanley




Re: Backport of Python 3.6 for Debian Stretch?

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 10:06:47 +0700 (+0700), Nguyễn Hồng Quân wrote:
[...]
> I spent much time to research on it, so that I can tell what
> difference between 3.6.1 and 3.6.4 packaging.
[...]

http://metadata.ftp-master.debian.org/changelogs/main/p/python3.6/python3.6_3.6.5-3_changelog

https://manpages.debian.org/debdiff

http://snapshot.debian.org/package/python3.6/

[also, please don't Cc me, I do already read the mailing list]
-- 
Jeremy Stanley




Re: Backport of Python 3.6 for Debian Stretch?

2018-04-24 Thread Jeremy Stanley
On 2018-04-24 23:42:30 +0700 (+0700), Nguyễn Hồng Quân wrote:
[...]
> Then why Debian project invent *.deb file, not just pack binary as
> tar file and let user to untar it? I favor building deb file,
> rather than copying "make altinstall" result, because of the same
> reason.

I completely understand the reason for using packages, but Debian is
a volunteer project which does not exist solely to solve your
problems so you can either do something you already know how to do
(build Python 3.6 from source) and move on, or learn to make the
packages you want (including fixing any backporting issues you find
when doing so). Asking others to tell you how to do that is not the
sort of self-directed research expected of participants in a
volunteer project; it's at best begging and at worst disrespectful
of those who have invested the time to learn the things you're not
willing to.

https://backports.debian.org/Contribute/

-- 
Jeremy Stanley




Re: Backport of Python 3.6 for Debian Stretch?

2018-04-24 Thread Jeremy Stanley
On 2018-04-24 22:07:03 +0700 (+0700), Nguyễn Hồng Quân wrote:
[...]
> I don't need to "not disturb" system.
> If I have to use conda or pyenv, I would rather build Python3.6 from source
> tarball, not to bring more overhead (conda body, pyenv body), and
> "Python3.6 from source" still not disturb my system, because it is
> installed to "/usr/local".
> 
> But I don't want any method that requires to build Python from source
> (tarball, pythonz, conda or alike), because I really need *pre-built
> binaries*.
[...]

Unless I'm missing something, there's no substantial difference
between building a package of Python3.6 and copying it to the
system, or performing a `make altinstall` and copying the resulting
files (via rsync, tar and scp, whatever) to the target system. If
you're okay with the idea of building packages remotely, then why
not build from source remotely?
-- 
Jeremy Stanley




Re: a few quick questions on gbp pq workflow

2017-08-06 Thread Jeremy Stanley
On 2017-08-06 20:00:59 +0100 (+0100), Ghislain Vaillant wrote:
[...]
> You'd still have to clean the pre-built files, since they would be
> overwritten by the build system and therefore dpkg-buildpackage
> would complain if you run the build twice.
> 
> So, you might as well just exclude them from the source straight
> away, no?

Repacking an upstream tarball just to avoid needing to tell
dh_install not to copy files from a particular path into the binary
package seems the wrong way around to me, but maybe I'm missing
something which makes that particularly complicated? This comes up
on debian-mentors all the time, and the general advice is to avoid
repacking tarballs unless there's a policy violation or you can get
substantial (like in the >50% range) reduction in size on especially
huge upstream tarballs. Otherwise the ability to compare the
upstream tarball from the source package to upstream release
announcements/checksums/signatures is a pretty large benefit you're
robbing from downstream recipients who might wish to take advantage
of it.
-- 
Jeremy Stanley




Re: a few quick questions on gbp pq workflow

2017-08-06 Thread Jeremy Stanley
On 2017-08-06 14:11:13 -0400 (-0400), Ondrej Novy wrote:
> It's not always possible/simple/nice to use sdist, because it contains
> prebuilt docs. And I don't like to do a +dfsg rebuild just for removing docs.
> Sometimes sdists don't contain tests.
> 
> So my preference is:
> 
> - use sdist if it's possible (has tests, doesn't have prebuilds, ...)
> - use git tag tarballs (https://github.com///tags)
> 
> I have already migrated a few packages OS->DPMT so far.

Why would you need to repack a tarball just because it contains
prebuilt docs (non-DFSG-free licensed documentation aside)? I'm all
for rebuilding those at deb build time just to be sure you have the
right deps packaged too, but if the ones in the tarball are built
from DFSG-compliant upstream source, included in the archive for
that matter, then leaving the tarball pristine shouldn't be a policy
violation, right? That's like repacking a tarball for an
autotools-using project because upstream is shipping a configure
script built from an included configure.in file.

Pretty sure OpenStack at least would consider any content which
requires Debian package maintainers to alter tarballs prior to
including them in the archive as a fairly serious bug in its software.
-- 
Jeremy Stanley




Re: a few quick questions on gbp pq workflow

2017-08-06 Thread Jeremy Stanley
On 2017-08-06 10:44:36 -0400 (-0400), Allison Randal wrote:
> The OpenStack packaging team has been sprinting at DebCamp, and
> we're finally ready to move all general Python dependencies for
> OpenStack over to DPMT. (We'll keep maintaining them, just within
> DPMT using the DPMT workflow.)
> 
> After chatting with tumbleweed, the current suggestion is that we
> should migrate the packages straight into gbp pq instead of making
> an intermediate stop with git-dpm.
[...]

More a personal curiosity on my part (I'm now a little disappointed
that I didn't make time to attend), but are you planning to leverage
pristine tarballs as part of this workflow shift so you can take
advantage of the version details set in the sdist metadata and the
detached OpenPGP signatures provided upstream? Or are you sticking
with operating on a local fork of upstream Git repositories (and
generating intermediate sdists on the fly or supplying version data
directly from the environment via debian/rules)?

I'm eager to see what upstream release management features you're
taking advantage of so we can better know which of those efforts are
valuable to distro package maintainers.
-- 
Jeremy Stanley




Re: Ad-hoc Debian Python BoF at PyCon US 2017

2017-06-20 Thread Jeremy Stanley
On 2017-06-20 16:40:26 +0200 (+0200), Matthias Klose wrote:
[...]
> another one: many openstack packages.
[...]

Spot checking the source packages in the archive currently, it looks
like Thomas already has most of these done.

By way of background there, a coordinated effort has been underway
for the last several years to get all OpenStack software working
with recent Python 3 interpreters. The slowest part of that work
involved reaching out to the upstreams of (hundreds of) dependencies
not maintained within the OpenStack community and either helping
them get working Py3K support, adopting defunct libraries so
OpenStack contributors could fix them directly, or in some cases
abandoning/replacing dependencies with better-maintained
alternatives. This really is an ecosystem-wide effort, as complex
Python software doesn't generally run in isolation. I expect the
story for other large Python-based applications is very similar to
this.

Most OpenStack services and libraries are integration-tested
upstream to work under Python 3.5 today, but there are still many
Python-2.7-only testsuites for them (especially unit testing and
some functional tests) which need heavy refitting before the
community feels its Py3K support efforts are truly complete.
-- 
Jeremy Stanley




Re: PyPI source or github source?

2017-03-13 Thread Jeremy Stanley
On 2017-03-13 17:55:32 +0100 (+0100), Thomas Goirand wrote:
[...]
> IMO, upstream are right that the PyPi releases should be minimal. They
> are, from my view point, a binary release, not a source release.
> 
> It makes a lot of sense to therefore use the git repository, which is
> what I've been doing as much as possible.

Yes, as much as the name "sdist" indicates it's a source
distribution, in many cases it's not exactly pristine source and may
be missing files deemed unimportant for end users or could include
some autogenerated files the upstream authors would rather not check
into their revision control systems. So sdists, while a tarball
under the hood (and by filename extension), are still really an
installable packaging format more than they are a source
distribution format.
-- 
Jeremy Stanley



Re: GnuPG signatures on PyPI: why so few?

2017-03-12 Thread Jeremy Stanley
On 2017-03-12 11:46:31 +1100 (+1100), Ben Finney wrote:
[...]
> In response to polite requests for signed releases, some upstream
> maintainers are now pointing to that thread and closing bug reports as
> “won't fix”.
> 
> What prospect is there in the Python community to get signed upstream
> releases become the obvious norm?

Speaking for OpenStack's tarballs at least, our sdists are built by
release automation which also generates detached OpenPGP
signatures so as to provide proof of provenance... but we don't
upload them to PyPI since the authors of the coming Warehouse
replacement for the current CheeseShop PyPI have already indicated
that they intend to drop support for signatures entirely. We
consider https://releases.openstack.org/ the authoritative source
for our release information and host our signatures
there instead (well, really on https://tarballs.openstack.org/ with
direct links from the former).

The same key used to sign our tarballs (and wheels) also signs our
Git tags, for added consistency:

https://releases.openstack.org/#cryptographic-signatures

Of possible further interest: we modeled a fair amount of our key
management after what's employed for Debian's archive keys.
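
Verifying either kind of artifact follows the usual OpenPGP pattern; a
hedged sketch with illustrative file and tag names, assuming the
release key has already been imported:

```shell
# Check the detached signature on a release tarball:
gpg --verify nova-22.0.0.tar.gz.asc nova-22.0.0.tar.gz

# Check the signature on the corresponding Git tag:
git tag -v 22.0.0
```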
-- 
Jeremy Stanley




Re: Binary naming for Django Related Packages

2016-12-03 Thread Jeremy Stanley
On 2016-12-03 17:01:45 +0100 (+0100), Thomas Goirand wrote:
[...]
> Because of problems when doing imports in Python3 (in a venv, the system
> module wont be loaded if it's there and there's already something in the
> venv), we should attempt to discourage upstream to use namespaced
> modules. This indeed could prevent from running unit tests. That's what
> has been discovered in the OpenStack world, and now all the oslo libs
> aren't using namespace (though we've kept the dot for the egg-names).

To clarify, the main issue encountered there was a conflict over
namespace-level init when some modules were editable installs.
Historical details of the decision are outlined at:

https://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html#problem-description

-- 
Jeremy Stanley



Re: pip for stretch

2016-11-21 Thread Jeremy Stanley
On 2016-11-21 18:33:48 -0500 (-0500), Barry Warsaw wrote:
[...]
> I have not started to look at what if anything needs to be done to
> transition to pip 9, but if you have a strong opinion one way or
> the other, please weigh in.

The fix to uninstall properly when replacing with an editable
install of the same package is a pretty huge one in my opinion. I ran
into it quite a bit where I'd do an editable install from unreleased
source (because I was hacking on it) of some library, and that
library turned out to be a transitive dependency of something in its
own requirements list, so it had already been installed from an
sdist/wheel without my realizing it. This leads to confusingly
testing the released version of the source code, because it shows up
first in the path when you import what you think is the code you're
editing. Not a fun way to spend your time.
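
A quick sanity check for exactly this trap (a sketch, not part of pip
itself) is to print where the interpreter actually resolves a module
from, so an editable checkout is distinguishable from a stale
sdist/wheel install:

```shell
# Print the file the interpreter imports a given module from; an editable
# install should point into your checkout, not site-packages.
check_import_path() {
    python3 -c "import $1; print($1.__file__)"
}

# A stdlib module is used here purely as a demo target; substitute the
# name of the library you're hacking on.
check_import_path json
```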

Granted, I'm mostly running pip on unstable when developing, and I
run it from a bootstrapped virtualenv anyway so don't actually use
the Debian package of it other than to bootstrap my initial venv.
-- 
Jeremy Stanley



Re: Test suite in github but missing from pypi tarballs

2016-04-21 Thread Jeremy Stanley
On 2016-04-21 11:23:20 -0400 (-0400), Fred Drake wrote:
> On Thu, Apr 21, 2016 at 10:54 AM, Tristan Seligmann
[...]
> > For distribution packaging purposes, the GitHub tags are generally
> > preferable. GitHub makes archives of tagged releases available as tarballs,
> > so this is generally a simple tweak to debian/watch.
> 
> I'd generally be worried if the source package doesn't closely match a
> tag in whatever VCS a project is using, but I don't think that's
> essential, release processes being what they are.
[...]

Agreed, as long as "closely" is interpreted in ways consistent with,
say, tarballs for C-based projects. Consider `setup.py sdist`
similar to `make dist` where the dist target of some projects may
still run additional commands to generate metadata or other files
not tracked in revision control prior to invoking tar/gzip.
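
One way to check how "closely" an sdist matches the tag is to diff the
file lists. The sketch below builds a throwaway toy project (the names
`demo`, `1.0` are placeholders) so it's self-contained; on a real
project you'd run only the last four commands against a tag checkout:

```shell
# Sketch: compare what `setup.py sdist` ships against what git tracks.
# Legitimate differences are generated files (PKG-INFO, egg-info, etc.).
set -e
proj="$(mktemp -d)"; cd "$proj"
git init -q .
printf 'from setuptools import setup\nsetup(name="demo", version="1.0")\n' > setup.py
git add setup.py
git -c user.name=demo -c user.email=demo@example.org commit -qm 'demo 1.0'

python3 setup.py -q sdist
# Strip the top-level "demo-1.0/" prefix from the tarball listing.
tar tzf dist/demo-1.0.tar.gz | sed 's,^[^/]*/,,' | grep -v '^$' | sort > sdist-files.txt
git ls-tree -r --name-only HEAD | sort > git-files.txt
diff git-files.txt sdist-files.txt || true   # extras here are generated metadata
```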
-- 
Jeremy Stanley



Re: static analysis and other tools for checking Python code

2016-03-02 Thread Jeremy Stanley
On 2016-03-03 08:38:40 +0800 (+0800), Paul Wise wrote:
[...]
> FYI pep257 is definitely packaged:
> 
> https://packages.debian.org/search?keywords=pep257
[...]

Whoops! Thanks--I almost certainly fat-fingered my package search on
that one.
-- 
Jeremy Stanley



Re: static analysis and other tools for checking Python code

2016-03-02 Thread Jeremy Stanley
On 2016-03-02 11:22:52 +0800 (+0800), Paul Wise wrote:
[...]
> One of the things it has checks for is Python. So far it runs pyflakes
> and pep8 and a few hacky greps for some things that shouldn't be done
> in Python in my experience.
[...]

The "flake8" framework basically incorporates the pyflakes and pep8
analyzers along with a code complexity checker, and provides a
useful mechanism for controlling their behavior in a consistent
manner as well as pluggability to add your own:

https://packages.debian.org/flake8

One flake8 plug-in which came out of the OpenStack developer
community is "hacking" (obviously not for every project, but an
interesting reference example of layering in your own style checks):

https://packages.debian.org/python-hacking

Another output of the OpenStack community is "bandit," a security
analyzer for Python code:

https://packages.debian.org/bandit

Some other interesting analyzers not yet packaged for Debian as far
as I can tell include "pep257" (a Python docstring checker) and
"clonedigger" (a DRYness checker).

https://pypi.python.org/pypi/pep257
https://pypi.python.org/pypi/clonedigger

I can probably think up more that I've used, but the above rise to
the top of my list.
-- 
Jeremy Stanley



Re: PyPI wheels (was Re: Python Policy)

2015-10-21 Thread Jeremy Stanley
On 2015-10-21 09:31:04 -0500 (-0500), Ian Cordasco wrote:
> On Wed, Oct 21, 2015 at 8:58 AM, Barry Warsaw <ba...@debian.org> wrote:
> > On Oct 21, 2015, at 08:47 PM, Brian May wrote:
> >
> >>in one case this is because upstream have only supplied a *.whl
> >>file on Pypi.
> >
> > I'm *really* hoping that the PyPA will prohibit binary wheel-only uploads.
> 
> I'm not sure why they should prohibit binary wheel-only uploads. A
> company may wish to publish a binary wheel of a tool and only that (a
> wheel for Windows, OS X, different supported linux distributions,
> etc.). If they do, that's their prerogative. I don't think there's
> anything that says Debian (or Ubuntu) would then have to package that.
> 
> PyPI is not just there for downstream, it's there for users too
> (although the usability of PyPI is not exactly ideal).

Yep, I'm as much a fan of free software as the next person, but PyPI
doesn't _require_ that what you upload be free software. It only
requires that you grant the right to redistribute what you're
uploading. While having source code to go along with things uploaded
there (which, mind you, aren't even actually required to be usable
Python packages; they could be just about anything) would be nice, I
don't have any expectation that PyPI would ever make it mandatory.
-- 
Jeremy Stanley



Re: mock 1.2 breaking tests (was: python-networkx_1.10-1_amd64.changes ACCEPTED into experimental)

2015-10-06 Thread Jeremy Stanley
On 2015-10-06 09:28:56 +0200 (+0200), Thomas Goirand wrote:
> Master != kilo. It still means that I have to do all of the backport
> work by myself.
[...]
> I know that it's the common assumption that, as the package maintainer
> in Debian, I should volunteer to fix any issue in the 6+ million lines
> of code in OpenStack! :)
> 
> I do try to fix things when I can. But unfortunately, this doesn't scale
> well enough... In this particular case, it was really too much work.

That is the trade-off you make by choosing to maintain as many
packages as you do. You can obviously either spend time contributing
stable backports upstream or time packaging software. Just accept
that, as with Debian itself, "stable" means OpenStack upstream makes
the bare minimum alterations necessary. This includes, in some
cases, continuing to test the software in those branches with
dependencies which were contemporary to the corresponding releases
rather than chasing ever changing behavior in them. Sometimes it is
done for expediency due to lack of interested volunteer effort, and
sometimes out of necessity because dependencies may simply conflict
in unresolvable ways.
-- 
Jeremy Stanley



Re: python-networkx_1.10-1_amd64.changes ACCEPTED into experimental

2015-10-05 Thread Jeremy Stanley
On 2015-10-05 23:45:57 +0200 (+0200), Thomas Goirand wrote:
[...]
> Upstream will *not* fix the issue, because you know, they "fixed" it in
> their CI by adding an upper version bound in the pip requirements, which
> is fine for them in the gate. It is fixed in OpenStack Liberty though,
> which I will soon upload to Sid.
[...]

It's a bit of a mischaracterization to say that "upstream will not
fix the issue." In fact as you indicate it was fixed within a couple
days in the master branches of affected projects. The mock pin in
stable/kilo branches is a temporary measure and can be removed if
all the broken tests are either removed or corrected (the assumption
being that distro package maintainers who have an interest in that
branch may volunteer to backport those patches from master if this
is important to them).
-- 
Jeremy Stanley