Re: RFS: jupyter components

2017-09-03 Thread Julien Puydt
Hi,

Le 28/08/2017 à 23:56, Gordon Ball a écrit :

>  * nbconvert: 5.2.1
> 
>waiting on python-pandocfilters >= 1.4 (already in dpmt git, but
>not yet uploaded)

I updated it to the latest upstream -- the build doesn't fail because of
python-pandocfilters afaict, but because for some reason the entry
points don't get activated correctly. I still pushed my changes because
I'm sure I'm just missing something stupid...

Snark on #debian-python



Re: a few quick questions on gbp pq workflow

2017-09-03 Thread Thomas Goirand
On 08/07/2017 12:20 AM, Jeremy Stanley wrote:
> Thomas references the AUTHORS and ChangeLog files (which embed
> important metadata from the revision control system into the release
> tarball, far from useless in my opinion); but taking the
> nova-15.0.0.tar.gz release for example, those two files account for
> a total of 5% of the unpacked tree according to du and probably
> compress fairly well. By comparison, the unit tests and fixtures
> make up 42% of the size of the tree on their own. He's also
> referring to a situation from _years_ ago, which was subsequently
> changed after he asked... those current tarballs only include author
> names and very abbreviated information parsed out of the git log and
> have been that way since 2013 (pbr commit 94a6bb9 released in 0.6),
> but mentioning that would likely have undermined his argument.

Jeremy, you still don't get it, sorry, it probably is my fault.


Using upstream sdist tarballs, we get a ChangeLog file. Because of all
the automation inside the dh_* helpers, if nobody takes care of it, the
ChangeLog file automatically gets installed into each and every
individual .deb file. This means that even a tiny metapackage will carry
it, even if it contains nothing but dependency information.

Multiplied by the many packages the OpenStack package maintainers have
to deal with, adding such a manual removal of the ChangeLog file to each
package is too much of a pain, when it isn't needed at all if we're
using the upstream git as a source.

Add to that the generated docs, and the many other things the sdist
attempts to deal with, which we don't really want. It may even forget to
package some files (yes, I've seen this a few times...).

PyPI's format is *not* designed as a means to ship source code, but as a
means to ship *binaries* (i.e. food for pip). The fact that you're
shipping built docs shows exactly that.

BTW, pristine-tar is a broken concept. Anyone who pretends otherwise
really has no clue how tarballs work: the internal timestamps that need
to be removed, and the fact that files have to be added to the archive
in a certain order, otherwise everything is completely broken. Did I
mention it also depends on the implementation of tar itself, that the
BSD people have a different one, and that Debian has to carry patches to
fix upstream issues that generate different tarballs depending on the
tar utility version? Not to mention that the original author of
pristine-tar (Joey Hess) agrees with me...

So why should we even bother? That's additional pain that you're asking
for, when there isn't even enough manpower, and we're probably even on
the way to getting OpenStack removed from Debian if the situation
doesn't change.



Re: a few quick questions on gbp pq workflow

2017-09-03 Thread Thomas Goirand
On 08/06/2017 09:15 PM, Jeremy Stanley wrote:
> On 2017-08-06 20:00:59 +0100 (+0100), Ghislain Vaillant wrote:
> [...]
>> You'd still have to clean the pre-built files, since they would be
>> overwritten by the build system and therefore dpkg-buildpackage
>> would complain if you run the build twice.
>>
>> So, you might as well just exclude them from the source straight
>> away, no?
> 
> Repacking an upstream tarball just to avoid needing to tell
> dh_install not to copy files from a particular path into the binary
> package seems the wrong way around to me

What's wrong is for upstream to pretend that a tarball / archive is its
released source, when in fact it contains binary / generated files.

A source tarball / archive from upstream must contain *only* source
code, nothing else. If it contains anything generated from the original
source, then it's additional pain for the package maintainer.
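For what it's worth, when a maintainer does decide to repack, uscan can automate it via a Files-Excluded field in debian/copyright; a sketch (the excluded paths are illustrative):

```
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Files-Excluded: ChangeLog
 doc/build/*
 *.egg-info/*
```

With this in place, uscan produces a +dfsg / repacked orig tarball with those paths stripped, instead of the maintainer deleting them by hand.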

> but maybe I'm missing
> something which makes that particularly complicated? This comes up
> on debian-mentors all the time, and the general advice is to avoid
> repacking tarballs unless there's a policy violation or you can get
> substantial (like in the >50% range) reduction in size on especially
> huge upstream tarballs.

That's one view, probably motivated by the fact that it's easier to deal
with in the long run. However convenient it may be, I don't think it
feels "clean".

And by the way, when it comes to the OpenStack stuff, the FTP masters
have already expressed their dislike of the upstream ChangeLog: it is
*way* too big, sometimes megabytes in size, and it may appear in .deb
files that would otherwise be a few kilobytes. None of this is new...

Cheers,

Thomas Goirand (zigo)



Re: a few quick questions on gbp pq workflow

2017-09-03 Thread Thomas Goirand
On 08/06/2017 05:37 PM, Jeremy Stanley wrote:
> On 2017-08-06 10:44:36 -0400 (-0400), Allison Randal wrote:
>> The OpenStack packaging team has been sprinting at DebCamp, and
>> we're finally ready to move all general Python dependencies for
>> OpenStack over to DPMT. (We'll keep maintaining them, just within
>> DPMT using the DPMT workflow.)
>>
>> After chatting with tumbleweed, the current suggestion is that we
>> should migrate the packages straight into gbp pq instead of making
>> an intermediate stop with git-dpm.
> [...]
> 
> More a personal curiosity on my part (I'm now a little disappointed
> that I didn't make time to attend), but are you planning to leverage
> pristine tarballs as part of this workflow shift so you can take
> advantage of the version details set in the sdist metadata and the
> detached OpenPGP signatures provided upstream? Or are you sticking
> with operating on a local fork of upstream Git repositories (and
> generating intermediate sdists on the fly or supplying version data
> directly from the environment via debian/rules)?
> 
> I'm eager to see what upstream release management features you're
> taking advantage of so we can better know which of those efforts are
> valuable to distro package maintainers

Jeremy,

If you think what Allison described includes artifacts from upstream
OpenStack, you're mistaken. She was talking about moving *general
purpose* Python libraries only. The rest will continue to use the
workflow of generating orig files using "git archive".
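A minimal sketch of that "git archive" orig-file workflow (the package name, version and repository contents are illustrative):

```shell
set -e
# Create a toy upstream repository to archive from.
git init -q demo
cd demo
printf 'print("hi")\n' > app.py
git add app.py
git -c user.name=demo -c user.email=demo@example.org \
    commit -qm 'initial'

# Generate the orig tarball straight from git, bypassing sdist;
# gzip -n keeps the gzip timestamp out, for reproducibility.
git archive --format=tar --prefix=foo-1.0/ HEAD \
  | gzip -n > ../foo_1.0.orig.tar.gz
cd ..
tar -tzf foo_1.0.orig.tar.gz
```

Since the tarball is derived directly from the tagged git tree, there is no ChangeLog, no built docs, and nothing the sdist machinery could forget to include.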

At least, that's what I understood from our sprint meetings, and that is
assuming anyone in the team (other than me) dares to start doing a
little bit of packaging, which I haven't seen happening so far...

Cheers,

Thomas Goirand (zigo)