Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Nathaniel Smith
On Thu, Nov 3, 2016 at 2:48 PM, Chris Barker  wrote:
> On Thu, Nov 3, 2016 at 3:50 AM, Paul Moore  wrote:
>> Or there's the
>> option that's been mentioned before, but never (to my knowledge)
>> developed into a complete proposal, for packaging up external
>> dependencies as wheels.
>
>
> Folks were working on this at Pycon last spring -- any progress?

I figured out how to make it work in principle; the problem is that it
turns out we need to write some nasty code to munge Mach-O files
first. It's totally doable, just a bit more of a hurdle than
previously expected :-). And then this happened:
https://vorpus.org/blog/emerging-from-the-underworld/ so I'm way
behind on everything right now. (If anyone wants to help then speak
up!)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Chris Barker
On Thu, Nov 3, 2016 at 3:02 PM, Paul Moore  wrote:

> It would buy *me* flexibility to use python.org build of Python, or my
> own builds. And not to have to wait for conda to release a build of a
> new version.


You are perfectly free to make your own conda package of python -- it's not
that hard.

In fact, that's one of the key benefits of conda's approach -- conda
manages python itself, so you can easily have a conda environment for the
latest development release of python.

Or, for that matter, just build python from source inside an environment.

And if you want to build a custom python package, you don't even need your
own recipe:

https://github.com/conda-forge/python-feedstock/tree/master/recipe

and conda can provide all the dependencies for you too :-)
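The workflow is roughly this (a sketch from memory -- I haven't verified
these exact commands against the current feedstock):

```shell
# grab the conda-forge recipe for python itself
git clone https://github.com/conda-forge/python-feedstock.git
cd python-feedstock

# conda-build renders meta.yaml and builds the package locally
conda install conda-build
conda build recipe/

# then install your custom build into a fresh environment
conda create -n custom-python --use-local python
```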


> Of course, I have specialised needs, so that's not an
> important consideration for the conda developers.
>

True -- though the particular need of using a custom-compiled python is
actually pretty well supported!

 Caveat: I've never tried this -- and since conda itself uses Python, you
may break conda if you install a python that can't run conda. But maybe not
-- conda could be smart enough to use its "root" install of python
regardless of what python you have in your environment. In fact, I think
that is the case: when I am in an activated conda environment, the "conda"
command is a link to the root conda command, so I'm pretty sure it will use
the root python to run itself.




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Chris Barker
On Wed, Nov 2, 2016 at 5:02 PM, Matthew Brett 
wrote:

> Anaconda has an overwhelming advantage on Windows, in that Continuum
> can bear the licensing liabilities enforced by the Intel Fortran
> compiler, and we can not.


Technically, that's an advantage that a commercial distribution has --
really nothing to do with the conda technology per se. But yes, from a
practical perspective, Continuum's support for conda and Anaconda is a major
bonus in many ways.

Note that Continuum is now (I think) defaulting to MKL also -- they seem
to have solved the redistribution issues.

But the conda-forge project has its own builds of lots of stuff --
including the core scipy stack, based on OpenBLAS. There is a scipy build
there:

https://github.com/conda-forge/scipy-feedstock/tree/master/recipe

but alas, still stuck:

```
build:
  # We lack openblas on Windows, and therefore can't build scipy there
  # either currently.
  skip: true  # [win or np!=111]
```

>   I'm sure you know, but the only
> practical open-source option is mingw-w64, that does not work with the
> Microsoft runtime used by Python 3.5 [1].


OT -- but is it stable for Python 2.7 now?


> > But not pyHDF, netCDF5, gdal, shapely, ... (to name a few that I need to
> > work with). And these are ugly: which means very hard for end-users to
> > build, and very hard for people to package up into wheels (is it even
> > possible?)
>
> I'd be surprised if these packages were very hard to build OSX and
> Linux wheels for.  We're already building hard packages like Pillow
>

Pillow is relatively easy :-) -- I was doing that years ago.


> and Matplotlib and h5py and pytables.


Do h5py and pytables share libhdf5 somehow? Or is the whole mess statically
linked into each?

> If you mean netCDF4 - there are
> already OSX and Windows wheels for that [2].
>

God bless Chris Gohlke -- I have no idea how he does all that!

So take all the hassle of those -- and multiply by about ten to get what
the GIS stack needs: GDAL/OGR, Shapely, Fiona, etc.

Which doesn't mean it couldn't be done -- just that it would be a pain, and
you'd end up with some pretty bloated wheels -- in the packages above, how
many copies of libpng will there be? How many of hdf5? proj.4? geos?

Maybe that simply doesn't matter -- computers have a lot of disk space and
memory these days.

But there is one more use-case -- folks that need to build their own stuff
against some of those libs (netcdf4 in my case). The nice thing about conda
is that it can provide the libs for me to use in my own custom built
projects, or for other folks to use to build conda packages with.

Maybe pip could be used to distribute those libs, too -- I know folks are
working on that.

Then there is non-Python stuff that you need to work with that would be
nice to manage in environments, too -- R, MySQL, who knows?

As someone else on this thread noted -- it's jamming a square peg into a
round hole -- and why do that when there is a square hole you can use
instead?

At least that's what I decided.

-CHB




Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Paul Moore
On 3 November 2016 at 21:48, Chris Barker  wrote:
> On Thu, Nov 3, 2016 at 3:50 AM, Paul Moore  wrote:
>
>>
>> Even on the "hard" cases like Windows, it may be possible to define a
>> standard approach that goes something along the lines of defining a
>> standard location that (somehow) gets added to the load path, and
>> interested parties provide DLLs for external dependencies, which the
>> users can then manually place in those locations.
>
that's pretty much what conda is :-)

Well, it doesn't feel like that. Maybe we're not understanding each
other. Am I able to take a non-conda installation of Python, use conda
to install *just* a set of non-Python DLLs (libxml, libyaml, ...) and
then use pip to install wheels for lxml, pyyaml, etc? I understand
that there currently isn't a way to *build* an lxml wheel that links
to a conda-installed libxml, but that's not the point I'm making - if
conda provided a way to manage external DLLs only, then it would be
possible in theory to contribute a setup.py fix to a project like lxml
that detected and linked to a conda-installed libxml. That single
source could then be used to build *both* wheels and conda packages,
avoiding the current situation where effort on getting lxml to build
is duplicated, once for conda and once for wheel.
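To make the idea concrete, here's a rough sketch (entirely hypothetical --
no such convention exists today) of how a setup.py might detect a
conda-installed library via the CONDA_PREFIX environment variable that
conda's activate scripts set:

```python
import os
import sys


def conda_library_dirs():
    """Return (include_dirs, library_dirs) for an active conda env, if any.

    Relies on CONDA_PREFIX, which conda's "activate" scripts set; this is
    a hypothetical detection scheme, not an established convention.
    """
    prefix = os.environ.get("CONDA_PREFIX")
    if not prefix:
        return [], []
    if sys.platform == "win32":
        # conda on Windows keeps headers and import libraries under Library\
        base = os.path.join(prefix, "Library")
    else:
        base = prefix
    return [os.path.join(base, "include")], [os.path.join(base, "lib")]


# Usage in a setup.py (names are illustrative only):
# include_dirs, library_dirs = conda_library_dirs()
# Extension("lxml.etree", sources=[...],
#           include_dirs=include_dirs, library_dirs=library_dirs,
#           libraries=["xml2"])
```

The same source would then build a wheel against conda's libxml when run
inside a conda environment, and fall back to the system search path outside
one.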

> though it adds the ability to handle multiple environments, and tools so you
> don't have to manually place anything.

Well, there are other tools (virtualenv, venv) to do this as well, so
again conda is to an extent offering a "buy into our infrastructure or
you're on your own" proposition. That's absolutely the right thing for
people wanting an integrated "just get on with it" solution, so don't
assume this is a criticism - just an explanation of why conda doesn't
suit *my* needs.

> It would probably be feasible to make a
> conda-for-everything-but-python-itself. I'm just not sure that would buy you
> much.

It would buy *me* flexibility to use python.org build of Python, or my
own builds. And not to have to wait for conda to release a build of a
new version. Of course, I have specialised needs, so that's not an
important consideration for the conda developers.

Paul


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Chris Barker
On Thu, Nov 3, 2016 at 10:56 AM, Matthew Brett 
wrote:


> But - it would be a huge help if the PSF could help with funding to
> get mingw-w64 working.  This is the crucial blocker for progress on
> binary wheels on Windows.


For what it's worth, this is a blocker for many of us for using Win64 at all!

As far as I know, conda has not solved this one -- maybe for Continuum's
paying customers?

(or for binaries that continuum builds, but not stuff I need to build
myself)

If it's just money we need, I'd hope we could scrape it together somehow!

-CHB




Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Chris Barker
On Thu, Nov 3, 2016 at 3:50 AM, Paul Moore  wrote:


> Even on the "hard" cases like Windows, it may be possible to define a
> standard approach that goes something along the lines of defining a
> standard location that (somehow) gets added to the load path, and
> interested parties provide DLLs for external dependencies, which the
> users can then manually place in those locations.


that's pretty much what conda is :-)

though it adds the ability to handle multiple environments, and tools so
you don't have to manually place anything.

It would probably be feasible to make a
conda-for-everything-but-python-itself. I'm just not sure that would buy
you much.

-CHB

Or there's the
> option that's been mentioned before, but never (to my knowledge)
> developed into a complete proposal, for packaging up external
> dependencies as wheels.
>

Folks were working on this at Pycon last spring -- any progress?



> In some ways, Windows is actually an *easier* platform in this regard,
> as it's much more consistent in what it does provide - the CRT, and
> nothing else, basically. So all of the rest of the "external
> dependencies" need to be shipped, which is a bad thing, but there's no
> combinatorial explosion of system dependencies to worry about, which
> is good


and pretty much what manylinux is about, yes?

-CHB



Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Chris Barker
On Thu, Nov 3, 2016 at 3:39 AM, Nick Coghlan  wrote:

> I don't think there's much chance of any of this ever working on
> Windows - conda will rule there, and rightly so. Mac OS X seems likely
> to go the same way, although there's an outside chance brew may pick
> up some of the otherwise Linux-only capabilities if they prove
> successful.
>

this is a really good point then -- we're building a "platform-independent"
system that, oh, actually is a "few different Linux vendors" system. Which
is a perfectly worthy goal, but it was not at all the goal of conda.

I wasn't there, but I think conda started with the goal of supporting three
core platforms (that are pretty different) with the same UI, and as much of
the same code as possible. And if you want to do that -- it really makes a
lot of sense to control as much of the system as possible.


> What I'm mainly reacting to here is Chris's apparent position that he
> sees the entire effort as pointless.


I don't believe I actually ever said that -- though I can see why you'd
think so.

But for the record, I don't see it as pointless.

It's not pointless, just hard,
>

That I do see -- I see it as hard, so hard that it's likely to hit
roadblocks, so we could make better progress with other solutions if that's
where people put their energies. And you said yourself earlier in this post
that:

 "I don't think there's much chance of any of this ever working on Windows"

One of the "points" for me is to support Windows, OS-X and Linux -- so in
that sense -- I guess it is pointless ;-)

By the way, I was quite disappointed when I discovered that conda has done
literally nothing to make building cross-platform any easier -- I
understand why, but nonetheless, building can be a real pain.

But it's also nice that it has made a nice clean distinction between
package management and building, a distinction that is all too blurred in
the distutils/setuptools/pip/wheel stack (which, I know, is slowly being
untangled...)

Not in the general case it won't, no, but it should be entirely
> feasible for distro-specific Linux wheels and manylinux1 wheels to be
> as user-friendly as conda in cases where folks on the distro side are
> willing to put in the effort to make the integration work properly.
>

I agree -- though I think the work up front by the distributors will be
more.


> It may also be feasible to define an ABI for "linuxconda" that's
> broader than the manylinux1 ABI, so folks can publish conda wheels
> direct to PyPI, and then pip could define a way for distros to
> indicate their ABI is "conda compatible" somehow.


Hmm -- now THAT's an interesting idea...

-CHB




Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Chris Barker
On Thu, Nov 3, 2016 at 2:34 AM, Paul Moore  wrote:

> On 2 November 2016 at 23:08, Chris Barker  wrote:
> > After all, you can use pip from within a conda environment just fine :-)
>
> In my experience (some time ago) doing so ended up with the "tangled
> mess of multiple systems" you mentioned.


Well, yes, my experience too, as of two years ago -- which is why I've
taken the time to make conda packages of all the pure-Python libs I need
(and now that there is conda-forge, it's not a huge lift).


> Conda can't uninstall
> something I installed with pip,


Sure can't! -- though that's not too much of a barrier -- it'll give you an
error, and then you use pip to uninstall it.


> and I have no obvious way of
> introspecting "which installer did I use to install X?" And if I pip
> uninstall something installed by conda, I break things. (Apologies if
> I misrepresent, it was a while ago, but I know I ended up deciding
> that I needed to keep a log of every package I installed and what I
> installed it with if I wanted to keep things straight...)
>
> Has that improved?
>

Yes, it has -- and lots of folks seem to have it work fine for them, though
I'd say in more of a one-way-street approach:

- create a conda environment
- install all the conda packages you need
- install all the pip packages you need.

You're done. And most conda packages of python packages are built
"properly" with pip-compatible metadata, so a pip install won't get
confused and think it needs to install a dependency that is already there.

Then you can export your environment to a yaml file that keeps track of
which packages are conda and which pip, and you (and your users) can
re-create that environment reliably -- even on a different platform.
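In concrete terms (package names here are just examples), the workflow
looks like this:

```shell
# conda first: create the environment and install conda packages
conda create -n work python=3.5 numpy
source activate work

# pip second: add the pip-only packages
pip install some-pure-python-lib

# record everything -- conda packages plus a separate "pip:" section
conda env export > environment.yml

# recreate the environment elsewhere from that file
conda env create -f environment.yml
```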

I'm not trying to complain, I don't have any particular expectation
> that conda "has to" support using pip on a conda installation. But I
> do want to make sure we're clear on what works and what doesn't.


I'm not the expert here -- I honestly do try to avoid mixing pip and conda
-- but support for it has gotten a lot better.

-CHB




Re: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source)

2016-11-03 Thread Alex Grönholm
I don't know if it has been mentioned before, but Travis already 
provides a way to automatically package and upload sdists and wheels to 
PyPI: https://docs.travis-ci.com/user/deployment/pypi/


I've been using it myself in many projects and it has worked quite well. 
Granted, I haven't had to deal with binary wheels yet, but I think 
Travis should be able to handle creation of macOS and manylinux1 wheels.
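For reference, the relevant bit of .travis.yml is just a few lines (the
values below are placeholders; the password is encrypted with the `travis
encrypt` tool as described in the docs above):

```yaml
# deploy sdists/wheels to PyPI on tagged builds only
deploy:
  provider: pypi
  user: your-pypi-username
  password:
    secure: "<output of travis encrypt>"
  distributions: "sdist bdist_wheel"
  on:
    tags: true
```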



On 03.11.2016 at 22:07, Nathaniel Smith wrote:


I think we're drifting pretty far off topic here... IIRC the original 
discussion was about whether the travis-ci infrastructure could be 
suborned to provide an sdist->wheel autobuilding service for pypi. 
(Answer: maybe, though it would be pretty awkward, and no one seems to 
be jumping up to make it happen.)



On Nov 3, 2016 11:28 AM, "Barry Warsaw" wrote:


On Nov 03, 2016, at 11:08 AM, Glyph Lefkowitz wrote:

>I think phrasing this in terms of "perfect" and "good enough" presents a
>highly misleading framing.  Examined in this fashion, of course we may
>reluctantly use the "good enough" option, but don't we want the best option?

What are the criteria for "best"?

I'm not saying don't use Travis, I'm just trying to express that there are
technical limitations, which is almost definitely true of any CI
infrastructure.  If those limitations don't affect your project, great, go for
it!

Cheers,
-Barry



Re: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source)

2016-11-03 Thread Nathaniel Smith
I think we're drifting pretty far off topic here... IIRC the original
discussion was about whether the travis-ci infrastructure could be suborned
to provide an sdist->wheel autobuilding service for pypi. (Answer: maybe,
though it would be pretty awkward, and no one seems to be jumping up to
make it happen.)

On Nov 3, 2016 11:28 AM, "Barry Warsaw"  wrote:

> On Nov 03, 2016, at 11:08 AM, Glyph Lefkowitz wrote:
>
> >I think phrasing this in terms of "perfect" and "good enough" presents a
> >highly misleading framing.  Examined in this fashion, of course we may
> >reluctantly use the "good enough" option, but don't we want the best
> option?
>
> What are the criteria for "best"?
>
> I'm not saying don't use Travis, I'm just trying to express that there are
> technical limitations, which is almost definitely true of any CI
> infrastructure.  If those limitations don't affect your project, great, go
> for
> it!
>
> Cheers,
> -Barry
>


Re: [Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source)

2016-11-03 Thread Barry Warsaw
On Nov 03, 2016, at 11:08 AM, Glyph Lefkowitz wrote:

>I think phrasing this in terms of "perfect" and "good enough" presents a
>highly misleading framing.  Examined in this fashion, of course we may
>reluctantly use the "good enough" option, but don't we want the best option?

What are the criteria for "best"?

I'm not saying don't use Travis, I'm just trying to express that there are
technical limitations, which is almost definitely true of any CI
infrastructure.  If those limitations don't affect your project, great, go for
it!

Cheers,
-Barry




[Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source)

2016-11-03 Thread Glyph Lefkowitz

> On Nov 3, 2016, at 10:17 AM, Barry Warsaw  wrote:
> 
> On Nov 03, 2016, at 12:54 AM, Nick Coghlan wrote:
> 
>> This is also an area where I'm fine with recommending freemium
>> solutions if they're the lowest barrier to entry option for new users,
>> and "Use GitHub + Travis CI" qualifies on that front.
> 
> I won't rehash the GitHub/GitLab debate, but in some of my projects (hosted on
> GH) I've had to ditch Travis because of limitations on that platform.
> Specifically, I needed to run various tests on an exact specification of
> various Ubuntu platforms, e.g. does X run on an up-to-date Ubuntu Y.Z?
> 
> I originally used Docker for this, but our projects had additional
> constraints, such as needing to bind-mount, which aren't supported on the
> Travis+Docker platform.  So we ended up ditching the Travis integration and
> hooking our test suite into the Ubuntu autopkgtest system (which is nearly
> identical to the Debian autopkgtest system but runs on Ubuntu infrastructure).
> 
> Python may not be affected by similar constraints, but it is worth keeping in
> mind.  Travis isn't a perfect technical solution for all projects, but it may
> be good enough for Python.

I think phrasing this in terms of "perfect" and "good enough" presents a highly 
misleading framing.  Examined in this fashion, of course we may reluctantly use 
the "good enough" option, but don't we want the best option?

A better way to look at it is cost vs. benefit.

How much does it cost you in terms of time and money to run and administer the 
full spectrum of "real" operating systems X.Z that you wish to support?  How 
much does it cost in terms of waiting for all that extra build infrastructure 
to run all the time?  How much additional confidence and assurance that it will 
work does that buy you, over the confidence of passing tests within a docker 
container?  Is that additional confidence worth the investment of resources?

Of course, volunteer-driven projects are not concerned directly with top-level 
management allocation of ostensibly fungible resources, and so a hard "costly" 
solution that someone is interested in and committed to is far less expensive 
than a "cheap" solution that everyone finds boring, so we have to take that 
into account as well.

As it happens, Twisted has a massive investment in existing Buildbot CI 
infrastructure _as well as_ Travis and Appveyor.  Travis and Appveyor address 
something that our CI can't, which is allowing unauthenticated builds from 
randos issuing their first pull requests.  This gives contributors much faster 
feedback which is adequate for the majority of changes.

However, many of our ancillary projects, which do not have as many 
platform-sensitive components, are built using Travis only, and that's a very 
good compromise for them.  It has allowed us to maintain a much larger and more 
diverse ecosystem with a much smaller team than we used to be able to.

In the future, we may have to move to a different CI service, but I can tell 
you that for sure that 90% of the work involved in getting builds to run on 
Travis is transferrable to any platform that can run a shell script.  There's a 
bit of YAML configuration we would need to replicate, and we might have to do 
some fancy dancing with Docker to get other ancillary services run on the 
backend in some other way, but I would not worry about vendor lock-in at all 
for this sort of service.  Probably, the amount of time and energy on system 
maintenance that Travis saves us in a given week is enough to balance out all 
the possible future migration work.

-glyph


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Matthew Brett
Hi,

On Thu, Nov 3, 2016 at 2:29 AM, Paul Moore  wrote:
> On 3 November 2016 at 00:02, Matthew Brett  wrote:
>> Anaconda has an overwhelming advantage on Windows, in that Continuum
>> can bear the licensing liabilities enforced by the Intel Fortran
>> compiler, and we can not.  We therefore have no license-compatible
>> Fortran compiler for Python 3.5 - so we can't build scipy, and that's
>> a full stop for the scipy stack.   I'm sure you know, but the only
>> practical open-source option is mingw-w64, that does not work with the
>> Microsoft runtime used by Python 3.5 [1].  It appears the only person
>> capable of making mingw-w64 compatible with this runtime is Ray
>> Donnelly, who now works for Continuum, and we haven't yet succeeded in
>> finding the 15K USD or so to pay Continuum for his time.
>
> Is this something the PSF could assist with? Either in terms of
> funding work to get mingw-w64 working, or in terms of funding (or
> negotiating) Intel Fortran licenses for the community?

I'm afraid paying for the Intel Fortran licenses won't help because
the problem is in the Intel Fortran runtime license [1].

But - it would be a huge help if the PSF could help with funding to
get mingw-w64 working.  This is the crucial blocker for progress on
binary wheels on Windows.   Nathaniel - do you have any recent news on
progress?

Cheers,

Matthew

[1] https://software.intel.com/en-us/comment/1881526


Re: [Distutils] Travis-CI is not open source. Was: Current Python packaging status (from my point of view)

2016-11-03 Thread Barry Warsaw
On Nov 03, 2016, at 12:54 AM, Nick Coghlan wrote:

>This is also an area where I'm fine with recommending freemium
>solutions if they're the lowest barrier to entry option for new users,
>and "Use GitHub + Travis CI" qualifies on that front.

I won't rehash the GitHub/GitLab debate, but in some of my projects (hosted on
GH) I've had to ditch Travis because of limitations on that platform.
Specifically, I needed to run various tests on an exact specification of
various Ubuntu platforms, e.g. does X run on an up-to-date Ubuntu Y.Z?

I originally used Docker for this, but our projects had additional
constraints, such as needing to bind-mount, which aren't supported on the
Travis+Docker platform.  So we ended up ditching the Travis integration and
hooking our test suite into the Ubuntu autopkgtest system (which is nearly
identical to the Debian autopkgtest system but runs on Ubuntu infrastructure).

Python may not be affected by similar constraints, but it is worth keeping in
mind.  Travis isn't a perfect technical solution for all projects, but it may
be good enough for Python.

Cheers,
-Barry




Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Paul Moore
On 3 November 2016 at 10:39, Nick Coghlan  wrote:
> It may also be feasible to define an ABI for "linuxconda" that's
> broader than the manylinux1 ABI, so folks can publish conda wheels
> direct to PyPI, and then pip could define a way for distros to
> indicate their ABI is "conda compatible" somehow.

Even on the "hard" cases like Windows, it may be possible to define a
standard approach that goes something along the lines of defining a
standard location that (somehow) gets added to the load path, and
interested parties provide DLLs for external dependencies, which the
users can then manually place in those locations. Or there's the
option that's been mentioned before, but never (to my knowledge)
developed into a complete proposal, for packaging up external
dependencies as wheels.

In some ways, Windows is actually an *easier* platform in this regard,
as it's much more consistent in what it does provide - the CRT, and
nothing else, basically. So all of the rest of the "external
dependencies" need to be shipped, which is a bad thing, but there's no
combinatorial explosion of system dependencies to worry about, which
is good (as long as the "external dependencies" ecosystem maintains
internal consistency).

Paul


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Nick Coghlan
On 3 November 2016 at 19:10, Nathaniel Smith  wrote:
> On Nov 3, 2016 1:40 AM, "Nick Coghlan"  wrote:
>> The approach Tennessee and Robert Collins came up with (which still
>> sounds sensible to me) is that instead of dependencies on particular
>> external components, what we want to be able express is instead a
>> range of *actions* that the software is going to take:
>>
>> - "I am going to dynamically load a library named "
>> - "I am going to execute a subprocess for a command named "
>
> And then it segfaults because it turns out that your library named  is
> not abi compatible with my library named . Or it would have been if you
> had the right version, but distros don't agree on how to express version
> numbers either. (Just think about epochs.) Or we're on Windows, so it's
> interesting to know that we need a library named , but what are we
> supposed to do with that information exactly?
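For concreteness, the closest stdlib analogue to that first action is
ctypes.util.find_library, and it also illustrates the gap being described
here: it can tell you whether *some* library by that name is findable, but
nothing about its version or ABI:

```python
import ctypes.util

# find_library searches in a platform-specific way (the ldconfig cache on
# Linux, dyld paths on macOS, PATH on Windows) and returns a bare name or
# path, or None -- with no version or ABI information attached.
for name in ("m", "z", "does-not-exist"):
    print(name, "->", ctypes.util.find_library(name))
```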

I don't think there's much chance of any of this ever working on
Windows - conda will rule there, and rightly so. Mac OS X seems likely
to go the same way, although there's an outside chance brew may pick
up some of the otherwise Linux-only capabilities if they prove
successful.

Linux distros have larger package management communities than conda
though, and excellent integration testing infrastructure on the
commercial side of things, so we "just" need to get some of their
software integration pipelines arranged in better configurations, and
responding to the right software publication events :)

> Again, I don't want to just be throwing around stop energy -- if people want
> to tackle these problems then I wish them luck. But I don't think we should
> be hand waving this as a basically solved problem that just needs a bit of
> coding, because that also can create stop energy that blocks an honest
> evaluation of alternative approaches.

There's a reason I'm not volunteering to work on this in my own time,
and am instead letting the Fedora Modularity and Python maintenance
folks decide when it's a major blocker to further process improvement
efforts on their side of things, while I work on other
non-Python-specific aspects of Red Hat's software supply chain
management.

What I'm mainly reacting to here is Chris's apparent position that he
sees the entire effort as pointless. It's not pointless, just hard,
since it involves working towards process improvements that span
*multiple* open source communities, rather than being able to work
with just the Python community or just the conda community. Even if
automation can only cover 80-90% of legacy cases and 95% of future
cases (numbers plucked out of thin air), that's still a huge
improvement since folks generally aren't adding *new* huge-and-gnarly
C/C++/FORTRAN dependencies to the mix of things pip users need to
worry about, while Go and Rust have been paying much closer attention
to supporting distributed automated builds from the start of their
existence.

>> When it comes to things that stand in the way of fully automating the
>> PyPI -> RPM pipeline, there are still a lot of *much* lower hanging
>> fruit out there than this. However, the immediate incentives of folks
>> working on package management line up in such a way that I'm
>> reasonably confident that the lack of these capabilities is a matter
>> of economic incentives rather than insurmountable technical barriers -
>> it's a hard, tedious problem to automate, and manual repackaging into
>> a platform specific format has long been a pretty easy thing for
>> commercial open source redistributors to sell.
>
> The idea does seem much more plausible as a form of metadata hints attached
> to sdists that are used as inputs to a semi-automated-but-human-supervised
> distro package building system. What I'm skeptical about is specifically the
> idea that someday I'll be able to build a wheel, distribute it to end users,
> and then pip will do some negotiation with
> dnf/apt/pacman/chocolatey/whatever and make my wheel work everywhere -- and
> that this will be a viable alternative to conda.

Not in the general case it won't, no, but it should be entirely
feasible for distro-specific Linux wheels and manylinux1 wheels to be
as user-friendly as conda in cases where folks on the distro side are
willing to put in the effort to make the integration work properly.

It may also be feasible to define an ABI for "linuxconda" that's
broader than the manylinux1 ABI, so folks can publish conda wheels
direct to PyPI, and then pip could define a way for distros to
indicate their ABI is "conda compatible" somehow.
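(For concreteness: pip keys its compatibility check off the tags embedded in the wheel filename per PEP 425/427, so a "conda compatible" marker would presumably have to surface in the platform tag. A minimal sketch of parsing the standard filename form, ignoring optional build tags; the helper name is mine:)

```python
def wheel_tags(filename):
    """Split a wheel filename into its PEP 427 components:
    (name, version, python tag, abi tag, platform tag).
    Optional build tags are ignored for brevity."""
    stem = filename[: -len(".whl")]
    return tuple(stem.rsplit("-", 4))

# The platform tag is where a broader "linuxconda"-style ABI label
# would have to live:
print(wheel_tags("numpy-1.11.2-cp35-cp35m-manylinux1_x86_64.whl"))
# -> ('numpy', '1.11.2', 'cp35', 'cp35m', 'manylinux1_x86_64')
```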

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Paul Moore
On 2 November 2016 at 23:08, Chris Barker  wrote:
> After all, you can use pip from within a conda environment just fine :-)

In my experience (some time ago) doing so ended up with the "tangled
mess of multiple systems" you mentioned. Conda can't uninstall
something I installed with pip, and I have no obvious way of
introspecting "which installer did I use to install X?" And if I pip
uninstall something installed by conda, I break things. (Apologies if
I misrepresent, it was a while ago, but I know I ended up deciding
that I needed to keep a log of every package I installed and what I
installed it with if I wanted to keep things straight...)

Has that improved?

I'm not trying to complain, I don't have any particular expectation
that conda "has to" support using pip on a conda installation. But I
do want to make sure we're clear on what works and what doesn't.
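(For what it's worth, the "which installer did I use?" question is partly answerable from metadata: PEP 376 says installers should drop an INSTALLER file into each dist-info directory, and pip does record one. Whether any given conda version does the same is another question. A sketch using the modern importlib.metadata API; the helper name is mine:)

```python
import importlib.metadata as md

def recorded_installer(dist_name):
    """Return the contents of the dist-info INSTALLER file (PEP 376),
    or None if the distribution isn't found or didn't record one."""
    try:
        dist = md.distribution(dist_name)
    except md.PackageNotFoundError:
        return None
    text = dist.read_text("INSTALLER")
    return text.strip() if text else None

print(recorded_installer("no-such-distribution-xyz"))  # None
```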

Paul


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Paul Moore
On 3 November 2016 at 00:02, Matthew Brett  wrote:
> Anaconda has an overwhelming advantage on Windows, in that Continuum
> can bear the licensing liabilities enforced by the Intel Fortran
> compiler, and we can not.  We therefore have no license-compatible
> Fortran compiler for Python 3.5 - so we can't build scipy, and that's
> a full stop for the scipy stack.   I'm sure you know, but the only
> practical open-source option is mingw-w64, that does not work with the
> Microsoft runtime used by Python 3.5 [1].  It appears the only person
> capable of making mingw-w64 compatible with this runtime is Ray
> Donnelly, who now works for Continuum, and we haven't yet succeeded in
> finding the 15K USD or so to pay Continuum for his time.

Is this something the PSF could assist with? Either in terms of
funding work to get mingw-w64 working, or in terms of funding (or
negotiating) Intel Fortran licenses for the community?

Paul


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Nathaniel Smith
On Nov 3, 2016 1:40 AM, "Nick Coghlan"  wrote:
>
> On 3 November 2016 at 05:28, Nathaniel Smith  wrote:
> > On Nov 2, 2016 9:52 AM, "Nick Coghlan"  wrote:
> >> Tennessee Leeuwenberg started a draft PEP for that first part last
> >> year: https://github.com/pypa/interoperability-peps/pull/30/files
> >>
> >> dnf/yum, apt, brew, conda, et al all *work around* the current lack of
> >> such a system by asking humans (aka "downstream package maintainers")
> >> to supply the additional information by hand in a platform specific
> >> format.
> >
> > To be fair, though, it's not yet clear whether such a system is actually
> > possible. AFAIK no one has ever managed to reliably translate between
> > different languages that Linux distros use for describing environment
> > constraints, never mind handling the places where they're genuinely
> > irreconcilable (e.g. the way different distro openssl packages have
> > incompatible ABIs), plus other operating systems too.
>
> The main problem with mapping between Debian/RPM/conda etc in the
> general case is that the dependencies are generally expressed in terms
> of *names* rather than runtime actions, and you also encounter
> problems with different dependency management ecosystems splitting up
> (or aggregating!) different upstream components differently. This
> means you end up with a situation that's more like a lossy transcoding
> between MP3 and OGG Vorbis or vice-versa than it is a pristine
> encoding to either from a losslessly encoded FLAC or WAV file.
>
> The approach Tennessee and Robert Collins came up with (which still
> sounds sensible to me) is that instead of dependencies on particular
> external components, what we want to be able express is instead a
> range of *actions* that the software is going to take:
>
> - "I am going to dynamically load a library named "
> - "I am going to execute a subprocess for a command named "

And then it segfaults because it turns out that your library named  is
not abi compatible with my library named . Or it would have been if you
had the right version, but distros don't agree on how to express version
numbers either. (Just think about epochs.) Or we're on Windows, so it's
interesting to know that we need a library named , but what are we
supposed to do with that information exactly?

Again, I don't want to just be throwing around stop energy -- if people
want to tackle these problems then I wish them luck. But I don't think we
should be hand waving this as a basically solved problem that just needs a
bit of coding, because that also can create stop energy that blocks an
honest evaluation of alternative approaches.

> And rather than expecting folks to figure that stuff out for
> themselves, you'd use tools like auditwheel and strace to find ways to
> generate it and/or validate it.
>
> > I mean, it would be awesome if someone pulls it off. But it's possible that
> > this is like saying that it's not an inherent design limitation of my bike
> > that it's only suited for terrestrial use, because I could always strap on
> > wings and a rocket...
>
> When it comes to things that stand in the way of fully automating the
> PyPI -> RPM pipeline, there are still a lot of *much* lower hanging
> fruit out there than this. However, the immediate incentives of folks
> working on package management line up in such a way that I'm
> reasonably confident that the lack of these capabilities is a matter
> of economic incentives rather than insurmountable technical barriers -
> it's a hard, tedious problem to automate, and manual repackaging into
> a platform specific format has long been a pretty easy thing for
> commercial open source redistributors to sell.

The idea does seem much more plausible as a form of metadata hints attached
to sdists that are used as inputs to a semi-automated-but-human-supervised
distro package building system. What I'm skeptical about is specifically
the idea that someday I'll be able to build a wheel, distribute it to end
users, and then pip will do some negotiation with
dnf/apt/pacman/chocolatey/whatever and make my wheel work everywhere -- and
that this will be a viable alternative to conda.

-n


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Nick Coghlan
On 3 November 2016 at 05:28, Nathaniel Smith  wrote:
> On Nov 2, 2016 9:52 AM, "Nick Coghlan"  wrote:
>> Tennessee Leeuwenberg started a draft PEP for that first part last
>> year: https://github.com/pypa/interoperability-peps/pull/30/files
>>
>> dnf/yum, apt, brew, conda, et al all *work around* the current lack of
>> such a system by asking humans (aka "downstream package maintainers")
>> to supply the additional information by hand in a platform specific
>> format.
>
> To be fair, though, it's not yet clear whether such a system is actually
> possible. AFAIK no one has ever managed to reliably translate between
> different languages that Linux distros use for describing environment
> constraints, never mind handling the places where they're genuinely
> irreconcilable (e.g. the way different distro openssl packages have
> incompatible ABIs), plus other operating systems too.

The main problem with mapping between Debian/RPM/conda etc in the
general case is that the dependencies are generally expressed in terms
of *names* rather than runtime actions, and you also encounter
problems with different dependency management ecosystems splitting up
(or aggregating!) different upstream components differently. This
means you end up with a situation that's more like a lossy transcoding
between MP3 and OGG Vorbis or vice-versa than it is a pristine
encoding to either from a losslessly encoded FLAC or WAV file.

The approach Tennessee and Robert Collins came up with (which still
sounds sensible to me) is that instead of dependencies on particular
external components, what we want to be able to express is instead a
range of *actions* that the software is going to take:

- "I am going to dynamically load a library named "
- "I am going to execute a subprocess for a command named "

And rather than expecting folks to figure that stuff out for
themselves, you'd use tools like auditwheel and strace to find ways to
generate it and/or validate it.
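To make the "actions" idea concrete, here's a rough sketch of what a client-side check might look like; the dictionary keys and helper are purely illustrative, not taken from the draft PEP:

```python
import ctypes.util
import shutil

# Purely illustrative field names -- not from any accepted PEP.
external_requirements = {
    "load_library": ["ssl"],   # base library names, as ctypes expects
    "run_command": ["git"],
}

def unmet(requirements):
    """Return the declared actions the host environment cannot satisfy."""
    missing = []
    for lib in requirements.get("load_library", []):
        if ctypes.util.find_library(lib) is None:
            missing.append(("load_library", lib))
    for cmd in requirements.get("run_command", []):
        if shutil.which(cmd) is None:
            missing.append(("run_command", cmd))
    return missing

# A tool like pip could refuse to proceed -- or hand the list to the
# host platform's package manager -- whenever it is non-empty:
print(unmet(external_requirements))
```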

> I mean, it would be awesome if someone pulls it off. But it's possible that
> this is like saying that it's not an inherent design limitation of my bike
> that it's only suited for terrestrial use, because I could always strap on
> wings and a rocket...

When it comes to things that stand in the way of fully automating the
PyPI -> RPM pipeline, there are still a lot of *much* lower hanging
fruit out there than this. However, the immediate incentives of folks
working on package management line up in such a way that I'm
reasonably confident that the lack of these capabilities is a matter
of economic incentives rather than insurmountable technical barriers -
it's a hard, tedious problem to automate, and manual repackaging into
a platform specific format has long been a pretty easy thing for
commercial open source redistributors to sell.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-03 Thread Nick Coghlan
On 3 November 2016 at 04:39, Chris Barker  wrote:
> On Wed, Nov 2, 2016 at 9:49 AM, Nick Coghlan  wrote:
>> - you need a system for specifying environmental *constraints* (like
>> dynamically linked C libraries and command line applications you
>> invoke)
>> - you need a system for asking the host environment if it can satisfy
>> those constraints
>
> and if it can't -- you're then done -- that's actually the easy part (and
> happens already at build or run time, yes?):

When it comes to the kinds of environmental constraints most Python
applications (even scientific ones) impose, conda, dnf/yum, and apt
can meet them.

If you're consistently running (for example) the Fedora Scientific Lab
as your Linux distro of choice, and aren't worried about other
platforms, then you don't need and probably don't want conda - your
preferred binary dependency management community is the Fedora one,
not the conda one, and your preferred app isolation technology will be
Linux containers, not conda environments. Within the containers, if
you use virtualenv at all, it will be with the system site-packages
enabled.
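(That last pattern is directly supported by the standard library; a minimal sketch using the stdlib venv module:)

```python
import os
import tempfile
import venv

# Create an environment that can still see the distro-installed
# packages; with_pip=False keeps the example fast and avoids needing
# ensurepip on minimal installs.
target = os.path.join(tempfile.mkdtemp(), "demo-env")
venv.EnvBuilder(system_site_packages=True, with_pip=False).create(target)

# The setting is recorded in the environment's pyvenv.cfg:
with open(os.path.join(target, "pyvenv.cfg")) as f:
    print(f.read())
```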

> I try to build libgdal, it'll fail if I don't have that huge pile of
> dependencies installed.
>
> I try to run a wheel someone else built -- it'll also fail.
>
> It'd be better if this could be hashed out as a compilation or linking error,
> sure, but that's not going to do a whole lot except make the error messages
> nicer :-)

This is why fixing this in the general case requires the ability for
pip to ask the host platform to install additional components.

>> dnf/yum, apt, brew, conda, et al all *work around* the current lack of
>> such a system by asking humans (aka "downstream package maintainers")
>> to supply the additional information by hand in a platform specific
>> format.
>
> if conda is a "platform", then yes. but in that sense pip is a platform,
> too.

No, it's not, as it still wouldn't have the ability to install
external dependencies itself, only the ability to request them from
the host platform.

You'd still need a plugin interacting with something like conda or
PackageKit underneath to actually do the installation (you'd want
something like PackageKit rather than raw dnf/yum/apt on Linux, as
PackageKit is what powers features like PackageKit-command-not-found,
which lets you install CLI commands from system repos


> I'll beat this drum again -- if you want to extend pip to solve all (most)
> of the problems conda solves, then you are re-inventing conda.

I don't. I want publishers of Python packages to be able to express
their external dependencies in a platform independent way such that
the following processes can be almost completely and reliably
automated:

* converting PyPI packages to conda packages
* converting PyPI packages to RPMs
* converting PyPI packages to deb packages
* requesting external dependencies from the host system (whether
that's conda, a Linux distro, or something else) when installing
Python packages with pip

> If someone
> doesn't like the conda design, and has better ideas, great, but only
> re-invent the wheel if you really are going to make a better wheel.

It wouldn't be conda, because it still wouldn't give you a way to
publish Python runtimes and other external dependencies - only a way
to specify what you need when installed so that pip can ask the host
environment to provide it (and fail noisily if it says it can't
oblige) and so format converters like "conda skeleton" and "pyp2rpm"
can automatically emit the correct external dependencies.

> However, I haven't thought about it carefully -- maybe it would be possible
> to have a system that managed everything except python itself. But that
> would be difficult, and I don't see the point, except to satisfy brain dead
> IT security folks :-)

The IT security folks aren't brain dead; they have obligations to
manage organisational risk, and that means keeping some level of
control over what's being run with privileged access to network
resources (and in most organisations running "inside the firewall"
still implicitly grants some level of privileged information access).

Beyond that "because IT said so" case, we'd kinda like Fedora system
components to be running in the Fedora system Python (and ditto for
other Linux distributions), while folks running in
Heroku/OpenShift/Lambda should really be using the platform provided
runtimes so they benefit from automated security updates.

More esoterically, Python interpreter implementers and other folks
need a way to obtain Python packages that isn't contingent on the
interpreter being published in any particular format, since it may not
be published at all. Victor Stinner's "performance" benchmark suite is
an example of that, since it uses pip and virtualenv to compare the
performance of different Python interpreters across a common set of
benchmarks: https://github.com/python/performance

>> > But the different audiences