Re: [Distutils] GnuPG signatures on PyPI: why so few?

2017-03-14 Thread Glyph Lefkowitz
The big problem here, of course, is "key management"; what happens when someone 
throws their laptop in a river.

https://github.com/ahf/teneo indicates to me that it may be possible to use a 
KDF to get an Ed25519 key from a passphrase that the user remembers, 
minilock-style, largely mitigating that problem, assuming we can get users to 
remember stuff :-).
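[Editor's note: a hedged sketch of the miniLock-style derivation Glyph alludes to, using only the standard library. The salt choice and cost parameters are illustrative, and the resulting 32-byte seed would still need to be fed to a real Ed25519 implementation (e.g. PyNaCl's SigningKey), which is not shown here.]

```python
import hashlib

def seed_from_passphrase(passphrase: str, email: str) -> bytes:
    """Derive a deterministic 32-byte seed from a passphrase.

    The email address serves as a salt, miniLock-style, so two users
    with the same passphrase don't derive the same key.  The seed would
    then be used as the private key material for Ed25519 signing.
    """
    return hashlib.scrypt(
        passphrase.encode(),
        salt=email.encode(),
        n=2**14, r=8, p=1,        # illustrative cost parameters
        maxmem=2**26,             # allow ~16 MiB of scrypt working memory
        dklen=32,
    )

seed = seed_from_passphrase("correct horse battery staple", "user@example.com")
assert len(seed) == 32
# The same inputs always yield the same key -- this is what mitigates
# the thrown-in-a-river laptop problem.
assert seed == seed_from_passphrase("correct horse battery staple",
                                    "user@example.com")
```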

-g

> On Mar 14, 2017, at 7:35 AM, Daniel Holth <dho...@gmail.com> wrote:
> 
> The wheel command implemented, but never fully realized, the commands 'wheel 
> keygen' and 'wheel sign' for a bundled signature scheme (where the signature is 
> inside the signed file) inspired by JAR signing and based on Ed25519 
> primitives + JSON Web Signature / JSON Web Key. The idea was to have wheel 
> automatically generate a signing key and always generate signed wheels, since 
> it's impossible to verify signatures if there are none. Successive releases 
> from the same author would tend to use the same keys; a TOFU (trust on first 
> use) model, a-la ssh, would warn you if the key changed. The public keys 
> would be distributed over a separate https:// server (perhaps the publisher's 
> personal web page, or an application could publish a list of public keys for 
> its dependencies as-tested). Instead of checking the hash of an exact release 
> artifact, you could use a similar syntax to check against a particular public 
> key and cover yourself for future releases. Instead of key revocation, you 
> could let the only valid signing keys be the ones currently available at the 
> key URL, like oauth2 https://www.googleapis.com/oauth2/v3/certs
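[Editor's note: the TOFU model Daniel describes can be made concrete in a few lines. This is a toy sketch, not anything the wheel tool actually shipped; the pin-store file name and return values are invented for illustration.]

```python
import json
from pathlib import Path

def check_key(project: str, public_key: str, pins_path: Path) -> str:
    """Trust-on-first-use, a-la ssh's known_hosts.

    The first key seen for a project is pinned; any later mismatch is
    the "am I being targeted?" warning.  pins_path is a hypothetical
    local pin store, e.g. ~/.config/pip/trusted-keys.json.
    """
    pins = json.loads(pins_path.read_text()) if pins_path.exists() else {}
    pinned = pins.get(project)
    if pinned is None:
        pins[project] = public_key
        pins_path.write_text(json.dumps(pins))
        return "pinned"        # first use: record and trust
    if pinned == public_key:
        return "ok"            # same key as every previous release
    return "KEY CHANGED"       # warn loudly, like ssh does
```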
> 
> The goal you'd want to shoot for is not 'is this package good' but 'am I 
> being targeted'. A log of timestamp signatures for everything uploaded to 
> PyPI could be very powerful here and might even be useful without publisher 
> signatures, so that you could at least know that you are downloading the same 
> reasonably old version of package X that everyone else is using. If there was 
> a publisher signature, the timestamp server would sign the publisher's 
> signature asserting 'this signature was valid at time X'.
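[Editor's note: the timestamp log Daniel sketches is essentially an append-only transparency log. A toy hash-chain version, entirely invented and not any real PyPI design, in which each entry commits to its predecessor so the server cannot silently rewrite history for one targeted client:]

```python
import hashlib
import time

def append_entry(log: list, upload_hash: str) -> dict:
    """Append an upload record to a hash-chained log."""
    prev = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "upload_hash": upload_hash,
        "timestamp": int(time.time()),
        "prev": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        (entry["upload_hash"] + str(entry["timestamp"]) + prev).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for e in log:
        expected = hashlib.sha256(
            (e["upload_hash"] + str(e["timestamp"]) + e["prev"]).encode()
        ).hexdigest()
        if e["prev"] != prev or e["entry_hash"] != expected:
            return False
        prev = e["entry_hash"]
    return True
```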
> 
> On Tue, Mar 14, 2017 at 2:52 AM Nick Coghlan <ncogh...@gmail.com> wrote:
> On 14 March 2017 at 15:48, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote:
> 
> 2. Except, as stated - i.e. hashes without signatures - this just means we 
> all trust Github rather than PyPI :).
> 
> Yeah, HTTPS would still be a common point of compromise - that kind of simple 
> scheme would just let the repo hosting and PyPI serve as cross-checks on each 
> other, such that you had to compromise both (or the original publisher's 
> system) in order to corrupt both the published artifact *and* the publisher's 
> record of the expected artifact hash.
> 
> It would also be enough to let publishers check that the artifacts that PyPI 
> is serving match what they originally uploaded - treating it as a QA problem 
> as much as a security one.
> 
> Cheers,
> Nick.
> 
> -- 
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig



Re: [Distutils] GnuPG signatures on PyPI: why so few?

2017-03-13 Thread Glyph Lefkowitz

> On Mar 13, 2017, at 9:23 PM, Nick Coghlan wrote:
> 
> On 14 March 2017 at 03:46, Steve Dower wrote:
> Another drive-by contribution: what if twine printed the hashes for anything 
> it uploads with a message basically saying "here are the things you should 
> publish somewhere for this release so people can check the validity of your 
> packages after they download them"?
> 
> I suspect many publishers have never considered this is something they could 
> or should do. Some very basic prompting could easily lead to it becoming part 
> of the normal workflow.
> 
> Huh, and with most PyPI publishers using public version control systems, 
> their source control repo itself could even serve as "a trusted channel that 
> they control and the PyPI service can't influence". For example, the artifact 
> hashes could be written out by default to:
> 
> .released_artifacts//.sha256
> 
> And if twine sees the hash file exists before it starts the upload, it could 
> complain that the given artifact had already been published even before PyPI 
> complains about it.
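[Editor's note: a hedged sketch of the twine-side check Nick describes. The directory layout and function name are illustrative only; this was never an actual twine feature.]

```python
import hashlib
from pathlib import Path

def record_release(artifact: Path, records_dir: Path) -> str:
    """Write the artifact's SHA-256 into the publisher's own repo.

    If the hash file already exists, the artifact was already
    published, so complain even before PyPI does.
    """
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    record = records_dir / f"{artifact.name}.sha256"
    if record.exists():
        raise FileExistsError(f"{artifact.name} was already published")
    records_dir.mkdir(parents=True, exist_ok=True)
    record.write_text(digest + "\n")
    return digest
```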

1. This sounds like it could be very cool.

2. Except, as stated - i.e. hashes without signatures - this just means we all 
trust Github rather than PyPI :).

3. A simple signing scheme, like https://minilock.io but for plaintext 
signatures rather than encryption, could potentially address this problem.

4. Cool as that would be, someone would need to design that thing first, and 
that person would need to be a cryptographer.

5. Now all you need to do is design a globally addressable PKI system.  Good 
luck everybody ;-).

-glyph



Re: [Distutils] GnuPG signatures on PyPI: why so few?

2017-03-12 Thread Glyph Lefkowitz

> On Mar 12, 2017, at 5:13 AM, Ben Finney wrote:
> 
> Paul Moore writes:
> 
>> One tool that needs improvement to be easier to use for this to happen
>> is GPG itself.
> 
> No disagreement from me on that. And indeed, the GnuPG project's chronic
> under-funding eventually drew attention from the new Core Infrastructure
> Initiative to improve it faster than was historically the case.
> 
> This is thanks in large part to the amazing work of Nadia Eghbal in drawing
> attention to how critical
> free software, such as GnuPG, benefits society enormously and must
> receive reliable funding from the organisations who benefit.
> 
> If anyone reading this works for any organisation that wants to ensure
> such critical free-software infrastructure continues to be consistently
> funded and maintained, encourage regular financial contribution to the
> Core Infrastructure Initiative or similar projects.

No disrespect to GPG's maintainers, who are indeed beleaguered and underfunded, 
but the poor usability of the tool isn't entirely down to a lack of resources.

One reason we may not want to require or even encourage the use of GPG is that 
GPG is bad.  Publishing your own heartfelt screed about why you used to like 
GPG but really, we need to abandon it now, has become the national sport of the 
information security community:

https://blog.cryptographyengineering.com/2014/08/13/whats-matter-with-pgp/
https://blog.filippo.io/giving-up-on-long-term-pgp/
https://moxie.org/blog/gpg-and-me/

These posts are talking a lot about email, but many of the problems are just 
fundamental; in particular the "museum of 90s crypto" aspect is fundamentally 
un-solvable within the confines of the OpenPGP specification.  "Unusable email 
clients" in this case could be replaced with "unusable packaging tooling".

If you're retrieving packages from PyPI over TLS, they're already 
cryptographically signed at the time of retrieval, by an entity with a very 
good reputation in the community (the PSF) that you already have to trust 
anyway because that's where Python comes from.  So if we could get away from 
GPG as a specific piece of tooling here and focus on the problem a detached GPG 
signature could solve, it's "direct trust of packagers rather than the index".

The only way that Debian maintainers can supply this trust metadata right now 
is to manually populate debian/upstream/signing-key.asc.  This is a terrible 
mechanism that is full of flaws, but requiring a human being to at least look 
at the keys is at least a potential benefit because maybe they'll notice that 
it's odd that the key got rotated.  If PyPI required signatures from everybody 
then it would be very tempting to skip this manual step and just retrieve the 
signing key from the PyPI account uploading the packages, which is the exact 
same guarantee you had before via the crypto TLS gave you (i.e. the PSF via 
PyPI makes some highly ambiguous attestation as to the authenticity of the 
package, basically just "its name matches") but now you're involving a pile of 
highly-complex software with fundamentally worse crypto than OpenSSL would have 
given you.

To summarize: Even if we only cared about supplying package upstreams to Debian 
(and that is a tiny part of PyPI's mission), right now, using the existing 
tooling of uscan and lintian, the only security value that could _possibly_ be 
conveyed here would be an out-of-band conversation between the maintainer and 
upstream about what their signing keys are and how the signing process works.  
Any kind of automation would make it less likely that would happen, which means 
that providing tool support to automate this process would actually make things 
worse.

>> As a Windows user, I've "played" with it in the past, and found it
>> frustratingly difficult.
> 
> I hope many people here will find the guide published by the FSF, Email
> Self-Defense, a useful walk through how to set it up properly.
> 
> -- 
> \ “I must say that I find television very educational. The minute |
>  `\   somebody turns it on, I go to the library and read a book.” |
> _o__)—Groucho Marx |
> Ben Finney
> 



Re: [Distutils] Deprecating download counts in API?

2017-03-06 Thread Glyph Lefkowitz

> On Mar 6, 2017, at 3:34 AM, Donald Stufft <don...@stufft.io> wrote:
> 
> 
>> On Mar 6, 2017, at 1:41 AM, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote:
>> 
>> 
>>> On Mar 5, 2017, at 8:51 AM, Donald Stufft <don...@stufft.io> wrote:
>>> 
>>> Unless there is some sort of massive outcry, I plan to deprecate and 
>>> ultimately remove the download counts available in the PyPI API, [...]
>> 
>> [...] But, as someone who's been vaguely annoyed that `vanity` doesn't work 
>> for a while, I wonder: shouldn't it be easy for someone familiar with both 
>> systems to simply implement the existing "download count" API as a legacy / 
>> compatibility wrapper around BigQuery? [...]
> 
> It’s not hard at all, it’d just be [...]

Thanks for that super detailed and exhaustive explanation, I have a much better 
handle on the issues involved now.  Sorry if you'd written it before and I'd 
missed it - I can now very clearly see why you want to get rid of it!

-glyph



Re: [Distutils] Deprecating download counts in API?

2017-03-05 Thread Glyph Lefkowitz

> On Mar 5, 2017, at 8:51 AM, Donald Stufft wrote:
> 
> So, as most folks are aware PyPI has long had a cumulative download count 
> available in it’s API. This has been on and off again broken for a *long* 
> time and arguably the numbers in there have been “wrong” even when it was 
> working because we had no way to reproduce them from scratch (and thus 
> whenever a bug occurred we’d flat out lose data or add incorrect data with no 
> way to correct it).
> 
> In the meantime, we’ve gotten a much better source of querying for download 
> information available inside of Google’s BigQuery database [1][2]. Not only 
> is this able to be recreated “from scratch” so we can, if needed, fix massive 
> data bugs but it provides MUCH more information than the previous downloads 
> and a very powerful query language to go along with it.
> 
> Unless there is some sort of massive outcry, I plan to deprecate and 
> ultimately remove the download counts available in the PyPI API, instead 
> preferring people to start using the BigQuery data instead. This more or less 
> reflects the current state of things, since it has been on and off broken 
> (typically broken) for something like a year now.

I fully realize that if I really wanted this, I could do it myself, and the 
last thing you need is someone signing you up for more work :).  But, as 
someone who's been vaguely annoyed that `vanity` doesn't work for a while, I 
wonder: shouldn't it be easy for someone familiar with both systems to simply 
implement the existing "download count" API as a legacy / compatibility wrapper 
around BigQuery?  If that isn't trivial, doesn't that point to something flawed 
in the way the data is presented in BigQuery?

That said, I'm fully OK with the answer that even a tiny bit of work is too 
much, and the limited volunteer effort of PyPI should be spent elsewhere.

-glyph



Re: [Distutils] distlib and wheel metadata

2017-02-15 Thread Glyph Lefkowitz

> On Feb 15, 2017, at 11:44 AM, Donald Stufft wrote:
> 
> 
>> On Feb 15, 2017, at 1:15 PM, Daniel Holth wrote:
>> I also get a little frustrated with this kind of proposal "no pins" which I 
>> read as "annoy the publisher to try to prevent them from annoying the 
>> consumer". As a free software publisher I feel entitled to annoy the 
>> consumer, an activity I will indulge in inversely proportional to my desire 
>> for users. Who is the star?
>> 
>> It should be possible to publish applications to PyPI. Much of the packaging 
>> we have is completely web-application focused; these applications are not 
>> usually published at all.
>> 
> 
> 
> 
> I haven’t fully followed this thread, and while the recommendation is and 
> will always be to use the least strict version specifier that will work for 
> your application, I am pretty heavily -1 on mandating that people do not use 
> ``==``. I am also fairly heavily -1 on confusing the data model even more by 
> making two sets of dependencies, one that allows == and one that doesn’t. 

I hope I'm not repeating a suggestion that appears up-thread, but, if you want 
to distribute an application with pinned dependencies, you could always 
release 'foo-lib' with a lenient set of dependencies, and 'foo-app', which 
depends on 'foo-lib' but pins the transitive closure of all dependencies with 
'=='.  Your CI system could automatically release a new 'foo-app' every time 
any dependency has a new release and a build against the last release of 
'foo-app' passes.
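[Editor's note: hypothetical metadata illustrating the lib/app split Glyph describes, with the CI gate that would enforce it. The package names come from the paragraph above; the version numbers are invented.]

```python
# foo-lib: lenient specifiers, friendly to downstream consumers.
lib_requires = ["requests >= 2.0", "attrs >= 16.0"]

# foo-app: the transitive closure, pinned exactly, released by CI.
app_requires = [
    "foo-lib == 1.4.2",
    "requests == 2.13.0",
    "attrs == 16.3.0",
]

def fully_pinned(requirements) -> bool:
    """The CI gate: refuse to cut a foo-app release unless every
    dependency in the transitive closure is pinned with '=='."""
    return all("==" in req for req in requirements)

assert fully_pinned(app_requires)
assert not fully_pinned(lib_requires)
```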

-glyph



Re: [Distutils] How to specify dependencies in Python

2017-01-16 Thread Glyph Lefkowitz

> On Jan 16, 2017, at 7:59 AM, Thomas Güttler wrote:
> 
> I think requirements.txt should be the result of some kind of Continuous
> Integration run. If all tests are successful, then requirements.txt should
> be created with "pip freeze".
> 
> This means that the Continuous Integration run does not use requirements.txt
> to build its environment.
> 
> Next question: Where is the best place to store requirements.txt?
> 
> I think it should not be in the repo of a library. It should be somehow 
> outside.

I think I understand what you're trying to say here, but I think you have it 
backwards.

The best example I have made of the "right" way to do this is in this project: 
https://github.com/rackerlabs/mimic/
The project has both a setup.py and (several versions of) requirements.txt.

setup.py gives the abstract requirements for the project.  This is what should 
work, but has not necessarily been exactly tested to.

requirements.txt is the concrete requirements.  This is generated (more or 
less) by freezing an environment where one does `pip install .` to trigger the 
setup.py.

However, all continuous integration tests are run against the versions listed 
in requirements.txt (the details are here: 
https://github.com/rackerlabs/mimic/blob/5fae30d9e9a45c15f3f0a51fa436a7f25502b742/.travis/install.sh#L101-L105 
and here: 
https://github.com/rackerlabs/mimic/blob/5fae30d9e9a45c15f3f0a51fa436a7f25502b742/.travis/run.sh#L18) 
to ensure that new contributors always have a stable set of versions to test 
against, and their PR won't end up randomly having to fix some upgrade issue.

Finally, we use https://requires.io to submit PRs every 
time one of the dependencies upgrades.  On a regular basis (every 6 months or 
so), these do cause errors; when they do, development unrelated to the version 
upgrade can continue on unimpeded, and the version-compatibility fix lives 
neatly in its own PR that solves just that problem.
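[Editor's note: at its core, the requires.io-style check boils down to diffing the project's pins against the newest release of each dependency. A minimal sketch; the version data is invented, and a real bot would query PyPI for the latest releases.]

```python
def outdated(pinned: dict, latest: dict) -> list:
    """Return the names of pinned dependencies that have newer releases,
    i.e. the ones a requires.io-style bot would open a PR for."""
    return [name for name, ver in pinned.items()
            if latest.get(name, ver) != ver]

pins = {"requests": "2.13.0", "attrs": "16.3.0"}
latest = {"requests": "2.13.0", "attrs": "17.1.0"}
assert outdated(pins, latest) == ["attrs"]
```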

It's a bit subtle to understand the distinction between setup.py and 
requirements.txt (https://caremad.io/posts/2013/07/setup-vs-requirement/ is a 
good attempt, but I think stumbles over explaining some nuances that are 
obvious to dstufft and not to anyone else), but if you get them lined up 
correctly then a bunch of edge-cases work very nicely, and reasoning about 
deployment is much easier.

-glyph


Re: [Distutils] Announcement: TLSv1.2 will become mandatory in the future

2017-01-11 Thread Glyph Lefkowitz

> On Jan 11, 2017, at 7:40 PM, Nick Coghlan wrote:
> 
> On 12 January 2017 at 13:00, Donald Stufft wrote:
>> This doesn’t work well because it’s not something that pip is going to be
>> able to upgrade on Windows, because the .so will be locked when pip imports
>> it on Windows and we won’t be able to uninstall it to do an upgrade. We had
>> to disable the automatic use of pyOpenSSL for this reason too. The only C
>> stuff that pip can reliably use is the standard library.
> 
> Ugh, I'd completely forgotten about that limitation of Windows filesystems.
> 
> And the main alternatives I can think of involve copying files around
> as pip starts up, which would be unacceptably slow for a command line
> app :(


It's possible for Pip to notice that it wants to replace a particular file; you 
can "unlock" it by moving it aside.

https://serverfault.com/a/503769
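[Editor's note: a sketch of the "move it aside" trick, under the assumption (per the serverfault answer above) that Windows permits renaming a DLL that a process has loaded, even though deleting or overwriting it fails. Names here are illustrative, not pip code.]

```python
import os
from pathlib import Path

def replace_locked_file(target: Path, new_contents: bytes) -> None:
    """Upgrade a file that may be locked for execution.

    On Windows a loaded DLL/.pyd can't be deleted or overwritten, but
    it can usually be *renamed* within the same volume.  So move the
    old file aside (the running process keeps its open handle), then
    write the replacement under the original name.
    """
    aside = target.with_suffix(target.suffix + ".old")
    if target.exists():
        os.replace(target, aside)   # rename succeeds even while loaded
    target.write_bytes(new_contents)
```

A cleanup pass on the next run could then delete the `.old` files, which by that point are no longer held open.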
-glyph


Re: [Distutils] Maintaining a curated set of Python packages

2016-12-23 Thread Glyph Lefkowitz

> On Dec 22, 2016, at 11:15 PM, Nick Coghlan wrote:
> 
> On 22 December 2016 at 09:08, Chris Barker wrote:
> And there are utilities that let you run a script in a given environment:
> 
> https://github.com/pelson/conda-execute
> 
> (and maybe others)
> 
> https://pypi.python.org/pypi/pipsi (pip 
> Script Installer) creates a dedicated venv for the module and its 
> dependencies, and then adds symlinks from ~/.local/bin to any scripts 
> installed into the venv's bin directory. As Armin notes in the README, it's a 
> really nice way to handle utilities that happen to be written in Python and 
> published via PyPI, without having them impact any other aspect of your 
> system.

I just wanted to echo that this is a great tool, and it teaches really good 
habits (i.e. don't install your general-purpose python tools into 
project-specific virtual environments).

-glyph


Re: [Distutils] Can't upload sdist: "File already exists"

2016-12-22 Thread Glyph Lefkowitz

> On Dec 22, 2016, at 8:54 AM, Ian Cordasco wrote:
> 
> On Thu, Dec 22, 2016 at 9:49 AM, Brett Cannon wrote:
>> Because you already uploaded a wheel for version 0.1.2 you can't upload any
>> other files for that version, else people could accidentally upload e.g. an
>> sdist with code different from what was already uploaded in the wheel. If
>> you want an sdist then I would do another release as version 0.1.2post1 with
>> the wheel and sdist (or whatever the proper post release version format is;
>> on my phone so a pain to look up right now).
> 
> I'm pretty sure that's not correct. Twine is written to specifically
> upload the wheel first because PyPI will extract metadata from that
> and display it on the page. It won't do that if the sdist is uploaded
> first.
> 
> I'm not able to reproduce the behaviour Nick is seeing. My only guess
> is that something changed in Warehouse or the file existed, was
> deleted, and is now being re-uploaded with the same version. That's
> not something Warehouse or PyPI allows anymore (republishing with the
> same version)

Similarly, I've *definitely* done sdist releases after wheel releases of the 
same version in the past.

-glyph



Re: [Distutils] Maintaining a curated set of Python packages

2016-12-16 Thread Glyph Lefkowitz

> On Dec 16, 2016, at 5:07 AM, Nick Coghlan <ncogh...@gmail.com> wrote:
> 
> On 16 December 2016 at 20:57, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote:
> 
> Anyhow, Xcode is far from perfect - many of the places it touches the UNIX 
> pipeline are extremely sharp edges you can easily impale yourself on (and 
> don't get me started about codesigning) - but it nevertheless points at a 
> different potential direction.  For example; why expose the concept of a 
> "virtual environment" directly at all?  "New Project" could just create a 
> requirements.txt and a setup.py for you, alongside a git repo and a 
> virtualenv for that project.  Or, the UI could be geared towards setting up a 
> tox.ini rather than a virtualenv, and run everything through tox so it's in 
> an isolated environment with defined requirements.  This is a best practice 
> anyway so why not make it easier to start early?
> 
> This might all be way too much work, but I think it's important to remember 
> it's possible.
> 
> Yeah, I think we agree more than we disagree here.

Quite.  But the devil's in the details :).

> The main thing is that one of the key ways newcomer-friendly environments 
> make themselves more approachable is to *constrain choice*.

I think framing this as "constraint" is a little misleading.  In a sense it is 
a constraint, but a better way to think of it is: provide a reasonable default. 
 Right now, the "default UI" that most users get is a bare bash prompt where 
commands like 'pip install' fail with an error for no reason they can discern.  
They can still choose to inject a different tool at any point in the process 
(after all, we're talking about frontends which create existing concepts like 
virtualenvs and package installations) if they so choose; they just get a 
default that does something - anything - useful.

> XCode usability benefits from being Apple-centric. Ditto for Visual Studio 
> and MS.
> 
> Linux and Python, by contrast, were both born out of a DIY culture where 
> folks being free to choose their own tools was initially perceived solely as 
> a highly desirable feature, rather than as a potential barrier to entry for 
> newcomers.
> 
> That means there's an argument to be made that something like YHat's Rodeo 
> [1] might be a better starting point for data analytics in Python than 
> jumping straight to Jupyter Notebook, and it's also why the Mu editor [2] 
> exists as a dedicated tool for folks learning Python by way of the micro:bit 
> project.
> 
> [1] http://rodeo.yhat.com/docs/
> [2] http://codewith.mu/

Minor point - nobody should use Mu yet, at least not on the mac: 
https://github.com/mu-editor/mu/issues/206 :)

More significantly, I think any text editor will do just fine (as long as it's 
not Emacs or Vim) - I've had great success with 
http://www.barebones.com/products/textwrangler/, and even Notepad will do in 
a pinch.  There are already pretty good integration points where editors can be 
told to open specific files.  One of my frustrations with the educational 
ecosystem is the focus on the (quite difficult) problem of providing students 
with a fully integrated text editing / script running / debugging environment, 
rather than figuring out how to orchestrate and launch the quite powerful and 
sophisticated tools we already have.

>> However, the reason I brought up the Curse and Firefox GUI examples was to 
>> emphasise the problems they hide from the default rich client experience:
>> 
>> - their default focus is on managing one environment per device
> 
> In the analogous Python tool, one could replace "per device" with "per 
> project" - and perhaps have a "default project" so something useful could 
> happen even before you've decided what you're doing...
> 
> But we've immediately bumped the complexity level up in doing so, and it's a 
> level of complexity that many people initially spending all of their 
> development time on a single project may not need. 

I think we're underestimating potential programming students.  The idea of 
managing multiple documents is likely something they're familiar with from word 
processing apps.  If not, then fine - we can start them off with a default 
document.

> I thought this thread was already interminable, I look forward to reading the 
> never-ending rest of it now that you've raised the grim spectre of the PyPI 
> user-ratings feature from the dead :).
> 
> All the arguments against integrating user ratings into a service that's 
> focused on lowering b

Re: [Distutils] Maintaining a curated set of Python packages

2016-12-16 Thread Glyph Lefkowitz

> On Dec 16, 2016, at 5:51 AM, Daniel Holth wrote:
> 
> I'm also a visual studio code fan. It is the first editor I've tried that 
> feels lightweight like Vim but has the power of many plugins. That, and the 
> text rendering is excellent.
> 
> https://pypi.python.org/pypi/Stallion is a lovely GUI package manager.

There's a lot to like here - no disrespect to the Stallion team - but it's 
worth remembering this lesson from Havoc Pennington: 
https://blog.ometer.com/2016/01/26/the-dangerous-ui-team/

The major reason UI is important for this use-case - onboarding of new people 
to Python programming - is to give them discoverability on terms they're 
already familiar with.  That means that the first "UI" element has to be a 
cross-platform UI bundle.  Stallion is still a thing you have to install (and 
from what I can see, a thing you have to install into a virtualenv?)

> One possibility to consider is that virtualenv itself is a bad idea. Why 
> should the Python interpreter executable, rather than the program being run, 
> determine the set of packages that is available for import? It is confusing 
> and inconvenient to have to deal with environments at all. Yes, even if you 
> are using a helper. Maybe there can be a better way to manage dependencies 
> that is not completely disjoint from setup.py.


I can see why you'd say that, but I disagree.  I think the name "virtualenv" is 
really confusing, but the general idea of "it's the interpreter and not the 
app" is a very powerful concept because you can run a REPL (or a notebook, or a 
debugger, or a doc generator, or any other dev tool) in the same context as 
your application code, without actually loading or executing any specific thing 
from your application code.  Virtualenv also lets you easily control which 
Python version or interpreter (hello, pypy!) is being used in each context.

-glyph



Re: [Distutils] Maintaining a curated set of Python packages

2016-12-16 Thread Glyph Lefkowitz

> On Dec 15, 2016, at 9:23 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
> 
> On 16 December 2016 at 14:29, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote:
>> On Dec 15, 2016, at 8:18 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
> 
> At the beginning of your story you mentioned the GUI client - that is the 
> missing piece ;).  I've been saying for years that we need a Python.app that 
> lets you easily bootstrap all this stuff: walk you through installing C dev 
> tools if your packages need them, present a GUI search interface to finding 
> packages, present a normal "file->open" dialog for selecting a location for a 
> new virtualenv, automatically pop open a terminal, launching a Jupyter 
> notebook whose kernel is pointed at said environment...
> 
> It isn't really, as we started looking at this for IDLE, and the entire 
> current UX is just fundamentally beginner hostile:
> 
> - virtual environments are hard
> - requirements files are hard
> - knowing what packages are trustworthy and worth your time is hard
> - limiting students to a set of "known safe" packages is hard
> - components that assume command line use are hard
> 
> They're especially hard if the only way to distribute a fix is to release an 
> entire new edition of CPython rather than having IDLE talk to a (preferably 
> configurable) backend cloud service for updated instructions.
> 
> So there's a reason so many learning and even full development environments 
> are moving online - they let the service provider deal with all the hassles 
> of providing an appropriately configured environment, while the students can 
> focus on learning how to code, and the developers can focus on defining their 
> application logic.

None of what you're saying is wrong here, so I don't want to disagree.

But, I think this is just one perspective; i.e. moving to a cloud environment 
is one approach to providing a more circumscribed environment, but embracing 
endpoint sandboxing is another.  For example, learning how to use Xcode is a 
fundamentally different (and easier!) sort of experience than learning the 
traditional UNIX development pipeline, due in large part to the fact that it 
provides a unified, discoverable interface.  This is despite the fact that 
Xcode projects are actually substantially more complex than their UNIX-y 
equivalents, due to the high levels of coupling and complexity in the way that 
you have to interface with certain system services (signing with entitlements, 
bundle metadata, etc).

You still have to retrieve many resources from the cloud - simulators, 
documentation, SDKs - but the UI tells you that you need those things, and 
straightforwardly automates the process of getting them.  Everything else that 
goes into a development project is not "environment setup", but a part of the 
Xcode project itself.  Similarly, version control (a git repository) is nearly 
implicitly a part of the project. It's tricky to even create one without a VCS 
backing it any more.

Anyhow, Xcode is far from perfect - many of the places it touches the UNIX 
pipeline are extremely sharp edges you can easily impale yourself on (and don't 
get me started about codesigning) - but it nevertheless points at a different 
potential direction.  For example; why expose the concept of a "virtual 
environment" directly at all?  "New Project" could just create a 
requirements.txt and a setup.py for you, alongside a git repo and a virtualenv 
for that project.  Or, the UI could be geared towards setting up a tox.ini 
rather than a virtualenv, and run everything through tox so it's in an isolated 
environment with defined requirements.  This is a best practice anyway so why 
not make it easier to start early?

This might all be way too much work, but I think it's important to remember 
it's possible.

> However, the reason I brought up the Curse and Firefox GUI examples was to 
> emphasise the problems they hide from the default rich client experience:
> 
> - their default focus is on managing one environment per device

In the analogous Python tool, one could replace "per device" with "per project" 
- and perhaps have a "default project" so something useful could happen even 
before you've decided what you're doing...

> - they both may require environment restarts for changes to take effect

... one could just put a little blinking red light on any jupyter windows whose 
kernels need to be restarted :) ...

> - they both reference an at least somewhat moderated back end (by Curse in 
> the Curse client case, by Mozilla in the Firefox case)
> - they both incorporate popularity metrics and addon ratings into the client 
> experience

Re: [Distutils] Maintaining a curated set of Python packages

2016-12-15 Thread Glyph Lefkowitz

> On Dec 15, 2016, at 8:33 PM, Donald Stufft <don...@stufft.io> wrote:
> 
> 
>> On Dec 15, 2016, at 11:29 PM, Glyph Lefkowitz <gl...@twistedmatrix.com 
>> <mailto:gl...@twistedmatrix.com>> wrote:
>> 
>> User-curated package sets strikes me as the _lowest_ priority feature out of 
>> all of those
> 
> 
> 
> I don’t think anyone in the PyPA is planning on working on this currently. It 
> was a possible idea that was spawned from this thread. However the nature of 
> volunteer OSS is that volunteer time is not fungible and if someone feels 
> particularly enthused about this idea they are free to pursue it.

I did very consciously choose my words there: "strikes me as", not "is" ;-)

-glyph

___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Maintaining a curated set of Python packages

2016-12-15 Thread Glyph Lefkowitz

> On Dec 15, 2016, at 8:18 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
> 
> On 16 December 2016 at 07:14, Glyph Lefkowitz <gl...@twistedmatrix.com 
> <mailto:gl...@twistedmatrix.com>> wrote:
>> On Dec 15, 2016, at 6:39 AM, Donald Stufft <don...@stufft.io 
>> <mailto:don...@stufft.io>> wrote:
>> Theoretically we could allow people to not just select packages, but also 
>> package specifiers for their “curated package set”, so instead of saying 
>> “requests”, you could say “requests~=2.12” or “requests==2.12.2”. If we 
>> really wanted to get slick we could even provide a requirements.txt file 
>> format, and have people able to install the entire set by doing something 
>> like:
>> 
>> $ pip install -r 
>> https://pypi.org/sets/dstufft/my-cool-set/requirements.txt 
>> <https://pypi.org/sets/dstufft/my-cool-set/requirements.txt>
> Can't people already do this by publishing a package that just depends on 
> their whole 'package set'?
> 
> Technically, sure, but it adds a lot of overhead. The closest equivalent 
> right now would be maintaining a source control repo somewhere with various 
> requirements files in it.
> 
> However, at an ecosystem level, that doesn't have the same user experience 
> impact. The idea of building this into PyPI itself would be to *reshape the 
> learning curve of how people learn about dependency management as they're 
> introduced to Python*.
> 
> Going back to the CurseGaming example, I actually use the free version of 
> their client to manage the Warcraft addons on my gaming PC. The basic usage 
> model is really simple and (not coincidentally) very similar to the way the 
> Add-on manager works in Firefox and other GUI apps with integrated plugin 
> managers:
> 
> - you have an "Installed" tab for the addons you have installed
> - when you start the client, it checks for updates for all your installed 
> addons and the out of date ones gain an "Update" button
> - there's a separate tab where you can search all the available addons and 
> install new ones
> 
> I've never used any of Curse's other clients (like the Minecraft or Kerbal 
> Space Program ones), but I assume they operate in a similar way.
> 
> The paid tier of the Curse Client, and the account sync feature of Firefox, 
> then offer the ability to synchronize your installed addons across machines. 
> (There are also a lot of similarities between this model and the way mobile 
> app stores work)
> 
> A comparable UX for Python/PyPI/pip would focus less on the 
> library-and-application development cases (where the presence of source 
> control is assumed), and more on the ad hoc scripting and learning-to-program 
> use cases, where you're typically more interested in "--user" installations 
> and the question of which parts of the Python ecosystem are just an import 
> away than you are in reproducability and maintainability.
> 
> The ecosystem level learning curve then becomes:
> 
> - did you know you can back up your list of user installed packages to PyPI?
> - did you know you can use PyPI to sync your user installs between systems?
> - did you know you can use PyPI to categorise your user installs and share 
> them with others?
> - OK, now it's time to start learning about version control, virtual 
> environments and automated testing
> 
> It wouldn't necessarily make sense to bake this *directly* into Warehouse, 
> and the Mozilla folks responsible for Firefox Sync could no doubt offer real 
> word guidance on the infrastructure and support needed to operate a service 
> like that at scale, but the core concept of allowing package management to be 
> introduced independently of both version control and virtual environments 
> sounds potentially valuable to me.

Yeah, I think that this focus on curating packages on PyPI is reminiscent of 
the old yarn about looking for lost keys under the streetlight because it's 
dark everywhere else.  We're all familiar with web services and data formats, 
so we want to somehow have a data format or a web service be the answer to this 
problem.  But I don't believe that's where the problem is.

("this problem" being "let's make it easy and fun to a) bootstrap a common 
Python experimentation environment across multiple machines and b) _know that 
you have to do that_")

At the beginning of your story you mentioned the GUI client - that is the 
missing piece ;).  I've been saying for years that we need a Python.app that 
lets you easily bootstrap all this stuff: walk you through installing C dev 
tools if your packages need them, present a GUI search interface to finding 
packages, present a normal "file->open" dialog for selec

Re: [Distutils] Maintaining a curated set of Python packages

2016-12-15 Thread Glyph Lefkowitz

> On Dec 15, 2016, at 6:39 AM, Donald Stufft  wrote:
> 
> 
>> On Dec 15, 2016, at 9:35 AM, Steve Dower wrote:
>> 
>> The "curated package sets" on PyPI idea sounds a bit like Steam's curator 
>> lists, which I like to think of as Twitter for game reviews. You can follow 
>> a curator to see their comments on particular games, and the most popular 
>> curators have their comments appear on the actual listings too.
>> 
>> Might be interesting to see how something like that worked for PyPI, though 
>> the initial investment is pretty high. (It doesn't solve the coherent bundle 
>> problem either, just the discovery of good libraries problem.)
>> 
> 
> Theoretically we could allow people to not just select packages, but also 
> package specifiers for their “curated package set”, so instead of saying 
> “requests”, you could say “requests~=2.12” or “requests==2.12.2”. If we 
> really wanted to get slick we could even provide a requirements.txt file 
> format, and have people able to install the entire set by doing something 
> like:
> 
> $ pip install -r 
> https://pypi.org/sets/dstufft/my-cool-set/requirements.txt 
> 

Can't people already do this by publishing a package that just depends on their 
whole 'package set'?

-glyph



Re: [Distutils] Maintaining a curated set of Python packages

2016-12-14 Thread Glyph Lefkowitz

> On Dec 14, 2016, at 9:41 AM, Chris Barker  wrote:
> 
> As pointed out by others, there are external groups doing "curating". 
> conda-forge is one such project, so I'll comment from that perspective:
> 
> 
> It's difficult because the definition of compatibility is highly dependent on
> the consumer's environment.  For example, C extension compatibility will
> depend on the version of libraries available on the platform versions you care
> about. 
> 
> Indeed -- which is why Anaconda and conda-forge are built on conda rather 
> than pip -- it is designed to handle these issues.
> 
> However with the many linux effort, and some efforts to kludge C libs into 
> binary wheels, pypi may just be able to handle more of these issues -- so 
> curating may have its advantages.

I think it's unfair to describe these efforts as a "kludge"; many of the tools 
developed for manylinux1 et al. are actually pretty sophisticated, with a 
mature ecosystem approach to library bundling.  Personally I have noticed a 
_massive_ reduction in the support overhead involved in getting new users spun 
up in the present Python packaging ecosystem.  Due to the availability of 
cross-platform wheels, it's possible to do a LOT more Python development 
without a C compiler than used to be possible.

-glyph


[Distutils] continuous integration options (was Re: Travis-CI is not open source, except in fact it *is* open source)

2016-11-03 Thread Glyph Lefkowitz

> On Nov 3, 2016, at 10:17 AM, Barry Warsaw  wrote:
> 
> On Nov 03, 2016, at 12:54 AM, Nick Coghlan wrote:
> 
>> This is also an area where I'm fine with recommending freemium
>> solutions if they're the lowest barrier to entry option for new users,
>> and "Use GitHub + Travis CI" qualifies on that front.
> 
> I won't rehash the GitHub/GitLab debate, but in some of my projects (hosted on
> GH) I've had to ditch Travis because of limitations on that platform.
> Specifically, I needed to run various tests on an exact specification of
> various Ubuntu platforms, e.g. does X run on an up-to-date Ubuntu Y.Z?
> 
> I originally used Docker for this, but our projects had additional
> constraints, such as needing to bind-mount, which aren't supported on the
> Travis+Docker platform.  So we ended up ditching the Travis integration and
> hooking our test suite into the Ubuntu autopkgtest system (which is nearly
> identical to the Debian autopkgtest system but runs on Ubuntu infrastructure).
> 
> Python may not be affected by similar constraints, but it is worth keeping in
> mind.  Travis isn't a perfect technical solution for all projects, but it may
> be good enough for Python.

I think phrasing this in terms of "perfect" and "good enough" presents a highly 
misleading framing.  Examined in this fashion, of course we may reluctantly use 
the "good enough" option, but don't we want the best option?

A better way to look at it is cost vs. benefit.

How much does it cost you in terms of time and money to run and administer the 
full spectrum of "real" operating systems X.Z that you wish to support?  How 
much does it cost in terms of waiting for all that extra build infrastructure 
to run all the time?  How much additional confidence and assurance that it will 
work does that buy you, over the confidence of passing tests within a docker 
container?  Is that additional confidence worth the investment of resources?

Of course, volunteer-driven projects are not concerned directly with top-level 
management allocation of ostensibly fungible resources, and so a hard "costly" 
solution that someone is interested in and committed to is far less expensive 
than a "cheap" solution that everyone finds boring, so we have to take that 
into account as well.

As it happens, Twisted has a massive investment in existing Buildbot CI 
infrastructure _as well as_ Travis and Appveyor.  Travis and Appveyor address 
something that our CI can't, which is allowing unauthenticated builds from 
randos issuing their first pull requests.  This gives contributors much faster 
feedback which is adequate for the majority of changes.

However, many of our ancillary projects, which do not have as many 
platform-sensitive components, are built using Travis only, and that's a very 
good compromise for them.  It has allowed us to maintain a much larger and more 
diverse ecosystem with a much smaller team than we used to be able to.

In the future, we may have to move to a different CI service, but I can tell 
you for sure that 90% of the work involved in getting builds to run on 
Travis is transferable to any platform that can run a shell script.  There's a 
bit of YAML configuration we would need to replicate, and we might have to do 
some fancy dancing with Docker to get other ancillary services run on the 
backend in some other way, but I would not worry about vendor lock-in at all 
for this sort of service.  Probably, the amount of time and energy on system 
maintenance that Travis saves us in a given week is enough to balance out all 
the possible future migration work.

-glyph


Re: [Distutils] Code for inserting paths into system configuration?

2016-09-13 Thread Glyph Lefkowitz

> On Sep 13, 2016, at 12:58 PM, Matthew Brett  wrote:
> 
> Hi,
> 
> I was reflecting on the detail of putting pip user bin directories on
> the user's path, and was then thinking of making a tiny pip package
> like this:
> 
> pip install pip_user_config
> python -m pip_user_config
> 
> where the latter would cleverly insert the user bin directory into the
> user's configuration.
> 
> Does anyone know of any code to do something like that?

This is shell- and platform-specific.  This would be a great idea!  But it 
would be a challenging integration project.
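
To illustrate why: here is a deliberately platform-naive sketch of what such a
module might compute. The `pip_user_config` idea is Matthew's; the code below
is an assumption of mine, and it only produces the line a POSIX shell would
need — csh, fish, PowerShell, and the Windows registry all need different
handling, which is exactly the integration challenge.

```python
import os
import site

def user_bin_dir():
    # Where "pip install --user" puts console scripts: under the user base,
    # "Scripts" on Windows and "bin" everywhere else.
    sub = "Scripts" if os.name == "nt" else "bin"
    return os.path.join(site.getuserbase(), sub)

def profile_line(bindir):
    # The single line a POSIX-shell user would need appended to ~/.profile.
    # Other shells and platforms need entirely different mechanisms.
    return 'export PATH="{}:$PATH"'.format(bindir)

if __name__ == "__main__":
    print(profile_line(user_bin_dir()))
```

Even this much glosses over which startup file to edit, idempotency, and
login vs. non-login shells.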

-glyph




Re: [Distutils] PEP 527 - Removing Un(der)used file types/extensions on PyPI

2016-08-23 Thread Glyph Lefkowitz

> On Aug 23, 2016, at 1:08 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote:
> 
> ... especially as compared to binary eggs ...

Wheels.  Especially as compared to binary wheels.

-g


Re: [Distutils] PEP 527 - Removing Un(der)used file types/extensions on PyPI

2016-08-23 Thread Glyph Lefkowitz

> On Aug 23, 2016, at 12:03 PM, M.-A. Lemburg  wrote:
> 
> On 23.08.2016 18:46, Donald Stufft wrote:
>> Since it seemed like there was enough here for a proper PEP I went ahead and
>> write one up, which is now PEP 527. The tl;dr of it is that:
>> 
>> * Everything but sdist, bdist_wheel, and bdist_egg get deprecated.
> 
> -1 on removing bdist_wininst and bdist_msi. If PyPI is supposed
> to retain the status of the main website to go search for Python
> package downloads, it needs to be able to provide ways of hosting
> all distribution types which are supported by distutils, including
> ones which target platform configuration management system such as
> the Windows one.

I started off at maybe -0 myself for removing format support from PyPI, but 
reading this rationale for preserving these misfeatures has made me a strong +1!

> The number of downloads is really irrelevant for this kind of
> argument.

It's totally relevant.  The packaging community has limited resources, and we 
should use those resources to serve the users that actually exist, not pretend 
people.  The data available from the current system is important to 
reality-check our assumptions about which sorts of people are in fact real.

That's not to say that "downloads" is the perfect metric, but it's what we've 
got to work with, and so we have to pay attention to it unless something better 
is proposed.

> Since the PEP proposes to keep the existing uploads
> around, I also don't follow the argument of reduced maintenance.
> PyPI will still have to host and support downloading those file
> types.

When the person responsible for the vast majority of the maintenance burden 
says "this will increase / decrease the maintenance burden" then I tend to 
believe them.  Perhaps this section should be better-motivated in the PEP but I 
would be _extremely_ surprised if Donald were wrong about this particular point.

> To me, all this sounds a lot like eventually turning PyPI into a
> pip package index, which no longer serves the original intent of
> a Python package index. I think that's taking a wrong turn in the
> development of such an index.

A "pip package index" - that would be fantastic!  Right now, the most confusing 
thing about the Python ecosystem is the vast diversity of general-purpose 
installation tools; manual invocation of setup.py, easy_install, pip install, 
manual Windows installers... if PyPI can centralize all of this stuff around a 
single installer and get some good, hard and fast de-facto standards around how 
it's done, that would be much better for user experience.  Frankly it would be 
better for future installers as well, since it would be possible to calibrate 
for "just works" much better than we can currently.

But of course, it raises the question: why do you think it is a bad thing?

In particular, bdist_wininst and bdist_msi (which Twisted supported for a long 
time, and still builds, so it's not like I don't understand their benefits and 
history!) are incompatible with virtualenvs, and make development under Windows 
harder, especially as compared to binary eggs.  The presence of these builds 
confuses users and creates more problems than it solves in every interaction 
I've had with onboarding people onto Python projects in the last couple of 
years.

> IMO, we should aim to reunite separate indexes such as the
> one used for conda or the win32 index maintained by
> Christoph Golke back into PyPI, not create even more
> separation by removing platform specific formats.

Absolutely not.  This would be a disaster.  Conda is a general-purpose 
cross-language package distribution environment.  It hosts packages for C 
dependencies.  If we want to host packages for Conda, we should also be hosting 
packages for competing projects at that scope of the ecosystem, which means at 
the very least adding Homebrew, MacPorts, and maybe Chocolatey and NuGet 
support too.

In other words, Conda made several different choices about its architecture 
specifically because it wants to serve a slightly distinct audience from the 
broader PyPI, and that's fine!  We should not feel pressure to standardize and 
force everyone into a one-size-fits-all model.  Conda seems to be doing fine 
and getting plenty of adoption on its own, and not hurting PyPI's success at 
all.

There are several Steam games and Mac apps written in Python too, but I would 
hope that it's obvious why we should not be running a competitor to the Steam 
Store, Mac App Store, or Ubuntu Software Center.  If people feel that PyPI 
ought to be a general-purpose CDN for anything vaguely Python-adjacent, we 
might need a more general anti-goals meta-PEP that specifically rules out 
consideration of this sort of scope creep for PyPI's design in the future.

>> * The only allowed extension for sdist is ``.tar.gz``.
> 
> Strong -1 on this part. .tar.gz may be a good choice for Unix,
> but it definitely isn't for Windows. Even for Unix, .zip files
> 

Re: [Distutils] Deprecating little used file types/extensions on PyPI?

2016-08-15 Thread Glyph Lefkowitz

> On Aug 15, 2016, at 1:56 PM, Donald Stufft  wrote:
> 
> My main thought regarding this is that bdist_dmg != all dmg files (similarly 
> for msi and wininst). These are specific files created by distutils without a 
> standard or without the needed work to make them truly what users should be 
> using. I also think they are a different class of upload, the general use 
> case for PyPI's current file uploads are for automated installs (as evidenced 
> by the simple API and mirroring).

I guess I'm just a little confused - are we talking about just hiding them from 
some parts of the API or disallowing their upload entirely?

If we're talking about the literal output of bdist_dmg and bdist_rpm I probably 
agree that they're almost useless.

> If we want to enable dmg, msi, etc uploads that are not the bdist_* variety 
> for automated tooling, then we could do something like "related files" people 
> can upload which don't get mirrored for pip and which don't show up in the 
> repo API. Since they will be classified differently we could also do better 
> work around the ux of discovering them and separate them from the 50 wheels 
> that some projects end up uploading and make them more obviously visible. I 
> don't know if pypi as a distribution for _end user_ (vs developer/power user) 
> software makes sense or not, but if it does we should support it better than 
> accidentally via distutils. 

My concern here is that if someone has a hacky workaround working with the 
current system, it might be better to add support for the new thing ("related 
files") before killing the old thing.  If the plan is to do them both anyway, 
wouldn't it be better to do it in that order?  As a community (and I mean the 
broader open source community here, not distutils-sig; if anything distutils is 
way better about this) we have an unfortunate habit of killing 
potentially-useful-but-sub-optimal stuff, wandering off for half a decade, and 
then only adding the better thing after the fact.

-glyph


Re: [Distutils] Deprecating little used file types/extensions on PyPI?

2016-08-15 Thread Glyph Lefkowitz

> On Aug 15, 2016, at 12:09 PM, Donald Stufft  wrote:
> 
> Next we have bdist_dmg, bdist_msi, and bdist_winist. I'm lumping these 
> together
> because they're all OS specific installers for OSs that don't already have 
> some
> sort of repository. This lack of a repository format for them means that 
> random
> downloads are already the norm for people using these systems. For these, I
> think the usage numbers for bdist_dmg and bdist_msi easily suggest that they
> are not very important to continue to support, but it would be weird to
> eliminate them without also eliminating bdist_wininst. The wininst format has
> the most usage out of all of the seldom used formats, however when we look at
> the downloads for the last 30 days only 0.42% of the downloads were for 
> wininst
> files, so I think that it's pretty safe to remove them. I think in the past,
> these were more important due to the lack of real binary packages on Windows,
> but I think in 2016 we have wheel, and Wheel is a better solution. If however
> we want to keep them, then I think it's pretty safe to remove them from our
> /simple/ pages and any future repository pages and modify our mirroring 
> tooling
> to stop mirroring them. IOW, to treat them as some sort of "additional upload"
> rather than release uploads to PyPI.

I think you have a better handle on this than I do, but I did just want to 
provide a little input.  I think we should be cautious in the way these are 
disabled, because it's already hard enough to produce user-facing software with 
Python. We don't want to throw up yet another roadblock to creating a 
layperson-friendly download.  If they're not used now, it's not necessarily an 
indication of whether we _want_ them to be used in the future.

Also, since these formats aren't readily 'pip install'-able, and are really 
only suitable for applications anyway, perhaps the download numbers are skewed? 
Automated systems doing 1000s of builds per day are likely to massively 
inflate download counts even if they're used by a far smaller number of users.

Anyway, like I said: not an expert here, just wanted to make sure the "python 
for desktop software (even if it's not used much right now)" angle is 
considered as well.

-glyph



Re: [Distutils] license for setuptools

2016-08-12 Thread Glyph Lefkowitz
Yes, it's called "the apache license" - and I'm pretty sure setuptools rejected 
requiring that for good reason :-).

CLAs are a much bigger issue and are generally quite controversial.  Having one 
is nice if you need to do some kind of legal action but they are also a big 
impediment to contribution, and tracking them is a nightmare.  Not to mention 
the fact that there's almost no point unless you can go back and get one from 
everyone who has ever contributed.

-g

> On Aug 12, 2016, at 11:52 AM, Nick Timkovich <prometheus...@gmail.com> wrote:
> 
> Might be wandering away from simply tacking on a license, but some related 
> legalese: Is there some canned contributor license agreement (CLA) that could 
> also be applied to make clear that contributors must license their 
> contributions to the project(s) in kind? Python has its own 
> (https://www.python.org/psf/contrib/ <https://www.python.org/psf/contrib/>) 
> but that seems very formal. Is there a simpler one that just makes it 
> implicit that 'submitting a patch/PR = you agree to license that code to the 
> project for redistribution" or whatever.
> 
> On Fri, Aug 12, 2016 at 12:34 PM, Glyph Lefkowitz <gl...@twistedmatrix.com 
> <mailto:gl...@twistedmatrix.com>> wrote:
> Thanks for submitting this PR, Ian! :)
> 
> > On Aug 12, 2016, at 6:24 AM, Ian Cordasco <graffatcolmin...@gmail.com 
> > <mailto:graffatcolmin...@gmail.com>> wrote:
> >
> > Thanks for that Geoffrey. There's a PR to add it as Jason decided to
> > accept it. I think we can all relax now. Okay?
> >
> > On Fri, Aug 12, 2016 at 8:00 AM, Geoffrey Spear <geoffsp...@gmail.com 
> > <mailto:geoffsp...@gmail.com>> wrote:
> >> It seems a bit silly to claim that a license that contains the sentence 
> >> "The
> >> above copyright notice and this permission notice shall be included in all
> >> copies or substantial portions of the Software" shouldn't have a copy
> >> included with the software it applies to since it literally says you need
> >> it.
> >>
> >> (IANAL)
> >>
> >> On Thu, Aug 11, 2016 at 10:45 PM, Eric Dill <thediz...@gmail.com 
> >> <mailto:thediz...@gmail.com>> wrote:
> >>>
> >>> Hi Claude,
> >>>
> >>> There was a recent discussion of the lack of a license file in setuptools
> >>> here: https://github.com/pypa/setuptools/issues/612 
> >>> <https://github.com/pypa/setuptools/issues/612> and another important
> >>> discussion here: https://github.com/pypa/setuptools/issues/132 
> >>> <https://github.com/pypa/setuptools/issues/132>.  This is
> >>> probably the most relevant quotable bit from those two issues, from Jason
> >>> Coombs (the primary developer of setuptools):
> >>>
> >>> "The [License :: OSI Approved :: MIT License] classifier isn't a
> >>> suggestion but a declaration and follows the distutils guide for 
> >>> declaring a
> >>> license. I consider inclusion of a license file redundant and error 
> >>> prone."
> >>>
> >>> Hopefully this resolves your issue about not having an explicit license
> >>> file.
> >>>
> >>> Best,
> >>>
> >>> Eric
> >>>
> >>> On Thu, Aug 11, 2016 at 10:20 PM Marinier, Claude
> >>> <claude.marin...@lmco.com <mailto:claude.marin...@lmco.com>> wrote:
> >>>>
> >>>> Good afternoon (well it’s afternoon here in the EDT zone),
> >>>>
> >>>>
> >>>>
> >>>> I am in the process of requesting the installation of Python 3 with
> >>>> matplotlib. The company needs to approve licenses but I cannot find the
> >>>> license for setuptools. The description here says it uses an MIT license but
> >>>> I cannot confirm this. On github, the file setup.py says the same thing.
> >>>>
> >>>>
> >>>>
> >>>> License :: OSI Approved :: MIT License
> >>>>
> >>>>
> >>>>
> >>>> Could the maintainer please add an explicit license file.
> >>>>
> >>>>
> >>>>
> >>>> I would really like to use matplotlib but will not get approval unless we
> >>>> can confirm the license.
> >>>>
> >>>>
> >>>>
> >>>> Thank you.
> >>>>
> >>>>

Re: [Distutils] license for setuptools

2016-08-12 Thread Glyph Lefkowitz
Thanks for submitting this PR, Ian! :)

> On Aug 12, 2016, at 6:24 AM, Ian Cordasco  wrote:
> 
> Thanks for that Geoffrey. There's a PR to add it as Jason decided to
> accept it. I think we can all relax now. Okay?
> 
> On Fri, Aug 12, 2016 at 8:00 AM, Geoffrey Spear  wrote:
>> It seems a bit silly to claim that a license that contains the sentence "The
>> above copyright notice and this permission notice shall be included in all
>> copies or substantial portions of the Software" shouldn't have a copy
>> included with the software it applies to since it literally says you need
>> it.
>> 
>> (IANAL)
>> 
>> On Thu, Aug 11, 2016 at 10:45 PM, Eric Dill  wrote:
>>> 
>>> Hi Claude,
>>> 
>>> There was a recent discussion of the lack of a license file in setuptools
>>> here: https://github.com/pypa/setuptools/issues/612 and another important
>>> discussion here: https://github.com/pypa/setuptools/issues/132.  This is
>>> probably the most relevant quotable bit from those two issues, from Jason
>>> Coombs (the primary developer of setuptools):
>>> 
>>> "The [License :: OSI Approved :: MIT License] classifier isn't a
>>> suggestion but a declaration and follows the distutils guide for declaring a
>>> license. I consider inclusion of a license file redundant and error prone."
>>> 
>>> Hopefully this resolves your issue about not having an explicit license
>>> file.
>>> 
>>> Best,
>>> 
>>> Eric
>>> 
>>> On Thu, Aug 11, 2016 at 10:20 PM Marinier, Claude
>>>  wrote:
 
>>>> Good afternoon (well it’s afternoon here in the EDT zone),
>>>> 
>>>> I am in the process of requesting the installation of Python 3 with
>>>> matplotlib. The company needs to approve licenses but I cannot find the
>>>> license for setuptools. The description here says it uses an MIT license but
>>>> I cannot confirm this. On github, the file setup.py says the same thing.
>>>> 
>>>> License :: OSI Approved :: MIT License
>>>> 
>>>> Could the maintainer please add an explicit license file.
>>>> 
>>>> I would really like to use matplotlib but will not get approval unless we
>>>> can confirm the license.
>>>> 
>>>> Thank you.
>>>> 
>>>> --
>>>> Claude Marinier


Re: [Distutils] Contributing money to package authors/maintainers via PyPI

2016-07-23 Thread Glyph Lefkowitz

> On Jul 23, 2016, at 12:11 PM, Donald Stufft  wrote:
> 
> 
>> On Jul 23, 2016, at 2:40 PM, Nicholas Chammas wrote:
>> 
>> I know a more concrete proposal would have to address a lot of details (e.g. 
>> like how to split contributions across multiple maintainers), and perhaps 
>> there is no way to find the resources to build or maintain such a thing in 
>> the first place. But just for now I’d like to separate essence of idea from 
>> the practical concerns of implementing it.
> 
> 
> I’m mulling over the idea in my head, but one other thing we’d need to figure 
> out is the *legality* of doing this and if it’s something the PSF is willing 
> to do at all.

This was my initial reaction as well.

It would be awesome if it worked!  It would potentially go a long way to 
addressing the now much-discussed problem of funding open source infrastructure.
But it is also a legal and financial mine-field.  Even if a lawyer says it's 
OK and it's possible to comply with the law, you still generate a lot of work 
for an accountant to actually do the complying.

https://gratipay.com  is a good, recent example of an 
apparently simple idea like this running into severe legal consequences and 
nearly imploding as a result.  Another potential problem that may not be 
initially obvious; due to the somewhat ambiguous nature of the funding 
structure, they also became a popular payment processor for nazis and white 
supremacists, since it's hard to get paid for producing nazi propaganda on 
other platforms.  Of course, PyPI might always be used as an update platform 
for malware or a command-and-control (C&C) point too, so it's not like there are no risks in 
operating it as it currently stands, but money always has the potential to make 
things worse.

I don't want to be doom-and-gloom here, in fact I would _very_ much like to see 
this project happen.  I just think that in order to do it in a way which 
doesn't backfire horribly, it has to be responsibly staffed at the outset so 
that problems like these, that we know about, can be addressed up front, and 
the inevitable ones that don't seem obvious at the moment have a clearly 
responsible person to go fix them as they arise, in a timely way.

-glyph


Re: [Distutils] Outdated packages on pypi

2016-07-13 Thread Glyph Lefkowitz

> On Jul 13, 2016, at 1:54 PM, Steve Dower  wrote:
> 
> Possibly such user-contributed content would be valuable anyway


https://alternativeto.net but for PyPI? :)

-glyph


Re: [Distutils] Outdated packages on pypi

2016-07-13 Thread Glyph Lefkowitz

> On Jul 12, 2016, at 9:54 PM, Donald Stufft <don...@stufft.io> wrote:
> 
> 
>> On Jul 12, 2016, at 4:45 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote:
>> 
>> My feeling is that there should be a "dead man's switch" sort of mechanism 
>> for this.  Require manual intervention from at least one package owner at 
>> least once a year.  I believe if you dig around in the archives there's been 
>> quite a bit of discussion around messaging to package owners and that sort 
>> of thing - and the main sticking point is that someone needs to volunteer to 
>> do the work on Warehouse.  Are you that person? :)
> 
> 
> I suspect any change like this will require some sort of PEP or something 
> similar to it. It’s something that I think is going to hard to get just right 
> (if it’s something we want to do at all).
> 
> Software can be “finished” without needing more releases,

"The software isn't finished until the last user is dead." :-)

> and sometimes projects stop getting updates until the maintainer has more 
> time (or a new maintainer comes along).

Yes; the whole point here is to have some way for people to know that a new 
maintainer is needed.

> An example is setuptools which had no releases between Oct 2009 and Jun 2013.

Arguably setuptools _was_ badly broken though, and if it had been obvious 
earlier on that it was in a bad situation perhaps we'd be further along by now 
:-).

> Another nice example is ``wincertstore`` which has had two releases one in 
> 2013 and one in 2014 and is one of the most downloaded projects on PyPI. It 
> doesn’t need any updates because it’s just a wrapper around Windows APIs via 
> ctypes.

Except it does need testing against new versions of Python.  No Python :: 3.5 
classifier on it, for example!  And right at the top of its description, a 
security fix.

The point of such a switch is to be able to push it and respond; not to tell 
the maintainer "you have to do a new release!" but rather to prompt the 
maintainer to explicitly acknowledge "the reason I have not done a new release 
is not that I haven't been paying attention; I am alive, I'm paying attention, 
and we don't need any maintenance, someone is still watching".

> Another thing we need to be careful about is what do we do once said dead 
> man’s switch triggers? We can’t just release the package to allow anyone to 
> register it, that’s just pointing a security shaped footgun at the foot of 
> every person using that project? It doesn’t make sense to block new uploads 
> for that project since there’s no point to disallowing new uploads. Flagging 
> it to allow someone to “take over” (possibly with some sort of review) has 
> some of the security shaped footguns as well as a problem with deciding who 
> to trust with a name or not.

The primary thing would be to have a banner on the page and a warning from `pip 
install´.  Those of us close to the heart of the Python community already have 
various ways of reading the tea leaves to know that things are likely to be 
unmaintained or bitrotting; the main purpose of such a feature would be to have 
an automated way for people who don't personally know all the prominent package 
authors and see them at conferences and meetups all the time to get this 
information.  For example: nobody should be using PIL, they should be using 
pillow.  Yet there's no way for a new user to figure this out by just looking 
at https://pypi.io/project/PIL/ :).

I think that the adjudication process for stealing a name from an existing 
owner is something that still bears discussion, but separately.  Whatever that 
process is, you'd have to go through it fully after a package becomes thusly 
"abandoned", and for the reasons you cite, it absolutely should not be 
automated.  Perhaps it shouldn't even be the way to deal with it - maybe the 
most you should be able to do in this case is to expand the "this is 
unmaintained" warning with a pointer to a different replacement name.

-glyph



Re: [Distutils] Outdated packages on pypi

2016-07-12 Thread Glyph Lefkowitz

> On Jul 12, 2016, at 4:55 AM, Dima Tisnek  wrote:
> 
> Hi all,
> 
> Is anyone working on pruning old packages from pypi?
> 
> I found something last updated in 2014, which, looking at the source
> appears half-done.
> Github link doesn't work any longer, no description, etc.
> 
> I managed to find author's email address out of band, and he responded
> that he can't remember the password, yada yada.
> 
> I wonder if some basic automation is possible here -- check if url's
> are reachable and if existing package satisfies basic requirements,
> failing that mark it as "possibly out of date"

My feeling is that there should be a "dead man's switch" sort of mechanism for 
this.  Require manual intervention from at least one package owner at least 
once a year.  I believe if you dig around in the archives there's been quite a 
bit of discussion around messaging to package owners and that sort of thing - 
and the main sticking point is that someone needs to volunteer to do the work 
on Warehouse.  Are you that person? :)

-glyph



Re: [Distutils] Don't Use `sudo pip install´ (was Re: [final version?] PEP 513…)

2016-02-17 Thread Glyph Lefkowitz

> On Feb 17, 2016, at 7:44 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote:
> 
> I think making pip do this might be mixing layers too much.  Frankly `pipsi´ 
> does almost the right thing; if `sudo pipsi´ put script symlinks in 
> /usr/local/bin/ instead of ~/.local/bin/ and put venvs into 
> /usr/local/lib/pipsi// instead of ~/.local/venvs/, it would 
> be almost exactly the right thing.

I filed an issue here - https://github.com/mitsuhiko/pipsi/issues/69 - so we 
can continue discussion of this specific solution in a more appropriate forum.

-glyph



Re: [Distutils] Don't Use `sudo pip install´ (was Re: [final version?] PEP 513…)

2016-02-17 Thread Glyph Lefkowitz

> On Feb 17, 2016, at 7:17 PM, Nathaniel Smith <n...@pobox.com> wrote:
> 
> On Tue, Feb 16, 2016 at 6:12 PM, Glyph Lefkowitz
> <gl...@twistedmatrix.com> wrote:
>> Here, I'll make it for you.  Assuming virtualenv is installed:
>> 
>> python -m virtualenv /usr/lib/supervisord/environment
> /usr/lib/supervisord/environment/bin/pip install supervisor
>> ln -vs /usr/lib/supervisord/environment/bin/supervisor* /usr/bin
>> 
>> 
>> More tooling around this idiom would of course be nifty, but this is really
>> all it takes.
> 
> Maybe
> 
>  pip install --self-contained=/opt/supervisord supervisord
> 
> should do something like this?


I think making pip do this might be mixing layers too much.  Frankly `pipsi´ 
does almost the right thing; if `sudo pipsi´ put script symlinks in 
/usr/local/bin/ instead of ~/.local/bin/ and put venvs into 
/usr/local/lib/pipsi// instead of ~/.local/venvs/, it would 
be almost exactly the right thing.

(I previously said "/usr/bin/" but the whole point of /usr/local is that it's a 
place you can write to which _is_ on the default path but _isn't_ managed by 
the system package manager.)

Whatever the invocation is though, Noah has a point about system administrator 
expectations.  If you always have to manually specify a path for 
--self-contained, then there's going to be no standard place to go look to see 
what applications are installed via this mechanism, and it makes diagnostics 
harder.  There could of course be an option to put the install somewhere else, 
but if it's going to be pip, then it should be:

  pip install --self-contained supervisor

by default, and

  pip install --self-contained --self-contained-environment=/opt/supervisor 
supervisor

in the case where the user wants a non-standard location.
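
The layout proposed above can be sketched as a dry run.  To be clear, the
`--self-contained´ flag and the /opt/<package> default are this thread's
proposal only, not a real pip option; this just enumerates the steps such an
install would perform, without executing anything:

```python
def self_contained_plan(package, env=None, bindir="/usr/local/bin"):
    """Return the shell steps a hypothetical --self-contained install
    would perform: dedicated venv, install into it, expose the scripts."""
    env = env or "/opt/%s" % package  # default location proposed above
    return [
        "python -m virtualenv %s" % env,           # one venv per application
        "%s/bin/pip install %s" % (env, package),  # install into that venv
        "ln -vs %s/bin/%s* %s" % (env, package, bindir),  # symlink scripts
    ]

if __name__ == "__main__":
    for step in self_contained_plan("supervisor"):
        print(step)
```

Printing the plan rather than running it keeps the sketch inspectable; a real
implementation would need root and error handling around each step.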

-glyph



Re: [Distutils] Don't Use `sudo pip install´ (was Re: [final version?] PEP 513…)

2016-02-17 Thread Glyph Lefkowitz

> On Feb 17, 2016, at 7:08 PM, Noah Kantrowitz  wrote:
> 
> Saying it's a good idea and we should move towards it is fine and I agree, 
> but that isn't grounds to remove the ability to do things the current way. So 
> you can warn people off from global installs but until there is at least some 
> community awareness of this other way to do things we can't remove support 
> entirely. It's going to be a very slow deprecation process.
> 
> --Noah

Sure.  We are also in agreement here, basically: in saying that pip should 
"error", I was describing an ideal state that would take years of education to 
get to (and I'm not sure that Donald even agrees we should go that way ;-)).  
But we can't even begin to move in that direction a little unless the better 
alternative is clearly explained and out in the zeitgeist for some time first.

-glyph


Re: [Distutils] Don't Use `sudo pip install´ (was Re: [final version?] PEP 513…)

2016-02-17 Thread Glyph Lefkowitz

> On Feb 16, 2016, at 6:22 PM, Noah Kantrowitz  wrote:
> 
> I'm not concerned with if the module is importable specifically, but I am 
> concerned with where the files will live overall. When building generic ops 
> tooling, being unsurprising is almost always the right move and I would be 
> surprised if supervisor installed to a custom virtualenv.

Would you not be surprised if installing supervisord upgraded e.g. `six´ or 
`setuptools´ and broke apport? or lsb_release? or dnf?  This type of version 
conflict is of course rare, but it is always possible, and every 'pip install' 
takes the system from a supported / supportable state to "???" depending on the 
dependencies of every other tool which may have been installed (and pip doesn't 
have a constraint solver for its dependencies, so you don't even know if the 
system gets formally broken by two explicitly conflicting requirements).
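
A toy illustration of the "no constraint solver" point (this is not pip's
actual code, just the effective behavior: later pins silently overwrite
earlier ones, where a resolver would at least have flagged the clash):

```python
def naive_install(installed, requirements):
    """Last pin wins -- the way sequential 'pip install' runs behaved."""
    for name, version in requirements:
        installed[name] = version  # silently replaces whatever was there
    return installed

def find_conflicts(requirements):
    """What even a minimal resolver would detect: one name, two pins."""
    pins, conflicts = {}, []
    for name, version in requirements:
        if name in pins and pins[name] != version:
            conflicts.append((name, pins[name], version))
        pins.setdefault(name, version)
    return conflicts

reqs = [("six", "1.9.0"), ("setuptools", "20.0"), ("six", "1.4.1")]
print(naive_install({}, reqs))  # six ends up at 1.4.1, with no warning
print(find_conflicts(reqs))     # [('six', '1.9.0', '1.4.1')]
```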

> It's a weird side effect of Python not having a great solution for 
> "application packaging" I guess? We've got standards for web-ish 
> applications, but not much for system services. I'm not saying I think 
> creating an isolated "global-ish" environment would be worse, I'm saying 
> nothing does that right now and I personally don't want to be the first 
> because that bring a lot of pain with it :-)

What makes the web-ish stuff "standard" is just that a lot of people are doing 
it.  So a lot of people should start doing this, and then it will also be a 
standard :-).

I can tell you that on systems where I've done this sort of thing, it has 
surprised no-one that I'm aware of and I have not had any issues to speak of.  
So I think you might be overestimating the risk.

In fairness though I've never written a clear explanation anywhere of why this 
is desirable; it strikes me as obvious but it is clearly not the present 
best-practice, which means somebody needs to do some thought-leadering.  So I 
owe you a blog post.

-glyph



Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-02-17 Thread Glyph Lefkowitz

> On Feb 17, 2016, at 2:17 AM, David Cournapeau  wrote:
> 
> Sure, but the people I tend to see using `sudo pip` are not the kind of users 
> where that distinction is very useful.

It's hair-splitting but probably correct hair-splitting in terms of how it's 
detected.

> If there were a different simple, reliable way to avoid installing in system 
> python, I would be happy to change my own recommendations during sprints, 
> talks, etc...

Are you recommending 'sudo pip' right now?  Why not 'sudo virtualenv', then?

-g


Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-02-17 Thread Glyph Lefkowitz

> On Feb 17, 2016, at 12:55 AM, Antoine Pitrou <solip...@pitrou.net> wrote:
> 
> On Tue, 16 Feb 2016 16:10:34 -0800
> Glyph Lefkowitz <gl...@twistedmatrix.com> wrote:
>> 
>> I am 100% on board with telling people "don't use `sudo pip install´".  
>> Frankly I have been telling the pip developers to just break this for years 
>> (see https://pip2014.com, which, much to my chagrin, still exists); `sudo 
>> pip install´ should just exit immediately with an error; to the extent that 
>> packagers need it, the only invocation that should work should be `sudo pip 
>> install --i-am-building-an-operating-system´.
> 
> [...] The problem is not the use of "sudo" or the
> invocation under root, it's to install into a system Python. So the
> solution should be to flag the system Python as not suitable for using
> pip into, not to forbid using pip under root.


I didn't mean to suggest that sudo /path/to/venv/bin/pip install should fail, 
so we are in agreement here.  The exact details of how pip detects the 
suitability of a given environment are up for discussion, it's just that the 
default behavior of `sudo pip install´ (install into package-manager-managed 
system prefix) is a bad idea.  Perhaps certain venvs should set this flag as 
well, to indicate that pip should not mess with it any more either.
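
Mechanically, the "flag the environment" idea could be as simple as a marker
file in the interpreter prefix that pip checks before touching anything.  The
marker name below is invented for illustration; no such pip mechanism existed
when this was written:

```python
import os
import sys

MARKER = "DO-NOT-PIP-HERE"  # hypothetical marker dropped by the distro or venv

def environment_is_flagged(prefix=None):
    """True if the given (or current) prefix opted out of pip installs."""
    prefix = prefix or sys.prefix
    return os.path.exists(os.path.join(prefix, MARKER))

def check_before_install(prefix=None):
    """What pip would run before mutating an environment."""
    if environment_is_flagged(prefix):
        raise SystemExit("refusing to install into a flagged environment")
```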

-glyph


Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-02-17 Thread Glyph Lefkowitz

> On Feb 16, 2016, at 11:55 PM, Robert T. McGibbon <rmcgi...@gmail.com> wrote:
> 
> On Tue, Feb 16, 2016 at 4:10 PM, Glyph Lefkowitz <gl...@twistedmatrix.com 
> <mailto:gl...@twistedmatrix.com>> wrote:
> This whole section is about a tool to automatically identify possible issues 
> with these wheels - https://www.python.org/dev/peps/pep-0513/#auditwheel 
> <https://www.python.org/dev/peps/pep-0513/#auditwheel> - so I don't even 
> really know what you mean by this comment.  I thought that the existence of 
> this tool is one of the best parts of this PEP!
> 
> Oh cool! Thanks, Glyph! I had a lot of fun writing it.

It really cuts to the heart of the problem with python builds: you can 
accidentally depend on some aspect of the platform in a way which requires 
nuanced understanding of the native build toolchain to understand.  For what 
it's worth this is definitely a problem on OS X and Windows as well 
(accidentally depending on homebrew or chocolatey for example); any chance 
you'll be extending it to deal with 'dumpbin' and 'otool' as well as 'ldd'?
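
The per-platform dispatch is small: each OS has a standard tool that lists a
binary's dynamic-library dependencies.  The command names are real tools, but
this mapping is illustrative, not auditwheel's actual code:

```python
import sys

def dependency_listing_command(path, platform=None):
    """Command line that lists a binary's dynamic-library dependencies."""
    platform = platform or sys.platform
    if platform.startswith("linux"):
        return ["ldd", path]                     # glibc's dependency walker
    if platform == "darwin":
        return ["otool", "-L", path]             # OS X
    if platform == "win32":
        return ["dumpbin", "/DEPENDENTS", path]  # MSVC toolchain
    raise ValueError("no known inspection tool for %r" % platform)
```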



Re: [Distutils] Don't Use `sudo pip install´ (was Re: [final version?] PEP 513…)

2016-02-16 Thread Glyph Lefkowitz

> On Feb 16, 2016, at 5:00 PM, Noah Kantrowitz <n...@coderanger.net 
> <mailto:n...@coderanger.net>> wrote:
> 
> 
>> On Feb 16, 2016, at 4:46 PM, Glyph Lefkowitz <gl...@twistedmatrix.com 
>> <mailto:gl...@twistedmatrix.com>> wrote:
>> 
>>> 
>>> On Feb 16, 2016, at 4:33 PM, Noah Kantrowitz <n...@coderanger.net 
>>> <mailto:n...@coderanger.net>> wrote:
>>> 
>>> 
>>>> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz <gl...@twistedmatrix.com 
>>>> <mailto:gl...@twistedmatrix.com>> wrote:
>>>> 
>>>> 
>>>>> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz <n...@coderanger.net 
>>>>> <mailto:n...@coderanger.net>> wrote:
>>>>> 
>>>>> As someone that handles the tooling side, I don't care how it works as 
>>>>> long as there is an override for tooling a la Chef/Puppet. For stuff like 
>>>>> Supervisord, it is usually the least broken path to install the code 
>>>>> globally.
>>>> 
>>>> I don't know if this is the right venue for this discussion, but I do 
>>>> think it would be super valuable to hash this out for good.
>>>> 
>>>> Why does supervisord need to be installed in the global Python environment?
>>> 
>>> Where else would it go? I wouldn't want to assume virtualenv is installed 
>>> unless absolutely needed.
>> 
>> This I can understand, but: in this case, it is needed ;).
>> 
>>> Virtualenv is a project-centric view of the world which breaks down for 
>>> stuff that is actually global like system command line tools.
>> 
>> [citation needed].  In what way does it "break down"?  
>> https://pypi.python.org/pypi/pipsi <https://pypi.python.org/pypi/pipsi> is a 
>> nice proof-of-concept that dedicated virtualenvs are a better model for 
>> tooling than a big-ball-of-mud integrated system environment that may have 
>> multiple conflicting requirements.  Unfortunately it doesn't directly 
>> address this use-case because it assumes that it is doing per-user 
>> installations and not a system-global one, but the same principle holds: 
>> what version of `ipaddress´ that supervisord wants to use is irrelevant to 
>> the tools that came with your operating system, and similarly irrelevant to 
>> your application.
>> 
>> To be clear, what I'm proposing here is not "shove supervisord into a venv 
>> with the rest of your application", but rather, "each application should 
>> have its own venv".  In supervisord's case, "python" is an implementation 
>> detail, and therefore the public interface is /usr/bin/supervisord and 
>> /usr/bin/supervisorctl, not 'import supervisord'; those should just be 
>> symlinks into /usr/lib/supervisord/environment/bin/
> 
> That isn't a thing that exists currently, I would have to make it myself and 
> I wouldn't expect users to assume that is how I made it work. Given the 
> various flavors of user expectations and standards that exist for deploying 
> Python code, global does the least harm right now.

I don't think users who install supervisord necessarily think they ought to be 
able to import supervisord.  If they do expect that, they should probably 
revise their expectations.

Here, I'll make it for you.  Assuming virtualenv is installed:

python -m virtualenv /usr/lib/supervisord/environment
/usr/lib/supervisord/environment/bin/pip install supervisor
ln -vs /usr/lib/supervisord/environment/bin/supervisor* /usr/bin

More tooling around this idiom would of course be nifty, but this is really all 
it takes.

>> In fact, given that it is security-sensitive code that runs as root, it is 
>> extra important to isolate supervisord from your system environment for 
>> defense in depth, so that, for example, if, due to a bug, it can be coerced 
>> into importing an arbitrarily-named module, it has a restricted set and 
>> won't just load anything off the system.
> 
> Sounds cute but the threats that actually helps with seem really minor. If a 
> user can install stuff as root, they can probably do whatever they want 
> thanks to .pth files and other terrible things.

Once malicious code is installed in a root-executable location it's game over; 
I didn't mean to imply otherwise.  I'm saying that since supervisord might 
potentially import anything in its site-packages dir, this is just less code 
for you to worry about that might have security bugs in it.

One specific example of how you might do this is by specifying a 
protocol-defined codec; if you ever do .decode(user_data) on 

Re: [Distutils] Don't Use `sudo pip install´ (was Re: [final version?] PEP 513…)

2016-02-16 Thread Glyph Lefkowitz

> On Feb 16, 2016, at 4:33 PM, Noah Kantrowitz <n...@coderanger.net 
> <mailto:n...@coderanger.net>> wrote:
> 
> 
>> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz <gl...@twistedmatrix.com 
>> <mailto:gl...@twistedmatrix.com>> wrote:
>> 
>> 
>>> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz <n...@coderanger.net 
>>> <mailto:n...@coderanger.net>> wrote:
>>> 
>>> As someone that handles the tooling side, I don't care how it works as long 
>>> as there is an override for tooling a la Chef/Puppet. For stuff like 
>>> Supervisord, it is usually the least broken path to install the code 
>>> globally.
>> 
>> I don't know if this is the right venue for this discussion, but I do think 
>> it would be super valuable to hash this out for good.
>> 
>> Why does supervisord need to be installed in the global Python environment?
> 
> Where else would it go? I wouldn't want to assume virtualenv is installed 
> unless absolutely needed.

This I can understand, but: in this case, it is needed ;).

> Virtualenv is a project-centric view of the world which breaks down for stuff 
> that is actually global like system command line tools.

[citation needed].  In what way does it "break down"?  
https://pypi.python.org/pypi/pipsi <https://pypi.python.org/pypi/pipsi> is a 
nice proof-of-concept that dedicated virtualenvs are a better model for tooling 
than a big-ball-of-mud integrated system environment that may have multiple 
conflicting requirements.  Unfortunately it doesn't directly address this 
use-case because it assumes that it is doing per-user installations and not a 
system-global one, but the same principle holds: what version of `ipaddress´ 
that supervisord wants to use is irrelevant to the tools that came with your 
operating system, and similarly irrelevant to your application.

To be clear, what I'm proposing here is not "shove supervisord into a venv with 
the rest of your application", but rather, "each application should have its 
own venv".  In supervisord's case, "python" is an implementation detail, and 
therefore the public interface is /usr/bin/supervisord and 
/usr/bin/supervisorctl, not 'import supervisord'; those should just be symlinks 
into /usr/lib/supervisord/environment/bin/

In fact, given that it is security-sensitive code that runs as root, it is 
extra important to isolate supervisord from your system environment for defense 
in depth, so that, for example, if, due to a bug, it can be coerced into 
importing an arbitrarily-named module, it has a restricted set and won't just 
load anything off the system.

> Compare with `npm install -g grunt-cli`.

npm is different because npm doesn't create top-level script binaries unless 
you pass the -g option, so you need to install global tooling stuff with -g.  
virtualenv is different (and, at least in this case, better).

-glyph


[Distutils] Don't Use `sudo pip install´ (was Re: [final version?] PEP 513…)

2016-02-16 Thread Glyph Lefkowitz

> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz  wrote:
> 
> As someone that handles the tooling side, I don't care how it works as long 
> as there is an override for tooling a la Chef/Puppet. For stuff like 
> Supervisord, it is usually the least broken path to install the code globally.


I don't know if this is the right venue for this discussion, but I do think it 
would be super valuable to hash this out for good.

Why does supervisord need to be installed in the global Python environment?

-glyph


Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-02-16 Thread Glyph Lefkowitz

> On Feb 16, 2016, at 3:05 AM, Matthias Klose <d...@ubuntu.com> wrote:
> 
> On 02.02.2016 02:35, Glyph Lefkowitz wrote:
>> 
>>> On Feb 1, 2016, at 3:37 PM, Matthias Klose <d...@ubuntu.com> wrote:
>>> 
>>> On 30.01.2016 00:29, Nathaniel Smith wrote:
>>>> Hi all,
>>>> 
>>>> I think this is ready for pronouncement now -- thanks to everyone for
>>>> all their feedback over the last few weeks!
>>> 
>>> I don't think so.  I am biased because I'm the maintainer for Python in 
>>> Debian/Ubuntu.  So I would like to have some feedback from maintainers of 
>>> Python in other Linux distributions (Nick, no, you're not one of these).
>> 
>> Possibly, but it would be very helpful for such maintainers to limit their 
>> critique to "in what scenarios will this fail for users" and not have the 
>> whole peanut gallery chiming in with "well on _my_ platform we would have 
>> done it _this_ way".
>> 
>> I respect what you've done for Debian and Ubuntu, Matthias, and I use the 
>> heck out of that work, but honestly this whole message just comes across as 
>> sour grapes that someone didn't pick a super-old Debian instead of a 
>> super-old Red Hat.  I don't think it's promoting any progress.
> 
> You may call this sour grapes, but in the light of people installing
> these wheels to replace/upgrade system installed eggs, it becomes an issue. 
> It's fine to use such wheels in a virtual environment, however people tell 
> users to use these wheels to replace system installed packages, distros will 
> have a problem identifying issues.

I am 100% on board with telling people "don't use `sudo pip install´".  Frankly 
I have been telling the pip developers to just break this for years (see 
https://pip2014.com, which, much to my chagrin, still exists); `sudo pip 
install´ should just exit immediately with an error; to the extent that 
packagers need it, the only invocation that should work should be `sudo pip 
install --i-am-building-an-operating-system´.

But `sudo pip install´ of arbitrary packages is now, and always has been, 
basically broken; this PEP doesn't change that in any way I can see.  
Specifically, since there are tools in place to ensure that the extension 
modules will load just fine, this won't be any more broken than `sudo pip 
install´-ing random C extension modules is today.  If anything it will be more 
reliable, since a lot of people already build and ship wheels to their 
production linux environments, and don't always understand the nuances around 
having to build on a system with a native package set that exactly matches 
their target environment.

> There is a substantial amount of extensions built using C++; I didn't check 
> how many of these in c++0x/c++11 mode.  Until GCC 5, the c++11 ABI wasn't 
> stable, and upstream never promised forward compatibility, something that 
> even distros have to care about (usually by rebuilding packages before a 
> release).  So if you want a lowest common denominator, then maybe limit or 
> recommend the use of c++98 only.

Isn't this irrelevant as long as your entry-points are all 'extern "C"' and 
your C++ code statically links libstdc++?  The build toolchain in question 
doesn't include a dynamic libstdc++, does it?  If so, that's a pretty concrete 
problem with this proposal and it should be addressed.

>>> The proposal just takes some environment and declares that as a standard.  
>>> So everybody wanting to supply these wheels basically has to use this 
>>> environment.
>> 
>> There's already been lots of discussion about how this environment is a 
>> lowest common denominator.  Many other similar environments could _also_ be 
>> lowest common denominator.
> 
> sure, but then please call it what it is. centos5 or somelinux1.

The point of the wheel tag is that its output should work on many linuxes.  A 
'centos5' tag would imply that you can use arbitrary dynamic libraries (and 
perhaps even arbitrary packages!) from centos5, of which there are many; you 
can't, because auditwheel will yell at you.  It's the build environment plus 
restrictions around what you can depend on from that environment.
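
The restriction is essentially a whitelist check.  A toy version of what
auditwheel enforces (the whitelist sample is a subset of PEP 513's list; the
dependency lists are made up for illustration):

```python
# A few entries from PEP 513's permitted external libraries.
MANYLINUX1_WHITELIST = {
    "libc.so.6", "libm.so.6", "libdl.so.2", "libpthread.so.0",
    "libgcc_s.so.1", "libstdc++.so.6", "librt.so.1",
}

def disallowed_dependencies(needed):
    """Return the DT_NEEDED entries that would fail the audit."""
    return sorted(set(needed) - MANYLINUX1_WHITELIST)

# A wheel linking only whitelisted libraries passes...
print(disallowed_dependencies(["libc.so.6", "libm.so.6"]))     # []
# ...but one that grabbed an arbitrary centos5 library does not.
print(disallowed_dependencies(["libc.so.6", "libssl.so.10"]))  # ['libssl.so.10']
```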

>> In the future, more specific and featureful distro tags sound like a good 
>> idea.  But could we please stop making the default position on distutils-sig 
>> "this doesn't cater to my one specific environment in the most optimal 
>> possible way, so let's give up on progress entirely"?  This is a good 
>> proposal that addresses environment portability and gives Python a 
>> substantially better build-artifact story than it currently has, in the 
>> environment most desperately need

Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-02-01 Thread Glyph Lefkowitz

> On Feb 1, 2016, at 3:37 PM, Matthias Klose  wrote:
> 
> On 30.01.2016 00:29, Nathaniel Smith wrote:
>> Hi all,
>> 
>> I think this is ready for pronouncement now -- thanks to everyone for
>> all their feedback over the last few weeks!
> 
> I don't think so.  I am biased because I'm the maintainer for Python in 
> Debian/Ubuntu.  So I would like to have some feedback from maintainers of 
> Python in other Linux distributions (Nick, no, you're not one of these).

Possibly, but it would be very helpful for such maintainers to limit their 
critique to "in what scenarios will this fail for users" and not have the whole 
peanut gallery chiming in with "well on _my_ platform we would have done it 
_this_ way".

I respect what you've done for Debian and Ubuntu, Matthias, and I use the heck 
out of that work, but honestly this whole message just comes across as sour 
grapes that someone didn't pick a super-old Debian instead of a super-old Red 
Hat.  I don't think it's promoting any progress.

> The proposal just takes some environment and declares that as a standard.  So 
> everybody wanting to supply these wheels basically has to use this 
> environment.

There's already been lots of discussion about how this environment is a lowest 
common denominator.  Many other similar environments could _also_ be lowest 
common denominator.

> Without giving any details, without giving any advise how to produce such 
> wheels in other environments. Without giving any hints how such wheels may be 
> broken with newer environments.

They won't be.  That's the whole point.

> Without mentioning this is amd64/i386 only.

Wheels already have an architecture tag, separate from the platform tag, so 
this being "amd64/i386" is irrelevant.

> There might be more. Pretty please be specific about your environment.  Have 
> a look how the LSB specifies requirements on the runtime environment ... and 
> then ask yourself why the lsb doesn't have any real value.


In the future, more specific and featureful distro tags sound like a good idea. 
 But could we please stop making the default position on distutils-sig "this 
doesn't cater to my one specific environment in the most optimal possible way, 
so let's give up on progress entirely"?  This is a good proposal that addresses 
environment portability and gives Python a substantially better build-artifact 
story than it currently has, in the environment most desperately needing one 
(server-side linux).  Could it be better?  Of course.  It could be lots better. 
 There are lots of use-cases for dynamically linked wheels and fancy new 
platform library features in newer linuxes.  But that can all come later, and 
none of it needs to have an impact on this specific proposal, right now.

-glyph

___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] draft PEP: manylinux1

2016-01-30 Thread Glyph Lefkowitz

> On Jan 29, 2016, at 8:44 PM, Donald Stufft  wrote:
> 
> 
>> On Jan 29, 2016, at 2:35 PM, Nate Coraor wrote:
>> 
>> Is there a distro-specific wheel tagging PEP in development somewhere that I 
>> missed? If not, I will get the ball rolling on it.
> 
> 
> I think this a great idea, and I think it actually pairs nicely with the 
> manylinux proposal. It should be pretty easy to cover the vast bulk of users 
> with a handful of platform specific wheels (1-3ish) and then a manylinux 
> wheel to cover the rest. It would let a project use newer 
> toolchains/libraries in the common case, but still fall back to the older 
> ones on more unusual platforms.

Yes!  This would be fantastic.  There are some libraries you actually want to 
dynamically link against from the platform, especially if you're writing 
desktop apps.  On OS X you can do this because /System/*/ is more or less fixed 
when you are >= some version; on linux less so but it would be very nice to 
build artifacts for specific versions when possible.

-glyph



Re: [Distutils] draft PEP: manylinux1

2016-01-24 Thread Glyph Lefkowitz

> On Jan 24, 2016, at 2:56 PM, Donald Stufft  wrote:
> 
> ... the excellent Barry Warsaw ...

Distro work is often thankless (especially here) so I just wanted to echo this: 
glory to the FLUFL, may his reign be <> short.

-glyph


Re: [Distutils] heads-up on a plot to bring linux wheels to pypi

2016-01-14 Thread Glyph Lefkowitz

> On Jan 14, 2016, at 2:12 AM, Nick Coghlan  wrote:
> 
> In terms of non-scientific packages, the main group I'd suggest
> getting in touch with is pycryptography, as we'll probably want to
> baseline a more recent version of OpenSSL than the one in CentOS 5.11.
> 

1. It's "cryptography" (or "PyCA's Cryptography", or "cryptography.io"), not 
"pycryptography".  This is an important distinction because "PyCrypto" is the 
crappy, old thing you should not use, and "cryptography" is the new hotness.

2. On every other platform where they distribute wheels, the Cryptography 
developers have statically linked both OpenSSL and libffi; I was tangentially 
involved in the effort to do this on OS X, and in the process of debugging 
that, I learned that the Linux toolchain is fairly similar.  I would imagine 
that they'd want to statically link OpenSSL the same way, for the same reasons, 
on Linux.  Cryptography does regular releases to bundle in newer OpenSSLs, 
generally more often than the underlying platforms do.  (Since Cryptography 
does not directly export OpenSSL's API as such, it's easier to do multi-version 
compatibility with Python than with C.)

In fact I am going to go out on a limb and say that I think Cryptography could 
be ready to go with this in a few weeks if PyPI just started allowing Linux 
wheels.  We've discussed using ancient-CentOS containers for building static 
binaries the same way PyPy does.  The potentially tricky part is just building 
the static new versions of OpenSSL from scratch on old systems, I think...

-glyph


Re: [Distutils] Building and installing packages on a (unix) system lacking network accesss

2015-12-23 Thread Glyph Lefkowitz

> On Dec 22, 2015, at 5:51 AM, KM  wrote:
> 
> Greetings distutils-sig,
> 
> I have a project with an autogenerated structure - that is, I ran a "helper" 
> application which creates a directory structure and a setup.py for me. I am 
> trying to build this package in a virtualenv on an isolated machine, 
> necessitating the step of downloading all the prerequisite packages and making 
> them available. I have done this and I can successfully install most of the 
> prerequisites by passing "--no-index" and "-f " to pip, making my full 
> command-line:
> 
> pip install -e . --no-index -f 
> 
> This works up until a certain point, where pip (or something launched by pip) 
> tries to download from the internet. 

[snip]

This is in fact the entire point of saying --no-index; "don't download stuff 
from the internet".  So it's working as designed.  You need to make sure all 
your dependencies are available before you run that `pip install´ command.  
Have you?

> I am hoping for some feedback or information that will allow me to install my 
> application with all of its dependencies downloaded from a locally-available 
> url or directory. Does anyone have any suggestions for me?

You're doing this in vaguely the right way with `pip install -f --no-index´.  
As Ben Finney already pointed out, these are not distutils options so the whole 
thing with --global-option is a complete red herring, give up on that :-).

In fact, as stated, your example works just fine:

$ mkdir offline_stuff
$ pip wheel --wheel-dir offline_stuff pytz
Collecting pytz
  Saved ./offline_stuff/pytz-2015.7-py2.py3-none-any.whl
Skipping pytz, due to already being wheel.
$ mktmpenv ; cd -
New python executable in tmp-a3e6ab08e84f351d/bin/python2.7
Also creating executable in tmp-a3e6ab08e84f351d/bin/python
Installing setuptools, pip, wheel...done.
virtualenvwrapper.user_scripts creating 
/Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/predeactivate
virtualenvwrapper.user_scripts creating 
/Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/postdeactivate
virtualenvwrapper.user_scripts creating 
/Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/preactivate
virtualenvwrapper.user_scripts creating 
/Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/postactivate
virtualenvwrapper.user_scripts creating 
/Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/get_env_details
This is a temporary environment. It will be deleted when you run 'deactivate'.
/Users/glyph
(tmp-a3e6ab08e84f351d)$ pip install -f offline_stuff --no-index pytz
Ignoring indexes: https://pypi.python.org/simple
Collecting pytz
Installing collected packages: pytz
Successfully installed pytz-2015.7
(tmp-a3e6ab08e84f351d)$ python -c 'import pytz; print(pytz)'

(tmp-a3e6ab08e84f351d)$ 

So if you could please provide a sample `setup.py´ or sample 
`requirements.txt´, along with exactly what unexpected output you got, it would 
be helpful in understanding what went wrong.  This should be a minimal example 
demonstrating just the problem you're seeing, not your whole project; see 
http://sscce.org for a longer explanation.

-glyph



Re: [Distutils] workflow recommendations to update requirements.txt

2015-12-15 Thread Glyph Lefkowitz

> On Dec 15, 2015, at 8:56 PM, Chris Jerdonek  wrote:
> 
> Hi,
> 
> I have a development workflow question I was wondering if people on
> this list had a recommended solution for.
> 
> Say you're working on a web application that you deploy using a
> requirements.txt file.  And say you have a set of "abstract
> dependencies" that your application depends on.
> 
> What are some convenient ways of storing your abstract dependencies in
> source control and periodically generating an updated requirements
> file from that information (e.g. when your dependencies come out with
> new versions)?
> 
> The main idea that occurs to me is making a setup.py for the purposes
> of representing your abstract dependencies (e.g. using
> "install_requires," etc), creating a new virtualenv, running "pip
> install .", and then "pip freeze."
> 
> One problem with this approach is that the pip freeze output includes
> an entry for the setup.py application itself, when the output should
> only include the _dependencies_ of the application and not the
> application itself.  It also seems clunky to me to create a virtualenv
> and install dependencies only for the purposes of computing
> dependencies.
> 
> Thanks for any help or suggestions.

This is what I'm doing right now (occasionally manually curating the output of 
`pip freeze´) but I have heard good things about 
https://github.com/nvie/pip-tools/  and I 
intend to investigate it.  As I understand it, pip-compile is the tool you want.

-glyph


Re: [Distutils] The future of invoking pip

2015-11-05 Thread Glyph Lefkowitz
> On Nov 5, 2015, at 6:36 PM, Donald Stufft  wrote:
> 
> I’m not really sure what the right answer is for something where the 
> particular version of Python you’re invoking it with (and that you’re 
> actually using Python) is important. python -m makes a lot of sense in that 
> area because it eliminates the need to have each tool create their own logic 
> for determining what python they are operating on but I think most people are 
> not going to be very familiar with the idea and I don’t know how well they’d 
> warm to it. The other option (that I can come up with) is baking that logic 
> into each tool (as pip and virtualenv do now) either via naming scheme or a 
> flag.

Rather than trying to figure out what the "right" way for users to invoke `pip´ 
to begin with is, why not just have Pip start providing more information about 
potential problems when you invoke it?

If you invoke 'pip[X.Y]' and it matches 'python -m pip' in your current 
virtualenv, don't say anything; similarly if you invoke 'python -m pip' and 
'which pip' matches.  But if there's a mismatch, pip can print information in 
both cases.  This would go a long way to alleviating the confusion that occurs 
when users back themselves into one of these corners, and would alert users to 
potential issues before they become a problem; right now you have to be a 
dogged investigative journalist to figure out why pip is doing the wrong thing 
in some cases.
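The check being proposed could be sketched roughly as follows. This is a hypothetical illustration in modern Python 3 (the function and warning names are invented, not pip's actual implementation); the heuristic assumes that an interpreter and "its" pip script live in the same bin/ directory, which holds for virtualenvs and standard installs:

```python
# Sketch of the proposed mismatch warning: does the `pip` on $PATH
# belong to the interpreter currently running, i.e. would `pip` and
# `python -m pip` operate on the same environment?
import os
import shutil
import sys

def pip_matches_interpreter():
    """Return True when the `pip` script on PATH is bound to this interpreter.

    Heuristic: both should resolve (after following symlinks) to the
    same bin/ directory.
    """
    pip_script = shutil.which("pip")
    if pip_script is None:
        return True  # no `pip` on PATH, so nothing to disagree with
    pip_dir = os.path.dirname(os.path.realpath(pip_script))
    py_dir = os.path.dirname(os.path.realpath(sys.executable))
    return pip_dir == py_dir

def warn_if_mismatched():
    # The kind of message pip could print on every invocation, per the
    # suggestion above.
    if not pip_matches_interpreter():
        sys.stderr.write(
            "warning: the `pip` on PATH is not this interpreter's pip; "
            "consider running `%s -m pip` instead\n" % sys.executable)
```

The same comparison run from both entry points covers both directions of the mismatch: the `pip` script can warn that it differs from `python -m pip`, and vice versa.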

-glyph



Re: [Distutils] The future of invoking pip

2015-11-05 Thread Glyph Lefkowitz

> On Nov 5, 2015, at 7:34 PM, Robert Collins  wrote:
> 
> Why not? (Ignore the language I wrote my pseudocode in, an actual
> thing would be a Python script that install would turn into a .exe)


It was not clear, in the example that you gave, that I was supposed to ignore 
the example that you gave ;).

-g


Re: [Distutils] PyPi not allowing duplicate filenames

2015-10-14 Thread Glyph Lefkowitz

> On Oct 14, 2015, at 1:04 PM, Donald Stufft  wrote:
> 
> Generally within 60-120 seconds it’s available in mirrors (most of them 
> resync once a minute). If anyone has downloaded it then they will have pretty 
> much permanently cached the package, first in the download cache and then 
> again in the wheel cache (assuming it wasn’t a wheel already, and they had 
> that enabled). The original package was NumPy. It had 30,982 downloads in the 
> last day, so we can average that out to 1290 downloads an hour or 21 
> downloads a minute. If it takes you two minutes to notice it and delete it, 
> then there are ~40 people who already have the original version cached and 
> who will not notice the updated version.

While I don't think PyPI should allow modification of uploaded packages 
necessarily, I do think that Pip's caching is (A) too aggressive and (B) too 
opaque.  For example:

https://github.com/pypa/pip/issues/3127
https://github.com/pypa/pip/issues/3034
https://github.com/pypa/pip/issues/3025
https://github.com/pypa/pip/issues/2908
https://github.com/pypa/pip/issues/2882

etc, etc.

I know there are some platform-specific directories I can delete, 
but almost once a day I want a command like `pip cache show´ which can show me 
what is cached and when/where it was built, `pip cache clear´ or `pip cache 
remove twisted´ or `pip cache remove cffi>=1.0´.  I don't want to have to care 
if it's in the HTTP cache or the wheel cache, or how it got there; I also don't 
want to have to bust a ~200 megabyte cache that saves me hours a day just 
because there's one bad entry in there.
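A first cut of the hypothetical `pip cache show´ could be as simple as walking the wheel cache and reporting what is there. This sketch assumes the default Linux cache location (macOS and Windows use different platform-specific directories) and is not a real pip command:

```python
# Sketch of a hypothetical `pip cache show`: list cached wheels with
# their size and modification time so a user can see what would be hit.
import os
import time

def show_wheel_cache(cache_dir=os.path.expanduser("~/.cache/pip/wheels")):
    """Return (filename, size_in_bytes, mtime_string) for each cached wheel.

    Returns an empty list if the cache directory does not exist.
    """
    entries = []
    for dirpath, _dirnames, filenames in os.walk(cache_dir):
        for name in filenames:
            if name.endswith(".whl"):
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                entries.append((name, st.st_size, time.ctime(st.st_mtime)))
    return sorted(entries)
```

A `pip cache remove twisted´ would then just be a filename match over the same walk plus an `os.remove` -- which is exactly why it is frustrating that users have to reverse-engineer the cache layout themselves.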

-glyph



Re: [Distutils] PyPi not allowing duplicate filenames

2015-10-14 Thread Glyph Lefkowitz

> On Oct 14, 2015, at 10:36 AM, Nathaniel Smith  wrote:
> 
> My feeling is that pypi is correct to disallow the mutation of releases once 
> they become public, but that the ergonomics around this could probably be 
> improved :-). A more general solution that might be nice to have Someday 
> would be if you could upload a release in one step, and then get a private 
> link to poke at what was uploaded and make sure it looks correct, before 
> making it public in a second step.
> 

IMHO it would be really neat if 'pip' could authenticate to PyPI, and the 
authentication could affect package visibility, so that you could end-to-end 
test 'pip install foo' and ensure that you get what you expect before pushing 
the go-live switch.

-glyph


Re: [Distutils] Where should I put tests when packaging python modules?

2015-10-06 Thread Glyph Lefkowitz

> On Oct 6, 2015, at 12:07 AM, Thomas Güttler  
> wrote:
> 
> Hi,
> 
> Where should I put tests when packaging python modules?
> 
> I want a "cowpath", an "obvious way"
> 
> Dear experts, please decide:
> 
> inside the module like this answer:
> 
> http://stackoverflow.com/questions/5341006/where-should-i-put-tests-when-packaging-python-modules
> 
> XOR
> 
> outside the module like this:
> 
> https://github.com/pypa/sampleproject/tree/master/tests
> 
> I think there is no need to hurry. Let's wait one week,
> and then check which one is preferred.
> 
> Regards,
>  Thomas Güttler

Inside the package.

If you put your tests outside your package, then you can't install the tests 
for two packages simultaneously, because everyone's tests are just in the 
top-level package "tests".  This tends to infest the whole package, since then 
tests import things from each other using 'from tests import ...'.  This is 
recommended by the hitchhiker's guide, and seconded by 
.
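The layout being recommended looks like this (the package name "mypackage" is hypothetical). The sketch below builds the layout on the fly so the import behaviour can actually be demonstrated: the key point is that tests import from the package itself, so two installed projects' test suites can never collide in a top-level "tests" package.

```python
# Build a throwaway demo of the in-package test layout:
#
#   mypackage/
#       __init__.py
#       core.py
#       tests/
#           __init__.py
#           test_core.py
#
import os
import tempfile
import textwrap

def make_demo_package():
    """Create the layout above in a temp directory; return its root."""
    root = tempfile.mkdtemp()
    pkg = os.path.join(root, "mypackage")
    tests = os.path.join(pkg, "tests")
    os.makedirs(tests)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    open(os.path.join(tests, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "core.py"), "w") as f:
        f.write("def greet(name):\n    return 'hello ' + name\n")
    with open(os.path.join(tests, "test_core.py"), "w") as f:
        # Tests import from the *package*, never `from tests import ...`,
        # so they remain unambiguous once installed.
        f.write(textwrap.dedent("""\
            from mypackage import core

            def run():
                assert core.greet('world') == 'hello world'
        """))
    return root
```

Once such a package is installed, `mypackage.tests` ships along with it, so users can run the test suite against the exact code they installed.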

-glyph



Re: [Distutils] Unable to login to PyPi

2015-09-29 Thread Glyph Lefkowitz
If nobody has time to fix the code for OpenID, is there at least an admin with 
privileges to associate passwords with people's accounts?

-glyph

> On Sep 29, 2015, at 03:09, Mike O'Driscoll <mikeodrisc...@gmail.com> wrote:
> 
> I'm completely locked out.
> 
> If there is/was a way to add pure username password I'll do that when I can 
> get back in.
> 
> 
> --
> Mike O'Driscoll
> 
> On Mon, Sep 28, 2015 at 10:57 PM, Glyph Lefkowitz <gl...@twistedmatrix.com 
> <mailto:gl...@twistedmatrix.com>> wrote:
> Mike, do you have another way to authenticate to the site, or are you locked 
> out until OpenID works again?
> 
> -g
> 
>> On Sep 28, 2015, at 14:02, Richard Jones <rich...@python.org 
>> <mailto:rich...@python.org>> wrote:
>> 
>> Hi Mike,
>> 
>> Sorry, but this is a known problem that no-one has time to investigate or 
>> fix.
>> 
>> 
>>  Richard
>> 
>> On 29 September 2015 at 01:31, Mike O'Driscoll <mikeodrisc...@gmail.com 
>> <mailto:mikeodrisc...@gmail.com>> wrote:
>> Hello,
>> 
>> I have been unable to login to the PyPi site for nearly a month now via 
>> OpenID (launchpad).
>> 
>> I have the following ticket open but have gotten no traction:
>> https://bitbucket.org/pypa/pypi/issues/333/unable-to-login-via-openid 
>> <https://bitbucket.org/pypa/pypi/issues/333/unable-to-login-via-openid>
>> 
>> Any support would be appreciated.
>> 
>> Thanks,
>> 
>> --
>> Mike O'Driscoll
>> 
>> 
> 
> 



Re: [Distutils] Unable to login to PyPi

2015-09-28 Thread Glyph Lefkowitz
Mike, do you have another way to authenticate to the site, or are you locked 
out until OpenID works again?

-g

> On Sep 28, 2015, at 14:02, Richard Jones  wrote:
> 
> Hi Mike,
> 
> Sorry, but this is a known problem that no-one has time to investigate or fix.
> 
> 
>  Richard
> 
> On 29 September 2015 at 01:31, Mike O'Driscoll wrote:
> Hello,
> 
> I have been unable to login to the PyPi site for nearly a month now via 
> OpenID (launchpad).
> 
> I have the following ticket open but have gotten no traction:
> https://bitbucket.org/pypa/pypi/issues/333/unable-to-login-via-openid 
> 
> 
> Any support would be appreciated.
> 
> Thanks,
> 
> --
> Mike O'Driscoll
> 
> 



Re: [Distutils] Twisted plugin system and Python packaging

2011-09-25 Thread Glyph Lefkowitz

On Sep 25, 2011, at 1:44 AM, P.J. Eby wrote:

 You may want to note that Twisted isn't the only distribution that ships 
 partial packages that aren't namespace packages.  The logilabs packages are 
 (or at any rate were) distributed this way, with a top-level package in one 
 distribution, and some modules (without an __init__) were shipped in other 
 distributions.  (I could be wrong, but I think the mx.* stuff works this way, 
 too.)

Thanks for that data point!  It is really good to know, I'll use it to bludgeon 
some people with next time this issue comes up (on this list, or on some Debian 
bug, or whatever) ;-).

 (These are all packages that are distutils-only, though, or have their own 
 extensions, IIRC.)

You mean extensions to distutils, or C extensions?  I think Twisted qualifies 
for both of those, and we definitely want to remain installable without 
setuptools, although we do use it for one or two things if it's available now.

-glyph


Re: [Distutils] Twisted plugin system and Python packaging

2011-09-24 Thread Glyph Lefkowitz

On Sep 24, 2011, at 11:01 PM, P.J. Eby wrote:

 This really sounds like a bug in pip; top_level.txt is not a replacement for 
 a proper uninstall log.

Thanks.  It appears that Pip's maintainers are (the teensiest bit grudgingly) 
coming around to this perspective as well: 
https://github.com/pypa/pip/issues/355#issuecomment-2182951

Hopefully we can get a solution that pretty much works everywhere, soon.

-glyph


Re: [Distutils] Name the software! Package quality tester.

2011-03-09 Thread Glyph Lefkowitz
On Mar 9, 2011, at 7:06 AM, Jim Fulton j...@zope.com wrote:

 They certainly aren't projects in any sense that most people would
 understand.  They are arguably products of projects. Of course, the
 term product has negative connotations for some folks.

Not for everybody!  As far as I am concerned, the whole Python packaging 
ecosystem (not to mention every Twisted-based plugin mechanism and extension 
point) is merely trying to re-ascend to the lofty heights once occupied by the 
beautiful completeness and usability of the zope2 product architecture :).  
(Not kidding!  I loved those things.)


Re: [Distutils] Python version in egg name

2011-02-05 Thread Glyph Lefkowitz
On Feb 2, 2011, at 2:49 PM, Matt Chaput wrote:

 Is there a way to set this value to py2.5 as a configuration option, other 
 than running setup.py using the Python 2.5 executable? Or not have it be part 
 of the egg filename at all? I'm using Python 2.7 as my default Python install 
 but I make sure that the code in my library is compatible with 2.5+. I've 
 already had someone think the library is only for a later version of Python 
 because that's what I used to build the egg.

If you're building eggs, they're only going to be compatible with a single 
version of Python.  Bytecode isn't compatible across multiple versions.  If 
your module is pure python, you should just set up a linux VM old enough to 
have the Python versions you want or something :).

So you might be able to hack something together that would cause a broken egg 
to be installed for users of other versions of python, but you won't be able to 
fix this issue, no.

Otherwise, you can distribute some other format, like an sdist tarball, which 
has no pycs in it.





Re: [Distutils] An observation on how system packagers and developers can be friends

2010-10-28 Thread Glyph Lefkowitz
On Oct 28, 2010, at 6:08 PM, Tres Seaver wrote:

 Heh, agreed.  That bites me on about every third machine I set up for
 the first time.  The Usual Suspects (TM) are whatever the local
 packaging system calls the following (and their -dev or -devel packages,
 if split out):
 
 - - zlib
 - - ncurses
 - - bz2
 - - readline
 - - openssl

On Debian, there's a handy shortcut: 'apt-get build-dep', which will install 
the build dependencies for any given source package.  So 'apt-get build-dep 
python' will get you all set to build Python.



Re: [Distutils] distribute and install --root

2010-05-10 Thread Glyph Lefkowitz

On May 10, 2010, at 9:14 AM, Nikolaus Rath wrote:

 Glyph Lefkowitz gl...@twistedmatrix.com writes:
 On May 9, 2010, at 5:47 PM, Tarek Ziadé wrote:
 
 Having .local here looks like a bug to me. Could you create an issue
 with details ?
 
 For what it's worth, I have had similar things happen when I had
 forgotten that I created a ~/.pydistutils.cfg. Have you made sure that
 there isn't one there?
 
 No, that was it. And I certainly don't remember ever touching or
 creating that file either. Thanks!

Does distutils have a flag for ignoring ~/.pydistutils.cfg, by the way?  I hit 
this fairly often and it's always a nasty surprise *right* at the end of a 
build process.  It would be nice to have all official-release scripts specify 
an option that specifically overrides per-user configuration.



Re: [Distutils] distribute and install --root

2010-05-10 Thread Glyph Lefkowitz

On May 10, 2010, at 4:05 PM, Tarek Ziadé wrote:

 Yes, someone provided this patch a while ago and I've added it (it's
 --no-user-cfg  in 2.6).

Do you mean in 2.7?  I see it documented here: 
http://docs.python.org/dev/distutils/builtdist.html, but not here: 
http://docs.python.org/distutils/builtdist.html.



Re: [Distutils] distribute and install --root

2010-05-09 Thread Glyph Lefkowitz

On May 9, 2010, at 5:47 PM, Tarek Ziadé wrote:

 Having .local here looks like a bug to me. Could you create an issue
 with details ?

For what it's worth, I have had similar things happen when I had forgotten that 
I created a  ~/.pydistutils.cfg.  Have you made sure that there isn't one there?
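For reference, a forgotten per-user config file as small as this one (the path and prefix are illustrative) is enough to silently redirect every distutils install for that user, which is how `.local´ ends up appearing in `install --root´ output:

```ini
# ~/.pydistutils.cfg -- applies to *every* distutils invocation by this user
[install]
prefix = ~/.local
```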



Re: [Distutils] [sqlalchemy] Re: inability to pass setup.py command line arguments to dependency setups

2010-05-07 Thread Glyph Lefkowitz
On May 7, 2010, at 2:08 PM, Michael Bayer wrote:

 I'd only mention that Storm has a C extension/non C extension flag as well, 
 and only offers one source distribution on Pypi. You have to modify a 
 variable directly within setup.py.  Our setup.py features the same capability 
 (its just our C extension is off by default for 0.6 since it was just 
 written, which is the same case for when Storm first introduced its C 
 extension).


It occurs to me that Twisted has a similar problem (except there's no 
installation flag: it just builds the C extensions if it possibly can).

The problem I see here is that the dependencies list of a particular project 
should be a complete expression of the features that the source code requires 
to function properly.  If a C extension is present for optimization purposes 
only, then I don't think it ever needs to be mentioned in a dependency listing. 
 Performance tuning is a build and deployment issue, not a 
dependency-correctness issue.  However, if a C extension wraps features 
necessary for an application to work correctly, without which it will simply 
traceback and die, then it should be possible for the application to say "I 
depend on this functionality."

After all, it's kind of bogus if I say "I depend on library X", and then 
library X gets installed, but half of it is missing for some reason.  It's 
bogus if it's missing for any reason at all, really.  "The C extension couldn't 
be compiled" is a common one, but there are other configuration and build issues 
which could prevent a distribution from being fully functional.

Is there already a good way to express a dependency on a portion of a source 
distribution, or optional features?  A way to list one source distribution on 
PyPI so that it will be present under multiple names, one for each optional 
chunk?
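The closest existing mechanism to "optional features" is setuptools' extras_require: one distribution exposing named extras that dependents can require as `libx[speedups]´. A sketch, with hypothetical names and version pins (none of these are from the thread):

```python
# Sketch: a hypothetical "libx" distribution declaring its optional
# C speedups as a named extra instead of a second distribution.
SETUP_KWARGS = {
    "name": "libx",
    "version": "1.0",
    "packages": ["libx"],
    "extras_require": {
        # A project that genuinely needs the accelerated functionality
        # declares install_requires=["libx[speedups]"]; plain "libx"
        # stays importable without the optional parts.
        "speedups": ["cffi>=1.0"],
    },
}
# In a real setup.py this dict would be passed to setuptools:
#     from setuptools import setup
#     setup(**SETUP_KWARGS)
```

This declares the optional dependency edge, but it still cannot verify at install time that the optional pieces actually built correctly, which is the gap this message identifies.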



Re: [Distutils] Python people want CPAN and how the latter came about

2009-12-23 Thread Glyph Lefkowitz

On Dec 23, 2009, at 4:33 PM, Lennart Regebro wrote:
 
 1/ Missing packages (eg: Twisted is not there)
 
 The Twisted guys do not upload their packages to PyPI. I think that's
 a mistake, but it's hardly PyPI's fault. There is no law saying you
 have to use CPAN either.

For what it's worth, we don't upload because it's a big pain, and nobody cares 
anyway.

It's a big pain because there are two ways to upload, and neither one works for 
us.  We can't use 'setup.py upload' because we don't use 'sdist' to produce our 
tarball releases (although a discussion of why 'sdist' is insufficient is a 
topic for another post).  The other way to upload, manually interacting with a 
form in a web browser, is annoying and as far as I know it is hostile to 
automation.

Nobody cares because you can 'easy_install twisted' already, and you can find 
Twisted on the PyPI web page.  It's not clear to us what benefits uploading to 
PyPI would have beyond that.

If someone would like to give us a good reason to upload or explain how 
uploading might be made easy, maybe we'd start doing it :).


Re: [Distutils] Python people want CPAN and how the latter came about

2009-12-21 Thread Glyph Lefkowitz

On Dec 21, 2009, at 5:48 PM, David Cournapeau wrote:

 On Mon, Dec 21, 2009 at 7:13 PM, Lennart Regebro rege...@gmail.com wrote:
 What nobody still fails to explain in this discussion is what CPAN
 is and Why Python doesn't already have it.
 
 That's not the right question to ask. The problem is not much a
 feature problem as much as a fundamental implementation and state of
 mind. Reliable packaging requires explicit handling, where the whole
 python stack for packaging relies a lot on implicit behavior.

It is definitely the right question to ask, and it is very much a feature 
problem.

The missing feature is "install what I mean".  easy_install is missing by 
default in most cases, and then broken by default when you install it.  In a 
fresh python install, you cannot just type "easy_install foo", or even 'cd foo; 
python setup.py install' and reliably get a copy of 'foo' installed for 
whatever user asked for it.  Instead you get piles of cryptic error messages 
until you learn how to use it and what command-line options to pass.

Now, CPAN is not perfect, and you can most definitely get cryptic error 
messages out of it.  But, *most of the time*, it just works, and *most of the 
time*, developers can install dependencies and users can install software 
without really thinking about it too hard.

Everything you're saying about mindset and standardization may be good, and in 
fact entirely necessary to achieving this goal.  But it is very important that, 
as a community, we:

  A) keep our eyes on the prize, and try to improve the default, out-of-the-box 
Python package installation experience wherever possible, and
  B) be clear about what the prize _is_.  It's really important to nail down 
what it is that we all agree needs to improve.  I say this because if someone 
wants to ask a question like "what is this thing that everyone seems to say we 
should work on", I think it's important to answer it.

In one sense of not a feature problem, I think you're right.  The problem 
here is not a particular *advanced* feature, some more sophisticated option, 
although many features might help fix it: the problem is that the user 
experience of existing functionality is bad.

We're not hearing a lot of lucid articulation of what exactly the "CPAN 
problem" is, and I believe the reason is that when you actually look at the 
problems and describe them, they're easy to work around.  "setup.py install 
doesn't work for users who aren't root"?  Well, maybe that's not really a 
feature problem, it's a documentation problem.  For most of us it's pretty easy 
to set your PATH and PYTHONPATH and type --prefix and 
--single-version-externally-managed, and then everything works fine.  Or use 
something like virtualenv.  Or you can just use Python 2.6 and set only your 
PATH, as long as you know that Windows keeps things in one directory layout 
(%APPDATA%/Python) and POSIX another (~/.local/lib/site-packages).



Re: [Distutils] Install time prefixes and data files

2009-11-13 Thread Glyph Lefkowitz

On Nov 13, 2009, at 9:01 AM, Wolodja Wentland wrote:

 This would mean that the *default* install should be PEP 370 compliant
 or inside a virtualenv and that installing into system paths should be
 enabled explicitly.

+1.




Re: [Distutils] Improving distutils vs redesigning it (was people want CPAN)

2009-11-11 Thread Glyph Lefkowitz

On Nov 11, 2009, at 10:04 PM, Robert Kern wrote:

 In our considered opinion, piecemeal changes probably aren't going to solve 
 the significant problems that we face. At best, they simply aren't going to 
 help; we wouldn't be able to use the new features until we can drop support 
 for Python 2.6.

While I can understand your frustration, it's important to step back and think 
about whether your problems are really impossible to solve.  In particular, I 
take issue with this idea that you can't work on stuff that's distributed with 
Python but depend on newer versions.  Isn't the whole point of much of 
setuptools' complexity supposed to be the fact that you can have side-by-side 
multi-version installations?  Even assuming that this functionality doesn't 
work at _all_, who is to say that you can't ask users to upgrade distutils?  or 
do a --prefix installation of distutils into a different directory?  or have a 
build-time option that installs the 'distutils' package as 
'distutils_plus_plus' and rewrites things as necessary?  or implement an 
alternative to require() which *does* work? or, assuming require() works for 
some cases but not yours, adapt it to your needs?

There are probably a dozen other ways that you *could* work on distutils and 
benefit more immediately from your efforts than the next Python release.  To 
think otherwise is simply a failure of imagination.  Now, if you think it's 
*too hard* to do that, it might be interesting to hear why you think that, and 
what exactly the effort would be; a nebulous assertion that it's just too hard 
and we should throw our hands up (while I can definitely understand the impulse 
to make such an assertion) serves only to discourage everyone.

The fact that a package is in the standard library is not a death sentence.  
Releases can be made separately.  Heck, if you are doing good work on trunk but 
the release cycles are taking too long, quite frequently distributors will make 
packages out of your code at some revision of trunk rather than a release.  I 
maintained software for _years_ that required a more recent version of pysqlite 
bindings than were available in the standard library's 'sqlite3' module.  The 
'pysqlite2' project is alive and well, and we didn't have any significant 
problems.

Now, as Tarek suggests, it would be more worthwhile to discuss the *specifics* 
of the problems that you assert require blowing up the world, as more detailed 
understanding of those specifics will allow both people who want rewrites *and* 
people who want incremental improvements to proceed better informed.  Any 
language environment's package/distribute/build/install/run pipeline is 
complicated enough that one can have a lot of productive discussion just 
nailing down exactly what is wrong with it, before even talking about 
solutions, and Python is no exception.



Re: [Distutils] Improving distutils vs redesigning it (was people want CPAN)

2009-11-11 Thread Glyph Lefkowitz

On Nov 12, 2009, at 12:02 AM, David Cournapeau wrote:

 Glyph Lefkowitz wrote:
 There are probably a dozen other ways that you *could* work on distutils and 
 benefit more immediately from your efforts than the next Python release.  To 
 think otherwise is simply a failure of imagination.  Now, if you think 
 it's *too hard* to do that, it might be interesting to hear why you think 
 that, and what exactly the effort would be; a nebulous assertion that it's 
 just too hard and we should throw our hands up (while I can definitely 
 understand the impulse to make such an assertion) serves only to discourage 
 everyone.
 
 I am trying to understand what is 'nebulous' about our claims. We have
 given plenty of hard and concrete examples of things which are
 problematic in distutils.

I'm sorry if I gave the impression that I was contesting that particular 
assertion.  We all agree that distutils has deep problems.

And, I don't think that everything that has been said is overgeneral or 
unhelpful.  Before I dive into more criticism, let me just say that I agree 
100% with Robert Kern's message where he says:

 In order to integrate this with setuptools' develop command (...) we need to 
 create a subclass of setuptool's develop command that will reinitialize 
 build_src with the appropriate option. Then we need to conditionally place 
 the develop command into the set of command classes so as not to introduce a 
 setuptools dependency on those people who don't want to use it.

 This is nuts.

This is completely correct.  I've done stuff like this, we've all probably done 
stuff like this.  Conditional monkeypatching and dynamic subclassing is all 
over the place in distutils extension code, and it is *completely* nuts.
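For the record, the dance being described looks roughly like this (a sketch of 
the pattern, not Robert's actual code; the build_src reinitialization step is 
elided as a comment):

```python
# Conditionally subclass setuptools' develop command so that plain-distutils
# users never acquire a setuptools dependency -- the pattern being criticized.
cmdclass = {}
try:
    from setuptools.command.develop import develop as _develop
except ImportError:
    pass  # no setuptools: simply don't offer the develop command
else:
    class develop(_develop):
        def run(self):
            # ...reinitialize build_src with the appropriate option here...
            _develop.run(self)
    cmdclass["develop"] = develop
```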

Still, it would have been more helpful to point out how exactly this problem 
could be solved, and to present (for example) a description of similar objects 
politely interacting and delegating responsibility to one another to accomplish 
the same task.

I would definitely characterize these assertions from Robert as nebulous, 
given that the prior messages in the thread (as far as I can tell) do not 
describe the kind of massive-overhaul changes which would fix things, only the 
problems that currently exist:

 In our considered opinion, piecemeal changes probably aren't going to solve 
 the significant problems that we face.

Why not?  The whole of computer history is the story of piecemeal improvements 
of one kind or another; despite perennial claims that, for example, 
hierarchical filesystems or bit-mapped displays fundamentally cannot support 
one type of data or another, here we are.

Or this one, also from Robert:

 Mostly because I'm entirely uninterested in helping you make incremental 
 improvements that are going to break all the hard work we've already done 
 just to get things working as it is.


Why do incremental improvements have to break all the hard work that has 
already been done?  Surely this is what a compatibility policy is about.

Or this one, from you, which is more specific to a topic, but still doesn't 
really say anything useful that I can discern:

 I think the compiler class and the likes should simply be removed ... There 
 should not be any objects / classes for compilers, it is not flexible enough 
 ... You cannot obtain this with classes and objects (especially when you 
 start talking about performance: ...).


It's clear to me from the responses in this thread that I'm not the only one 
who is only getting vague hints of what you're actually talking about from 
language like this.

"Classes and objects" have been used in many high-performance systems.  
Personally I find classes and objects fairly flexible as well.  In fact, if 
*I* were to make a nebulous claim about distutils' design structure, it would 
be that the parsimony with creating whole new classes and instantiating 
multiple objects is the problem; there should be more classes, more objects, 
less inheritance and fewer methods.  So why can't 

 The major progress in our build issues have
 been achieved by dropping distutils. Up to now, the only people who have
 claimed that distutils can solve our problems are the people who are not
 involved at all with our projects, and the people who claim distutils
 cannot solve our problems are the people involved with it. That's odd to
 say the least.

I'm not asserting that distutils can fix your problems; I don't know enough 
about your problems to say for sure.  Certainly it seems clear that present-day 
distutils cannot.  I just know that there are many people on this list who are 
committed to a particular approach to evolving distutils, and while there is a 
lot of value in clearly explaining problems with that approach so they can be 
addressed, it's unhelpful to keep asserting (as has been done many times in 
this thread) that incremental evolution cannot address these problems.  It's a 
religious belief either way: my experience suggests

Re: [Distutils] Improving distutils vs redesigning it (was people want CPAN)

2009-11-11 Thread Glyph Lefkowitz

On Nov 12, 2009, at 1:36 AM, Robert Kern wrote:

 Glyph Lefkowitz wrote:
 
 Still, it would have been more helpful to point out how exactly this problem 
 could be solved, (...)

 Sorry, I edited out the bit at the last minute where I explained that it 
 would be great to have a centralized option-managing object such that any 
 command can ask what options were set on any other regardless of the 
 dependencies between commands.

I'm familiar with the hazards of over-editing :).  I am often asked to shorten 
my messages, and when I try too hard to do so, I leave out important elements.

One of the things I'm asking for, actually, is fewer, longer messages, with 
more substantive points in them.  I know some people don't like that, but 
discussions about big, complex topics like this that try to address them one 
little conversational point at a time tend to get circular quickly.

 Perhaps in my head the analogy with biological evolution is unjustifiably 
 strong. Species can't always get from point A to point B while making viable 
 intermediates with incremental changes. Evolutionary deadends happen 
 frequently.

This is explicitly _not_ biological evolution.  For example, when you wonder 
about this:

 Since Tarek keeps asking us to make proposals without thinking about 
 compatibility, I wonder what policy is being kept in mind. My comment stems 
 from my worry about that attitude.


The way I'm interpreting Tarek's comments - and he can correct me if I'm wrong 
- is that the strategy is to short-circuit evolution.  We should decide where 
we want to go - which may be an apparently discontinuous place - then path-find 
there.  The path-finding is rarely as hard as it seems like it's going to be.  
The brute-force approach, which also happens to be an approximation of the 
Twisted framework compatibility baseline, is:

  release -1: import old; old.crummy_api()
  release 0: old.crummy_api() emits a PendingDeprecationWarning, new.good_api() 
introduced
  release 1: old.crummy_api emits DeprecationWarning pointing at 
new.good_api(), new.good_api() improved based on feedback from r0
  release 2: old.crummy_api raises DeprecationError pointing at new.good_api()
  release 3: old.crummy_api removed
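The release-1 stage of that schedule can be sketched for a hypothetical 
old/new API pair (both function names are illustrative, not any real 
distutils API):

```python
import warnings

def good_api(items):
    """The replacement API (hypothetical)."""
    return sorted(items)

def crummy_api(items):
    """Release-1 stage: still works, but warns callers toward good_api()."""
    warnings.warn("crummy_api() is deprecated; use good_api()",
                  DeprecationWarning, stacklevel=2)
    return good_api(items)
```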

This is almost a straw-man: you can do this for any old.crummy_api and 
new.good_api, regardless of whether the new thing actually satisfies the old 
thing's requirements at all.  It is often possible to do much better.  But the 
point is, if you have a clear new.good_api to get to, it's possible to do all 
*kinds* of crazy stuff in Python to emulate and re-direct the behavior of 
crummy_api, discover how it's being used, provide useful hints to the 
developer, etc.  It's a matter of how much effort you want to put into it.  For 
many distutils use-cases, it sounds to me like the path forward is to avoid 
using any API at all, and just ask most projects to provide static metadata.  
Build-heavy projects like numpy will require new integration points to do 
custom heavy lifting, but it will be easier to define those integration points 
if they're not something that every project under the sun will potentially need 
to interact with.

But, during the creative design process of good_api, it's often helpful to 
pretend crummy_api doesn't even exist, so you can design something good that 
solves its problems well, and address the translation as a truly separate issue.

 In software, design decisions early on affect how much change the software 
 can tolerate (which is why we are told to design for change).

Who's "we", kimosabe?  *We* are told "you aren't gonna need it"; maybe some 
other people are told to "design for change" :).  In fact, I think that distutils 
is over-designed for change.  It has altogether too many different extension 
mechanisms, which often interfere with each other: subclassing, configuration 
files, including random bits of code in setup.py.  And then of course there's 
the monkey-patching for the cases that weren't covered :).



Re: [Distutils] People want CPAN :-)

2009-11-07 Thread Glyph Lefkowitz


On Nov 6, 2009, at 12:53 PM, Guido van Rossum wrote:


I just found this comment on my blog. People have told me this in
person too, so I believe it is real pain (even if the solution may be
elusive and the suggested solutions may not work). But I don't know
how to improve the world. Is the work on distutils-sig going to be
enough? Or do we need some other kind of work in addition? Do we need
more than PyPI?


In my experience, when users say this, they just mean "I tried  
easy_install and it broke."


PyPI doesn't have some deep, fundamental, architectural issue that  
prevents it from working.  The user experience of it is just buggy.   
Consider the difference between these two pages:


http://docs.webfaction.com/software/perl.html
http://docs.webfaction.com/software/python.html

Note that the 'python' page is more than twice as long, lists a ton of  
different installation options, and includes a gigantic  
troubleshooting section that apparently isn't necessary for perl.   
Note also that the Perl page is just a series of steps describing how  
to invoke the one installation mechanism, but the Python page is a  
hodgepodge of qualified instructions describing different possible  
mechanisms you can try.  It also appears that webfaction has modified  
the default environment's configuration to make their  
troubleshooting section *shorter* than it would have to be for more  
general Python software installation instructions.


The default behavior of most Python installation mechanisms - to my  
knowledge, 'python setup.py install', 'easy_install', and 'pip' - will  
all raise exceptions by default on UNIX-y platforms, unless you're  
root.  On Windows (since a higher percentage of the user population  
walks around with admin rights all the time), the default invocations  
described by many project web pages will work if the installation is  
pure-python or if the author remembered to provide a Windows binary  
egg, but a common failure mode is "you don't have a compiler".   
Similarly, on a Mac, you have to have Xcode installed, although Python  
itself works fine if you don't, so it seems like you don't.


Many of these tools *would* work by default with a small amount of  
configuration, a couple of environment variables, and clearer error  
messages that indicate (A) *that* you need to install a C compiler and  
(B) *where* you need to go to get a C compiler.


One project that would help a lot is just an "easy Python setup"  
documentation project that describes, as simply as possible, in large  
fonts, how to get a working Python setup that adheres to a few  
conventions.  Just include the 2 lines of .bashrc and explain how to  
add them; don't debate the merits of ~/bin vs. ~/.local/bin vs. ~/opt/ 
bin (although come on, ~/.local/bin/ is _clearly_ the right name for  
it), just pick one for each platform and provide clear step-by-step  
instructions for getting it to work: "put this in your ~/.bashrc: <really  
big PRE tag with shell setup in it>.  Restart your shell."   
Anybody who has selected an alternate shell or done some other prior  
configuration will need to adjust their expectations, but we can worry  
about supporting unusual configurations when the community has a good  
answer for the default configuration.  (Although this is a good reason  
to do this as documentation and not attempt to write an  
autoconfiguring tool: an autoconfiguring tool needs to understand  
every possible nuance of the environment, but advanced users can  
notice which parts of the short document might not apply to them.)
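Concretely, the "2 lines of .bashrc" might look like this on POSIX (a sketch; 
the ~/.local paths are the convention being proposed here, not an established 
standard):

```shell
# Proposed ~/.bashrc additions for a per-user Python setup (POSIX sketch)
export PATH="$HOME/.local/bin:$PATH"
export PYTHONPATH="$HOME/.local/lib/site-packages${PYTHONPATH:+:$PYTHONPATH}"
```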


I feel like I might be repeating this a bit much, but it's a really  
important point: many of the things I'm talking here are *not* about  
getting the code right as part of a Python tool, but in providing an  
easy, manageable way to integrate with _other_ tools that are outside  
of our collective control as Python package authors: the dynamic  
linker, the shell, the file manager and the C compiler (or lack  
thereof).  By providing a default user-home-directory installation  
location, Python itself is already doing almost as much as it can; if  
easy_install started installing things into that location by default  
*without* any of this bootstrapping documentation (or a very, very  
carefully written tool to do the bootstrapping for you) then importing  
pure Python packages might work great but scripts would be broken and  
any external shared libraries required by Python modules (even if they  
built correctly) would just give you an ImportError.


Once we have some kind of working consensus on this setup, the tools  
can change to support it: easy_install can default to installing  
things in the user's home directory in the case that (A) the  
environment is set up for it and (B) the user isn't an administrator.   
If the environment *isn't* set up, instead of spitting out twelve  
paragraphs explaining how really you should have read-write access to  
the location where 

Re: [Distutils] Distutils and Distribute roadmap (and some words on Virtualenv, Pip)

2009-10-27 Thread Glyph Lefkowitz


On Oct 27, 2009, at 7:41 PM, David Lyon wrote:


I'm not sure about that Tarek..

An .exe installer as a perfect binary format for python packages?

Are you serious?

That is the biggest security threat I can think of, asking python
users to run unverified, unsigned, un-trusted executable files on
their systems.


easy_install, pip, and indeed all of PyPI is basically a system for  
executing untrusted code, usually as a system administrator, straight  
off of what is effectively a wiki.


If you're concerned about security and distutils, there is a _lot_ of  
work to do.  There is no particular additional danger in executing  
a .exe rather than a setup.py.





Re: [Distutils] setup.cfg new format proposal

2009-09-14 Thread Glyph Lefkowitz
On Fri, Sep 11, 2009 at 1:40 PM, Jim Fulton j...@zope.com wrote:

 On Fri, Sep 11, 2009 at 11:49 AM, Tarek Ziadé ziade.ta...@gmail.com
 wrote:
  On Fri, Sep 11, 2009 at 5:36 PM, Jim Fulton j...@zope.com wrote:
  On Fri, Sep 11, 2009 at 11:14 AM, Tarek Ziadé ziade.ta...@gmail.com
 wrote:
  2009/9/11  exar...@twistedmatrix.com:
  On 02:33 pm, ziade.ta...@gmail.com wrote:
  I'm a little skeptical about creating a new mini language
 (particularly one
  with branching) for setup.cfg,
 
  Me too.


Me three.  Even after hearing the motivating use-cases, like:

 So you can for example define a list of dependencies that changes whether
  you are under Windows or Linux, etc..

 OK. I've never had that situation. I'm sure someone has, but it
 doesn't seem common.


Twisted actually has this.  On Windows, we need pywin32 to implement many of
the features which can be implemented with just the stdlib on Posix.  But
this is the only thing I can imagine computing based on the execution
environment.
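That one case can be expressed in a few lines of ordinary setup.py code, 
without any setup.cfg mini-language (the package names are from the message; 
the variable name is just the setuptools convention):

```python
import sys

# Compute the platform-conditional dependency in plain Python:
# pywin32 supplies on Windows what the stdlib already covers on POSIX.
install_requires = []
if sys.platform == "win32":
    install_requires.append("pywin32")
```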

 try to run this with lxml source tarball:
 
  $ python setup.py --name
 
  I don't know what it is doing, but a whole lot more than what it is
  supposed to do
  just to let me know the 'name' metadata.
 
  So if name was in setup.cfg, it would be better.


One big problem I have with this discussion is that it seems to be
approaching the whole problem backwards.  Distutils projects seem to have a
history of defining new file formats rather than or before new APIs.
 (PKG-INFO, MANIFEST.in, egg-info, etc etc).

I may be missing some context, but I did about an hour of googling around
before responding here, and for the life of me I can't find a simple answer
to the question: how do I currently programmatically get the metadata out of
a distribution?

The answer seems to be a bizarre combination of running setup.py, maybe
reading PKG-INFO, maybe something to do with egg-info...?  Or maybe making
some direct calls with xmlrpclib?  Or setuptools?  Setuptools must be able
to do it internally, but I can't find an API in the documentation.

Before trying to make the metadata static, let's try to have the metadata,
for real, at all.  Here's an example of something I want to be able to do
with metadata:

from distutils.something import (
    load_metadata, pypi_metadata, installed_metadata)
meta = installed_metadata("Twisted")
meta2 = load_metadata("/home/glyph/Downloads/Twisted-x.y")
meta3 = pypi_metadata("Twisted")
if meta2.version > meta.version:
    print 'More recent version of Twisted already downloaded!'
if meta3.version > meta2.version:
    print 'More recent version of Twisted is available online!'


Important points here:

   - I can get the same metadata, in the same format, from PyPI, from a
   directory containing the distribution, and from my installed system.
   - There are supported functions that abstract this stuff away for me.
Working with the packages, I don't have to know how the metadata is
   actually declared.
   - There is a single data structure, hopefully with some API documentation
   I can look at.

Now, maybe the data structure that I want already exists and I just don't
know about it (is it a distutils Distribution, maybe?) but it's certainly
hard to get a handle on one.

In order to be useful, the code that loads this metadata is going to need to
execute setup.py sometimes anyway.  Tarek, as you said in your blog post,
there will always be edge cases.  So it doesn't seem too useful to be
focusing on getting rid of that requirement right away.  And when you
download from PyPI, you probably eventually have to execute a bunch of
untrusted code that somebody uploaded to a wiki, as root.  Again it would be
nice to fix that, but we have a long way to go :).

For what it's worth, on the implementation side of things, I think that the
library code on the client side should receive enough metadata that it can
do the job of applying the execution environment locally.  I don't
necessarily want to send pypi a bunch of information about my machine just
to figure out what distutils would tell me if I tried to install something.

If there were such an API then setup.cfg would simply be a small
implementation detail.  The problem with lxml will still need to be
addressed, by promoting awareness of the fact that your setup.py may be
executed to do things other than install or submit your package.  Before
adding conditional support to setup.cfg I would want to have a separate
'metadata.py' whose responsibilities were clearer: just produce the
metadata, don't do any pre-build steps.  Ideally that would be easy to load
correctly from setup.py as well.
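A 'metadata.py' of that sort might be nothing more than a declaration (the 
project name and fields here are hypothetical placeholders):

```python
# metadata.py -- hypothetical: declares distribution metadata and nothing else;
# no pre-build steps, no side effects on import.
metadata = {
    "name": "ExampleProject",
    "version": "1.0",
    "author": "Example Author",
}

# setup.py would then stay trivial:
#   from metadata import metadata
#   setup(**metadata)
```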

Inspecting metadata could also be made easier by having an API more like
this for setup.py:

d = Distribution(... metadata ...)
if d.installing:
    my_project.custom_prepare_for_build()
d.setup()

since you can't really tell what setup() is going to do until you've already
called it.

So finally, if there's still no standard programmatic way to get at the
distribution metadata 

Re: [Distutils] how do I add some functionality to my setup.py which is testable and modular?

2009-08-18 Thread Glyph Lefkowitz
On Tue, Aug 18, 2009 at 7:20 PM, Zooko Wilcox-O'Hearn zo...@zooko.comwrote:

 ... and Glyph refuses to accept my patch which fixes it because the fix
 isn't testable ...


Hi, Zooko :).

Since this message officially makes me That Jerk Who Is Forcing The Issue,
I've finally joined distutils-sig.

Thank you for writing up the whole thing with such a detailed bibliography.
 I'll reply to PJE's message shortly.

(For what it's worth, I wouldn't quite characterize it like that.  More like
Twisted and Nevow refuse to accept these patches because their policy
requires testability.  I didn't make a personal decision to exclude these
patches; the contributors have achieved a consensus about testability.  The
fact that we all arrived at that consensus because I said so in the first
place is irrelevant ;-)).


Re: [Distutils] how do I add some functionality to my setup.py which is testable and modular?

2009-08-18 Thread Glyph Lefkowitz
On Sun, Aug 16, 2009 at 8:17 PM, P.J. Eby p...@telecommunity.com wrote:

 At 02:16 PM 8/16/2009 -0600, Zooko Wilcox-O'Hearn wrote:

 So it appears to me that none of these techniques are both modular/
 testable and compatible with distutils/setuptools/Distribute.  What
 are we to do?


 We could be modular if there was a way to specify pre-setup.py
 dependencies.


For a lot — although admittedly, not all — of the code in question, the only
dependency is on some code that lives in the package itself which explicitly
avoids depending on anything besides distutils.  That's not to say
pre-setup.py dependencies wouldn't be useful, but if we could formalize
making that case work (as it would if we could depend on the simplistic
environment that a distutils-only 'setup.py install' has), it would go a
long way towards fixing the larger problem.

Unfortunately, there isn't such a thing at the moment, short of calling
 setup() twice, and forcing the first call to have no script arguments, just
 a setup_requires argument.


So, "modular" is a slippery word.  Let me try to be a little more specific
about what I personally want; Zooko can elaborate, and I'm flexible on some
of it, but best to start with an ideal.

I have a development environment where sys.path is set up to point at the
source code for a set of working branches.  For the purposes of this
discussion let's say I've got Nevow, which contains nevow/__init__.py,
Twisted, which contains twisted/__init__.py, and Tahoe, which... well,
actually it contains src/allmydata/__init__.py but happily my setup can
deal with that.  My sys.path has ["Twisted", "Nevow", "Tahoe/src"] on the
end of it.  My $PATH (or %PATH%, as the case may be) has "Twisted/bin",
"Nevow/bin", "Tahoe/bin".  I hope this convention is clear.

Now, here's the important point.  I want to run 'trial twisted', which is to
say, ~/.../Twisted/bin/trial twisted, and have it load the code from my
pre-existing Twisted/ sys.path entry.  I want to load and examine the
distribution metadata, which in the current context means running most of
what usually goes in setup.py.  I also want to be able to run *parts* of the
distribution process, to unit-test them, without actually invoking the
entire thing.  There are lots of reasons to want this:

   1. It's much faster to skip installation, especially if you're rapidly
   iterating over changes to a small piece of the distribution setup process
   2. It encourages splitting the distribution process up into smaller
   pieces (modularizing it) so that it can be re-used by other parts of the
   same project.
   3. It allows for independent testing of those same pieces so that when
   they are re-used, there is some existing expectation that they will behave
   as expected that isn't specific to installation of a particular package.
   4. By including it in the package, you allow dependencies of that package
   to use the packaging functionality as well, so that custom distribution
   stuff is done consistently across all parts of an ecosystem.

As some of Zooko's links suggest, the way I would *prefer* to do that is for
the distribution metadata to live in a module in 'twisted/', which can be
imported by setup.py as a normal python module, and to have setup.py itself
look like

from distutils.core import setup
from twisted.python.distribution import metadata
setup(**metadata)

or even better:

from twisted.python.distribution import autosetup
autosetup()

The buildbot, as it happens, has a similar setup.  There are specific
buildslaves that do a full system installation rather than just an 'svn up'
before running the tests, to do whole-system integration testing for the
installation procedure, but that process is much slower and more
disk-intensive, it increases wear and tear on the testing machines, and it
takes longer to provide feedback to developers who are sitting idle, so we
don't want to have it set up that way everywhere.

Of course, that'd only work if setuptools were present, and it would also
 force an immediate download of the build dependencies in question.
  Something like:

  try:
      from setuptools import Distribution
  except ImportError:
      pass
  else:
      Distribution(dict(setup_requires=[...]))


What goes in the ... is pretty important.  For one thing, I don't quite
understand the implications of this approach.  For another, I really don't
want to depend on setuptools, because we certainly need to keep supporting
non-setuptools environments.

If you want to get fancy, you could replace the pass with printing some
 user-readable instructions after attempting to see if your build-time
 dependencies are already present.


This strikes me as very non-modular.  If such a message is interesting or
important, presumably it needs to be localized, displayed by installers,
etc, and therefore belongs in a module somewhere.  Even if that module needs
to be bundled along with your application in order to make it work :).

Thanks for reading :).

[Distutils] [issue67] You can't tell easy_install not to talk to the network

2009-04-03 Thread Glyph Lefkowitz

New submission from Glyph Lefkowitz gl...@twistedmatrix.com:

If I want to use easy_install for nice dependency resolution, there's no way
(that I can figure out, at least) to tell it never talk to the network, use
only this local directory where you can find dependencies.

--
messages: 260
nosy: glyph
priority: feature
status: unread
title: You can't tell easy_install not to talk to the network

___
Setuptools tracker setupto...@bugs.python.org
http://bugs.python.org/setuptools/issue67
___