Re: [Python-Dev] .pth files are evil
On May 9, 2009, at 9:39 AM, P.J. Eby wrote:

> It would be really straightforward, though, for someone to implement an easy_install variant that does this. Just invoke easy_install -Zmaxd /some/tmpdir packagelist to get a full set of unpacked .egg directories in /some/tmpdir, and then move the contents of the resulting .egg subdirs to the target location, renaming EGG-INFO subdirs to projectname-version.egg-info subdirs.

Except for the renaming part, this is exactly what GNU stow does.

> (Of course, this ignores the issue of uninstalling previous versions, or overwriting of conflicting files in the target -- does pip handle these?)

GNU stow does handle these issues.

Regards,
Zooko

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
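The unpack-then-move scheme described above can be sketched in a few lines of Python. Everything here is illustrative: the `install_tree` helper is hypothetical, the version parsing assumes a simple `Name-Version-pyX.Y.egg` naming scheme with no hyphens in the project name, and a real tool would also need to handle conflicting files in the target.

```python
import os
import shutil

def install_tree(egg_root, target):
    """Move the contents of each unpacked .egg directory under egg_root
    into target, renaming EGG-INFO to <projectname-version>.egg-info."""
    for egg in os.listdir(egg_root):
        if not egg.endswith(".egg"):
            continue
        egg_dir = os.path.join(egg_root, egg)
        for name in os.listdir(egg_dir):
            src = os.path.join(egg_dir, name)
            if name == "EGG-INFO":
                # e.g. Foo-1.0-py2.5.egg -> Foo-1.0.egg-info
                # (naive: assumes no hyphens in the project name)
                base = "-".join(egg[:-len(".egg")].split("-")[:2])
                dst = os.path.join(target, base + ".egg-info")
            else:
                dst = os.path.join(target, name)
            shutil.move(src, dst)
```

The output of this step is an ordinary, flat directory tree of the kind GNU stow expects as input.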
Re: [Python-Dev] .pth files are evil
> GNU stow does handle these issues.

If GNU stow solves all your problems, why do you want to use easy_install in the first place?

Regards,
Martin
[Python-Dev] how GNU stow is complementary rather than alternative to distutils
On May 10, 2009, at 11:18 AM, Martin v. Löwis wrote:

> If GNU stow solves all your problems, why do you want to use easy_install in the first place?

That's a good question. The answer is that there are two separate jobs: building executables and putting them in a directory structure of the appropriate shape for your system is one job, and installing or uninstalling that tree into your system is another. GNU stow does only the latter.

The input to GNU stow is a set of executables, library files, etc., in a directory tree that is of the right shape for your system. For example, if you are on a Linux system, then your scripts all need to be in $prefix/bin/, your shared libs should be in $prefix/lib, your Python packages ought to be in $prefix/lib/python$x.$y/site-packages/, etc.

GNU stow is blissfully ignorant about all issues of building binaries, choosing where to place files, etc. -- that's the job of the build system of the package, e.g. "./configure --prefix=foo && make && make install" for most C packages, or "python ./setup.py install --prefix=foo" for Python packages using distutils (footnote 1).

Once GNU stow has the well-shaped directory which is the output of the build process, it follows a very dumb, completely reversible (uninstallable) process of symlinking those files into the system directory structure. It is a beautiful, elegant hack because it is sooo dumb. It is also very nice to use the same tool to manage packages written in any programming language, provided only that they can build a directory tree of the right shape and content.

However, there are lots of things that it doesn't do, such as automatically acquiring and building dependencies, or producing executables for the target platform for each of your console scripts. Not to mention creating a directory named $prefix/lib/python$x.$y/site-packages and cp'ing your Python files into it. That's why you still need a build system even if you use GNU stow as an install-and-uninstall system.
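The "very dumb, completely reversible" symlinking that stow performs can be sketched as follows. This is a toy illustration, not stow's actual algorithm (real stow also folds whole directories into single links and can unstow), but it shows the core idea: link every file into the same relative location, and give up on any conflict.

```python
import os

def stow(pkg_dir, target_dir):
    """Symlink every file under pkg_dir into the same relative
    location under target_dir, refusing to overwrite anything."""
    for root, dirs, files in os.walk(pkg_dir):
        rel = os.path.relpath(root, pkg_dir)
        dest_root = target_dir if rel == "." else os.path.join(target_dir, rel)
        os.makedirs(dest_root, exist_ok=True)
        for name in files:
            dest = os.path.join(dest_root, name)
            if os.path.lexists(dest):
                # Two packages claim the same file; too dumb to
                # resolve this, so give up (as stow does).
                raise RuntimeError("conflict on %s" % dest)
            os.symlink(os.path.join(root, name), dest)
```

Uninstalling is the mirror image: remove exactly the symlinks that point back into pkg_dir, which is why the process is completely reversible.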
The thing that prevents this from working with setuptools is that setuptools creates a file named easy_install.pth during the "python ./setup.py install --prefix=foo" step. If you build two different Python packages this way, they will each create an easy_install.pth file, and then when you ask GNU stow to link the two resulting packages into your system, it will say: "You are asking me to install two different packages which both claim that they need to write a file named '/usr/local/lib/python2.5/site-packages/easy_install.pth'. I'm too dumb to deal with this conflict, so I give up."

If I understand correctly, your (MvL's) suggestion that easy_install create a .pth file named easy_install-$PACKAGE-$VERSION.pth instead of easy_install.pth would indeed make it work with GNU stow.

Regards,
Zooko

footnote 1: Aside from the .pth file issue, the other reason that setuptools doesn't work for this use while distutils does is that setuptools tries too hard to save you from making a mistake: maybe you don't know what you are doing if you ask it to install into a previously non-existent prefix dir foo. This one is easier to fix: http://bugs.python.org/setuptools/issue54 # "be more like distutils with regard to --prefix="
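For context on why a shared easy_install.pth causes this conflict: a .pth file is just a text file in site-packages whose lines name directories to append to sys.path (lines starting with "import" are executed, which is the part people usually call evil). A minimal self-contained demonstration, using throwaway temp directories rather than the real site-packages:

```python
import os
import site
import sys
import tempfile

# Build a throwaway "site-packages" containing a .pth file
# that points at some other directory.
sitedir = tempfile.mkdtemp()
extra = tempfile.mkdtemp()
with open(os.path.join(sitedir, "example.pth"), "w") as f:
    f.write(extra + "\n")

# site.addsitedir() adds sitedir to sys.path and processes
# any *.pth files it finds there -- the same machinery the
# interpreter runs at startup for the real site-packages.
site.addsitedir(sitedir)
assert extra in sys.path
```

Because every setuptools-installed package wants to append its own line to the one shared easy_install.pth, two independently built trees each ship their own copy of that file, which is exactly the conflict stow refuses to resolve; one .pth file per package would sidestep it.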
Re: [Python-Dev] how GNU stow is complementary rather than alternative to distutils
Zooko Wilcox-O'Hearn wrote:
> On May 10, 2009, at 11:18 AM, Martin v. Löwis wrote:
>> If GNU stow solves all your problems, why do you want to use easy_install in the first place?
> That's a good question. The answer is that there are two separate jobs: building executables and putting them in a directory structure of the appropriate shape for your system is one job, and installing or uninstalling that tree into your system is another. GNU stow does only the latter.

And so does easy_install - its job is *not* to build the executables and to put them in a directory structure. Instead, it's distutils/setuptools which has this job. The primary purpose of easy_install is to download the files from PyPI (IIUC).

> The thing that prevents this from working with setuptools is that setuptools creates a file named easy_install.pth

It will stop doing that if you ask nicely. That's why I recommended earlier that you do ask it not to edit .pth files.

> If I understand correctly, your (MvL's) suggestion that easy_install create a .pth file named easy_install-$PACKAGE-$VERSION.pth instead of easy_install.pth would indeed make it work with GNU stow.

My recommendation is that you use the already existing flag to "setup.py install" that stops it from editing .pth files.

Regards,
Martin
Re: [Python-Dev] how GNU stow is complementary rather than alternative to distutils
following up on my own post to mention one very important reason why anyone cares:

On Sun, May 10, 2009 at 12:04 PM, Zooko Wilcox-O'Hearn zo...@zooko.com wrote:
> It is a beautiful, elegant hack because it is sooo dumb. It is also very nice to use the same tool to manage packages written in any programming language, provided only that they can build a directory tree of the right shape and content.

And, you are not relying on the author of the package that you are installing to avoid accidentally or maliciously screwing up your system. You're not even relying on the authors of the *build system* (e.g. the authors of distutils or easy_install). You are relying *only* on GNU stow to avoid accidentally or maliciously screwing up your system, and GNU stow is very dumb, so it is easy to understand what it is going to do and why that isn't going to irreversibly screw up your system.

That is: you don't run the build-and-install-into-$prefix step as root. This is an important consideration for a lot of people, who absolutely refuse on principle to ever run "sudo python ./setup.py" on a system that they care about unless they wrote the setup.py script themselves. (Likewise they refuse to run "sudo make install" on packages written in C.)

Regards,
Zooko
Re: [Python-Dev] how GNU stow is complementary rather than alternative to distutils
At 12:04 PM 5/10/2009 -0600, Zooko Wilcox-O'Hearn wrote:
> The thing that prevents this from working with setuptools is that setuptools creates a file named easy_install.pth during the "python ./setup.py install --prefix=foo" step. If you build two different Python packages this way, they will each create an easy_install.pth file, and then when you ask GNU stow to link the two resulting packages into your system, it will say: "You are asking me to install two different packages which both claim that they need to write a file named '/usr/local/lib/python2.5/site-packages/easy_install.pth'."

Adding --record and --single-version-externally-managed to that command line will prevent the .pth file from being used or needed, although I believe you already know this. (What that mode won't do is install dependencies automatically.)
Re: [Python-Dev] special method lookup: how much do we care?
Benjamin Peterson wrote:
> A while ago, Guido declared that all special method lookups on new-style classes bypass __getattr__ and __getattribute__. This is almost completely consistent now, and I've been working on patching up a few incorrect cases. I've now hit __enter__ and __exit__. The compiler generates LOAD_ATTR instructions for these, so it uses the normal lookup. The only way I can see to fix this is to add a new opcode which uses _PyObject_LookupSpecial, but I don't think we really care this much. Opinions?

As Georg pointed out, the expectation was that we would eventually add a SETUP_WITH opcode that used the special method lookup (and hopefully speed with statements up to a point where they're competitive with writing out the associated try statement directly). The current code is the way it is because there is no LOAD_SPECIAL opcode, and adding type dereferencing logic to the expansion would have been difficult without a custom opcode.

For other special methods that are looked up from Python code, the closest we can ever get is to bypass the instance (i.e. using type(obj).__method__(obj, *args)) to avoid metaclass confusion. The type slots are even *more* special than that, because they bypass __getattribute__ and __getattr__ even on the metaclass, for speed reasons. There's a reason the docs already say that for a guaranteed override you *must* actually define the special method on the class rather than merely making it accessible via __getattr__ or even __getattribute__.

The PyPy guys are right to think that some developer somewhere is going to rely on these implementation details in CPython at some point. However, lots of developers rely on CPython ref counting as well, no matter how many times they're told not to do that if they want to support alternative interpreters.

Cheers,
Nick.
--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
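The lookup rule being discussed can be demonstrated directly. In this sketch, the `Spy` class (a made-up name for illustration) serves up `__len__` only through `__getattr__`: explicit attribute access finds it, but the `len()` builtin looks the method up on the type and bypasses `__getattr__` entirely.

```python
class Spy(object):
    """Provides __len__ only dynamically, via __getattr__."""
    def __getattr__(self, name):
        if name == "__len__":
            return lambda: 42
        raise AttributeError(name)

s = Spy()

# Explicit attribute access is routed through __getattr__ ...
assert s.__len__() == 42

# ... but len() uses special method lookup on type(s), which
# skips __getattr__, so the object does not "have" a length.
try:
    len(s)
except TypeError:
    pass
else:
    raise AssertionError("len() unexpectedly used __getattr__")
```

This is the behaviour the docs promise: for a guaranteed override, the special method must actually be defined on the class.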
Re: [Python-Dev] special method lookup: how much do we care?
Nick Coghlan wrote:
> [...]
> The PyPy guys are right to think that some developer somewhere is going to rely on these implementation details in CPython at some point. However, lots of developers rely on CPython ref counting as well, no matter how many times they're told not to do that if they want to support alternative interpreters.

It's actually very annoying for things like writing Mock or proxy objects when this behaviour is inconsistent (sorry, should have spoken up earlier).
The Python interpreter bases some of its decisions on whether these methods exist at all - and when you have objects that provide methods through __getattr__, you can accidentally get screwed if magic method lookup returns an object unexpectedly when it should have raised an AttributeError.

Of course, for proxy objects it might be more convenient if *all* attribute access did go through __getattr__ - but with that not the case, it is much better for it to be consistent than to have to put in specific workaround code.

All the best,

Michael

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog
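The "specific workaround code" for proxies amounts to defining each needed special method explicitly on the proxy class, so that type-level lookup finds it. A minimal sketch (the `Proxy` class is hypothetical; a real proxy would have to forward many more special methods):

```python
class Proxy(object):
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Ordinary attribute access is delegated to the target ...
        return getattr(self._target, name)

    # ... but special methods must be defined on the class itself,
    # because operators look them up on the type, not the instance.
    def __len__(self):
        return len(self._target)

p = Proxy([1, 2, 3])
assert len(p) == 3       # works only because __len__ is on the class
assert p.count(2) == 1   # ordinary lookup, delegated via __getattr__
```

Without the explicit `__len__` definition, `len(p)` would raise TypeError even though `p.__len__` resolves fine through `__getattr__`.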
Re: [Python-Dev] special method lookup: how much do we care?
Michael Foord wrote:
> Nick Coghlan wrote:
> [...]
> It's actually very annoying for things like writing Mock or proxy objects when this behaviour is inconsistent (sorry, should have spoken up earlier).
> The Python interpreter bases some of its decisions on whether these methods exist at all - and when you have objects that provide methods through __getattr__ then you can accidentally get screwed if magic method lookup returns an object unexpectedly when it should have raised an AttributeError. Of course for proxy objects it might be more convenient if *all* attribute access did go through __getattr__ - but with that not the case it is much better for it to be consistent rather than have to put in specific workaround code.

Suggestion: have something like "from __future__" but affecting compile-time behaviour (like pragmas in some other languages), such as causing Python to generate bytecodes which perform all attribute access through __getattr__.
Re: [Python-Dev] .pth files are evil
On Sun, 10 May 2009 09:41:33 -0600, Zooko Wilcox-O'Hearn zo...@zooko.com wrote:
>> (Of course, this ignores the issue of uninstalling previous versions, or overwriting of conflicting files in the target -- does pip handle these?)
> GNU stow does handle these issues.

I'm not sure GNU stow will handle the .pth file when deinstalling packages. In easy_install.pth there will be a list of all the packages installed. This list really needs to be edited once a package is removed.

The .pth files are a really good part of Python. Definitely nothing evil about them.

David