Jim Fulton wrote:
I actually tried to do this once before with zc.buildout, but I didn't
get far -- probably a result of lack of effort and lack of familiarity
with the overall stack. But I also recognize lots of the questions
about stuff like the zope.conf file and Data.fs that still seem open.
Certainly when you tried this, buildout was very young and we hadn't
written recipes to deal with these issues. We've made a lot of progress
since then.
Well, the last time I really used it was early December, and it still
felt slow and awkward to me at the time, with several funny quirks.
And frankly I like easy_install. It's
probably 10x faster than buildout.
I doubt that that is true now. Although that probably depends on what
you are doing. Early versions of buildout did a lot of things
inefficiently as I was still learning setuptools. Because of the way
that buildout caches index information, I expect that creating a
buildout from scratch that used a lot of eggs would be much faster than
using easy_install. One difference though is that buildout checks for
the most recent compatible versions of all of the eggs it's using every
time you run it, whereas, as I understand it, with workingenv, you'd
just run easy_install manually when you want a new egg.
Correct. The basic process with workingenv is:
1. Set it up.
2. Start installing stuff.
3. Try running stuff.
4. Realize you got it wrong, missed something, want to do more
development, return to 2.
I actually find myself doing the 2-4 loop pretty often, both in
development and when first deploying something. Just the amount of time
to do "bin/buildout -h" was substantial (though I don't really
understand why, except that buildout seemed to be working way too hard
to update itself).
You can bypass
the checks by running in offline mode. Then buildout runs very fast.
Because of the ability to share eggs across buildouts, it is often
possible to run a buildout using lots of eggs in offline mode.
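For what it's worth, offline mode can be requested per-run with
"bin/buildout -o", or persistently in the configuration; a minimal
sketch (check the option against your buildout version):

```ini
[buildout]
# never touch the network; fail if a needed egg is not already local
offline = true
```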
It has been suggested that there should be a mode for buildout that only
talks to the network when there isn't a local egg that satisfies a
requirement. This would make buildout work more like workingenv when
few if any eggs are actually needed.
Yes; more like easy_install does as well, actually. Though the way
easy_install works is hardly intuitive; I find myself frequently saying
"yes, you installed it, but did you -U install it?"
As for the technical reasons they don't work together:
* workingenv allows, and leaves it to, setuptools to maintain the package
installation database (basically easy-install.pth). This is not a very
good database, but eh. buildout doesn't really have a database, but
instead just enforces what buildout.cfg indicates.
buildout uses the buildout configuration file to store what you want.
It uses .installed.cfg to capture what you have. These are both
databases of sorts.
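Concretely, the "what you want" side lives in buildout.cfg; a minimal
sketch (the part and egg names here are made up):

```ini
[buildout]
# "what you want": which parts to install
parts = app

[app]
recipe = zc.recipe.egg
eggs = myapp
```

After bin/buildout runs, .installed.cfg records what was actually
installed, which is how buildout knows what to remove or update on the
next run.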
* workingenv relies on that database to give default versions and to
set up the Python path. The fixup it does of installed scripts is fairly
minimal, just setting up sys.path enough to force its site.py to get
called. buildout enumerates all the activated packages, and ignores
easy-install.pth. This is basically what makes it static.
Yup. I wanted something far more static and predictable for scripts
generated by buildout.
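The static style Jim is describing looks roughly like this in a
generated script (the paths here are invented for illustration, not
literal buildout output):

```python
import sys

# egg locations are baked in statically at install time, so the script
# depends on neither easy-install.pth nor $PYTHONPATH at run time
sys.path[0:0] = [
    '/opt/buildout/eggs/myapp-1.0-py2.4.egg',
    '/opt/buildout/eggs/libX-1.0-py2.4.egg',
]

# a real generated script would now import and call its entry point
```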
Plus buildout's desire to own everything and
destroy everything it does not own ;)
I'm not aware that it destroys anything. Could you be more specific?
Well, it owns parts, and the recipes control that. Doesn't it also
delete and reinstall there? I'm unclear on how it treats each area of
the buildout. Simply making the file layout a bit more conventional, and
describing anything non-obvious, would make buildout feel a lot more
comfortable to the new user.
* As a result buildout supports multiple things in the same buildout
that have conflicting version requirements, but where the packages
themselves don't realize this (but the deployer does). If the packages
know their requirements then setuptools' native machinery allows things
to work fine.
Yes. I expect that usually, packages won't be very specific. The
buildout configuration file provides a place to be specific.
workingenv allows this, insofar as you can be specific while installing
things, and with the requirements file. But it doesn't make the
individual scripts very specific, if for instance appfoo requires
libX>=1.0, and appbar requires libX>=1.1, but you actually want appfoo to
use libX==1.0 and appbar to use libX==1.1 and install them in the same
buildout. That's the only case where buildout seems to be able to
express something workingenv can't.
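To make that concrete, the buildout side of the example might be
expressed like this (appfoo, appbar, and libX are the hypothetical names
from above; the exact recipe syntax may vary by version):

```ini
[buildout]
parts = appfoo appbar

[appfoo]
recipe = zc.recipe.egg
eggs =
    appfoo
    libX ==1.0

[appbar]
recipe = zc.recipe.egg
eggs =
    appbar
    libX ==1.1
```

Each part then gets scripts whose baked-in paths point at its own libX
egg, so the two pins never collide.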
* Some see bin/activate as a jail. Both workingenv and buildout are
deliberately jail-like. Both Jim and I loathe the non-repeatability of
system-wide installations (at least I think I can speak for him on that
one point ;). bin/activate lets you into that jail, and lets you work
there. There is no way into a buildout.
I'm not familiar with bin/activate, but it sounds like an interpreter
script created with buildout.
It's created by workingenv, and you have to source it because basically
its only function is to add the workingenv/lib/pythonX.Y to $PYTHONPATH.
Adding that path to $PYTHONPATH is the only thing that really
"activates" a workingenv.
Frankly this weirds me out,
and is a big part of my past frustration with it. Maybe that's because
I'm in the relatively uncommon situation that I actually know what's
going on under the hood of Python imports and packaging, and so it
bothers me that I can't debug things directly. Anyway, neither requires
activation when using scripts generated in the environment. And
bin/activate is really just something that sets PYTHONPATH and then does
other non-essential things like changing the prompt and $PATH -- I
should probably document that more clearly.
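In shell terms, what bin/activate does boils down to something like this
(the environment location is hypothetical):

```shell
# a minimal sketch of what bin/activate essentially does
WORKINGENV="$HOME/myenv"
export PYTHONPATH="$WORKINGENV/lib/python2.4${PYTHONPATH:+:$PYTHONPATH}"
export PATH="$WORKINGENV/bin:$PATH"
PS1="(myenv) ${PS1-}"   # cosmetic: show the environment in the prompt
```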
That sounds a lot like a buildout interpreter script.
Once you've changed $PYTHONPATH any Python script will notice the
change. This can actually be a bit awkward if you have fully isolated
the working environment, as it means a script may not see the global
Python paths. But if you don't isolate the environment, the script can
see the workingenv path in addition to its own.
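That any Python process notices $PYTHONPATH is easy to demonstrate (the
environment path here is made up):

```python
import os
import subprocess
import sys

# any Python child process picks up $PYTHONPATH; the path is fictional
env = dict(os.environ, PYTHONPATH="/tmp/fake-workingenv/lib/python")
out = subprocess.check_output(
    [sys.executable, "-c",
     "import sys; print('/tmp/fake-workingenv/lib/python' in sys.path)"],
    env=env,
)
print(out.decode().strip())
```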
Neither can be entirely
compatible with a system-wide Python installation, because Python's
standard site.py f**ks up the environment really early in the process,
and avoiding that isn't all that easy.
This reminds me of a place where buildout is looser than workingenv.
buildout doesn't try to disable anything in the system Python. It just
augments it. I always use a clean Python, so avoiding customizations in
the Python I use isn't a problem. If I wanted to take advantage of
something in a system Python, as I occasionally do, I can do that.
I find the isolation useful when testing things for release; I can be
sure that I haven't been using any packages that I don't explicitly
include in the egg requirements or instructions. But it can be annoying
in other cases, like when there's a library that doesn't install cleanly
(of which there are still quite a few). Anyway, if you do want to include
the global packages, --site-packages will change your workingenv to do so.
It could be argued that workingenv's default should be to include
site-packages. Another option would be to have a tool that allows you
to easily include something from the system Python (probably just a tool
to manage a custom .pth file, which works even when setuptools' fairly
heroic attempts to fix broken setup.py files don't work).
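Such a .pth-managing tool would build on machinery the standard library
already has; a sketch using a throwaway directory:

```python
import os
import site
import sys
import tempfile

# a directory standing in for something installed in the system Python
d = tempfile.mkdtemp()
target = os.path.join(d, "some-library")
os.makedirs(target)

# a .pth file: each line naming an existing directory is added to sys.path
with open(os.path.join(d, "extra.pth"), "w") as f:
    f.write(target + "\n")

site.addsitedir(d)   # processes *.pth files found in d
print(target in sys.path)
```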
Ian Bicking | [EMAIL PROTECTED] | http://blog.ianbicking.org
Zope-Dev maillist - Zope-Dev@zope.org