Benji York wrote:
David Pratt wrote:
Hi Benji. This is exactly what I have been doing up until now, and it has
been working well for quick work on a local development machine. My
current thinking, though, is to take control of as much of the software
as possible so that development == deployment on my local machine, to
mitigate the risk of breaking things even if it means more disk. I am
doing this in conjunction with stripping the deployment server to its
barest bones and bringing as much of the software into the buildout as
possible.
I really would like to see a two-stage buildout that does the Python
construction with a python.cfg and then runs the main buildout with a
buildout.cfg file as part of the standard fare. I'm trying a few
things today, as an experiment, to see if a simple event class and
callback can be used to build the Python first and have the callback's
handler run the main buildout.
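The sequencing David describes could be sketched roughly like this. This is
only an illustration of the two-stage idea, not his actual event class; the
`bin/buildout` path and config names follow the convention mentioned above,
and the injectable `runner` is my own addition so the ordering can be
exercised without a real buildout installed.

```python
import subprocess

def run_stage(config, runner=subprocess.check_call):
    # runner is injectable so the two-stage sequencing can be tested
    # without actually invoking a buildout
    runner(["bin/buildout", "-c", config])

def two_stage_build(runner=subprocess.check_call):
    run_stage("python.cfg", runner)    # stage 1: construct the Python
    run_stage("buildout.cfg", runner)  # stage 2: the main buildout
```

In David's scheme the callback fired after stage 1 would simply be whatever
kicks off stage 2 here.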
Where you draw the boundary line between the "environment" and the
"application" has a big impact on how you make these types of decisions.
You want to find the optimum place to draw that line so you end up
getting the most benefit from the least amount of work (it's kind of
analogous to the max-flow min-cut theorem from graph theory).
Hi Benji. You are absolutely right about this. Also, I have to say the
amount of software going into things was sort of freaking me out over
the last year or so, particularly when all the eggified packages were
just rolling out. What I am coming to learn is that the set of software
I am using, while relatively large, is also reasonably finite. And I
mean the essential server software starts looking small in relation to
two or three hundred eggs after a while :-) Secondly, a buildout.cfg is
pretty readable - I like knowing I don't have to guess about how
something was built; it's all in black and white, with settings I no
longer need to look up or remember. In this way, packaging systems like
ports and yum make you a little lax about a distribution and its settings.
I have a buildout recipe for gcc also, how ironic :-) The crazy thing
was that, until recently, PyLucene would only work on certain platforms
with gcc-3.4.6, so I have had to go this route to construct a compiler
that would allow PyLucene to be built as well.
I get your point, though, and ports and yum are nice and easy to work
with. The way I am attempting to mitigate issues with buildouts and
servers is specialization: what I want for administration is servers
that are, as much as possible, reliably identical.
Regards,
David
Let me use an example to illustrate. Say you decided to build Python
with your buildout. After all, your app uses Python, so to have good
reproducibility you want to make sure Python is perfectly clean and
built repeatably so you don't get any surprises. Makes sense.
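For concreteness, building Python inside a buildout usually means a
configure-make-make-install part, e.g. via the zc.recipe.cmmi recipe. A
minimal sketch (the URL and Python version here are just illustrative):

```ini
[buildout]
parts = python

[python]
recipe = zc.recipe.cmmi
url = http://www.python.org/ftp/python/2.4.4/Python-2.4.4.tgz
```

The catch, as the rest of this example shows, is that what `configure`
finds on the build machine still leaks into the result.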
Your app is enjoying increased success and one day you need to add some
new servers to your cluster. You buy a few new machines, put your OS on
them, and build your app. You then run your tests and they fail.
Darn.
You investigate and find that your Python was built without support for
zip compression because the zlib development libraries aren't included
in your base OS install. Now you have a decision to make: do you add
zlib to your buildout, or do you add the zlib development package to
your OS?
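This particular failure mode is cheap to catch right after the build. A
hedged sketch of such a smoke test; here PY defaults to whatever python3 is
on PATH, but in a real buildout it would point at the freshly built
interpreter (that path is hypothetical):

```shell
# Smoke-test an interpreter for zlib support before trusting the build.
# In a buildout, PY would be something like parts/python/bin/python.
PY="${PY:-$(command -v python3)}"
if "$PY" -c 'import zlib' 2>/dev/null; then
    echo "zlib support: present"
else
    echo "zlib support: MISSING - install the zlib dev package and rebuild" >&2
fi
```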
If Python is part of the environment you add the zlib development
package to your OS. Of course that means you need a good way of
controlling what is in the OS (environment). Conveniently, there are
ways to do that (RPM, APT, ports, etc.).
At this point if you draw the app/environment boundary to include Python
in your app, you should add zlib to your buildout, right? What happens
if you hit a bug in GCC when compiling Python? Do you include GCC in
your buildout to make sure you get the right version? It gets worse the
deeper you go. :)
_______________________________________________
Zope3-users mailing list
Zope3-users@zope.org
http://mail.zope.org/mailman/listinfo/zope3-users