Hi Brian,

I've been over this a few times in our environment, and the best
solution I've been able to come up with is to run multiple Apache
servers.

Our environment involves simultaneous HEAD and branch development on our
Framework, as well as HEAD and branch development of applications based on
the Framework. We've got a multi-server development environment
(dev-only, preview, and production), with CVS commits auto-published to
the dev and preview servers according to CVS branch.

Our dev server has to support a stable CVS branch of the Framework in
order to facilitate stable application development. As a result, we're
about to begin running multiple Apache processes, one per branch of the
Framework (HEAD, stable release branch, tagged release), to keep the
environment clean and application quality predictable. Applications can
be tested simultaneously against the current Framework, a past tagged
release, the stable branch, or even the trunk.
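
To give you a rough idea, each instance gets its own conf and startup.pl
pointing at that branch's checkout; something like the following (the
port and paths are made up for illustration, not our actual layout):

    # httpd.conf for the "framework-stable" instance
    Listen      8081
    ServerRoot  /opt/apache/framework-stable
    PerlRequire /opt/apache/framework-stable/conf/startup.pl

    # startup.pl -- pin @INC to that branch's checkout only
    use lib '/usr/local/checkouts/framework-stable/lib';
    1;

The HEAD and tagged-release instances are identical apart from the port
and the checkout path, so a request can never pick up modules from the
wrong branch.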

My reasoning against solutions such as Apache::PerlVINC and StatINC is
that I want the environment in which an application is developed and
tested to be as close as possible to the final deployment environment. I
even encourage our developers to work through the proxy front-end so
that they can see what their Cache-Control and other headers are doing
(of course, the back-end Apache processes are always available to them
on a different port). IMHO these modules, while quite useful in many
circumstances, are not applicable to our strict development environment.
And while they may well work fine here, I don't want to take the chance,
because I'm a paranoid S.O.B. 8^)
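
In case it's useful to anyone, a minimal Apache/mod_proxy front-end for
this kind of setup needs little more than the following (the port is a
placeholder for whichever back-end instance you want to hit):

    # front-end httpd.conf -- hand everything to a mod_perl back-end
    ProxyPass        /  http://localhost:8081/
    ProxyPassReverse /  http://localhost:8081/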

Luckily, RAM is cheap these days...

Steve


-----Original Message-----
From: Brian Ferris [mailto:[EMAIL PROTECTED]]
Sent: Saturday, March 03, 2001 12:30 PM
To: [EMAIL PROTECTED]
Subject: Dynamic loading of development libraries

I'm currently a developer for an on-line publication using Apache /
mod_perl / Mason.  We have about six developers working on the project,
and I've been running into problems with concurrent work on the Perl
libraries that power our site.

We use CVS to manage revisions, but the only way for a developer to see
whether their code works is to run it on our webserver.  However,
mod_perl's very purpose is to load one copy of your modules at startup
and keep it around.  StatINC addresses this problem to a certain extent,
but it fails when you have multiple versions of a Perl module that you
want to load depending on which user is making the request.
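
(StatINC here is just the stock one-directive setup, something like

    PerlInitHandler Apache::StatINC

which reloads changed modules for the whole child process, from a single
@INC, for every user alike.)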

I sort of got around this by modifying my Mason handler to examine the
requested URI (e.g. /dev/user_name/blah.html) and load the appropriate
modules for that user.  Basically, this involved modifying the @INC
paths in the handler, requiring the modules, and then calling the
StatINC handler sub to reload any modified modules.  This sort of
screams hack, and it never worked that well.  Processes would load the
proper modules for one user, and then use those same modules to serve
another user who was looking for his own.  Chaos ensued...
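
In case it clarifies things, here's a stripped-down sketch of what the
handler was doing (the package name, paths, and Mason setup below are
simplified placeholders, not our real config):

    package MyApp::MasonHandler;
    use strict;
    use Apache::StatINC ();
    use HTML::Mason::ApacheHandler;

    my $ah = HTML::Mason::ApacheHandler->new(
        comp_root => '/www/comps',
        data_dir  => '/www/mason_data',
    );

    sub handler {
        my ($r) = @_;

        # Map /dev/user_name/... onto that user's checked-out libraries.
        if ( $r->uri =~ m{^/dev/([^/]+)/} ) {
            unshift @INC, "/home/$1/lib";
        }

        # Ask StatINC to reload any modules whose files have changed.
        Apache::StatINC::handler($r);

        return $ah->handle_request($r);
    }

    1;

The trouble, as far as I can tell, is that @INC and %INC belong to the
whole child process rather than to the request, so whichever user's copy
a child happens to load first is the copy every later request gets,
until StatINC notices a changed file and reloads it.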

I have a few ideas as to what I should try next.  Perhaps limiting
MaxRequestsPerChild to 1, so that libraries don't get reused across
requests?  I don't know what the ramifications of this are.
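
That would just be

    MaxRequestsPerChild 1

in httpd.conf, but I gather it means a brand-new child (and a fresh
compile of anything that isn't preloaded) for every single request,
which seems to defeat much of the point of mod_perl.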

Short of running a webserver for each user (a bad solution, in my
opinion), does anyone have any ideas?

Thanks,
Brian Ferris
