Perrin Harkins wrote:
> 
> On Mon, 6 Nov 2000, Differentiated Software Solutions Pvt. Ltd wrote:
> > We want to share a variable across different httpd processes.
> > Our requirement is as follows :
> >
> > 1. We want to define one variable (which is a large hash).
> > 2. Every httpd should be able to access this variable (read-only).
> > 3. Periodically (every hour) we would like to have another mod_perl program
> > refresh/recreate this large hash with new values
> > 4. After this, we want the new values in the hash to be available across httpds
> 
> If that's all you want to do, I would stay away from the notoriously slow
> and sometimes tricky IPC modules.  My dirt simple approach is to put the
> data in a file and then read it into each httpd.  (No point in trying to
> load it before forking if you're going to refresh it in an hour
> anyway.)  You can use Storable for your data format, which is compact and
> fast.  To check for an update, just stat the file and reload the data if
> it has a newer mtime.  If you don't like doing the stat every time, put a
> counter in a global variable and just stat once every 10 requests or
> something, like Apache::SizeLimit does.  If your data is too big to pull
> into memory, you can use a dbm file instead.
> 
> - Perrin
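
[A minimal sketch of the file-based approach Perrin describes: serialize the hash with Storable, then have each httpd child stat the file and reload only when the mtime changes. The names ($Data, $Mtime, load_data) and the demo data are illustrative, not from the original post.]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(nstore retrieve);
use File::Temp qw(tempfile);

# Per-process cache: each httpd child keeps its own copy of the hash
# and the mtime of the file it was loaded from.
our ( $Data, $Mtime );

sub load_data {
    my ($file) = @_;
    my $mtime = ( stat $file )[9];    # element 9 of stat() is mtime
    # Reload only if the file is newer than what we have in memory.
    if ( !defined $Mtime || $mtime != $Mtime ) {
        $Data  = retrieve($file);     # deserialize the shared hash
        $Mtime = $mtime;
    }
    return $Data;
}

# Demo: the hourly "refresher" process writes the hash with nstore();
# every child then picks it up via load_data() on its next request.
my ( $fh, $file ) = tempfile();
close $fh;
nstore( { colour => 'red', count => 42 }, $file );

my $h = load_data($file);
print "$h->{colour} $h->{count}\n";
```

[To avoid the stat() on every request, as suggested, one could wrap load_data() so it only stats every Nth call, keeping a counter in a package global — the same trick Apache::SizeLimit uses for its periodic size checks.]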

Have you benchmarked this vs IPC::ShareLite?

I've heard similar rumours about IPC being slow - but is that with
IPC::Shareable (a pure Perl implementation) or IPC::ShareLite (a C/XS
implementation)?

I would be interested in any results / ideas on how to do it.

Greg
