This is important when clustering for redundancy purposes.

I'm trying to address two issues:

A. Avoiding the single point of failure that comes with keeping
   a central repository for the data, such as an NFS share or a
   single database server.
B. Avoiding the overhead from using heavyweight tools like
   database replication.

So I've been thinking about how to pull that off, and I think 
I've figured out how, as long as I don't need every machine to 
have exactly the same version of the data structure at all times.

What it comes down to is implementing two classes: one is a daemon 
running on each server in the cluster, responsible for handling 
update requests arriving over the network, and the other is a class 
usable inside mod_perl to handle local updates and inform the other 
servers of those updates.
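Very roughly, here's what I have in mind for the mod_perl side.  None 
of this exists yet; the package name, the peer list, and the 
Storable-over-TCP wire format are just placeholders I'm assuming for 
illustration.  It keeps a local Cache::FileCache and pushes every 
set() out to the daemons on the other servers:

  # Sketch only -- names and wire format are assumptions
  package Cache::Clustered::Client;

  use strict;
  use Cache::FileCache;
  use IO::Socket::INET;
  use Storable qw(nfreeze);

  sub new {
      my ($class, %args) = @_;
      my $self = {
          cache => Cache::FileCache->new({ namespace => $args{namespace} }),
          peers => $args{peers} || [],   # e.g. [ 'web2:2020', 'web3:2020' ]
      };
      return bless $self, $class;
  }

  sub get { my ($self, $key) = @_; return $self->{cache}->get($key) }

  sub set {
      my ($self, $key, $value) = @_;
      $self->{cache}->set($key, $value);    # local update first
      for my $peer (@{ $self->{peers} }) {  # then tell the other servers
          my $sock = IO::Socket::INET->new(PeerAddr => $peer) or next;
          my $frozen = nfreeze([ 'set', $key, $value ]);
          print $sock pack('N', length $frozen), $frozen;
          close $sock;
      }
  }

  1;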

I doubt I'd be the only person to find something like this 
terrifically useful.  Furthermore, I see that Cache::Cache could be 
the underlying basis for those classes, and most of the deep network 
programming is already there in Net::Daemon.
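The daemon side could then be a thin Net::Daemon subclass whose Run() 
method just reads the frames pushed by the client above and applies 
them to the local cache.  Again only a sketch: the attribute names 
follow the real Net::Daemon API, but the package name, port number, 
and frame format are made up:

  package Cache::Clustered::Server;

  use strict;
  use base 'Net::Daemon';
  use Cache::FileCache;
  use Storable qw(thaw);

  sub Run {
      my ($self) = @_;
      my $sock  = $self->{socket};
      my $cache = Cache::FileCache->new({ namespace => 'clustered' });
      while (1) {
          # read a 4-byte length prefix, then that many bytes of payload
          return unless read($sock, my $len_buf, 4) == 4;
          my $len = unpack('N', $len_buf);
          return unless read($sock, my $frozen, $len) == $len;
          my ($op, $key, $value) = @{ thaw($frozen) };
          if    ($op eq 'set')    { $cache->set($key, $value) }
          elsif ($op eq 'remove') { $cache->remove($key) }
      }
  }

  package main;
  Cache::Clustered::Server->new({ localport => 2020, pidfile => 'none' },
                                \@ARGV)->Bind();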

What say y'all to something like Cache::Clustered::Server and
Cache::Clustered::Client::* ?

  --Christopher Everett
