On 5/19/07, Will Fould <[EMAIL PROTECTED]> wrote:
> Here's the situation:  We have a fully normalized relational database
> (mysql) now being accessed by a web application, and to save a lot of
> complex joins each time we grab rows from the database, I currently load
> and cache a few simple hashes (1-10MB) in each Apache process with the
> corresponding lookup data.

Are you certain this is saving you all that much, compared to just
doing the joins?  With proper indexes, joins are fast.  It could be a
win to do them yourself, but it depends greatly on how much of the
data you end up displaying before the lookup tables change and have to
be re-fetched.
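To make the comparison concrete, here's a rough sketch of what the join-based read might look like with DBI; the table and column names (items, statuses, regions) and the connection parameters are hypothetical, not from your schema:

```perl
use strict;
use warnings;
use DBI;

# Hypothetical schema: an items table with foreign keys into two
# small lookup tables (statuses, regions), each indexed on id.
my $dbh = DBI->connect('dbi:mysql:database=app', 'user', 'password',
                       { RaiseError => 1 });

my $rows = $dbh->selectall_arrayref(q{
    SELECT i.id, i.title, s.name AS status, r.name AS region
    FROM   items    i
    JOIN   statuses s ON s.id = i.status_id
    JOIN   regions  r ON r.id = i.region_id
    WHERE  i.created > ?
}, { Slice => {} }, '2007-01-01');

# Each element of @$rows is a hashref that already carries the
# looked-up names, so there's no per-row hash lookup in Perl and
# no cache to keep fresh in every child.
```

With indexes on the lookup tables' id columns, MySQL resolves those joins cheaply, which is why it may beat maintaining your own copies.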

> Is anyone doing something similar? I'm wondering if implementing a
> BerkeleyDB or another slave store on each web node with a tied hash (or
> something similar) is feasible and if not, what a better solution might be.

Well, first of all, I wouldn't feed a tied hash to my neighbor's dog.
It's slower than method calls, and more confusing.
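If you want to see the tie overhead for yourself, a quick Benchmark run makes the point; this is just an illustrative micro-benchmark I've sketched, not code from your application:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use Tie::Hash;   # provides Tie::StdHash

# A minimal object hiding a lookup table behind an accessor method.
package Lookup;
sub new { my ($class, %data) = @_; bless { %data }, $class }
sub get { $_[0]->{ $_[1] } }

package main;

my %plain = (foo => 42);
tie my %tied, 'Tie::StdHash';
$tied{foo} = 42;
my $obj = Lookup->new(foo => 42);

# Compare read costs.  Every read of %tied goes through a FETCH
# method call behind the scenes, so it pays the method-call price
# on top of the hash lookup, without the clarity of an explicit
# accessor.
cmpthese(-1, {
    plain_hash => sub { my $v = $plain{foo} },
    tied_hash  => sub { my $v = $tied{foo}  },
    method     => sub { my $v = $obj->get('foo') },
});
```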

There are lots of things you could do here, but it's not clear to me
what it is that you don't like about your current method.  Is it that
when the database changes you have to do heavy queries from every
child process?  That also kills any sharing of the data.  Do you have
more than one server, or expect to soon?

- Perrin
