Saar Picker <[EMAIL PROTECTED]> writes:

> Thanks for responding. We currently are caching DB connections
> per-process. However, with 40-50 processes per server, and 4+ machines per
> DB server, and 3-5 connections per process, you can see how the number of
> connections per DB server gets rather large. I think the problem lies
> with the CODE references. I'll check out IPC::Shareable some more.

Firstly, your numbers sound a little odd:

a) Why do you have more than one connection per process? I assume you're using
persistent connections of some sort (look at connect_cached, for example;
there's a short sketch after this list).

b) Why do you run as many as 50 processes? If you split off all the static
data onto a separate server you might find things run as fast or faster with
fewer processes.

c) 100-200 connections might not be out of line for your database. If you're
using Oracle, you might look into MTS, for example, which can handle 1000+
connections as easily as one connection as long as most of them aren't
actively doing database work.
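For the per-process caching in (a), here's a minimal sketch of what
connect_cached looks like (the DSN, credentials and query are placeholders,
not your setup):

    use strict;
    use warnings;
    use DBI;

    sub get_dbh {
        # connect_cached returns the same handle for identical arguments
        # within this process, reconnecting only if the handle went stale,
        # so each Apache child ends up with one connection, not several.
        return DBI->connect_cached(
            "dbi:Oracle:orcl", "scott", "tiger",
            { RaiseError => 1, AutoCommit => 1 },
        );
    }

    my $dbh = get_dbh();
    my ($count) = $dbh->selectrow_array("SELECT COUNT(*) FROM my_table");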

Your problem sharing connections across processes has nothing to do with perl.
A database connection is more than just perl code: it's usually a network
socket, and it's usually managed by a C library from the database vendor.
Worse, most databases, if not all, tie the concept of a session to the network
connection. If two processes end up writing to the same socket, the database
will be very confused.

In any case, you can't store a socket in a shared memory segment; it's not
just a piece of data. You would need to arrange to open all your sockets
before Apache forked, or find some other way to distribute them.

Then you would need to deal with the fact that the database library stores
some state information that would also need to be shared, either by putting
all of it in shared memory or in some other way. And you don't have access to
that state from perl; you would need to do this in the DBD driver's C code,
either using explicit support from the library or adding code to the low-level
DB library.

Then you would need to write the perl layer to handle locking the handles to
avoid having two processes trying to use the same handle at the same time.
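Purely for illustration, that locking layer might look something like the
sketch below, using flock on a lock file. It assumes the socket and driver
state could somehow be shared in the first place, which, as above, they can't
be from pure perl:

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    sub with_locked_handle {
        my ($dbh, $code) = @_;
        open my $lock, '>', '/tmp/dbh.lock' or die "can't open lock file: $!";
        flock($lock, LOCK_EX) or die "flock: $!";  # one process at a time
        my @result = $code->($dbh);                # do the database work
        flock($lock, LOCK_UN);
        close $lock;
        return @result;
    }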

In other words, there's a lot of work to be done to do this using shared
memory, and not all the libraries would even support it. I'm not completely
sure any of them would.

DBD::Proxy (with DBI::ProxyServer on the other end) works by having a single
process do all the database work; everything else talks to the proxy. This
adds a layer of latency, though. Oracle has a native tool called Connection
Manager that does something similar.
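A rough sketch of the client side (hostname, port, DSN and credentials are
made up; the server side is the dbiproxy script that ships with DBI, e.g.
"dbiproxy --localport 3334" on the machine doing the real database work):

    use strict;
    use warnings;
    use DBI;

    # Each Apache child talks to the one proxy process instead of opening
    # its own direct connection to the database.
    my $dbh = DBI->connect(
        "dbi:Proxy:hostname=dbhost;port=3334;dsn=dbi:Oracle:orcl",
        "scott", "tiger",
        { RaiseError => 1 },
    );

    my ($today) = $dbh->selectrow_array("SELECT sysdate FROM dual");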

-- 
greg
