Malcolm,

Thank you for your response.  It does sound plausible that the scripts aren't running all the way to completion, since they follow (roughly) a query -> process result -> query -> process result -> ... -> disconnect pattern.  We were thinking of using SQLRelay as a connection pool and concentrator, on the theory that it would be cheaper to hand each connection/cursor to a local pool over a unix socket than to throw all of this at a remote server directly.  Does that sound off-base to anyone?
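For context, the scripts are shaped roughly like this (just a sketch, not the actual code - the DSN, credentials, and table names below are placeholders):

  use strict;
  use DBI;

  # one run of this per request under Apache::Registry
  my $dbh = DBI->connect('dbi:mysql:ourdb', 'user', 'pass',
                         { RaiseError => 1, AutoCommit => 1 });

  my $rows = $dbh->selectall_arrayref('SELECT id, name FROM widgets');
  # ... process the result, build and run the next query, and so on ...

  $dbh->disconnect;   # the explicit disconnect at the end of each CGI
                      # (never reached if anything above die()s first)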

Regards,

Rich

On Wed, 2004-12-01 at 18:10 -0500, Malcolm J Harwood wrote:
On Wednesday 1 December 2004 08:26 pm, Richard N. Fogle wrote:

> 1.  We disabled Apache::DBI - the server can generate thousands of
> queries per second and this feature literally made the CPU catch fire.

Odd. Normally (in my limited experience) it has the reverse effect, since you 
aren't creating and destroying a connection on every request.
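The usual setup (at least the way I've done it) is just to load it before DBI, 
e.g. from a startup.pl pulled in by httpd.conf, something like:

  # startup.pl, loaded via "PerlRequire /path/to/startup.pl" in httpd.conf
  use Apache::DBI;   # must come before DBI so it can wrap DBI->connect
  use DBI;

After that, DBI->connect with the same DSN/user/password returns the cached 
handle within a given httpd child, and $dbh->disconnect becomes a no-op, so the 
connection count should track the number of children rather than the number of 
requests.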

> 3.  We have a disconnect at the end of each perl CGI.  Not sure if it is
> being reached, see no plausible reason why it shouldn't - the code isn't
> that complex.

Worth checking anyway.
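A cheap way to check is to log right before the disconnect and see whether it 
shows up in the error_log, something along the lines of:

  warn "reached end of $0, disconnecting\n";   # ends up in Apache's error_log
  $dbh->disconnect;

If the warn never appears for some requests, the script is bailing out (die, 
timeout, client abort, whatever) before it gets there.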

> 4.  This is what we have in httpd.conf:
> SetHandler perl-script
> PerlHandler Apache::Registry

Registry will keep any globals around, so if $dbh is global it won't destroy it 
(though if disconnect is being called, it should disconnect it).
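i.e. if the script has something like:

  our $dbh;                                 # package global - Registry keeps it around
  $dbh = DBI->connect($dsn, $user, $pass);  # ($dsn etc. are placeholders)
  # ...
  $dbh->disconnect;                         # an explicit disconnect still closes it

then the variable itself persists across requests in that child, but the 
underlying connection is torn down each time, as long as the disconnect is 
actually reached.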

> 5.  We easily reach 1024 webserver processes, apache 1.3.

If each one is connecting to the db server (assuming you don't have any 
interprocess connection pooling), then that's your 1024+ db connections right 
there. If the connections are made early but not disconnected until the end of 
the script, you would (I think) see a lot of "idle" connections that have no 
active query (the previous one has already completed) simply because the script 
hasn't reached its end and disconnected yet.
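Roughly: with

  MaxClients 1024          # httpd.conf - up to 1024 children

every child that is partway through a DB-using request is holding an open 
connection for the whole run of the script, not just while a query is executing, 
so under load you can be sitting near 1024 connections even though only a 
fraction of them have a query in flight at any instant.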


-- 
I always wanted to be somebody, but I should have been more specific.
- Lily Tomlin
