> Arjen van der Meijden wrote:
> 
> > On 8-8-2004 16:29, Matt Clark wrote:
> > 
> >> There are two well-worn and very mature techniques for dealing with the
> >> issue of web apps using one DB connection per apache process, both of 
> >> which
> >> work extremely well and attack the issue at its source.
> >>
> >> 1)    Use a front-end caching proxy like Squid as an accelerator.  Static
> >> content will be served by the accelerator 99% of the time.  Additionally,
> >> large pages can be served immediately to the accelerator by Apache, which
> >> can then go on to serve another request without waiting for the end 
> >> user's
> >> dial-up connection to pull the data down.  Massive speedup, fewer apache
> >> processes needed.
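(Just to illustrate the buffering half of 1) with a toy example: the
sketch below is not Squid, only a minimal Python stand-in showing why
pulling the whole response off Apache at LAN speed frees the Apache
worker early. The ports, 8080 for Apache and 8000 for the proxy, are
made up.)

# Toy "accelerator": buffer the full upstream response, then feed the
# (possibly slow) client from memory. Squid adds real caching on top.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

UPSTREAM = "http://127.0.0.1:8080"   # the real Apache instance (assumed port)

class BufferingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read the complete response over the fast local link; once
        # read() returns, the Apache worker is free to serve someone else.
        with urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
            status = resp.status
            ctype = resp.headers.get("Content-Type", "text/html")
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        # Only this lightweight process now waits on the client's dial-up link.
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8000), BufferingProxy).serve_forever()

(Squid of course also caches the static objects themselves, which the
sketch leaves out entirely.)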
> > 
> > 
> > Another version of this 1) is to run with a "content accelerator"; our 
> > "favourite" is to run Tux in front of Apache. It takes over the 
> > connection handling, has a very low memory profile (compared to 
> > Apache) and very little overhead. What it does is serve up all 
> > "simple" content (although you can have cgi/php/perl and other languages 
> > processed by it, entirely removing the need for Apache in some 
> > cases) and forward/proxy everything it doesn't understand to an 
> > Apache/other webserver running on the same machine (on 
> > another port).
> > 
> > I think there are a few advantages over Squid: since it is partially 
> > done in kernel space it can be slightly faster at serving up content, 
> > apart from its simplicity, which will probably matter even more. You'll 
> > have no caching issues for pages that should not be cached or static 
> > files that change periodically (like every few seconds). AFAIK Tux can 
> > handle more than 10 times as many ab-generated requests per second as 
> > a default-compiled Apache on the same machine.
> > And besides the speedup, you can still do any request you were able to 
> > do before, since Tux will simply forward anything it doesn't understand 
> > to Apache.
> > 
> > Anyway, apart from all that: reducing the number of Apache connections 
> > is nice, but not really the same as reducing the number of database 
> > connections by using a DB pool... You may even be able to run with 
> > 1000 HTTP connections, 40 Apache processes and 10 DB connections. In 
> > the non-pooled setup, you'd still have 40 DB connections.
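(That's the key point: with a pool, many web workers share a small,
fixed set of backend connections. A toy sketch of the idea in Python;
the 40/10 numbers mirror the example above, and connect() is just a
stand-in for a real PostgreSQL connect call.)

import queue
import threading
import time

DB_CONNECTIONS = 10      # size of the shared pool
WEB_WORKERS = 40         # e.g. Apache processes handling requests

def connect(i):
    # Placeholder for a real connection (e.g. via libpq/psycopg2).
    return "db-conn-%d" % i

pool = queue.Queue()
for i in range(DB_CONNECTIONS):
    pool.put(connect(i))

def handle_request(worker_id):
    conn = pool.get()        # block until one of the 10 connections is free
    try:
        time.sleep(0.01)     # pretend to run a query on `conn`
    finally:
        pool.put(conn)       # hand the connection back to the pool

threads = [threading.Thread(target=handle_request, args=(w,))
           for w in range(WEB_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("served", WEB_WORKERS, "requests over", DB_CONNECTIONS, "connections")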
> > 
> > In a simple test I did, though, pgpool seemed to have quite some 
> > overhead. So it should be tested carefully, to find the turnover 
> > point where it becomes a gain instead of a loss...

I don't know what configuration you were using, but I noticed that
UNIX domain sockets are preferred for the connection between clients
and pgpool. When I tested with pgbench -C (which establishes a new
connection for each transaction), the configuration with pgpool was 10
times faster than the one without pgpool when using UNIX domain
sockets, while there was only a 3.6-times speedup with TCP/IP sockets.
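
If you want to reproduce the comparison outside pgbench, a rough sketch
like the one below works; psycopg2, the socket directory (/tmp), the
pgpool port (9999) and the dbname/user are only placeholders, so adjust
them to your installation.

import time
import psycopg2

def time_connects(n, **kw):
    # Open and close n fresh connections, like pgbench -C does per transaction.
    start = time.time()
    for _ in range(n):
        conn = psycopg2.connect(dbname="bench", user="postgres", **kw)
        conn.close()
    return time.time() - start

N = 200
unix = time_connects(N, host="/tmp", port=9999)       # UNIX domain socket
tcp  = time_connects(N, host="127.0.0.1", port=9999)  # TCP/IP to localhost
print("unix socket: %.2fs   tcp: %.2fs for %d connects" % (unix, tcp, N))

Pointing the same loop straight at the postmaster (port 5432 by default)
instead of at pgpool gives the "without pgpool" number to compare against.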

> > Best regards,
> > 
> > Arjen van der Meijden
> > 
> 
> Other than images, there are very few static pages being loaded by 
> the user. Since they make up a very small portion of the traffic, it's 
> an optimization we can forgo for now.
> 
> I attempted to make use of pgpool. At the default of 32 pre-forked 
> connections, the webserver almost immediately exhausted the pgpool pool 
> and content stopped being served, because no new processes were being 
> forked to make up for it.
> 
> So I raised it to a higher value (256) and it immediately segfaulted and 
> dumped core. So I'm not sure exactly how to proceed, since I really 
> need the thing to fork additional servers as load hits, not the other 
> way around.

What version of pgpool did you test? I know that a certain version
(2.0.2, actually) had such a problem. Can you try again with the
latest version of pgpool? (It's 2.0.6.)
--
Tatsuo Ishii
