-----Original Message-----
From: Jonathan Vanasco <[EMAIL PROTECTED]>


> 
> On Jun 16, 2007, at 11:13 AM, Perrin Harkins wrote:
> 
> > 300 is nothing for MySQL.  You should be able to handle a few
> thousand
> > on a machine with enough RAM.
> 
> agreed.  MySQL connections are cheap.  Postgres ones consume RAM and  
> kernel resources, and more than 50 sucks on a box.

The problem is with the MySQL documentation: they don't share the 
performance and tuning information for free, and I couldn't find much on 
the web either. One thing I did find is that the more connections are 
open, the more open tables MySQL has, which also has a limit 
(open_files_limit). Maybe I have to raise that as well. 
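For what it's worth, these limits can all be raised in my.cnf -- a rough sketch, and the values below are illustrative guesses, not recommendations (the right numbers depend on your workload, and open_files_limit is also capped by the OS ulimit for the mysqld process):

```ini
# my.cnf fragment (illustrative values only)
[mysqld]
max_connections  = 500     # maximum concurrent client connections
table_cache      = 2000    # open-table cache; each connection touching a table can hold an entry
                           # (renamed table_open_cache in later MySQL versions)
open_files_limit = 8192    # must be large enough for the table cache plus connections;
                           # also bounded by the OS-level file descriptor limit (ulimit -n)
```

You can compare `SHOW GLOBAL STATUS LIKE 'Opened_tables'` against the cache size to see whether the table cache is actually thrashing before raising anything.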
 
> > If you already have a front-end proxy, you shouldn't have a lot of
> idle
> > servers holding db connections.  Assuming that your application uses
> > the database on every mod_perl request, only processes that are
> > actually sleeping will not be using their connection.

You mean that, because of the connection timeout in MySQL, the unused 
connections are dropped ?
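If that is the mechanism, the relevant server-side timeouts are easy to check from any client -- a sketch, assuming ordinary privileges:

```sql
-- How long MySQL keeps an idle connection before dropping it (seconds)
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';

-- How many clients the server has dropped without a proper disconnect
SHOW GLOBAL STATUS LIKE 'Aborted_clients';
```

A mod_perl child that sleeps past wait_timeout will find its cached handle dead on the next query, which is why people usually pair persistent connections with a ping/reconnect wrapper such as Apache::DBI.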

One problem is that when users upload or download files, they go through 
the backend mod_perl for that, because of authorization and logging. It 
all depends on the network speed of the users who upload and download; 
slow uploads especially can be a problem because they keep the 'fork' 
busy ( if I am seeing this correctly ). 

> 
> you can also look for db proxies/pools
> 
> i use/used pgpool , because i only use pg
> 
> i tested out sqlrelay a while back on mysql, and it seemed great
> 
> http://sqlrelay.sourceforge.net/

Looks good, I will give that a try.

Thanks,
