Balkrishna Sharma b...@hotmail.com writes:

Hello, I will have a web application with Postgres 8.4+ as the backend. At any
given time there will be a maximum of 1000 parallel web users interacting with
the database (read/write). I wish to do performance testing of 1000
simultaneous reads and writes to the database. I can do a simple unix script
on the database server and have the updates fired at the database in parallel.
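(For concreteness, here is one way the kind of test described above could be scripted. This is my sketch, not the poster's script: the DSN, table name, and client count are assumptions, and the replies below explain why opening one backend per client is the wrong shape for production, even if it is an interesting worst case to measure. Note that a stock Postgres install caps max_connections at 100, so 1000 direct connections will fail until the server is reconfigured. pgbench, which ships in Postgres contrib, automates this sort of concurrent read/write load.)

    # Hypothetical sketch of the proposed test: N clients, each opening its
    # own connection and running one write. DSN, table, and sizes are all
    # assumptions for illustration.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import psycopg2

    DSN = "dbname=app user=app host=127.0.0.1"  # placeholder connection string
    N_CLIENTS = 1000

    def one_client(i: int) -> float:
        start = time.perf_counter()
        conn = psycopg2.connect(DSN)  # one backend per client: the naive shape
        try:
            with conn.cursor() as cur:
                cur.execute("UPDATE counters SET n = n + 1 WHERE id = %s",
                            (i % 100,))
            conn.commit()
        finally:
            conn.close()
        return time.perf_counter() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=N_CLIENTS) as pool:
            latencies = list(pool.map(one_client, range(N_CLIENTS)))
        print(f"avg latency: {sum(latencies) / len(latencies):.3f}s")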
On Wed, 16 Jun 2010, Balkrishna Sharma wrote:
> I wish to do performance testing of 1000 simultaneous read/write to the
> database.
When you set up a server that has high throughput requirements, the last
thing you want to do is use it in a manner that cripples its throughput.
Don't try to have 1000 parallel Postgres backends - it will process those
queries more slowly than the optimal setup. You should aim to have a much
smaller number of backends and feed the requests to them through a
connection pool.
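(A minimal sketch of that idea, using psycopg2's built-in ThreadedConnectionPool. The pool sizes and DSN are illustrative assumptions, not values from the thread; a production setup would more likely put an external pooler such as PgBouncer or pgpool-II in front of the database.)

    # A bounded pool so many web workers share a handful of backends.
    # Pool sizes and DSN are illustrative assumptions.
    from psycopg2.pool import ThreadedConnectionPool

    pool = ThreadedConnectionPool(
        5,                                # minconn
        20,                               # maxconn: far fewer than 1000 users
        dsn="dbname=app user=app host=127.0.0.1",
    )

    def run_query(sql, params=None):
        """Borrow a connection, run one statement, commit, return it."""
        conn = pool.getconn()  # raises PoolError once maxconn are checked out
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
                rows = cur.fetchall() if cur.description else None
            conn.commit()
            return rows
        finally:
            pool.putconn(conn)

(Note that psycopg2's getconn() raises an error rather than blocking when the pool is exhausted, which is one reason a dedicated pooler that queues waiters is usually the better tool.)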
Pierre C li...@peufeu.com writes:

The same is true of a web server: 1000 active php interpreters (each eating
several megabytes or more) are not ideal for performance! For php, I like
lighttpd with php-fastcgi: the webserver proxies requests to a small pool
of php processes, which are only started once.
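(The shape of that setup, many requests multiplexed onto a small pool of long-lived workers, can be sketched generically. The Python below is only an analogue of the fastcgi pattern, not Pierre's lighttpd configuration; the worker count and request count are made up.)

    # Rough analogue of the fastcgi pattern: many requests, a small fixed
    # pool of long-lived worker processes. Purely illustrative.
    from concurrent.futures import ProcessPoolExecutor

    def handle_request(req_id: int) -> str:
        # Stand-in for the per-request work a php-fastcgi process would do.
        return f"response {req_id}"

    if __name__ == "__main__":
        # 1000 "users" submit work, but only 8 worker processes ever exist,
        # and they are started once rather than once per request.
        with ProcessPoolExecutor(max_workers=8) as workers:
            results = list(workers.map(handle_request, range(1000)))
        print(f"{len(results)} requests served by 8 workers")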
Balkrishna Sharma b...@hotmail.com wrote:
> I wish to do performance testing of 1000 simultaneous read/write
> to the database.

You should definitely be using a connection pool of some sort. Both your
throughput and response time will be better that way. You'll want to test
with different pool sizes to find what works best for your hardware and
workload.
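(A rough sketch of that experiment: run the same workload at several pool sizes and report throughput for each. The DSN, query, and sizes are made up for illustration; the real test would use the application's actual read/write mix.)

    # Hypothetical pool-size sweep. DSN, query, and sizes are illustrative.
    import time
    from concurrent.futures import ThreadPoolExecutor

    from psycopg2.pool import ThreadedConnectionPool

    DSN = "dbname=app user=app host=127.0.0.1"
    REQUESTS = 2000

    def one_request(pool) -> None:
        conn = pool.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")  # stand-in for the real workload
                cur.fetchone()
            conn.commit()
        finally:
            pool.putconn(conn)

    for size in (5, 10, 20, 40, 80):
        pool = ThreadedConnectionPool(size, size, dsn=DSN)
        start = time.perf_counter()
        # Cap concurrency at the pool size so getconn() never overflows it.
        with ThreadPoolExecutor(max_workers=size) as ex:
            futures = [ex.submit(one_request, pool) for _ in range(REQUESTS)]
            for f in futures:
                f.result()  # surface any errors
        elapsed = time.perf_counter() - start
        pool.closeall()
        print(f"pool={size:3d}  {REQUESTS / elapsed:8.1f} tx/s")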