On Fri, 11 Sep 1998, Chris Johnson wrote:
> Come on now - this isn't a real solution! If the site is popular then he
> will get more than 50 connections... FastCGI or not, he will be over that
> limit.
Hm... I am thinking about three things:
- Opening a connection does have some overhead
Come on now - this isn't a real solution! If the site is popular then he
will get more than 50 connections... FastCGI or not, he will be over that
limit.
Chris
--
As I learn the innermost secrets of the people around me, they reward
me in many ways to keep me quiet.
On Fri, 11 Sep 1998, Aldrin wrote:
I have 11K records in my table, with two fields being uniquely indexed -
would this matter? Every time I do an update, which is twice a week, I do
all my adds, deletes, and mods on the database; then, for all the text
fields that are blank, I set them to NULL, as I feel it might increase
the speed of a lookup.
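The blank-to-NULL conversion the poster describes is easy to sketch. The example below uses Python with sqlite3 purely so it is self-contained; the table name `records` and column `notes` are made up for illustration, but the UPDATE statement has the same shape you would run against PostgreSQL after the twice-weekly batch of adds, deletes, and mods.

```python
import sqlite3

# sqlite3 is only a stand-in for PostgreSQL here; the table and column
# names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, notes TEXT)")
conn.executemany("INSERT INTO records (id, notes) VALUES (?, ?)",
                 [(1, "some text"), (2, ""), (3, "")])

# After the batch of modifications, turn blank text fields into NULL,
# as described above.
cur = conn.execute("UPDATE records SET notes = NULL WHERE notes = ''")
print(cur.rowcount)  # number of blank rows converted
```

Whether this actually speeds up lookups depends on the backend's storage and index handling, so it is worth measuring rather than assuming.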
> When I try to access the database with 50 simultaneous connections,
> the database crashes when I use the -o -F option for postmaster, and
> it doesn't crash if I do not use it.
I suggest you reduce the load PostgreSQL receives. A trick is to use
FastCGI, turning CGI sessions into full-time daemons
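To illustrate what the FastCGI suggestion buys: the process stays resident between requests, so the database connection is opened once and reused instead of once per CGI hit. The sketch below is a rough model of that idea, not a real FastCGI program; `get_connection` and `handle_request` are hypothetical names, and sqlite3 again stands in for PostgreSQL so the example runs on its own.

```python
import sqlite3

_conn = None  # lives as long as the resident worker process

def get_connection():
    """Open the database connection lazily, then reuse it for every request."""
    global _conn
    if _conn is None:
        _conn = sqlite3.connect(":memory:")  # stand-in for a PostgreSQL connect
        _conn.execute("CREATE TABLE hits (n INTEGER)")
    return _conn

def handle_request(i):
    # No per-request connection setup: this is the overhead FastCGI avoids.
    conn = get_connection()
    conn.execute("INSERT INTO hits VALUES (?)", (i,))

for i in range(50):   # 50 requests served by one process, one connection
    handle_request(i)

print(get_connection().execute("SELECT COUNT(*) FROM hits").fetchone()[0])
```

Note this only removes the connection-setup cost per request; if 50 truly simultaneous clients must each hold their own backend, the connection limit itself is still a separate problem, as the reply above points out.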
Hi there!
I got a problem with Postgres on a heavily loaded machine.
I use Postgres for a web server. I access the database with a Perl-5
CGI.
When I try to access the database with 50 simultaneous connections,
the database crashes when I use the -o -F option for postmaster, and
it doesn't crash if I do not use it.
Greetings,
A normal vacuum doesn't do the trick. I made a simple script which
vacuums my most-used table once an hour. This morning I found that the
script had crashed with the message "Cannot write duplicate key ...". So
it is not the right solution for the problem.
Dropping and recreating indexes is