Hi
According to PostgreSQL vs. MySQL benchmarks, connection authentication takes the
lion's share of PostgreSQL's overhead, and it can be minimised. Furthermore, more
than 100 connections to the database is a performance hog on a single machine.
(Any idea if PostgreSQL supports partitioning?)
The best way to reduce the number of connections is for one application to open
the connection and have its threads share it. Of course that adds synchronisation
overhead to the application, but it's small compared to the database performance loss.
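A minimal sketch of that shared-connection idea, assuming a hypothetical zero-argument connect() factory standing in for a real driver call (this is not an actual PostgreSQL API): threads borrow a connection from a small pool guarded by a queue instead of each opening its own.

```python
import queue
import threading

class ConnectionPool:
    """Share a few database connections among many threads.

    `connect` is any zero-argument factory returning a connection;
    here it is a placeholder for a real driver call.
    """

    def __init__(self, connect, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        # Blocks until a connection is free, capping total connections.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Demo with dummy "connections": 20 worker threads share 4 connections.
made = []
pool = ConnectionPool(lambda: made.append(object()) or made[-1], size=4)
results = []
lock = threading.Lock()

def worker(n):
    conn = pool.acquire()
    try:
        with lock:
            results.append(n)  # pretend to run a query on conn
    finally:
        pool.release(conn)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Only 4 physical connections were ever opened for 20 workers.
```

The blocking acquire() is the synchronisation overhead mentioned above: waiting for a free connection costs a little, but far less than 20 separate database connections would.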
And if you are using PostgreSQL seriously, get the latest version and *compile* it
yourself. The source is only around 15 MB, affordable even on dialup, and you can
add a hell of a lot of optimisations, and hence performance, that way.
Besides, use a 2.4.x kernel. They are far better for databases (according to a Linux
vs. FreeBSD benchmark). They also give you ReiserFS, which is essential for a
database IMHO....
I am quoting conclusions from benchmarks, nothing from personal experience, except
that PostgreSQL has a damn small initial footprint, around an MB...
Shridhar
[EMAIL PROTECTED] wrote:
>
> AFAIK PostgreSQL allows for 32 simultaneous connects by default. To increase this to
>a max of 1024 you need to use the -N and -B switches while starting postmaster. To
>increase this even further, or to change the default values, you need to recompile. In
>case you do not need more than 32 (or 1024) simultaneous connects, make sure that
>you're using the persistent connect option in PHP/Java or whatever. This will reuse an
>already open connection and also reduce the overhead of opening and closing connections.
>Note that for persistent connections you need to explicitly close the connection when
>you're finished (both to free up resources and to avoid a security hazard).
>
> HTH,
> Indraneel
----------------------------------------------
LIH is all for free speech. But it was created
for a purpose. Violations of the rules of
this list will result in stern action.