Thanks, Eddy.
       I tried reducing max_connections, but nothing seemed to improve.
       I set a large max_connections because I have to increase the number
of connection pools. By requirement, the accesses or queries may suddenly
increase to 3000 or more, so next I will test with more connection pools.
       Thanks for your document; I will read it ASAP.
2009/5/14 Eddy Ernesto Baños Fernández <eeba...@estudiantes.uci.cu>

>  Take a look at the attachment. I hope it helps.
>
> You must configure your postgresql.conf properly; 3000 max_connections
> is a big mistake. Try reducing max_connections and use a pooling service
> instead, then run your tests again.
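> A pooling service such as PgBouncer sits between the clients and the
> server, so the application can keep thousands of client connections open
> while PostgreSQL itself only sees a few dozen. A minimal sketch (the
> database name, host, paths, and pool sizes below are placeholders for
> illustration, not tested values):
>
> ```ini
> ; pgbouncer.ini -- hypothetical example configuration
> [databases]
> ; clients connect to "appdb" on port 6432; PgBouncer forwards to Postgres
> appdb = host=127.0.0.1 port=5432 dbname=appdb
>
> [pgbouncer]
> listen_addr = *
> listen_port = 6432
> auth_type = md5
> auth_file = /etc/pgbouncer/userlist.txt
> ; accept up to 3000 client connections...
> max_client_conn = 3000
> ; ...but keep only ~50 real server connections per database
> default_pool_size = 50
> ; release the server connection at the end of each transaction
> pool_mode = transaction
> ```
>
> The test program would then point its libpq connection string at port
> 6432 instead of 5432, and max_connections on the server could drop to
> roughly default_pool_size plus a small reserve.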
>
> *From:* pgsql-admin-ow...@postgresql.org [mailto:
> pgsql-admin-ow...@postgresql.org] *On behalf of *Tony Liao
> *Sent:* Wednesday, May 13, 2009 22:39
> *To:* pgsql-admin@postgresql.org
> *Subject:* [ADMIN] how to improve performance in libpq?
>
>
>
> Hi, all,
>
>      I have a question about libpq. The PostgreSQL database is about 32MB
> (based on the backup), and I analyzed a query with EXPLAIN ANALYZE SELECT
> ......... where id=123; the total actual time was 2.882 ms.
>
>      Now I have started a test program based on
> ../src/test/examples/testlibpq.c, which uses 900 connection pools, and the
> id increases for each query. In total I ran 160000 queries, which took
> 1012 s. That is very different from 2.88 ms per query. Any idea?
>
>      test environment:
>
>      database hardware:  CPU      2*Xeon 5405
>
>                          MEMORY   DDR2 16GB/800 with DIMM
>
>                          HARDDISK 160GB/8M SATA
>
>      network: LAN 1Gbps
>
>       configuration files:
>
>          max_connections = 3000
>          shared_buffers = 64MB
>          work_mem = 4MB
>          maintenance_work_mem = 4MB
>          max_stack_depth = 1MB
>          everything else at the defaults.
>
>
>
>         Any ideas?
>
>         I hope you can understand. Thanks.
>
