>> So, is the client interface's (ODBC, libpq) cost mainly due to TCP?
The difference you are seeing compared to your embedded DB seems to be mainly due to TCP. One optimization you can use is PostgreSQL's Unix-domain socket mode; refer to the unix_socket_directory parameter in postgresql.conf and the other related parameters (a minimal sketch is at the end of this mail). I am suggesting this because you were using an embedded DB earlier, so your client and server should be on the same machine. If that is no longer the case, it will not work.

Can you please clarify a few more things:
1. After the sequential scan, do you need all of the scanned records on the client? If you need only a few of them, why have you not created an index? (See the example at the end of this mail.)
2. What is the exact scenario for fetching the records?

From: pgsql-hackers-ow...@postgresql.org [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Zhou Han
Sent: Wednesday, February 15, 2012 9:30 AM
To: pgsql-hackers@postgresql.org
Subject: [HACKERS] client performance v.s. server statistics

Hi,

I am checking a performance problem encountered after porting an old embedded DB to PostgreSQL. Since the system is real-time sensitive, we are concerned about per-query cost. In our environment, a sequential scan (select * from ...) of a table with tens of thousands of records costs 1 to 2 seconds, whether measured through the ODBC driver or via the "\timing" result shown in the psql client (which in turn relies on libpq). However, according to EXPLAIN ANALYZE, or the statistics in the pg_stat_statements view, the query costs less than 100 ms.

So, is the client interface's (ODBC, libpq) cost mainly due to TCP? Do pg_stat_statements and EXPLAIN ANALYZE include the cost of copying tuples from shared buffers to the result set? Could you experts share your views on this big gap? And any suggestions to optimize?

P.S. Our original embedded DB provides a "fastpath" interface that reads records directly from shared memory, giving extremely fast real-time access (of course at the cost of some other features, such as consistency).

Best regards,
Han
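
A minimal sketch of the Unix-domain socket setup mentioned above, assuming client and server are on the same machine. The directory /tmp is only an example (it is the usual default for source builds), and the database name mydb is hypothetical:

    # postgresql.conf -- server side (requires a server restart)
    unix_socket_directory = '/tmp'   # directory where the .s.PGSQL.<port> socket is created
    #listen_addresses = ''           # optionally disable TCP listening altogether

    # client side: pass the socket directory as the host, and libpq
    # (and psql on top of it) will connect over the Unix-domain socket
    psql -h /tmp mydb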
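
And a sketch for question 1, in case only a subset of the rows is actually needed on the client. The table and column names here (my_table, key_col) are hypothetical stand-ins for your schema:

    -- create an index on the column used to select the rows
    CREATE INDEX my_table_key_idx ON my_table (key_col);

    -- then fetch only what the client needs, and verify with
    -- EXPLAIN ANALYZE that an index scan is used
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE key_col = 42;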