Hi,

I am investigating a performance problem encountered after porting an old
embedded DB to PostgreSQL. The system is real-time sensitive, so we are
concerned about per-query cost. In our environment a sequential scan
(select * from ...) over a table with tens of thousands of records takes
1 - 2 seconds, whether measured through the ODBC driver or via the
"\timing" output of the psql client (which in turn relies on libpq).
However, according to EXPLAIN ANALYZE, or the statistics in the
pg_stat_statements view, the query takes less than 100 ms.
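
For reference, this is roughly how we reproduce the client-side number
with plain libpq (the connection string and table name below are
placeholders for our real ones):

/* Compile with: cc timing.c -lpq -o timing */
#include <stdio.h>
#include <time.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("host=127.0.0.1 dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* PQexec blocks until the whole result set has been received and
     * materialized in client memory, so this covers execution plus
     * network transfer plus client-side copying. */
    PGresult *res = PQexec(conn, "SELECT * FROM test_table");

    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("%d rows in %.3f ms\n", PQntuples(res),
               (t1.tv_sec - t0.tv_sec) * 1000.0 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6);

    PQclear(res);
    PQfinish(conn);
    return 0;
}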

So, is the extra cost of the client interfaces (ODBC, libpq) mainly due to
TCP? Do pg_stat_statements and EXPLAIN ANALYZE include the cost of copying
tuples from shared buffers into the result set?

Could you experts share your views on this big gap? And do you have any
suggestions for optimisation?
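
One idea we wondered about is avoiding the full client-side
materialization of the result, e.g. with libpq's single-row mode. A
rough, untested sketch (reusing conn and the placeholder table from the
program above):

/* Stream rows one at a time instead of materializing the whole
 * result set in client memory. */
if (PQsendQuery(conn, "SELECT * FROM test_table"))
{
    PQsetSingleRowMode(conn);   /* must follow PQsendQuery immediately */

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
        {
            /* process one row here, e.g. via PQgetvalue(res, 0, col) */
        }
        PQclear(res);           /* also frees the final PGRES_TUPLES_OK */
    }
}

Would that be expected to make a real difference here?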

P.S. Our original embedded DB provided a "fastpath" interface that reads
records directly from shared memory, which gives extremely fast real-time
access (of course sacrificing some other features such as consistency).

Best regards,
Han
