Re: [HACKERS] client performance v.s. server statistics

2012-02-15 Thread Amit Kapila
So, is it client interface (ODBC, libpq) 's cost mainly due to TCP?

 

The difference you are seeing compared to your embedded DB seems to be mainly
due to TCP overhead.

One optimization you can use is PostgreSQL's Unix-domain socket mode. You can
refer to the unix_socket_directory parameter in postgresql.conf and other
related parameters.
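For reference, a minimal sketch of that setup (the directory path is illustrative; on the 9.1 series the parameter is spelled unix_socket_directory):

```
# postgresql.conf
unix_socket_directory = '/var/run/postgresql'
listen_addresses = ''          # optional: disable TCP entirely

# Client side: point "host" at the socket directory instead of a hostname
# psql "host=/var/run/postgresql dbname=mydb"
```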

I am suggesting this because you were previously using an embedded DB, so your
client and server should be on the same machine. If that is no longer the
case, this will not work.

 

Can you please clarify a few more things:

1.  After the sequential scan, does the client need all the records being
scanned? If only a few records are needed, why have you not created an index?

2.  What is the exact scenario for fetching the records?

 

 

 

From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Zhou Han
Sent: Wednesday, February 15, 2012 9:30 AM
To: pgsql-hackers@postgresql.org
Subject: [HACKERS] client performance v.s. server statistics

 

Hi,

I am investigating a performance problem encountered after porting an old
embedded DB to PostgreSQL. Since the system is real-time sensitive, we are
concerned about per-query cost. In our environment a sequential scan (select *
from ...) over a table with tens of thousands of records costs 1 - 2 seconds,
whether measured through the ODBC driver or via the timing shown in the psql
client (which, in turn, relies on libpq). However, according to EXPLAIN
ANALYZE, or the statistics in the pg_stat_statements view, the query costs
less than 100 ms.

So, is the client interface (ODBC, libpq) cost mainly due to TCP? Do
pg_stat_statements and EXPLAIN ANALYZE include the cost of copying tuples from
shared buffers into the result set?
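One way to put the two numbers side by side in a psql session (the table name t is hypothetical):

```
\timing on
EXPLAIN ANALYZE SELECT * FROM t;  -- reports server-side execution time only
SELECT * FROM t;                  -- \timing now also covers protocol overhead,
                                  -- tuple transfer, and client-side parsing
```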

Could you experts share your views on this big gap? And do you have any
suggestions to optimise it?

P.S. Our original embedded DB provides a fastpath interface that reads the
records directly from shared memory, which gives extremely fast real-time
access (at the cost of some other features, such as consistency).

Best regards,
Han



Re: [HACKERS] client performance v.s. server statistics

2012-02-15 Thread Amit Kapila
So I want to know exactly what operations are included in the server-side
statistics from EXPLAIN ANALYZE

It gives the time for executing the query on the server. To my knowledge, it
does not account for sending the data over TCP.

 

From: Zhou Han [mailto:zhou...@gmail.com] 
Sent: Wednesday, February 15, 2012 12:32 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] client performance v.s. server statistics

 

Hi,

I have tried a Unix-domain socket and the performance is similar to the TCP
socket. This is a MIPS architecture, so memory copies to/from the kernel can
take a lot of time, and apparently a Unix-domain socket is no different from
TCP in terms of memory copies.
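The per-row kernel copy cost Han describes can be illustrated with a small, self-contained Python sketch (not MIPS-specific; the row size and count are made up for illustration) that pushes the same records through an in-process list versus a socketpair:

```python
import socket
import time

ROWS = 20_000
row = b"x" * 100  # hypothetical 100-byte record

# In-memory "fastpath": rows are read with no kernel involvement.
t0 = time.perf_counter()
data_mem = [row for _ in range(ROWS)]
mem_s = time.perf_counter() - t0

# Socket path: every row is copied into and out of the kernel
# (one send() and one recv() per row), as with TCP or a Unix socket.
a, b = socket.socketpair()
received = []
t0 = time.perf_counter()
for _ in range(ROWS):
    a.sendall(row)
    received.append(b.recv(len(row)))
sock_s = time.perf_counter() - t0
a.close()
b.close()

print(f"in-memory: {mem_s:.4f}s  socketpair: {sock_s:.4f}s")
```

On most systems the socket path is slower by an order of magnitude or more per row, which is the same qualitative effect Han observes, although real client libraries batch many rows into larger buffers rather than sending one row per syscall.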

But the ten-fold gap between the client-side and the server-side statistics is
still hard to believe. So I want to know exactly what operations are included
in the server-side statistics from EXPLAIN ANALYZE. I may check the code later
when I get time.

The query itself was just for performance comparison. There are other
index-based queries, which are of course much faster, but they still show a
similar ten-fold time gap between the client-side and server-side statistics.

I am thinking of a client interface that does not involve the kernel. Is there
such an option, or do I have to develop one from scratch?
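PostgreSQL itself does not offer such a shared-memory result fastpath, but the general mechanism Han's embedded DB used can be sketched in Python with multiprocessing.shared_memory (the record layout is hypothetical; a real "server" and "client" would be separate processes sharing the segment name):

```python
from multiprocessing import shared_memory

RECORD = 16  # hypothetical fixed-width record size

# "Server" side: publish fixed-width records into a shared segment.
records = [f"rec{i:03d}".ljust(RECORD).encode() for i in range(100)]
shm = shared_memory.SharedMemory(create=True, size=RECORD * len(records))
for i, r in enumerate(records):
    shm.buf[i * RECORD:(i + 1) * RECORD] = r

# "Client" side: attach to the same segment by name and read a record
# directly -- no syscall per row, no copy through the kernel.
client = shared_memory.SharedMemory(name=shm.name)
row42 = bytes(client.buf[42 * RECORD:(42 + 1) * RECORD])

client.close()
shm.close()
shm.unlink()
```

As Han notes, this kind of direct read trades away the guarantees a server-mediated protocol gives you (locking, snapshot consistency, access control), which is why it suits embedded databases better than client/server ones.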

Best regards,
Han

On Wed, Feb 15, 2012 at 1:23 PM, Amit Kapila amit.kap...@huawei.com wrote:



[HACKERS] client performance v.s. server statistics

2012-02-14 Thread Zhou Han


Re: [HACKERS] client performance v.s. server statistics

2012-02-14 Thread Zhou Han