Using 9.0devel CVS HEAD, 2010-04-08.

I am trying to understand the performance difference
between primary and standby under a standard pgbench
read-only test.

The server has 32 GB of RAM and 2 quad-core CPUs.

primary:
  tps = 34606.747930 (including connections establishing)
  tps = 34527.078068 (including connections establishing)
  tps = 34654.297319 (including connections establishing)

standby:
  tps = 700.346283 (including connections establishing)
  tps = 717.576886 (including connections establishing)
  tps = 740.522472 (including connections establishing)

transaction type: SELECT only
scaling factor: 1000
query mode: simple
number of clients: 20
number of threads: 1
duration: 900 s
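
For reference, a sketch of the invocations that would produce runs
like these (the database name 'pgbench' and the standby port are
assumptions, not taken from my actual setup):

  # one-time initialization; scale 1000 is roughly 15 GB of data,
  # so it fits comfortably in the 32 GB of RAM
  pgbench -i -s 1000 pgbench

  # read-only run against the primary: 20 clients, 1 thread, 900 s
  pgbench -S -c 20 -j 1 -T 900 pgbench

  # same run against the standby (port depends on the setup)
  pgbench -S -c 20 -j 1 -T 900 -p 5433 pgbench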

Both instances have:
  max_connections = 100
  shared_buffers = 256MB
  checkpoint_segments = 50
  effective_cache_size = 16GB
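
To rule out a configuration or recovery-state difference between the
two instances, a quick sanity check might look like this (ports 5432
for the primary and 5433 for the standby are assumptions):

  # confirm the standby is actually running in recovery mode
  psql -p 5433 -c "SELECT pg_is_in_recovery();"

  # compare the settings that matter for this test
  for p in 5432 5433; do
    psql -p $p -c "SHOW shared_buffers;"
    psql -p $p -c "SHOW effective_cache_size;"
  done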

See also:

http://archives.postgresql.org/pgsql-testers/2010-04/msg00005.php
     (differences with scale 10_000)

I understand that in the scale=1000 case there is a huge cache
effect (the data set, roughly 15 GB at that scale, fits entirely in
the 32 GB of RAM), but why doesn't that apply to the pgbench runs
against the standby?  (And for the scale=10_000 case the differences
are still rather large.)
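
One way to check where the standby's time is going would be to
compare the shared-buffer hit ratio on both instances and to watch
disk activity during a standby run; a sketch, assuming the database
is named 'pgbench' and that the statistics views behave the same way
under hot standby:

  # shared-buffer hit ratio (does not include the OS page cache)
  psql -c "SELECT blks_hit, blks_read,
                  round(blks_hit::numeric
                        / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
           FROM pg_stat_database WHERE datname = 'pgbench';"

  # disk utilization while the standby run is in progress
  iostat -x 5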

Maybe these differences are expected, but I can't find any
explanation in the documentation.


thanks,

Erik Rijkers


