On Wed, Apr 6, 2016 at 3:22 PM, Andres Freund <and...@anarazel.de> wrote:

> Which scale did you initialize with? I'm trying to reproduce the
> workload on hydra as precisely as possible...

I tested with scale factor 300 and shared_buffers = 8GB.

My test script is attached with the mail (perf_pgbench_ro.sh).
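For readers without the attachment, the shape of such a read-only run can be sketched as below. This is only an illustrative sketch: the client counts and scale factor are from this mail, but the run duration (-T 300), thread counts, and database name are assumptions; the attached perf_pgbench_ro.sh is authoritative.

```shell
#!/bin/sh
# Illustrative sketch of the read-only pgbench loop; echoes the
# commands so it can be inspected without a running server.
# Assumptions: -T 300 duration, -j matching -c, database "postgres".
SCALE=300            # scale factor used in these tests

# One-time initialization (shown, not run):
echo pgbench -i -s "$SCALE" postgres

for c in 64 128; do
    # -M prepared: prepared statements; -S: select-only (read-only)
    echo pgbench -c "$c" -j "$c" -M prepared -S -T 300 postgres
done
```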

I have done some more tests on POWER (same machine):

head + pinunpin-cas-9.patch + BufferDesc content lock converted to a pointer (patch attached: buffer_content_lock_ptr**.patch)

Ashutosh helped me generate this patch (it is just a temporary patch to see the pin/unpin behaviour when the content lock is a pointer).

64 clients
run1  497684
run2  543366
run3  476988

128 clients
run1  740301
run2  482676
run3  474530
run4  480971
run5  757779
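To summarize the spread of the five 128-client runs quoted above, a quick one-liner (values copied from the list) gives the min/max/average:

```shell
#!/bin/sh
# Min/max/avg of the five 128-client TPS numbers reported above.
printf '%s\n' 740301 482676 474530 480971 757779 |
awk 'NR==1 {min=max=$1}
     {sum+=$1; if($1<min) min=$1; if($1>max) max=$1}
     END {printf "min=%d max=%d avg=%d\n", min, max, sum/NR}'
# -> min=474530 max=757779 avg=587251
```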

1. With 64 clients, whether we apply only pinunpin-cas-9.patch or pinunpin-cas-9.patch + the content lock pointer patch, max TPS is ~550,000, with some fluctuations.

2. With 128 clients, we saw in an earlier post that with pinunpin the max TPS was 650,000 (even after converting BufferDesc to 64 bytes it was 650,000). Now, after converting the content lock to a pointer on top of pinunpin, I get a max of ~750,000.

   - One more point to note: earlier it varied from 250,000 to 650,000, but after converting the content lock to a pointer it varies from 450,000 to 750,000.


Head + buffer_content_lock_ptr_rebased_head_temp.patch

1. With this patch the readings are the same as head, and the same variance can be seen.

Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachment: perf_pgbench_ro.sh
Description: Bourne shell script

Attachment: buffer_content_lock_ptr_rebased_head_temp.patch
Description: Binary data

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)