On 1/23/17 1:30 AM, Amit Kapila wrote:
On Sun, Jan 22, 2017 at 3:43 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:

That being said, I'm ready to do some benchmarking on this, so that we have
at least some numbers to argue about. Can we agree on a set of workloads
that we want to benchmark in the first round?


I think if we can get data for a pgbench read-write workload where the data
doesn't fit in shared buffers but does fit in RAM, that can give us some
indication.  We can try varying the ratio of shared buffers w.r.t. the
data size.  This should exercise the checksum code both when buffers are
evicted and at the next read.  I think it also makes sense to check the
WAL data size for each of those runs.
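Capturing the WAL volume per run is straightforward with the xlog position functions. A sketch, assuming a pre-10 server where these are named pg_current_xlog_location / pg_xlog_location_diff (PostgreSQL 10 renames them to pg_current_wal_lsn / pg_wal_lsn_diff):

```shell
# Record the WAL position before the benchmark run
START_LSN=$(psql -At -c "SELECT pg_current_xlog_location()")

# ... run the pgbench workload here ...

# Report the number of WAL bytes generated during the run
psql -At -c "SELECT pg_xlog_location_diff(pg_current_xlog_location(), '$START_LSN')"
```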

I tried testing this (and thought I sent an email about it, but I don't see it now :/). Unfortunately, I wasn't getting terribly consistent runs on my laptop; I was seeing +/- ~8% TPS between runs. Checksums sometimes appeared to add ~10% overhead, but it was hard to distinguish that from the noise.

If someone has a more stable (as in, dedicated) setup, testing there would be useful.

BTW, I ran the test with small (default 128MB) shared_buffers, scale 50 (~800MB database), synchronous_commit = off, and checkpoint_timeout = 1min, to significantly increase the rate at which buffers are written out.
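For reference, a run along those lines amounts to something like the following. This is only a sketch: the client count, thread count, and duration are my own assumptions, not taken from the original runs.

```shell
# Initialize clusters with and without page checksums for comparison
initdb -k -D pgdata-checksums      # -k / --data-checksums enables checksums
initdb -D pgdata-nochecksums

# Settings used for the runs (shared_buffers left at its 128MB default)
cat >> pgdata-checksums/postgresql.conf <<EOF
synchronous_commit = off
checkpoint_timeout = 1min
EOF

# Scale 50 gives a roughly 800MB database, well above shared_buffers
pgbench -i -s 50

# Default TPC-B-like read-write workload; -c/-j/-T are assumed values
pgbench -c 8 -j 4 -T 300
```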
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers