On Thu, Aug 4, 2016 at 10:48 AM, Greg Stark <st...@mit.edu> wrote:
> I'm trying to run pgbench on a moderately beefy machine (4-core 3.4GHz
> with 32G of ram and md mirrored spinning rust drives) at scale 300
> with 32 clients with duration of 15min. I'm getting TPS numbers
> between 60-150 which seems surprisingly low to me and also to several
> people on IRC.
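A back-of-the-envelope check (my arithmetic, with an assumed seek time, not a measurement from your system) suggests that range is about what ungrouped commits on a single spinning mirror can do:

```python
# Rough ceiling on TPS when every commit must wait for its own fsync
# on a 7200 RPM drive. The 8 ms average seek time is an assumption.
rotational_latency_ms = 60_000 / 7200 / 2   # avg half-rotation: ~4.17 ms
seek_ms = 8.0                               # assumed average seek
per_fsync_ms = rotational_latency_ms + seek_ms
max_tps_no_grouping = 1000 / per_fsync_ms
print(round(max_tps_no_grouping))           # ~82, inside the observed 60-150
```

That lines up with the ~71-100 ops/sec your pg_test_fsync output shows below.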
I am assuming "md mirrored" means you have two drives, so there is no striping. What is their RPM? Your IO system is inadequate for this workload, and I'm only slightly surprised at the low TPS.

> Now pg_test_fsync does seem to indicate that's not an unreasonable
> commit rate if there was very little commit grouping going on:
>
> Compare file sync methods using one 8kB write:
>         open_datasync    100.781 ops/sec    9922 usecs/op
>         fdatasync         71.088 ops/sec   14067 usecs/op
>
> Compare file sync methods using two 8kB writes:
>         open_datasync     50.286 ops/sec   19886 usecs/op
>         fdatasync         80.349 ops/sec   12446 usecs/op
>
> And iostat does seem to indicate the drives are ~ 80% utilized with
> high write await times. So maybe this is just what the system is
> capable of with synchronous_commit?

As an experiment, turn off synchronous_commit and see what happens. Most likely you are getting adequate commit grouping behavior, but that doesn't apply to the table data. Each transaction dirties some random page in the 3.9GB pgbench_accounts table, and there is no effective grouping of those writes. That data isn't written synchronously, but in the steady state it doesn't really matter, because at some point the write rate has to equilibrate with the dirtying rate. If you can't make the disks faster, then the TPS has to drop to meet them. The most likely mechanism for this is that the disks are so harried trying to keep up with dirty-data eviction that they can't service the sync calls from the commits in a timely manner. But if you took the sync calls out, the bottleneck would likely just move somewhere else, with only a modest overall improvement.

The way to tune it would be to make shared_buffers large enough that all of pgbench_accounts fits in it, increase checkpoint_segments and checkpoint_timeout as much as you can afford, and increase checkpoint_completion_target.
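In postgresql.conf terms, that tuning might look something like this. The specific values are illustrative assumptions for a 32GB machine on a pre-9.5 release (where checkpoint_segments still exists), not settings taken from the thread:

```ini
# Illustrative sketch only; values are assumptions, adjust for your system.
shared_buffers = 8GB                 # big enough to hold the ~3.9GB pgbench_accounts
checkpoint_segments = 256            # assumed value; replaced by max_wal_size in 9.5+
checkpoint_timeout = 30min           # stretch checkpoints as far as you can afford
checkpoint_completion_target = 0.9   # spread checkpoint writes across the interval
```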
Cheers,

Jeff

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers