On 2014-10-09 16:01:55 +0200, Andres Freund wrote:
> On 2014-10-09 18:17:09 +0530, Amit Kapila wrote:
> > On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas <robertmh...@gmail.com> wrote:
> > >
> > > On another point, I think it would be a good idea to rebase the
> > > bgreclaimer patch over what I committed, so that we have a
> > > clean patch against master to test with.
> >
> > Please find the rebased patch attached with this mail. I have taken
> > some performance data as well and done some analysis based on
> > the same.
> >
> > Performance Data
> > ----------------------------
> > IBM POWER-8 24 cores, 192 hardware threads
> > RAM = 492GB
> > max_connections = 300
> > Database Locale = C
> > checkpoint_segments = 256
> > checkpoint_timeout = 15min
> > shared_buffers = 8GB
> > scale factor = 5000
> > Client Count = number of concurrent sessions and threads (ex. -c 8 -j 8)
> > Duration of each individual run = 5mins
>
> I don't think OLTP really is the best test case for this. Especially not
> pgbench with relatively small rows *and* a uniform distribution of
> access.
>
> Try parallel COPY TO. Batch write loads are where I've seen this hurt
> badly.
As an example, the attached scripts go from:

progress: 5.3 s, 20.9 tps, lat 368.917 ms stddev 49.655
progress: 10.1 s, 21.0 tps, lat 380.326 ms stddev 64.525
progress: 15.1 s, 14.1 tps, lat 568.108 ms stddev 226.040
progress: 20.4 s, 12.0 tps, lat 634.557 ms stddev 300.519
progress: 25.2 s, 17.5 tps, lat 461.738 ms stddev 136.257
progress: 30.2 s, 9.8 tps, lat 850.766 ms stddev 305.454
progress: 35.3 s, 12.2 tps, lat 670.473 ms stddev 271.075
progress: 40.2 s, 7.9 tps, lat 972.617 ms stddev 313.152
progress: 45.3 s, 14.9 tps, lat 546.056 ms stddev 211.987
progress: 50.2 s, 13.2 tps, lat 610.608 ms stddev 271.780
progress: 55.5 s, 16.9 tps, lat 468.757 ms stddev 156.516
progress: 60.5 s, 14.3 tps, lat 548.913 ms stddev 190.414
progress: 65.7 s, 9.3 tps, lat 821.293 ms stddev 353.665
progress: 70.1 s, 16.0 tps, lat 524.240 ms stddev 174.903
progress: 75.2 s, 17.0 tps, lat 485.692 ms stddev 194.273
progress: 80.2 s, 19.9 tps, lat 396.295 ms stddev 78.891
progress: 85.3 s, 18.3 tps, lat 423.744 ms stddev 105.798
progress: 90.1 s, 14.5 tps, lat 577.373 ms stddev 270.914
progress: 95.3 s, 12.0 tps, lat 649.434 ms stddev 247.001
progress: 100.3 s, 14.6 tps, lat 563.693 ms stddev 275.236
tps = 14.812222 (including connections establishing)

to:

progress: 5.1 s, 18.9 tps, lat 409.766 ms stddev 75.032
progress: 10.3 s, 20.2 tps, lat 396.781 ms stddev 67.593
progress: 15.1 s, 19.1 tps, lat 418.545 ms stddev 109.431
progress: 20.3 s, 20.6 tps, lat 388.606 ms stddev 74.259
progress: 25.1 s, 19.5 tps, lat 406.591 ms stddev 109.050
progress: 30.0 s, 19.1 tps, lat 420.199 ms stddev 157.005
progress: 35.0 s, 18.4 tps, lat 421.102 ms stddev 124.019
progress: 40.3 s, 12.3 tps, lat 640.640 ms stddev 88.409
progress: 45.2 s, 12.8 tps, lat 586.471 ms stddev 145.543
progress: 50.5 s, 6.9 tps, lat 1116.603 ms stddev 285.479
progress: 56.2 s, 6.3 tps, lat 1349.055 ms stddev 381.095
progress: 60.6 s, 7.9 tps, lat 1083.745 ms stddev 452.386
progress: 65.0 s, 9.6 tps, lat 805.981 ms stddev 273.845
progress: 71.1 s, 9.6 tps, lat 798.273 ms stddev 184.108
progress: 75.2 s, 9.3 tps, lat 950.131 ms stddev 150.870
progress: 80.8 s, 8.6 tps, lat 899.389 ms stddev 135.090
progress: 85.3 s, 8.8 tps, lat 928.183 ms stddev 152.056
progress: 90.9 s, 8.0 tps, lat 929.737 ms stddev 71.155
progress: 95.7 s, 9.0 tps, lat 968.070 ms stddev 127.824
progress: 100.3 s, 8.7 tps, lat 911.767 ms stddev 130.697

just by switching shared_buffers from 1 to 8GB. I haven't tried, but I
hope that with an approach like yours this might become better.

psql -f /tmp/prepare.sql
pgbench -P5 -n -f /tmp/copy.sql -c 8 -j 8 -T 100

Greetings,

Andres Freund

--
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
CREATE OR REPLACE FUNCTION exec(text) RETURNS text LANGUAGE plpgsql VOLATILE AS $f$
BEGIN
    EXECUTE $1;
    RETURN $1;
END;
$f$;

\o /dev/null
SELECT exec('drop table if exists largedata_'||g.i||'; create unlogged table largedata_'||g.i||'(data bytea, id serial primary key);')
FROM generate_series(0, 64) g(i);
\o
COPY largedata_:client_id(data) FROM '/tmp/large' BINARY;
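The `/tmp/large` input file itself is not included in the mail; `COPY ... BINARY` expects it in PostgreSQL's binary copy format (signature, flags, per-tuple field counts and lengths, trailer). As a hedged sketch of how such a file could be generated for this single-`bytea`-column case (the row count and row size here are arbitrary, not the values Andres used):

```python
import struct

def write_copy_binary(path, rows):
    """Write single-bytea-column tuples in PostgreSQL COPY BINARY format."""
    with open(path, "wb") as f:
        f.write(b"PGCOPY\n\xff\r\n\x00")        # 11-byte signature
        f.write(struct.pack("!ii", 0, 0))       # flags field, header extension length
        for data in rows:
            f.write(struct.pack("!h", 1))       # field count: one column per tuple
            f.write(struct.pack("!i", len(data)))  # field length in bytes
            f.write(data)                       # raw bytea payload
        f.write(struct.pack("!h", -1))          # file trailer

# Arbitrary example payload: 100 rows of 100kB each.
write_copy_binary("/tmp/large", [b"x" * 100_000 for _ in range(100)])
```

With a file like this in place, `psql -f /tmp/prepare.sql` followed by the `pgbench -c 8 -j 8` invocation above should reproduce the parallel batch-write pattern being discussed.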