Last year at this time, I was investigating things like ext3 vs. XFS, how well Linux's dirty_bytes parameter worked, and how effective a couple of patches were at improving throughput and latency. The only patch that ended up applied for 9.1 was the fsync compaction one. That was measurably better in terms of eliminating backend syncs altogether, and it also pulled up average TPS a bit on the database scales I picked out to test it on. That rambling group of test sets is available at http://highperfpostgres.com/pgbench-results/index.htm

For the first round of 9.2 testing under a write-heavy load, I started with 9.0 via the yum.postgresql.org packages for SL6, upgraded to 9.1 from there, and then used a source build of 9.2 HEAD as of Feb 11 (58a9596ed4a509467e1781b433ff9c65a4e5b5ce). Attached is an Excel spreadsheet showing the major figures, along with a CSV-formatted copy of the same data. Results that are ready so far are available at http://highperfpostgres.com/results-write-9.2-cf4/index.htm
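
For anyone trying to reproduce that stack, the outline below is roughly what it amounts to. Package names follow the yum.postgresql.org conventions, and the install prefix on the source build is just an example; treat this as a sketch rather than the exact commands I ran:

    # 9.0/9.1 from the yum.postgresql.org repository packages for SL6
    yum install postgresql90-server postgresql90-contrib   # contrib carries pgbench
    # ...later swapped for the postgresql91-* packages...
    # 9.2 HEAD built from a git checkout of 58a9596e
    ./configure --prefix=/usr/local/pgsql-9.2 && make && make install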

Most of that is good; here are the best and worst parts of the news in compact form:

scale=500, db is 46% of RAM
Version Avg TPS
9.0  1961
9.1  2255
9.2  2525

scale=1000, db is 94% of RAM; clients=4
Version TPS
9.0  535
9.1  491 (-8.4% relative to 9.0)
9.2  338 (-31.2% relative to 9.1)

There's usually a tipping point with pgbench results, where the characteristics change dramatically as the database size exceeds total RAM. You can see the background writer statistics shift around that point too. Last year the sharpest part of that transition happened when exceeding total RAM; now it's happening just below that.

This test set takes about 26 hours to run in the stripped-down form I'm comparing, which doesn't even bother with larger-than-RAM scales like 2000 or 3000 that might also be informative. Most of the runtime is spent on the larger scale database tests, which unfortunately are the interesting ones this year. I'm torn at this point between chasing down where this regression came from, moving forward with testing the new patches proposed for this CF, and seeing whether the regression also holds on SSD storage. Obvious big commit candidates to bisect over are the bgwriter/checkpointer split (Nov 1) and the group commit changes (Jan 30). Now I get to pay for not having set this up to run automatically each week since earlier in the 9.2 development cycle.
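
If I do end up bisecting, the outline looks something like the following. The only hash I'm sure of is the Feb 11 build mentioned above; the known-good endpoint is a placeholder, and every step needs a fresh build, initdb, and a long enough scale=1000 / clients=4 run to trust its TPS number:

    # hypothetical bisect outline; <known-good-commit> would be somewhere
    # around where the 9.1 branch split off
    git bisect start 58a9596ed4a509467e1781b433ff9c65a4e5b5ce <known-good-commit>
    # at each step: build, initdb, run the scale=1000 clients=4 test, then
    git bisect good    # TPS still near the 9.1 level
    # ...or...
    git bisect bad     # TPS has fallen toward the 9.2 level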

If someone else wants to try to replicate the bad part of this, the best guess for how is to use the same minimal postgresql.conf changes I have here and pick your database scale so that the test database just barely fits into RAM. pgbench generates roughly 16MB of data per unit of scale, and scale=1000 is 15GB; the percentages above are relative to the 16GB of RAM in my server. Client count should be small; the number of physical cores is probably a good starting point (that's 4 in my system, and I didn't test below that). At higher client counts, the general scalability improvements in 9.2 offset some of this downside.
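
On a 16GB box like mine, that works out to something like the following; pgbench-tools wraps all of this for me, so take it as an approximation of the interesting data point rather than my exact invocation (the run length here is just a placeholder):

    createdb pgbench
    pgbench -i -s 1000 pgbench          # ~15GB of data, just under 16GB of RAM
    pgbench -c 4 -j 4 -T 600 pgbench    # clients ~= physical cores; -T in seconds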

= Server config =

The main change to the 8 hyperthreaded core test server (Intel i7-870) for this year is bumping it from 8GB to 16GB of RAM, which effectively doubles the scale I can reach before things slow dramatically. It's also been updated to run Scientific Linux 6.0, giving a slightly later kernel. That kernel does have different defaults for dirty_background_ratio and dirty_ratio; they're 10% and 20% now (compared to 5%/10% in last year's tests).
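
If you want to check what your own kernel is doing there before testing, the usual peek is below; the values shown are just the SL6 defaults I saw:

    $ sysctl vm.dirty_background_ratio vm.dirty_ratio
    vm.dirty_background_ratio = 10
    vm.dirty_ratio = 20
    # last year's kernel defaulted to 5 and 10; sysctl -w (or /etc/sysctl.conf)
    # is the place to change them if you want to match one setup or the other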

The drive set for the tests I'm publishing so far is basically the same: a 4-port Areca card with 256MB of battery-backed cache, a 3-disk RAID0 for the database, a single disk for the WAL, all cheap 7200 RPM drives. The OS is on a separate drive, not connected to the caching controller; that's also where the pgbench latency data is being written. The idea is that this should be similar to having around 10 drives in a production server, where you'd also be using RAID1 for redundancy. I have some numbers brewing for this system running with an Intel 320 series SSD too, but they're not ready yet.

= Test setup =

pgbench-tools has been upgraded to break down its graphs per test set now, and there's even a configuration option available that uses client-side JavaScript to present them in a tab-like interface. Thanks to Ben Bleything for that one.

Minimal changes were made to postgresql.conf: shared_buffers=2GB, checkpoint_segments=64, and I left wal_buffers at its default so that 9.1 got credit for that default going up. See http://highperfpostgres.com/results-write-9.2-cf4/541/pg_settings.txt for a full list of changes, drive mount options, and important kernel settings. Much of that data wasn't collected in last year's pgbench-tools runs.
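
If you want to compare your own server against that list, the pg_settings view is the easy way to see everything picked up from the config file; a minimal sketch (the database name is whatever you point pgbench at):

    psql -d pgbench -c \
      "SELECT name, setting, unit FROM pg_settings WHERE source = 'configuration file';"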

= Results commentary =

For the most part the 9.2 results are quite good. The increase at high client counts is solid, as expected from all the lock refactoring this release has gotten. The smaller-than-RAM results that benefited most from the 9.1 changes, particularly the scale=500 ones, leaped as much again in 9.2 as they did in 9.1. scale=500 with clients=96 is up 58% from 9.0 to 9.2 so far.

The problems are all at the higher scales. scale=4000 (58GB) slowed by an average of 1.7% in 9.1, which seemed a fair trade for how much the fsync compaction helped with worst-case behavior. It drops another 7.2% on average in 9.2 so far, though. The really bad one is scale=1000 (15GB, so barely fitting in RAM now; very different from scale=1000 last year). With this new kernel/more RAM/etc., I'm seeing an average of a 7% TPS drop for the 9.1 changes. The drop from 9.1 to 9.2 is another 26%.
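
All the percentages quoted here and in the change columns of the CSV copy of the data are just (new - old) / old; the scale=500, clients=96 number, for instance, is (3087 - 1950) / 1950 = +58.3%. A throwaway way to recompute any of them from the CSV (column positions follow its header; the file name is whatever you save the attachment as):

    # 9.0 tps is column 3, 9.2 tps is column 11
    awk -F, '$1==500 && $2==96 { printf "%+.1f%%\n", 100*($11-$3)/$3 }' pgbench-9.2-cf4.csv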



--
Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com

scale,clients,9.0 tps,9.0 avg latency (ms),9.0 90th pct latency (ms),9.0 max latency (ms),9.1 tps,9.1 avg latency (ms),9.1 90th pct latency (ms),9.1 max latency (ms),9.2 tps,9.2 avg latency (ms),9.2 90th pct latency (ms),9.2 max latency (ms),tps change 9.0 to 9.1,tps change 9.1 to 9.2,tps change 9.0 to 9.2
10,4,5186,0.8,0.8,651.0,5055,0.8,0.8,543.0,5067,0.8,0.8,305.8,-2.5%,0.24%,-2.3%
10,8,7367,1.1,1.3,1398.5,7172,1.1,1.4,1228.4,7185,1.1,1.4,1215.2,-2.6%,0.18%,-2.5%
10,16,8396,1.9,2.7,1572.2,8085,2.0,2.9,1572.8,8287,1.9,2.7,1317.0,-3.7%,2.50%,-1.3%
10,32,8181,3.9,6.7,2014.0,8009,4.0,6.9,1449.5,8126,3.9,6.8,1204.1,-2.1%,1.46%,-0.7%
10,64,7299,8.8,18.1,1093.8,7107,9.0,18.6,1355.6,7213,8.9,18.2,1256.6,-2.6%,1.49%,-1.2%
10,96,6848,14.0,31.0,1192.7,6659,14.4,31.8,1355.1,6791,14.1,31.0,1536.7,-2.8%,1.98%,-0.8%
100,4,3777,1.1,0.9,1747.3,3796,1.1,0.9,1398.5,3762,1.1,1.0,951.9,0.5%,-0.90%,-0.4%
100,8,5259,1.5,1.3,2586.2,5245,1.5,1.3,1346.5,5307,1.5,1.4,830.5,-0.3%,1.18%,0.9%
100,16,5621,2.8,2.8,2017.0,5547,2.9,3.0,1923.4,5805,2.8,2.6,1964.9,-1.3%,4.65%,3.3%
100,32,5636,5.7,6.2,2806.1,5740,5.6,7.0,2011.9,6110,5.2,6.8,2367.8,1.8%,6.45%,8.4%
100,64,5398,11.8,13.8,6895.0,5640,11.3,16.4,1418.1,5815,11.0,17.4,2887.0,4.5%,3.10%,7.7%
100,96,5075,18.9,19.6,10275.5,5219,18.4,23.2,2627.9,5600,17.1,24.2,2566.3,2.8%,7.30%,10.3%
500,4,1784,2.2,1.1,1635.2,1861,2.1,1.3,1183.3,1948,2.1,1.6,1073.4,4.3%,4.67%,9.2%
500,8,2041,3.9,3.0,1489.1,2083,3.9,10.3,1829.3,2324,3.5,8.5,1477.0,2.1%,11.57%,13.9%
500,16,2069,7.7,9.1,3281.5,2186,7.3,19.9,1820.5,2426,6.6,18.6,1411.6,5.7%,10.98%,17.3%
500,32,1966,16.3,9.3,3578.9,2297,13.9,32.9,1467.0,2587,12.4,28.7,2124.2,16.8%,12.63%,31.6%
500,64,1956,32.7,17.7,6143.4,2539,25.2,51.5,1343.8,2896,22.1,42.1,1554.2,29.8%,14.06%,48.1%
500,96,1950,49.2,28.5,8192.9,2566,37.4,73.3,1757.0,3087,31.1,54.6,2116.0,31.6%,20.30%,58.3%
1000,4,536,7.7,16.1,2330.8,491,8.6,18.0,2139.4,338,12.2,27.5,2478.4,-8.4%,-31.16%,-36.9%
1000,8,523,15.3,35.1,3119.0,484,16.6,39.0,2663.6,348,23.1,60.1,2898.4,-7.5%,-28.10%,-33.5%
1000,16,570,28.1,74.4,3268.8,525,30.5,81.5,3300.8,394,40.9,113.2,4073.4,-7.9%,-24.95%,-30.9%
1000,32,570,56.2,157.0,4605.6,526,60.8,174.7,4549.3,394,81.2,226.8,5282.8,-7.7%,-25.10%,-30.9%
1000,64,567,112.9,312.4,5319.7,531,120.5,331.4,5740.3,404,158.4,415.6,6833.6,-6.3%,-23.92%,-28.7%
1000,96,559,171.7,438.3,10531.1,537,178.7,465.0,7091.4,419,229.2,568.9,9295.7,-3.9%,-21.97%,-25.0%
4000,4,161,25.0,42.7,2371.6,157,25.6,43.6,2057.0,147,27.5,48.2,2138.6,-2.5%,-6.37%,-8.7%
4000,8,187,42.9,87.0,3756.5,183,43.8,89.1,3810.7,166,48.4,98.6,3015.1,-2.1%,-9.29%,-11.2%
4000,16,228,70.2,148.2,5319.3,225,71.1,151.8,5045.0,209,76.7,161.0,5786.7,-1.3%,-7.11%,-8.3%
4000,32,258,124.1,282.6,8987.5,255,125.7,279.1,9803.3,230,139.2,308.5,7841.0,-1.2%,-9.80%,-10.9%
4000,64,285,224.4,509.9,13750.8,283,225.9,488.0,10519.1,264,242.2,530.6,12439.7,-0.7%,-6.71%,-7.4%
4000,96,292,327.4,742.1,12718.6,285,336.2,731.9,17875.2,274,350.8,803.6,10358.4,-2.4%,-3.86%,-6.2%

Attachment: pgbench-9.2-cf4.xls
Description: MS-Excel spreadsheet

