Re: [HACKERS] basic pgbench runs with various performance-related patches
On Sat, Feb 4, 2012 at 11:59 AM, Greg Smith g...@2ndquadrant.com wrote:
> On 01/24/2012 08:58 AM, Robert Haas wrote:
>> One somewhat odd thing about these numbers is that, on permanent
>> tables, all of the patches seemed to show regressions vs. master in
>> single-client throughput. That's a slightly difficult result to
>> believe, though, so it's probably a testing artifact of some kind.
>
> It looks like you may have run the ones against master first, then the
> ones applying various patches. The one test artifact I have to be very
> careful to avoid in that situation is that later files on the physical
> disk are slower than earlier ones. There's a 30% difference between
> the fastest part of a regular hard drive, the logical beginning, and
> its end. Multiple test runs tend to creep forward onto later sections
> of disk, and be biased toward the earlier run in that case. To
> eliminate that bias when it gets bad, I normally either a) run each
> test 3 times, interleaved, or b) rebuild the filesystem in between
> each initdb. I'm not sure that's the problem you're running into, but
> it's the only one I've been hit by that matches the suspicious part of
> your results.

I don't think that's it, because tests on various branches were
interleaved; moreover, I don't believe master was the first one in the
rotation. I think I had them in alphabetical order by branch name,
actually.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] basic pgbench runs with various performance-related patches
On 01/24/2012 08:58 AM, Robert Haas wrote:
> One somewhat odd thing about these numbers is that, on permanent
> tables, all of the patches seemed to show regressions vs. master in
> single-client throughput. That's a slightly difficult result to
> believe, though, so it's probably a testing artifact of some kind.

It looks like you may have run the ones against master first, then the
ones applying various patches. The one test artifact I have to be very
careful to avoid in that situation is that later files on the physical
disk are slower than earlier ones. There's a 30% difference between the
fastest part of a regular hard drive, the logical beginning, and its
end. Multiple test runs tend to creep forward onto later sections of
disk, and be biased toward the earlier run in that case. To eliminate
that bias when it gets bad, I normally either a) run each test 3 times,
interleaved, or b) rebuild the filesystem in between each initdb. I'm
not sure that's the problem you're running into, but it's the only one
I've been hit by that matches the suspicious part of your results.

-- 
Greg Smith  2ndQuadrant US  g...@2ndquadrant.com  Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.com
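The interleaving approach in option (a) above might look roughly like
this (a sketch only; the branch names and the commented-out benchmark
command are placeholders, not the actual harness used in this thread):

```shell
#!/bin/sh
# Interleave runs so that no single branch is biased toward the faster
# (earlier) region of the disk: each branch gets one run per pass, so
# drift toward slower disk regions is spread evenly across branches.
BRANCHES="master patch-a patch-b"
for pass in 1 2 3; do
    for branch in $BRANCHES; do
        echo "pass $pass: benchmarking $branch"
        # git checkout "$branch" && make install && <run one pgbench test>
    done
done
```

Taking per-branch medians over the three interleaved passes then
cancels out most of the disk-position effect.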
Re: [HACKERS] basic pgbench runs with various performance-related patches
On Tue, Jan 24, 2012 at 1:26 AM, Tatsuo Ishii is...@postgresql.org wrote:
>> ** pgbench, permanent tables, scale factor 100, 300 s **
>> 1 group-commit-2012-01-21 614.425851 -10.4%
>> 8 group-commit-2012-01-21 4705.129896 +6.3%
>> 16 group-commit-2012-01-21 7962.131701 +2.0%
>> 24 group-commit-2012-01-21 13074.939290 -1.5%
>> 32 group-commit-2012-01-21 12458.962510 +4.5%
>> 80 group-commit-2012-01-21 12907.062908 +2.8%
>
> Interesting. Comparing with this:
> http://archives.postgresql.org/pgsql-hackers/2012-01/msg00804.php
> you achieved only a very small enhancement. Can you think of any
> reason for the difference?

My test was run with synchronous_commit=off, so I didn't expect the
group commit patch to have much of an impact. I included it mostly to
see whether by chance it helped anyway (since it also helps other WAL
flushes, not just commits) or whether it caused any regression.

One somewhat odd thing about these numbers is that, on permanent
tables, all of the patches seemed to show regressions vs. master in
single-client throughput. That's a slightly difficult result to
believe, though, so it's probably a testing artifact of some kind.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] basic pgbench runs with various performance-related patches
> My test was run with synchronous_commit=off, so I didn't expect the
> group commit patch to have much of an impact. I included it mostly to
> see whether by chance it helped anyway (since it also helps other WAL
> flushes, not just commits) or whether it caused any regression.

Oh, I see.

> One somewhat odd thing about these numbers is that, on permanent
> tables, all of the patches seemed to show regressions vs. master in
> single-client throughput. That's a slightly difficult result to
> believe, though, so it's probably a testing artifact of some kind.

Maybe a kernel cache effect?

-- 
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
[HACKERS] basic pgbench runs with various performance-related patches
There was finally some time available on Nate Boley's server, which he
has been kind enough to make highly available for performance testing
throughout this cycle, and I got a chance to run some benchmarks
against a bunch of the performance-related patches in the current
CommitFest.

Specifically, I did my usual pgbench tests: 3 runs at scale factor 100,
with various client counts. I realize that this is not the only or even
most interesting thing to test, but I felt it would be useful to have
this information as a baseline before proceeding to more complicated
testing. I have another set of tests running now with a significantly
different configuration that will hopefully provide some useful
feedback on some of the things this test fails to capture, and will
post the results of the tests (and the details of the test
configuration) as soon as those results are in.

For the most part, I only tested each patch individually, but in one
case I also tested two patches together (buffreelistlock-reduction-v1
with freelist-ok-v2).
Results are the median of three five-minute test runs, with one
exception: buffreelistlock-reduction-v1 crapped out during one of the
test runs with the following errors, so I've shown the results for both
of the successful runs (though I'm not sure how relevant the numbers
are given the errors, as I expect there is a bug here somewhere):

log.ws.buffreelistlock-reduction-v1.1.100.300:ERROR: could not read block 0 in file base/20024/11780: read only 0 of 8192 bytes
log.ws.buffreelistlock-reduction-v1.1.100.300:CONTEXT: automatic analyze of table rhaas.public.pgbench_branches
log.ws.buffreelistlock-reduction-v1.1.100.300:ERROR: could not read block 0 in file base/20024/11780: read only 0 of 8192 bytes
log.ws.buffreelistlock-reduction-v1.1.100.300:CONTEXT: automatic analyze of table rhaas.public.pgbench_tellers
log.ws.buffreelistlock-reduction-v1.1.100.300:ERROR: could not read block 0 in file base/20024/11780: read only 0 of 8192 bytes
log.ws.buffreelistlock-reduction-v1.1.100.300:CONTEXT: automatic analyze of table rhaas.pg_catalog.pg_database
log.ws.buffreelistlock-reduction-v1.1.100.300:ERROR: could not read block 0 in file base/20024/11780: read only 0 of 8192 bytes
log.ws.buffreelistlock-reduction-v1.1.100.300:STATEMENT: vacuum analyze pgbench_branches
log.ws.buffreelistlock-reduction-v1.1.100.300:ERROR: could not read block 0 in file base/20024/11780: read only 0 of 8192 bytes
log.ws.buffreelistlock-reduction-v1.1.100.300:STATEMENT: select count(*) from pgbench_branches

Just for grins, I ran the same set of tests against REL9_1_STABLE, and
the results of those tests are also included below. It's worth grinning
about: on this test, at 32 clients, 9.2devel (as of commit
4f42b546fd87a80be30c53a0f2c897acb826ad52, on which all of these tests
are based) is 25% faster on permanent tables, 109% faster on unlogged
tables, and 474% faster on a SELECT-only test.
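The median-of-three step mentioned above can be reproduced with a quick
one-liner (the tps values below are made up for illustration, not the
actual run output):

```shell
# Take the median tps of three pgbench runs. Assumes one "tps = <n>"
# line per run, roughly the format pgbench prints at the end of a run.
printf 'tps = 686.03\ntps = 690.10\ntps = 681.55\n' |
awk '{ v[NR] = $3 }
     END {
         # 3-element median via a tiny bubble sort (numeric compare)
         for (i = 1; i <= 3; i++)
             for (j = i + 1; j <= 3; j++)
                 if (v[j] + 0 < v[i] + 0) { t = v[i]; v[i] = v[j]; v[j] = t }
         print v[2]
     }'
# prints 686.03
```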
Here's the test configuration:

shared_buffers = 8GB
maintenance_work_mem = 1GB
synchronous_commit = off
checkpoint_segments = 300
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
wal_writer_delay = 20ms

And here are the results. For everything against master, I've also
included the percentage speedup or slowdown vs. the same test run
against master. Many of these numbers are likely not statistically
significant, though some clearly are.

** pgbench, permanent tables, scale factor 100, 300 s **
1 master 686.038059
8 master 4425.79
16 master 7808.389490
24 master 13276.472813
32 master 11920.691220
80 master 12560.803169
1 REL9_1_STABLE 627.879523 -8.5%
8 REL9_1_STABLE 4188.731855 -5.4%
16 REL9_1_STABLE 7433.309556 -4.8%
24 REL9_1_STABLE 10496.411773 -20.9%
32 REL9_1_STABLE 9547.804833 -19.9%
80 REL9_1_STABLE 7197.655050 -42.7%
1 background-clean-slru-v2 629.518668 -8.2%
8 background-clean-slru-v2 4794.662182 +8.3%
16 background-clean-slru-v2 8062.151120 +3.2%
24 background-clean-slru-v2 13275.834722 -0.0%
32 background-clean-slru-v2 12024.410625 +0.9%
80 background-clean-slru-v2 12113.589954 -3.6%
1 buffreelistlock-reduction-v1 512.828482 -25.2%
8 buffreelistlock-reduction-v1 4765.576805 +7.7%
16 buffreelistlock-reduction-v1 8030.477792 +2.8%
24 buffreelistlock-reduction-v1 13118.481248 -1.2%
32 buffreelistlock-reduction-v1 11895.847998 -0.2%
80 buffreelistlock-reduction-v1 12015.291045 -4.3%
1 buffreelistlock-reduction-v1-freelist-ok-v2 621.960997 -9.3%
8 buffreelistlock-reduction-v1-freelist-ok-v2 4650.200642 +5.1%
16 buffreelistlock-reduction-v1-freelist-ok-v2 7999.167629 +2.4%
24 buffreelistlock-reduction-v1-freelist-ok-v2 13070.123153 -1.6%
32 buffreelistlock-reduction-v1-freelist-ok-v2 11808.986473 -0.9%
80 buffreelistlock-reduction-v1-freelist-ok-v2 12136.960028 -3.4%
1 freelist-ok-v2 629.832419 -8.2%
8 freelist-ok-v2 4800.267011 +8.5%
16 freelist-ok-v2 8018.571815 +2.7%
24 freelist-ok-v2 13122.167158 -1.2%
32 freelist-ok-v2 12004.261737 +0.7%
80 freelist-ok-v2 12188.211067 -3.0%
1
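The result grid above corresponds roughly to invocations of the
following shape (a sketch only; the exact flags used for these runs are
not stated in the thread, so the commands are echoed rather than
executed):

```shell
#!/bin/sh
# Approximate shape of one test cell: scale factor 100, 300-second
# runs, varying client counts. -S would be added for the SELECT-only
# variant, and the whole grid repeated three times to take medians.
SCALE=100
DURATION=300
echo "pgbench -i -s $SCALE bench    # one-time initialization"
for clients in 1 8 16 24 32 80; do
    echo "pgbench -c $clients -j $clients -T $DURATION bench"
done
```

Whether the original runs used one pgbench worker thread per client
(-j) is an assumption here; on a many-core box it avoids making the
pgbench client itself the bottleneck at high client counts.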
Re: [HACKERS] basic pgbench runs with various performance-related patches
On Mon, Jan 23, 2012 at 1:53 PM, Robert Haas robertmh...@gmail.com wrote:
> Results are the median of three five-minute test runs
> checkpoint_timeout = 15min

Test duration is important for tests that don't relate to pure
contention reduction, which is every patch apart from XLogInsert. We've
discussed that before, so I'm not sure what value you assign to these
results. Very little, is my view, so I'm a little disappointed to see
this post and the associated comments.

I'm very happy to see that your personal work has resulted in gains,
and these results are valid tests of that work, IMHO. If you only
measure throughput you're only measuring half of what users care about.
We've not yet seen any tests that confirm that other important issues
have not been made worse.

Before commenting on individual patches, it's clear that the tests
you've run aren't even designed to highlight the BufFreelistLock
contention that is present in different configs, so that alone is
sufficient to throw most of this away.

On particular patches:

* background-clean-slru-v2 related very directly to reducing the
response time spikes you showed us in your last set of results. Why not
repeat those same tests??

* removebufmgrfreelist-v1 related to the impact of dropping
tables/indexes/databases, so given the variability of the results, that
at least shows it has no effect in the general case.

> And here are the results. For everything against master, I've also
> included the percentage speedup or slowdown vs. the same test run
> against master. Many of these numbers are likely not statistically
> significant, though some clearly are.

> with one exception: buffreelistlock-reduction-v1 crapped out during
> one of the test runs with the following errors

That patch comes with the proviso, stated in comments: "We didn't get
the lock, but read the value anyway on the assumption that reading this
value is atomic." So we seem to have proved that reading it without the
lock isn't safe.

The remaining patch you tested was withdrawn and not submitted to the
CF. Sigh.

-- 
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] basic pgbench runs with various performance-related patches
On Mon, Jan 23, 2012 at 9:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
> Test duration is important for tests that don't relate to pure
> contention reduction, which is every patch apart from XLogInsert.

Yes, I know. I already said that I was working on more tests to
address other use cases.

> I'm very happy to see that your personal work has resulted in gains,
> and these results are valid tests of that work, IMHO. If you only
> measure throughput you're only measuring half of what users care
> about. We've not yet seen any tests that confirm that other important
> issues have not been made worse.

I personally think throughput is awfully important, but clearly latency
matters as well, and that is why *even as we speak* I am running more
tests. If there are other issues with which you are concerned besides
latency and throughput, please say what they are.

> On particular patches:
>
> * background-clean-slru-v2 related very directly to reducing the
> response time spikes you showed us in your last set of results. Why
> not repeat those same tests??

I'm working on it. Actually, I'm attempting to improve my previous test
configuration by making some alterations per some of your previous
suggestions. I plan to post the results of those tests once I have run
them.

> * removebufmgrfreelist-v1 related to the impact of dropping
> tables/indexes/databases, so given the variability of the results,
> that at least shows it has no effect in the general case.

I think it needs some tests with a larger scale factor before drawing
any general conclusions, since this test, as you mentioned above,
doesn't involve much buffer eviction. As it turns out, I am working on
running such tests.

> That patch comes with the proviso, stated in comments: "We didn't get
> the lock, but read the value anyway on the assumption that reading
> this value is atomic." So we seem to have proved that reading it
> without the lock isn't safe.

I am not sure what's going on with that patch, but clearly something
isn't working right. I don't know whether it's that or something else,
but it does look like there's a bug.

> The remaining patch you tested was withdrawn and not submitted to the
> CF.

Oh. Which one was that? I thought all of these were in play.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] basic pgbench runs with various performance-related patches
On Mon, Jan 23, 2012 at 3:09 PM, Robert Haas robertmh...@gmail.com wrote:
> I'm working on it.

Good, thanks for the update.

>> The remaining patch you tested was withdrawn and not submitted to
>> the CF.
>
> Oh. Which one was that? I thought all of these were in play.

freelist_ok was a prototype for testing/discussion, which contained an
arguable heuristic. I guess that means it's also in play, but I wasn't
thinking we'd be able to assemble clear evidence for 9.2. The other
patches have clearer and more specific roles without heuristics
(mostly), so are at least viable for 9.2, though still requiring
agreement.

-- 
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] basic pgbench runs with various performance-related patches
On Mon, Jan 23, 2012 at 10:35 AM, Simon Riggs si...@2ndquadrant.com wrote:
> freelist_ok was a prototype for testing/discussion, which contained
> an arguable heuristic. I guess that means it's also in play, but I
> wasn't thinking we'd be able to assemble clear evidence for 9.2.

OK, that one is still in the test runs I am doing right now, but I will
drop it from future batches to save time and energy that can be better
spent on things we have a chance of getting done for 9.2.

> The other patches have clearer and more specific roles without
> heuristics (mostly), so are at least viable for 9.2, though still
> requiring agreement.

I think we must also drop removebufmgrfreelist-v1 from consideration,
unless you want to go over it some more and try to figure out a fix for
whatever caused it to crap out on these tests. IIUC, that corresponds
to this CommitFest entry:

https://commitfest.postgresql.org/action/patch_view?id=744

Whatever is wrong must be something that happens pretty darn
infrequently, since it only happened on one test run out of 54, which
also means that if you do want to pursue that one we'll have to go over
it pretty darn carefully to make sure that we've fixed that issue and
don't have any others. I have to admit my personal preference is for
postponing that one to 9.3 anyway, since there are some related issues
I'd like to experiment with. But let me know how you'd like to proceed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] basic pgbench runs with various performance-related patches
On Mon, Jan 23, 2012 at 3:49 PM, Robert Haas robertmh...@gmail.com wrote:
>> The other patches have clearer and more specific roles without
>> heuristics (mostly), so are at least viable for 9.2, though still
>> requiring agreement.
>
> I think we must also drop removebufmgrfreelist-v1 from
> consideration, ...

I think you misidentify the patch. Earlier you said that
buffreelistlock-reduction-v1 crapped out, and I already said that the
assumption in the code clearly doesn't hold, implying that patch was
dropped. The removebufmgrfreelist patch and its alternate patch are
still valid, with applicability to special cases.

I've written another patch to assist with testing/assessment of the
problems, attached.

-- 
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c
index 3e62448..36b0160 100644
--- a/src/backend/storage/buffer/freelist.c
+++ b/src/backend/storage/buffer/freelist.c
@@ -17,6 +17,7 @@
 #include "storage/buf_internals.h"
 #include "storage/bufmgr.h"
+#include "utils/timestamp.h"

 /*
@@ -41,6 +42,21 @@ typedef struct
 	 */
 	uint32		completePasses;		/* Complete cycles of the clock sweep */
 	uint32		numBufferAllocs;	/* Buffers allocated since last reset */
+
+	/*
+	 * Wait Statistics
+	 */
+	long		waitBufferAllocSecs;
+	int			waitBufferAllocUSecs;
+	int			waitBufferAlloc;
+
+	long		waitBufferFreeSecs;
+	int			waitBufferFreeUSecs;
+	int			waitBufferFree;
+
+	long		waitSyncStartSecs;
+	int			waitSyncStartUSecs;
+	int			waitSyncStart;
 } BufferStrategyControl;

 /* Pointers to shared state */
@@ -125,7 +141,29 @@ StrategyGetBuffer(BufferAccessStrategy strategy, bool *lock_held)
 	/* Nope, so lock the freelist */
 	*lock_held = true;
-	LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+	if (!LWLockConditionalAcquire(BufFreelistLock, LW_EXCLUSIVE))
+	{
+		TimestampTz	waitStart = GetCurrentTimestamp();
+		TimestampTz	waitEnd;
+		long		wait_secs;
+		int			wait_usecs;
+
+		LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+
+		waitEnd = GetCurrentTimestamp();
+
+		TimestampDifference(waitStart, waitEnd,
+							&wait_secs, &wait_usecs);
+
+		StrategyControl->waitBufferAllocSecs += wait_secs;
+		StrategyControl->waitBufferAllocUSecs += wait_usecs;
+		if (StrategyControl->waitBufferAllocUSecs >= 1000000)
+		{
+			StrategyControl->waitBufferAllocUSecs -= 1000000;
+			StrategyControl->waitBufferAllocSecs += 1;
+		}
+		StrategyControl->waitBufferAlloc++;
+	}

 	/*
 	 * We count buffer allocation requests so that the bgwriter can estimate
@@ -223,7 +261,29 @@ StrategyGetBuffer(BufferAccessStrategy strategy, bool *lock_held)
 void
 StrategyFreeBuffer(volatile BufferDesc *buf)
 {
-	LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+	if (!LWLockConditionalAcquire(BufFreelistLock, LW_EXCLUSIVE))
+	{
+		TimestampTz	waitStart = GetCurrentTimestamp();
+		TimestampTz	waitEnd;
+		long		wait_secs;
+		int			wait_usecs;
+
+		LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+
+		waitEnd = GetCurrentTimestamp();
+
+		TimestampDifference(waitStart, waitEnd,
+							&wait_secs, &wait_usecs);
+
+		StrategyControl->waitBufferFreeSecs += wait_secs;
+		StrategyControl->waitBufferFreeUSecs += wait_usecs;
+		if (StrategyControl->waitBufferFreeUSecs >= 1000000)
+		{
+			StrategyControl->waitBufferFreeUSecs -= 1000000;
+			StrategyControl->waitBufferFreeSecs += 1;
+		}
+		StrategyControl->waitBufferFree++;
+	}

 	/*
 	 * It is possible that we are told to put something in the freelist that
@@ -256,7 +316,30 @@ StrategySyncStart(uint32 *complete_passes, uint32 *num_buf_alloc)
 {
 	int			result;
-	LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+	if (!LWLockConditionalAcquire(BufFreelistLock, LW_EXCLUSIVE))
+	{
+		TimestampTz	waitStart = GetCurrentTimestamp();
+		TimestampTz	waitEnd;
+		long		wait_secs;
+		int			wait_usecs;
+
+		LWLockAcquire(BufFreelistLock, LW_EXCLUSIVE);
+
+		waitEnd = GetCurrentTimestamp();
+
+		TimestampDifference(waitStart, waitEnd,
+							&wait_secs, &wait_usecs);
+
+		StrategyControl->waitSyncStartSecs += wait_secs;
+		StrategyControl->waitSyncStartUSecs += wait_usecs;
+		if (StrategyControl->waitSyncStartUSecs >= 1000000)
+		{
+			StrategyControl->waitSyncStartUSecs -= 1000000;
+			StrategyControl->waitSyncStartSecs += 1;
+		}
+		StrategyControl->waitSyncStart++;
+	}
+
 	result = StrategyControl->nextVictimBuffer;
 	if (complete_passes)
 		*complete_passes = StrategyControl->completePasses;
@@ -265,7 +348,59 @@ StrategySyncStart(uint32 *complete_passes, uint32 *num_buf_alloc)
 		*num_buf_alloc = StrategyControl->numBufferAllocs;
 		StrategyControl->numBufferAllocs = 0;
 	}
+	else
+	{
+		long		waitBufferAllocSecs;
+		int			waitBufferAllocUSecs;
+		int			waitBufferAlloc;
+
+		long		waitBufferFreeSecs;
+		int			waitBufferFreeUSecs;
+		int			waitBufferFree;
+
+		long		waitSyncStartSecs;
+		int			waitSyncStartUSecs;
+		int			waitSyncStart;
+
+		waitBufferAllocSecs = StrategyControl->waitBufferAllocSecs;
+		waitBufferAllocUSecs =
Re: [HACKERS] basic pgbench runs with various performance-related patches
On Mon, Jan 23, 2012 at 7:52 PM, Simon Riggs si...@2ndquadrant.com wrote:
> On Mon, Jan 23, 2012 at 3:49 PM, Robert Haas robertmh...@gmail.com wrote:
>>> The other patches have clearer and more specific roles without
>>> heuristics (mostly), so are at least viable for 9.2, though still
>>> requiring agreement.
>>
>> I think we must also drop removebufmgrfreelist-v1 from
>> consideration, ...
>
> I think you misidentify the patch. Earlier you said that
> buffreelistlock-reduction-v1 crapped out, and I already said that the
> assumption in the code clearly doesn't hold, implying that patch was
> dropped.

Argh. I am clearly having a senior moment here, a few years early. So
is it correct to say that both of the patches associated with the
message attached to the following CommitFest entry are now off the
table for 9.2?

https://commitfest.postgresql.org/action/patch_view?id=743

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] basic pgbench runs with various performance-related patches
> ** pgbench, permanent tables, scale factor 100, 300 s **
> 1 group-commit-2012-01-21 614.425851 -10.4%
> 8 group-commit-2012-01-21 4705.129896 +6.3%
> 16 group-commit-2012-01-21 7962.131701 +2.0%
> 24 group-commit-2012-01-21 13074.939290 -1.5%
> 32 group-commit-2012-01-21 12458.962510 +4.5%
> 80 group-commit-2012-01-21 12907.062908 +2.8%

Interesting. Comparing with this:
http://archives.postgresql.org/pgsql-hackers/2012-01/msg00804.php
you achieved only a very small enhancement. Can you think of any reason
for the difference?

-- 
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
Re: [HACKERS] basic pgbench runs with various performance-related patches
On 24 January 2012 06:26, Tatsuo Ishii is...@postgresql.org wrote:
>> ** pgbench, permanent tables, scale factor 100, 300 s **
>> 1 group-commit-2012-01-21 614.425851 -10.4%
>> 8 group-commit-2012-01-21 4705.129896 +6.3%
>> 16 group-commit-2012-01-21 7962.131701 +2.0%
>> 24 group-commit-2012-01-21 13074.939290 -1.5%
>> 32 group-commit-2012-01-21 12458.962510 +4.5%
>> 80 group-commit-2012-01-21 12907.062908 +2.8%
>
> Interesting. Comparing with this:
> http://archives.postgresql.org/pgsql-hackers/2012-01/msg00804.php
> you achieved only a very small enhancement. Can you think of any
> reason for the difference?

Presumably this system has a battery-backed cache, whereas my numbers
were obtained on my laptop.

-- 
Peter Geoghegan
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services