On Thu, Apr 18, 2013 at 3:21 AM, Greg Smith <g...@2ndquadrant.com> wrote:
> On 4/17/13 6:32 PM, Tom Lane wrote:
>>
>> The more I read of this thread, the more unhappy I get.  It appears
>> that the entire design process is being driven by micro-optimization
>> for CPUs being built by Intel in 2013.
>
> And that's not going to get anyone past review, since all the tests I've
> been doing the last two weeks are on how fast an AMD Opteron 6234 with
> OS cache >> shared_buffers can run this.  The main thing I'm still
> worried about is what happens when you have a fast machine that can move
> memory around very quickly and an in-memory workload, but it's hamstrung
> by the checksum computation--and it's not a 2013 Intel machine.
>
> The question I started with here was answered to some depth and then
> skipped past.  I'd like to jerk attention back to that, since I thought
> some good answers from Ants went by.  Is there a simple way to optimize
> the committed CRC computation (or a similar one with the same error
> detection properties) based on either:
>
> a) Knowing that the input will be an 8K page, rather than the existing
> use case with an arbitrary sized WAL section.
>
> b) Straightforward code rearrangement or optimization flags.
>
> That was all I thought was still feasible to consider changing for 9.3
> a few weeks ago.  And the possible scope has only been shrinking since
> then.
Nothing from the two points, but the CRC calculation algorithm can be
switched out for a slice-by-4 or slice-by-8 variant.  The speedup was
around a factor of 4 if I remember correctly.

>> And I reiterate that there is theory out there about the error
>> detection capabilities of CRCs.  I'm not seeing any theory here, which
>> leaves me with very little confidence that we know what we're doing.
>
> Let me see if I can summarize where the messages flying by are at since
> you'd like to close this topic for now:
>
> -Original checksum feature used Fletcher checksums.  Its main problems,
> to quote wikipedia, include that it "cannot distinguish between blocks
> of all 0 bits and blocks of all 1 bits".

That was only the most glaring problem.

> -Committed checksum feature uses truncated CRC-32.  This has known good
> error detection properties, but is expensive to compute.  There's reason
> to believe that particular computation will become cheaper on future
> platforms though.  But taking full advantage of that will require adding
> CPU-specific code to the database.

Actually the state is that with the polynomial used there is currently
close to zero hope of CPUs optimizing for us.  By switching the
polynomial we could have hardware acceleration on Intel CPUs, but there
is little hope of other vendors supporting it, given that AMD hasn't by
now and Intel touts patents in this area.  However, the calculation can
be made about a factor of 4 faster by restructuring it.  This
optimization is plain C and not CPU specific.  The committed checksum is
an order of magnitude slower than the Fletcher one that was performance
tested with the patch.

> -The latest idea is using the Fowler–Noll–Vo hash function:
> https://en.wikipedia.org/wiki/Fowler_Noll_Vo_hash  There's 20 years of
> research around when that is good or bad.
> The exact properties depend on magic "FNV primes":
> http://isthe.com/chongo/tech/comp/fnv/#fnv-prime that can vary based on
> both your target block size and how many bytes you'll process at a time.
> For PostgreSQL checksums, one of the common problems--getting an even
> distribution of the hashed values--isn't important the way it is for
> other types of hashes.  Ants and Florian have now dug into how exactly
> that and specific CPU optimization concerns impact the best approach
> for 8K database pages.  This is very clearly a 9.4 project that is just
> getting started.

I'm not sure about the 9.4 part: if we ship with the built-in CRC as
committed, there is a 100% chance that we will want to switch out the
algorithm in 9.4, and there will be quite a large subset of users who
will find the performance unusable.  If we change it to whatever we come
up with here, there is a small chance that the algorithm will give a
worse than expected error detection rate in some circumstances and we
will want to offer a better algorithm.  More probably it will be good
enough, and the low performance hit will allow more users to turn it on.
This is a 16-bit checksum that we're talking about, not SHA-1; it is
expected to occasionally fail to detect errors.

I can provide you with a patch of the generic version of any of the
discussed algorithms within an hour, leaving plenty of time in beta or
in 9.4 to accommodate the optimized versions.  It's literally a dozen
self-contained lines of code.

Regards,
Ants Aasma
--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de

--
Sent via pgsql-hackers mailing list (email@example.com)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
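To make the Fletcher weakness quoted above concrete, here is a textbook
Fletcher-16 in plain C.  This is an illustrative sketch, not the variant
the original patch actually used: because 0xFF ≡ 0 (mod 255), a block of
all 1-bits checksums exactly the same as a block of all 0-bits.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Textbook Fletcher-16 (illustrative sketch only; the checksum in the
 * original PostgreSQL patch differed in its details).
 */
static uint16_t
fletcher16(const uint8_t *data, size_t len)
{
    uint16_t sum1 = 0;
    uint16_t sum2 = 0;
    size_t   i;

    for (i = 0; i < len; i++)
    {
        sum1 = (uint16_t) ((sum1 + data[i]) % 255);
        sum2 = (uint16_t) ((sum2 + sum1) % 255);
    }
    return (uint16_t) ((sum2 << 8) | sum1);
}
```

Every 0xFF byte adds 255 to sum1, which is 0 modulo 255, so an all-ones
page and an all-zeros page both produce checksum 0 -- exactly the
"cannot distinguish" problem the Wikipedia quote describes.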
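And to illustrate the "dozen self-contained lines" scale being discussed:
a minimal sketch of an FNV-1a-style hash XOR-folded down to a 16-bit page
checksum.  The function name, the byte-at-a-time loop, and the use of the
standard 32-bit FNV constants are assumptions for illustration, not the
actual proposed patch (which would pick primes and a word size tuned for
8K pages).

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical FNV-1a-style 16-bit page checksum (sketch only).
 * Uses the standard 32-bit FNV-1a offset basis and prime, then
 * XOR-folds the result down to 16 bits.
 */
static uint16_t
page_checksum_fnv16(const uint8_t *page, size_t len)
{
    uint32_t hash = 2166136261u;        /* FNV-1a 32-bit offset basis */
    size_t   i;

    for (i = 0; i < len; i++)
    {
        hash ^= page[i];
        hash *= 16777619u;              /* FNV-1a 32-bit prime */
    }
    /* XOR-fold the 32-bit hash down to 16 bits */
    return (uint16_t) ((hash >> 16) ^ (hash & 0xFFFF));
}
```

XOR-folding (recommended on the FNV page linked above) mixes both halves
of the 32-bit hash into the truncated result rather than simply
discarding the high bits.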