I've been doing a bit of benchmarking and real-world performance testing, and have found some curious results.

The load in question is a fairly busy machine hosting a web service that uses PostgreSQL as its back end.

"Conventional Wisdom" is that you want to run an 8k record size to match Postgresql's inherent write size for the database.

However, operational experience suggests this may no longer hold now that modern ZFS systems support LZ4 compression: modern CPUs can compress faster than the raw I/O path can move data, so the logical recordsize is no longer what actually hits the disk. PostgreSQL's on-disk file format is also quite compressible -- on my dataset the compressratio works out to roughly 1.24x, which is nothing to sneeze at.
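To put a rough number on that (back-of-the-envelope only -- the 1.24x compressratio is what I measured, the rest is illustrative arithmetic):

# Back-of-the-envelope: the 1.24x compressratio is measured on my
# dataset; everything else is illustrative arithmetic, not a benchmark.

KIB = 1024
pg_page    = 8 * KIB      # PostgreSQL block size
recordsize = 128 * KIB    # ZFS recordsize under test
ratio      = 1.24         # measured compressratio (LZ4)

physical = recordsize / ratio
print(f"one {recordsize // KIB} KiB record ({recordsize // pg_page} PostgreSQL pages)"
      f" -> ~{physical / KIB:.0f} KiB actually written/read")
print(f"I/O saved vs. uncompressed: {(1 - 1 / ratio) * 100:.0f}%")

That works out to roughly a 19% reduction in physical I/O on the table/index data alone.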

The odd thing is that I am getting better performance with a 128k recordsize on this application than I get with an 8k one! Not only is the system subjectively faster to respond and objectively able to sustain a higher TPS load, but the I/O busy percentage measured during operation is MARKEDLY lower (by nearly an order of magnitude!)

This is not expected behavior!

What I am curious about, however, is the xlog -- it appears to suffer pretty badly from a 128k recordsize, even though it compresses even better: 1.94x (!)

The files in the xlog directory are large (16MB each), so at first blush a larger recordsize for that storage area ought to help. It appears that instead it hurts.
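To put numbers on that intuition (the 8 KiB per-flush write size below is an assumption about WAL write granularity, not something I measured):

# Illustration of the xlog question.  The 16 MB segment size and the
# recordsizes are real; the 8 KiB flush size is an ASSUMPTION about the
# granularity of individual synchronous WAL writes, not a measurement.

MIB = 1024 * 1024
KIB = 1024
segment    = 16 * MIB         # one WAL segment file
flush_size = 8 * KIB          # assumed size of one sync WAL write

for recordsize in (8 * KIB, 128 * KIB):
    records_per_segment = segment // recordsize
    # Worst case if a small flush forces the whole record to be rewritten:
    rewrite_factor = recordsize / flush_size
    print(f"recordsize {recordsize // KIB:3d}k: {records_per_segment:4d} records/segment,"
          f" up to {rewrite_factor:.0f}x rewrite per small flush")

So even though the segments themselves are big, if the individual synchronous writes into them are small, the 128k records may be doing far more work per flush than the 8k ones -- but that is a guess on my part, not something I have confirmed.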

Ideas?

--
-- Karl
k...@denninger.net

