On Wed, Mar 16, 2022 at 11:36:56AM -0700, Nathan Bossart wrote:
> Thinking further, is simply reducing the number of TOAST chunks the right
> thing to look at?  If I want to add a TOAST attribute that requires 100,000
> chunks, and you told me that I could save 10% in the read path for an extra
> 250 chunks of disk space, I would probably choose read performance.  If I
> wanted to add 100,000 attributes that were each 3 chunks, and you told me
> that I could save 10% in the read path for an extra 75,000 chunks of disk
> space, I might choose the extra disk space.  These are admittedly extreme
> (and maybe even impossible) examples, but my point is that the amount of
> disk space you are willing to give up may be related to the size of the
> attribute.  And maybe one way to extract additional read performance with
> this optimization is to use a variable threshold so that we are more likely
> to use it for large attributes.
I might be overthinking this.  Maybe it is enough to skip compressing the
attribute whenever compression saves no more than some percentage of the
uncompressed attribute size.  A conservative default setting might be
something like 5% or 10%.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
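P.S. For concreteness, a minimal sketch of the threshold test described above (hypothetical names, not PostgreSQL's actual code; `min_savings_pct` stands in for whatever GUC or reloption would carry the setting) might look like:

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical sketch: keep the compressed datum only if compression
 * saves strictly more than min_savings_pct percent of the uncompressed
 * attribute size; otherwise store the attribute uncompressed.
 */
static bool
compression_worthwhile(size_t raw_size, size_t compressed_size,
                       int min_savings_pct)
{
    /* Compression that grows (or fails to shrink) the datum never wins. */
    if (compressed_size >= raw_size)
        return false;

    /*
     * Compare saved bytes against the percentage threshold using integer
     * arithmetic: savings * 100 > threshold * raw_size.
     */
    return (raw_size - compressed_size) * 100 >
           (size_t) min_savings_pct * raw_size;
}
```

With a 10% threshold, a 100-byte attribute that compresses to 80 bytes (20% savings) would be stored compressed, while one that compresses to 95 bytes (5% savings) would be stored uncompressed.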