> On 21 Jan 2022, at 05:19, Shawn Debnath <s...@amazon.com> wrote:
> 
> On Thu, Jan 20, 2022 at 09:21:24PM +0500, Andrey Borodin wrote:
>> 
>>> On 20 Jan 2022, at 20:44, Shawn Debnath <s...@amazon.com> wrote:
>> Can you please also test the 2nd patch against large multixact SLRUs?
>> The 2nd patch is not intended to make things better at default buffer sizes. 
>> It must preserve performance in the case of really huge SLRU buffers.
> 
The test was performed with 128/256 for the multixact offsets/members caches, 
as stated in my previous email.  Sure, I can test it with higher values - but 
what's a real-world value that would make sense? We have been using this 
configuration successfully for a few of our customers that ran into 
MultiXact contention.

Sorry, it seems I misinterpreted the results yesterday.
I had one concern about the 1st patch step: it makes the CLOG buffer size 
dependent on shared_buffers. But in your tests you seem to have already 
exercised xact_buffers = 24576 without noticeable degradation. Is that correct? 
I am a bit skeptical that a linear search over 24K elements on every CLOG 
access incurs no performance impact, but your tests seem to show exactly that.
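
To illustrate what worries me, here is a minimal sketch of the lookup pattern 
(not the actual slru.c code; the struct and function names are made up for 
the example): every page access scans all buffer slots, so with 
xact_buffers = 24576 a single lookup may walk up to 24K entries.

#include <stdbool.h>

typedef struct SlruSharedSketch
{
	int		num_slots;		/* e.g. 24576 when sized from shared_buffers */
	int	   *page_number;	/* page currently held in each slot */
	bool   *page_valid;		/* does the slot hold a usable page? */
} SlruSharedSketch;

static int
slru_lookup_linear(SlruSharedSketch *shared, int pageno)
{
	for (int slotno = 0; slotno < shared->num_slots; slotno++)
	{
		if (shared->page_valid[slotno] &&
			shared->page_number[slotno] == pageno)
			return slotno;	/* hit */
	}
	return -1;				/* miss: caller evicts a victim and reads the page */
}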

IMV, splitting the SLRU buffer into banks would make sense for values greater 
than 1<<10. But you are right that 256 seems enough to cope with most multixact 
problems so far. I just thought of stressing the SLRU buffers with multixacts 
to make sure the CLOG buffers will not suffer degradation. But yes, that is too 
indirect a test.
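
For reference, this is roughly how I picture the banked lookup (an 
illustration of the idea only, not the patch's actual code; BANK_SIZE and the 
names are assumptions): the slot array is split into fixed-size banks and a 
page can live only in the bank its page number maps to, so the scan is bounded 
by the bank size no matter how large the SLRU grows.

#include <stdbool.h>

#define BANK_SIZE 16			/* assumed bank size, for illustration only */

typedef struct SlruSharedSketch
{
	int		num_slots;			/* total slots, a multiple of BANK_SIZE */
	int	   *page_number;		/* page currently held in each slot */
	bool   *page_valid;			/* does the slot hold a usable page? */
} SlruSharedSketch;

static int
slru_lookup_banked(SlruSharedSketch *shared, int pageno)
{
	int		nbanks = shared->num_slots / BANK_SIZE;
	int		bank = pageno % nbanks;		/* pick the page's bank */
	int		first = bank * BANK_SIZE;

	for (int slotno = first; slotno < first + BANK_SIZE; slotno++)
	{
		if (shared->page_valid[slotno] &&
			shared->page_number[slotno] == pageno)
			return slotno;	/* hit within the bank */
	}
	return -1;				/* miss: the victim must also come from this bank */
}

The point of such a scheme would be that the replacement victim also has to 
come from the same bank, so both lookup and eviction stay bounded.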

Maybe, just to be sure, let's repeat the tests with autovacuum turned off to 
stress xact_buffers?

Thanks!

Best regards, Andrey Borodin.
