Hi,

Thanks for the work on this, but I am -1 on this idea.

It sounds like the type of workload that could become problematic here is one in which a massive delete leaves many truncatable pages at the end of the table, and vacuum kicks in while there are heavy concurrent reads on the table, so the AccessExclusiveLock (AEL) becomes a bottleneck. This is actually much worse on hot standbys, because a standby has no provision to prioritize normal backends for obtaining the exclusive lock.

The way I have dealt with large deletes such as this in the past is to perform them in batches, perhaps running vacuum after every few batches to amortize the work vacuum has to perform.

But there are also two options available that could help here: "VACUUM (truncate off)" (v12+), which turns off the truncation, and 0164a0f9ee, which now allows a user to disable the truncate work for autovacuum [0]. Rough sketches of both follow.
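For the batching, I mean something along these lines (the table name, the "expired" predicate, and the batch size are all made up, and this is untested):

    -- Delete in bounded batches so no single transaction holds
    -- locks or accumulates dead tuples for too long.  COMMIT
    -- inside a DO block works on v11+, provided the block is
    -- not run inside an explicit transaction block.
    DO $$
    DECLARE
        n bigint;
    BEGIN
        LOOP
            DELETE FROM big_table          -- hypothetical table
            WHERE id IN (SELECT id
                         FROM big_table
                         WHERE expired     -- hypothetical predicate
                         LIMIT 10000);
            GET DIAGNOSTICS n = ROW_COUNT;
            EXIT WHEN n = 0;
            COMMIT;
        END LOOP;
    END $$;

    -- VACUUM cannot run inside the DO block, so the driving
    -- script would issue this after every few batches:
    VACUUM big_table;

Keeping each batch in its own short transaction is what lets vacuum keep up and keeps the lock durations bounded.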
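And to spell out the two truncation knobs (both shown here are the v12 mechanisms; 0164a0f9ee may expose a different setting for autovacuum, so check the commit message for the exact name):

    -- one-off manual vacuum that skips the truncation phase (v12+)
    VACUUM (TRUNCATE OFF) big_table;

    -- per-table storage parameter, also honored by autovacuum (v12+)
    ALTER TABLE big_table SET (vacuum_truncate = off);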
I think it's better to simply disable the truncate work and perform it at a later time than to introduce some new limit on how many times the truncation can be suspended. In the type of workload you are referring to, the truncation would likely never complete anyway, so why even try at all?

[0] https://postgr.es/m/Z2DE4lDX4tHqNGZt%40dev.null

--
Sami Imseih
Amazon Web Services (AWS)