Hi All,

Simplified Scenario:
40+ gig table with 20+ indexes receiving log-event-type records in an
append-only fashion.
One of the columns is a sequence ID.
PG version 11.4

If I start a vacuum on this table when sequence ID is 1 million, and the table 
continues to accumulate new events, when vacuum completes, should I be thinking 
about the state of the table and indexes as being vacuumed up to 1 million? Or 
is vacuum also operating on records inserted after vacuum started?

Is there any reason to think about this differently when the vacuum is manual 
as opposed to auto?

I attempted to deal with the append-only issue (addressed in PG13) by setting 
autovacuum_freeze_max_age low enough to trigger vacuum, but that has been such a 
wild card that I abandoned it. I'm now trying to find a formula for triggering a 
manual vacuum. There are stats-gathering processes pulling from the most recently 
added data all the time, so my priority is to keep performance high for those. 
Secondly, there's a ton of memory allotted, so running vacuum freeze before 
table pages age out of memory seems like a good idea.
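For what it's worth, the insert-driven trigger PG13 added to autovacuum is a 
simple formula (autovacuum_vacuum_insert_threshold + 
autovacuum_vacuum_insert_scale_factor * reltuples), and the same rule could be 
computed client-side on PG11 from pg_stat_user_tables counters. A minimal 
sketch, assuming you track inserts since the last vacuum yourself (the 
monitoring plumbing is not shown and the parameter names just mirror the PG13 
defaults):

```python
def should_vacuum(inserts_since_vacuum: int,
                  reltuples: int,
                  threshold: int = 1000,
                  scale_factor: float = 0.2) -> bool:
    """Mirror PG13's insert-driven autovacuum rule on the client side.

    Trigger a manual VACUUM once the number of tuples inserted since the
    last vacuum exceeds threshold + scale_factor * reltuples (the PG13
    defaults for autovacuum_vacuum_insert_threshold and
    autovacuum_vacuum_insert_scale_factor are 1000 and 0.2).
    """
    return inserts_since_vacuum > threshold + scale_factor * reltuples


# Example: a 40M-row table that has seen 10M inserts since the last vacuum
# exceeds 1000 + 0.2 * 40M = 8,001,000, so a vacuum would be due.
print(should_vacuum(10_000_000, 40_000_000))
```

On an append-only table the insert counter could be approximated from 
n_tup_ins in pg_stat_user_tables, snapshotted after each manual vacuum.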

I am having trouble with autovacuum keeping up, and I suspect there's a 
communication problem with the stats collector, but I have put off looking into 
that because updating to PG15 was "right around the corner". Meanwhile, I have 
had to run multi-threaded scripts to keep up with vacuuming. Manual vacuums 
always work and update stats correctly, but autovacuums hang and occupy workers 
doing nothing.

I realize these are 2 completely different issues. I would love to have a better 
understanding of the initial question, but it feels like I may not need to know 
how that works in detail if I knew what was causing the latter issue. Any 
hints, pointers, and references are always appreciated.

- Senor
