Bill Chandler wrote:
Thanks. Yes, I understand that not having a large enough max_fsm_pages is a problem, and I think that is most likely the case for the client. What I wasn't sure of was whether the index bloat we're seeing is the result of the "bleeding" you're talking about or something else.
If I deleted 75% of the rows but had a max_fsm_pages setting that still exceeded the pages required (as indicated in the VACUUM output), would that solve my index bloat problem, or would I still need to REINDEX after such a purge?
I don't believe VACUUM re-packs indexes; it only reclaims index pages that are completely empty. So if you have 1000 index pages, each holding a single entry, VACUUM cannot reclaim any of them. REINDEX rebuilds the index from scratch, re-packing the pages to 90% full.
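As a rough sketch of what that looks like in practice (the table and index names below are made up for illustration), a large purge followed by a plain VACUUM leaves the index bloated, and a REINDEX compacts it:

    -- Purge old rows; all names here are hypothetical.
    DELETE FROM orders WHERE order_date < '2004-01-01';
    VACUUM ANALYZE orders;          -- frees heap space, but reclaims only
                                    -- index pages that are entirely empty
    REINDEX INDEX orders_date_idx;  -- rewrites the index, packing its
                                    -- pages to 90% full
    REINDEX TABLE orders;           -- or rebuild every index on the table

Keep in mind that REINDEX takes an exclusive lock on the table while it runs, so it is best scheduled for a maintenance window.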
The FSM just needs to be large enough to track all the free space that will be requested between one VACUUM and the next. It is simply a map letting Postgres know where space is available for new tuples.
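If you want to verify the setting is adequate (a sketch, assuming a release where max_fsm_pages exists as a configuration parameter): run a database-wide VACUUM VERBOSE as superuser and check the free-space-map summary printed at the end, which reports how many pages the map needs versus how many it can hold. The value below is purely illustrative:

    VACUUM VERBOSE;  -- the trailing free space map summary shows
                     -- pages needed vs. pages allocated

    -- If needed exceeds allocated, raise it in postgresql.conf
    -- and restart the server:
    --   max_fsm_pages = 200000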