On 4/21/15 3:21 PM, Robert Haas wrote:
> It's possible that we could use this infrastructure to freeze more
> aggressively in other circumstances.  For example, perhaps VACUUM
> should freeze any page it intends to mark all-visible.  That's not a
> guaranteed win, because it might increase WAL volume: setting a page
> all-visible does not emit an FPI for that page, but freezing any tuple
> on it would, if the page hasn't otherwise been modified since the last
> checkpoint.  Even if that were no issue, the freezing itself must be
> WAL-logged.  But if we could somehow get to a place where all-visible
> => frozen, then autovacuum would never need to visit all-visible
> pages, a huge win.

I don't know how bad the extra WAL traffic would be; we'd obviously need to incur it eventually, so it's really a question of how common it is for a page to go all-visible but then go not-all-visible again before being frozen. It would presumably be far more traffic than some form of FrozenMap, though...
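
To put the tradeoff in concrete terms, here's a toy C model of the marginal WAL cost of freezing at the moment a page is marked all-visible. Every name and constant in it is invented for illustration (the per-tuple record size in particular is just a placeholder), but it captures the FPI-vs-no-FPI distinction Robert describes:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct PageState {
    bool dirty_since_checkpoint;  /* page has already forced an FPI */
    int  unfrozen_tuples;         /* tuples still carrying raw XIDs */
} PageState;

/* Marginal WAL cost, in bytes, of freezing while setting all-visible.
 * Setting the all-visible bit alone emits no FPI for the heap page;
 * freezing does dirty the page, so a page that was clean since the
 * last checkpoint pays for a full-page image. */
static size_t
extra_wal_for_freeze(const PageState *page)
{
    size_t cost = 0;

    if (!page->dirty_since_checkpoint)
        cost += 8192;             /* assumed FPI ~ BLCKSZ */

    /* The freeze itself must be WAL-logged too; assume some
     * small per-tuple record overhead. */
    cost += (size_t) page->unfrozen_tuples * 32;

    return cost;
}

int
main(void)
{
    PageState clean = { false, 50 };
    PageState dirty = { true, 50 };

    printf("clean page: %zu extra WAL bytes\n", extra_wal_for_freeze(&clean));
    printf("dirty page: %zu extra WAL bytes\n", extra_wal_for_freeze(&dirty));
    return 0;
}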

> We could also attack the problem from the other end.  Instead of
> trying to set the bits on the individual tuples, we could decide that
> whenever a page is marked all-visible, we regard it as frozen
> regardless of the bits set or not set on the individual tuples.
> Anybody who wants to modify the page must freeze any unfrozen tuples
> "for real" before clearing the visibility map bit.  This would have
> the same end result as the previous idea: all-visible would
> essentially imply frozen, and autovacuum could ignore those pages
> categorically.

Pushing what's currently background work onto foreground processes doesn't seem like a good idea...
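
To spell out exactly what that shifts onto the foreground path, here's a made-up C sketch of the rule Robert describes: whichever backend clears the VM bit, not autovacuum, eats the freeze. None of these types or functions are the backend's actual API:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int BlockNumber;

typedef struct Page {
    int unfrozen_tuples;          /* tuples still carrying raw XIDs */
} Page;

typedef struct VisibilityMap {
    bool all_visible[1024];       /* one bit per heap page, simplified */
} VisibilityMap;

static void
freeze_all_tuples(Page *page)
{
    /* In real life this would rewrite each tuple's xmin as
     * FrozenTransactionId and WAL-log the change. */
    page->unfrozen_tuples = 0;
}

/* The rule: all-visible is treated as frozen, so whoever clears the
 * bit must first make that true "for real". */
static void
clear_all_visible_for_update(Page *page, VisibilityMap *vm, BlockNumber blk)
{
    if (vm->all_visible[blk])
    {
        freeze_all_tuples(page);       /* foreground, WAL-logged work */
        vm->all_visible[blk] = false;  /* only now may the bit drop */
    }
    /* ... proceed with the insert/update/delete on the page ... */
}

int
main(void)
{
    Page p = { .unfrozen_tuples = 37 };
    VisibilityMap vm = { .all_visible = { [5] = true } };

    clear_all_visible_for_update(&p, &vm, 5);
    printf("unfrozen tuples after clear: %d\n", p.unfrozen_tuples);
    return 0;
}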

> I'm not saying those ideas don't have problems, because they do.  But
> I think they are worth further exploring.  The main reason I gave up
> on that is because Heikki was working on the XID-to-LSN mapping stuff.
> That seemed like a better approach than either of the above, so as
> long as Heikki was working on that, there wasn't much reason to pursue
> more lowbrow approaches.  Clearly, though, we need to do something
> about this.  Freezing is a big problem for lots of users.

Did XID-LSN die? I see at the bottom of the thread that it was returned with feedback; I guess Heikki just hasn't had time and there are no major blockers? From what I remember, this is probably a better solution, but if it's not going to make it into 9.6 then we should probably at least look further into a FrozenMap.

> All that having been said, I don't think adding a new fork is a good
> approach.  We already have problems pretty commonly where our
> customers complain about running out of inodes.  Adding another fork
> for every table would exacerbate that problem considerably.

Andres' idea of adding this to the VM may work well to handle that. It would double the size of the VM, but at 2 bits per 8kB heap page that's still a ratio of 32,768:1 relative to heap size, or 2MB for a 64GB table.
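
For anyone who wants to check that arithmetic, a quick throwaway program, assuming the default 8kB BLCKSZ and 2 bits per heap page:

#include <stdio.h>

int
main(void)
{
    const long long heap_bytes = 64LL * 1024 * 1024 * 1024; /* 64GB table */
    const long long page_size  = 8192;                      /* BLCKSZ */
    const long long pages      = heap_bytes / page_size;    /* 8,388,608 */
    const long long map_bytes  = pages * 2 / 8;             /* 2 bits/page */

    printf("heap pages: %lld\n", pages);
    printf("map size:   %lld bytes (%lld MB)\n",
           map_bytes, map_bytes / (1024 * 1024));
    /* -> 2,097,152 bytes = 2MB, i.e. a 32,768:1 heap-to-map ratio. */
    return 0;
}
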
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

