On 10/21/14, 5:39 PM, Alvaro Herrera wrote:
> Jim Nasby wrote:
>> Currently, a non-freeze vacuum will punt on any page it can't get a
>> cleanup lock on, with no retry. Presumably this should be a rare
>> occurrence, but I think it's bad that we just assume that and won't
>> warn the user if something bad is going on.
>
> I think if you really want to attack this problem, rather than just
> being noisy about it, what you could do is to keep a record of which
> page numbers you had to skip, and then once you're done with your
> first scan you go back and retry the lock on the pages you skipped.
I'm OK with that if the community is; I was just trying for minimum
invasiveness.
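
Just to make sure we're picturing the same thing, here's roughly what I
understand the two-pass approach to look like. This is only a sketch;
try_cleanup_lock(), wait_cleanup_lock() and process_page() are made-up
stand-ins for the real buffer-manager and per-page vacuum calls, and
error handling is omitted:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef uint32_t BlockNumber;

    bool try_cleanup_lock(BlockNumber blkno);   /* conditional; may fail */
    void wait_cleanup_lock(BlockNumber blkno);  /* blocks until acquired */
    void process_page(BlockNumber blkno);

    void
    vacuum_scan(BlockNumber nblocks)
    {
        /* Sized for the worst case here; that's exactly what's open below. */
        BlockNumber *skipped = malloc(nblocks * sizeof(BlockNumber));
        int          nskipped = 0;

        /* First pass: punt on pages we can't lock, but remember them. */
        for (BlockNumber blkno = 0; blkno < nblocks; blkno++)
        {
            if (try_cleanup_lock(blkno))
                process_page(blkno);
            else
                skipped[nskipped++] = blkno;
        }

        /*
         * Second pass: revisit only the pages we skipped.  Waiting is the
         * simplest option; retrying conditionally a bounded number of
         * times and then warning would also work.
         */
        for (int i = 0; i < nskipped; i++)
        {
            wait_cleanup_lock(skipped[i]);
            process_page(skipped[i]);
        }

        free(skipped);
    }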
If I go this route, though, I'd like some input on a couple of points:
- How should we store the block IDs? A fixed-size array, or something
fancier? And what should the limit be, given that we're already
allocating maintenance_work_mem for the dead-tuple TID array? (See the
sketch after this list.)
- What happens if we run out of space to remember skipped blocks? I
could do something like what we do when the dead_tuples array fills up,
but I'm worried that would add a serious amount of complexity,
especially since re-processing the skipped blocks could itself be what
pushes us over the limit.
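
For concreteness, the simplest thing I can think of is a small
fixed-size array plus an overflow counter. All the names here are made
up, this isn't existing code:

    #include <stdint.h>

    typedef uint32_t BlockNumber;

    #define MAX_SKIPPED 8192        /* 8192 entries is only 32kB */

    typedef struct SkippedPages
    {
        BlockNumber blocks[MAX_SKIPPED];
        int         nblocks;        /* entries actually stored */
        int64_t     noverflow;      /* pages skipped but not recorded */
    } SkippedPages;

    void
    remember_skipped(SkippedPages *sp, BlockNumber blkno)
    {
        if (sp->nblocks < MAX_SKIPPED)
            sp->blocks[sp->nblocks++] = blkno;
        else
            sp->noverflow++;        /* no room: count it and move on */
    }

That caps the memory at a few kB regardless of maintenance_work_mem,
and pages that overflow the array would just be counted and warned
about at the end rather than retried. But maybe that gives up too
easily?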
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com