Greg Stark wrote:
> On Sun, Feb 28, 2010 at 5:28 AM, Greg Smith <g...@2ndquadrant.com> wrote:
>> The idea of the workaround is that if you have a single long-running query
>> to execute, and you want to make sure it doesn't get canceled because of a
>> vacuum cleanup, you just have it connect back to the master to keep an open
>> snapshot the whole time.
>
> Also, I'm not sure this actually works. When your client makes this
> additional connection to the master it's connecting at some
> transaction in the future from the slave's point of view. The master
> could have already vacuumed away some record which the snapshot the
> client gets on the slave will have in view.
Right, and there was an additional comment in the docs alluding to adding
some sleep time on the master intended to improve things. If you knew how
long archive_timeout was, you could try to sleep longer than that to
increase your odds of avoiding an ugly spot. But there are race conditions
galore possible here, particularly if your archiver or standby catchup is
backlogged.
> Still it's a handy practical trick even if it isn't 100% guaranteed to
> work. But I don't think it provides the basis for something we can
> bake in.
Agreed on both counts, which is why it's in the current docs as a
workaround people can consider, but not what I've been advocating as the
right way to proceed.
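For anyone trying the trick anyway, a rough sketch of what the docs describe
might look like the following (table and query names are hypothetical, and
as noted above this is subject to the timing races Greg Stark describes):

```sql
-- Session 1, connected to the MASTER: open a transaction and take a
-- snapshot, which holds back the backend's xmin and so blocks vacuum
-- cleanup of rows the standby query may still need.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT 1;  -- REPEATABLE READ takes its snapshot at the first statement,
           -- not at BEGIN; leave this session idle in transaction

-- Session 2, connected to the STANDBY: run the long query while the
-- master session above stays open.
SELECT count(*) FROM big_table;  -- hypothetical long-running query

-- Session 1, once the standby query finishes: release the snapshot.
COMMIT;
```

Even with the snapshot held, the master may have already vacuumed away rows
between the moment the standby query started and the moment the master
session acquired its snapshot, which is exactly the window discussed above.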
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com www.2ndQuadrant.us
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers