On Sat, Mar 19, 2016 at 1:27 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> I'm not sure if this is operating as expected.
> I set the value to 1min.
> I set up a test like this:
> pgbench -i
> pgbench -c4 -j4 -T 3600 &
> ### watch the size of branches table
> while (true) ; do psql -c "\dt+" | fgrep _branches; sleep 10; done &
> ### set up a long lived snapshot.
> psql -c 'begin; set transaction isolation level repeatable read;
> select sum(bbalance) from pgbench_branches; select pg_sleep(300);
> select sum(bbalance) from pgbench_branches;'
> As this runs, I can see the size of the pgbench_branches bloating once
> the snapshot is taken, and continues bloating at a linear rate for the
> full 300 seconds.
> Once the 300 second pg_sleep is up, the long-lived snapshot holder
> receives an error once it tries to access the table again, and then
> the bloat stops increasing.  But shouldn't the bloat have stopped
> increasing as soon as the snapshot became doomed, which would be after
> a minute or so?

This is actually operating as intended, not a bug.  Try running a
manual VACUUM command about two minutes after the snapshot is taken
and you should get a handle on what's going on.  The old tuples
become eligible for vacuuming after one minute, but that doesn't
necessarily mean that autovacuum jumps in right away and that the
space starts getting reused.  The manual VACUUM allows that, as you
should see in your monitoring window.  A connection should not get
the error merely for holding a snapshot that has crossed the
threshold; it gets the error only when it uses that snapshot to
look at data that might be wrong, and the connection holding the
long-lived snapshot doesn't do that until it wakes from the sleep
and runs the next SELECT command.
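For anyone following along, the point above can be sketched as a
small variation on Jeff's recipe -- a hypothetical reproduction,
assuming old_snapshot_threshold = '1min' is set and pgbench has
been initialized against the target database (it needs a running
server, so treat it as a sketch, not a turnkey script):

```shell
#!/bin/sh
# Sketch: show that a manual VACUUM reclaims old tuples once the
# snapshot crosses old_snapshot_threshold, even while the snapshot
# is still open.  Assumes old_snapshot_threshold = '1min' and a
# pgbench-initialized database (names taken from the thread above).

# Confirm the setting in effect.
psql -c "SHOW old_snapshot_threshold"

# Hold a repeatable-read snapshot well past the threshold.
psql -c "BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT sum(bbalance) FROM pgbench_branches;
SELECT pg_sleep(180);" &

# Generate churn so dead tuples accumulate in pgbench_branches.
pgbench -c4 -j4 -T 180 &

# About two minutes in -- after the snapshot is older than the
# threshold -- a manual VACUUM can remove the old tuples even
# though the transaction above is still open.
sleep 120
psql -c "VACUUM VERBOSE pgbench_branches"

wait
```

The bloat stops growing at the manual VACUUM, not at the moment the
snapshot becomes doomed; the held-open transaction only errors out
("snapshot too old") if it reads the table again afterwards.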

All is well as far as I can see here.

Thanks for checking, though!  It is an interesting test!

Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
