On 2016-05-24 11:24:44 -0500, Kevin Grittner wrote:
> On Fri, May 6, 2016 at 8:28 PM, Kevin Grittner <kgri...@gmail.com> wrote:
> > On Fri, May 6, 2016 at 7:48 PM, Andres Freund <and...@anarazel.de> wrote:
> >> That comment reminds me of a question I had: Did you consider the effect
> >> of this patch on analyze? It uses a snapshot, and by memory you've not
> >> built in a defense against analyze being cancelled.
> >
> > Will need to check on that.
>
> With a 1min threshold, after loading a table "v" with a million
> rows, beginning a repeatable read transaction on a different
> connection and opening a cursor against that table, deleting almost
> all rows on the original connection, and waiting a few minutes I
> see this in the open transaction:
>
> test=# analyze verbose v;
> INFO:  analyzing "public.v"
> INFO:  "v": scanned 4425 of 4425 pages, containing 1999 live rows and
> 0 dead rows; 1999 rows in sample, 1999 estimated total rows
> ANALYZE
> test=# select count(*) from v;
> ERROR:  snapshot too old
>
> Meanwhile, no errors appeared in the log from autovacuum.
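For reference, Kevin's scenario above can be sketched as two concurrent psql sessions along these lines. This is an illustrative reconstruction, not his exact script: it assumes old_snapshot_threshold = '1min' in postgresql.conf, and the table definition, column name "id", and the delete predicate (chosen so 1999 live rows remain, matching the report) are assumptions.

```sql
-- Session 1: set up the table with a million rows
-- (assumes old_snapshot_threshold = '1min' is already configured).
CREATE TABLE v (id integer);                       -- column name assumed
INSERT INTO v SELECT generate_series(1, 1000000);

-- Session 2: repeatable read transaction holding an old snapshot via a cursor.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
DECLARE c CURSOR FOR SELECT * FROM v;
FETCH 1 FROM c;

-- Session 1: delete almost all rows (predicate assumed; leaves 1999 rows),
-- then wait a few minutes so the old snapshot passes the threshold.
DELETE FROM v WHERE id > 1999;

-- Session 2, after the wait: ANALYZE completes without error,
-- but an ordinary scan under the old snapshot does not.
ANALYZE VERBOSE v;
SELECT count(*) FROM v;   -- ERROR: snapshot too old
```

The point of contention in the thread is the asymmetry in the last two statements: the scan is cancelled, while ANALYZE runs to completion under the same conditions.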
I'd guess that that problem can only be reproduced if autoanalyze takes longer than the timeout, and there are rows pruned after it has started? IIRC analyze acquires a new snapshot when getting sample rows, so it'll not necessarily trigger in the above scenario, right?

Is there anything preventing this from becoming a problem?

Greetings,

Andres Freund

--
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers