On Fri, Mar 4, 2011 at 8:20 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke <mattbli...@icritical.com> wrote:
>> Robert Haas wrote:
>>> Old row versions have to be kept around until they're no longer of
>>> interest to any still-running transaction.
>>
>> Thanks for the explanation.
>>
>> Regarding the snippet above, why would the intermediate history of
>> multiply-modified uncommitted rows be of interest to anything, or is the
>> current behaviour simply "cheaper" overall in terms of cpu/developer time?
>
> Because in theory you could have a cursor open.  You could open a
> cursor, start to read from it, then make an update.  Now the cursor
> needs to see things as they were before the update.
>
> We might be able to do some optimization here if we had some
> infrastructure to detect when a backend has no registered snapshots
> with a command-ID less than the command-ID of the currently active
> snapshot, but nobody's put in the effort to figure out exactly what's
> involved and whether it makes sense.  It's a problem I'm interested
> in, but #(needed round-tuits) > #(actual round-tuits).
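
To make the cursor case concrete for the archives, a minimal
illustration (the table "t" is made up; assume it has an int column
"n"):

    BEGIN;
    DECLARE c CURSOR FOR SELECT n FROM t;
    FETCH 1 FROM c;           -- returns a pre-update value
    UPDATE t SET n = n + 1;   -- creates a new version of every row
    FETCH ALL FROM c;         -- still returns the old values: the
                              -- cursor reads with the snapshot taken
                              -- when it was opened, so the old row
                              -- versions can't be thrown away yet
    COMMIT;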

Not just cursors: pl/pgsql, for example, is also pretty aggressive
about grabbing snapshots.  Also, it's a matter of speculation whether
the case of a single row being updated a large number of times in a
single transaction merits such complicated optimizations.
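
That pathological case, for reference (hypothetical table, row count
picked for illustration): every UPDATE below leaves behind another
dead version of the same row, and none of them can be pruned while
the transaction that created them is still open:

    CREATE TABLE counter (id int PRIMARY KEY, n int);
    INSERT INTO counter VALUES (1, 0);

    DO $$                    -- DO blocks need 9.0 or later
    BEGIN
      FOR i IN 1..100000 LOOP
        -- one new row version per iteration, all in one transaction
        UPDATE counter SET n = n + 1 WHERE id = 1;
      END LOOP;
    END;
    $$;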

It bears repeating: explicit transaction control (beyond the
dblink-type hacks that currently exist) in backend scripting would
solve many cases where this is a problem in practice, without having
to muck around in the MVCC engine.  Autonomous transactions are
another way to do this...
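
For reference, the dblink hack looks roughly like this (the
connection string and audit_log table are illustrative, and
contrib/dblink must be installed):

    BEGIN;
    -- ... work that may later be rolled back ...
    SELECT dblink_exec('dbname=' || current_database(),
                       'INSERT INTO audit_log VALUES (now(), ''tried'')');
    ROLLBACK;
    -- the audit_log row survives the ROLLBACK: dblink_exec ran the
    -- INSERT in its own session, which committed it independently.
    -- A poor man's autonomous transaction.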

merlin
