On Wed, May 12, 2010 at 11:47 AM, Tom Lane <[email protected]> wrote:
> "Kevin Grittner" <[email protected]> writes:
>> You're updating the row 100000 times within a single transaction.  I
>> don't *think* HOT will reclaim a version of a row until the
>> transaction which completed it is done and no other transactions can
>> see that version any longer.  It does raise the question, though --
>> couldn't a HOT update of a tuple *which was written by the same
>> transaction* do an "update in place"?
>
> Well ... in the first place there is not, ever, any such thing as
> "update in place".  The correct question to ask is whether we could
> vacuum away the older elements of the HOT chain on the grounds that
> they are no longer of interest.  What we would see is tuples with xmin
> equal to xmax and cmin different from cmax.  The problem then is to
> determine whether there are any live snapshots with curcid between
> cmin and cmax.  There is 0 hope of doing that from outside the
> originating backend.  Now if heap_page_prune() is being run by the
> same backend that generated the in-doubt tuples, which I will agree is
> likely in a case like this, in principle we could do it.  Not sure if
> it's really worth the trouble and nonorthogonal behavior.
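For reference, the pattern Kevin describes looks roughly like this (a
sketch, not from the thread; the table and function names are made up).
Each UPDATE adds another member to the HOT chain, and none of them can
be pruned until the transaction ends, because only the originating
backend can know whether a snapshot with a curcid between cmin and cmax
still exists:

    CREATE TABLE t (id int PRIMARY KEY, n int);
    INSERT INTO t VALUES (1, 0);

    CREATE FUNCTION bump_many() RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
        FOR i IN 1..100000 LOOP
            UPDATE t SET n = n + 1 WHERE id = 1;  -- new HOT chain member each time
        END LOOP;
    END;
    $$;

    SELECT bump_many();   -- one implicit transaction; the page fills with dead tuples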
Updating the same row repeatedly within a single transaction isn't going
to come up all that often, and there are a number of simple workarounds
to get better performance (one sketch below).  Isn't it possible to skip
the snapshot check for temp tables, though?

merlin
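One such workaround (again a sketch, not from the thread; names are
made up) is to accumulate the result in a local plpgsql variable and
write the row only once, so the transaction creates a single new row
version instead of 100000:

    CREATE FUNCTION bump_once() RETURNS void LANGUAGE plpgsql AS $$
    DECLARE
        total int := 0;
    BEGIN
        FOR i IN 1..100000 LOOP
            total := total + 1;                  -- do the work in a local variable
        END LOOP;
        UPDATE t SET n = n + total WHERE id = 1; -- one UPDATE, one new row version
    END;
    $$;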
