Tom Lane <[EMAIL PROTECTED]> writes:

> Gregory Stark <[EMAIL PROTECTED]> writes:
>> Frankly the whole phantom commandid thing sounds more complicated. You
>> *still* need a local state data structure that *still* has to spill to
>> disk, and now it's much harder to characterize how large it will grow,
>> since it depends on arbitrary combinations of cmin and cmax.
> Yeah, but it requires only one entry when a command processes
> arbitrarily large numbers of tuples, so in practice it's not going to
> need to spill to disk.  

Well, there's a reason we support commandids up to 4 billion. One of the common
use cases, bulk loading data in a series of individual inserts, would cause
such a structure to spill to disk. As would ISAM-style programming that steps
through a large data set and updates rows one by one.

> What Heikki wants to do will require an entry in local memory for *each
> tuple* modified by a transaction. That will ruin performance on a regular
> basis.

Sure, but that's the same amount of data all those useless cmin/cmaxes are
taking up now. Actually it's less: only 6 bytes instead of 8, even before
assuming any clever data structure compresses it. And that data doesn't have to
be fsynced, so it can sit in filesystem cache and get spooled out to disk
lazily. If you touch a million records in your transaction in one of the
peculiar situations that require keeping this data, you're talking about a few
megs of cache sacrificed during that one operation versus extra i/o on every
tuple.
I should probably let Heikki defend his idea, though. I guess I was just
feeling argumentative. I'm sure he's thought through the same things.

  Gregory Stark
