> Bruce Momjian <[EMAIL PROTECTED]> writes:
> >> Basically, move the first 100 rows to the end of the table file, then
> >> write row 100 to position 0, row 101 to position 1, etc.  That way, at
> >> most, you are using (tuple size * 100) bytes of extra disk space, vs 2x
> >> the table size.  Either method is going to lock the file for a period of
> >> time, but one is much friendlier as far as disk space is concerned.
> >> *Plus*, if RAM is available for this, the backend could use up to -S
> >> blocks of RAM to do it off disk?  If I set -S to 64MB and the table is
> >> 24MB in size, it could do it all in memory?
> 
> > Yes, I liked that too.
> 
> What happens if you crash partway through?
> 
> I don't think it's possible to build a crash-robust rewriting ALTER
> process that doesn't use 2X disk space: you must have all the old tuples
> AND all the new tuples down on disk simultaneously just before you
> commit.  The only way around 2X disk space is to adopt some logical
> renumbering approach to the columns, so that you can pretend the dropped
> column isn't there anymore when it really still is.

Yes, I liked the 2X disk space approach, with the new tuples made visible
all at once at the end.
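For illustration only -- this is not PostgreSQL code -- the crash-safe 2X
approach amounts to writing every new tuple to a second file, forcing it to
disk, and then swapping it into place in one atomic step.  The function and
parameter names below are hypothetical, and a line of text stands in for a
tuple:

```python
import os
import tempfile

def rewrite_table(path, transform):
    """Rewrite every record of a table file crash-safely.

    Sketch of the 2X-disk-space approach: all new tuples go into a
    temporary file alongside the old one, are fsync'd, and only then
    swapped into place atomically.  A crash at any point leaves either
    the complete old file or the complete new file, never a mix --
    at the cost of briefly holding both copies on disk.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as out, open(path, "rb") as src:
            for record in src:       # one "tuple" per line in this toy model
                out.write(transform(record))
            out.flush()
            os.fsync(out.fileno())   # new tuples durably on disk pre-commit
        os.replace(tmp, path)        # the atomic "commit": rename(2)
    except BaseException:
        os.unlink(tmp)
        raise
```

The in-place shuffle proposed above avoids the second copy, but a crash
midway through moving rows leaves some tuples in old positions and some in
new ones with no single atomic point of no return -- which is exactly the
objection raised.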

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  [EMAIL PROTECTED]               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
