On Sun, Aug 15, 2010 at 5:39 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Could we avoid this altogether by allocating a new relfilenode on
>> truncate?
>
> Then we'd have to copy all the data we *didn't* truncate, which is
> hardly likely to be a win.
Oh, sorry. I was thinking we were talking about complete truncation
rather than partial truncation.

I'm still pretty unhappy with the proposed fix, though, because it
gives up performance in a broad range of cases to cater to an
extremely narrow failure case. Considering the rarity of the problem,
are we sure it isn't better to adopt the solution Heikki proposed: if
truncation fails, try to zero the pages instead, and if that also
fails, PANIC (rough sketch below)? I'm really reluctant to back-patch
a performance regression.

Perhaps, as Greg Stark says, there are a variety of ways this can
happen - but they're all pretty rare, and they all seem to require a
fairly substantial amount of brokenness. If we're in a situation
where we can't reliably update our disk files, it seems optimistic to
assume that continuing to run is going to be a whole lot better than
PANICing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
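P.S. For concreteness, here's a minimal POSIX-level sketch of the
zero-then-PANIC fallback being discussed. This is not the actual
patch; BLCKSZ, the truncate_or_zero() name, and the direct use of
ftruncate()/pwrite() are just illustrative assumptions:

/*
 * Minimal POSIX-level sketch (not the actual PostgreSQL code) of the
 * zero-then-PANIC fallback.  Assumes a file of old_nblocks pages of
 * BLCKSZ bytes that should be cut back to new_nblocks pages.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define BLCKSZ 8192             /* PostgreSQL's default page size */

static void
truncate_or_zero(int fd, off_t old_nblocks, off_t new_nblocks)
{
    /* Normal case: the truncation simply succeeds. */
    if (ftruncate(fd, new_nblocks * BLCKSZ) == 0)
        return;

    /*
     * Fallback: zero the doomed pages in place, so their old contents
     * can't reappear even though the file keeps its full length.
     */
    char        zeroes[BLCKSZ];

    memset(zeroes, 0, sizeof(zeroes));
    for (off_t blk = new_nblocks; blk < old_nblocks; blk++)
    {
        if (pwrite(fd, zeroes, BLCKSZ, blk * BLCKSZ) != BLCKSZ)
        {
            /*
             * We can neither truncate nor zero: the on-disk state can
             * no longer be trusted, so give up hard (PANIC in Postgres).
             */
            fprintf(stderr, "PANIC: could not truncate or zero pages\n");
            abort();
        }
    }
}

The point is the ordering: we only escalate to PANIC once both the
cheap path (truncating) and the conservative path (zeroing) have
failed, which is exactly the narrow situation where I think limping
along is no better than going down.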