I work with a large and wide table (about 300 million rows, about 50 columns), 
and from time to time we get business requirements to make some modifications. 
But sometimes it's just a plain mistake. This happened to us a few weeks ago: 
someone made a mistake and we had to update a single column of a large and 
wide table. Literally, the source data screwed up a zip code and we had to 
patch it on our end.

Anyway... the query we ran was:
    update T set source_id = substr(source_id, 2, 10);
It took about 10 hours and created hundreds of millions of dead tuples, which 
cost another couple of hours of vacuuming.

This was done during a maintenance window, and that table is read-only except 
when we ETL data into it on a weekly basis, so I was wondering why I should 
pay the "bloat" penalty for this type of transaction. Is there a trick that 
could be used here?
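One workaround I've seen suggested (untested on our data, so treat this as a sketch) is to rewrite the table instead of updating it in place: the new table's rows are freshly written, so there are no dead tuples and no vacuum debt afterward. Assuming the T and source_id names from above, and with t_new, col_a, col_b standing in for a scratch table name and the other ~50 columns:

```sql
-- Sketch only: rewrite-and-swap instead of UPDATE in place.
-- t_new, col_a, col_b are placeholders, not our real names.
BEGIN;
LOCK TABLE T IN ACCESS EXCLUSIVE MODE;  -- safe here: table is read-only outside ETL

CREATE TABLE t_new AS
SELECT col_a,
       col_b,                            -- list every other column explicitly
       substr(source_id, 2, 10) AS source_id
FROM T;

-- Recreate indexes, constraints, and grants on t_new here,
-- then swap the tables:
DROP TABLE T;
ALTER TABLE t_new RENAME TO T;
COMMIT;
```

The downside is rewriting all ~50 columns to fix one, plus rebuilding every index; whether that beats 10 hours of UPDATE plus vacuum would need to be measured. The other common mitigation is batching the UPDATE (e.g. by primary-key range) with a VACUUM between batches, which caps bloat rather than avoiding it.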

More generally, I suspect MVCC is baked so deeply into the architecture that 
something like LOCK TABLE, which would guarantee there is no contention, 
couldn't be used as a hint to skip creating dead tuples? That would be quite 
a performance improvement for this type of work, though.

Thank you,
