On Thu, 26 Jun 2008, Holger Hoffstaette wrote:
> How do large databases treat mass updates? AFAIK both DB2 and Oracle use MVCC (maybe a different kind?) as well.
An intro to the other approaches used by Oracle and DB2 (not MVCC) is at http://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL:_Comparing_Reliability_and_Speed_in_2007#Transaction_Locking_and_Scalability (a URL which I really need to shorten one day).
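To see the row-versioning overhead being discussed here for yourself, a quick demonstration you can run against any PostgreSQL instance (the table name is just an example): the xmin and ctid system columns show that an UPDATE writes an entirely new row version rather than modifying the row in place.

```sql
-- Scratch table to illustrate MVCC row versioning
CREATE TABLE mvcc_demo (id int PRIMARY KEY, val text);
INSERT INTO mvcc_demo VALUES (1, 'original');

-- xmin is the transaction that created this row version,
-- ctid is its physical location within the table
SELECT xmin, ctid, id, val FROM mvcc_demo;

UPDATE mvcc_demo SET val = 'updated' WHERE id = 1;

-- xmin and ctid both change: the UPDATE inserted a brand new
-- row version, and the old one is now dead, waiting for VACUUM
SELECT xmin, ctid, id, val FROM mvcc_demo;
```

That dead tuple left behind by every update is exactly the overhead a mass update multiplies by millions of rows.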
> Are there no options (algorithms) for adaptively choosing different update strategies that do not incur the full MVCC overhead?
If you stare at the big picture of PostgreSQL's design, you might notice that it usually aims to do things one way and get that implementation right for the database's intended audience. That intended audience cares about data integrity and correctness, and is willing to suffer the overhead that goes along with operating that way. There are few "I don't care about reliability here so long as it's fast" switches you can flip, and not having duplicate code paths to support them helps keep the code simpler and therefore more reliable.
This whole area is one of those good/fast/cheap trios. If you want good transaction guarantees on updates, you either get the hardware and settings right to handle that (!cheap), or it's slow. The idea of providing a !good/fast/cheap option for updates might have some theoretical value, but I think you'd find it hard to get enough support for that idea to get work done on it compared to the other things developer time is being spent on right now.
--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance