Tomas Vondra wrote:
> Hi,
>
> attached is a patch that improves performance when dropping multiple
> tables within a transaction.  Instead of scanning the shared buffers for
> each table separately, the patch removes this and evicts all the tables
> in a single pass through shared buffers.
Made some tweaks and pushed (added comments to new functions, ensured
that we never try to palloc(0), renamed DropRelFileNodeAllBuffers to
plural, made the "use bsearch" logic a bit simpler).

> Our system creates a lot of "working tables" (even 100.000) and we need
> to perform garbage collection (dropping obsolete tables) regularly.  This
> often took ~ 1 hour, because we're using big AWS instances with lots of
> RAM (which tends to be slower than RAM on bare hw).  After applying this
> patch and dropping tables in groups of 100, the gc runs in less than 4
> minutes (i.e. a 15x speed-up).

I'm curious -- why would you drop tables in groups of 100 instead of
just doing all 100,000 in a single transaction?  Maybe that's faster now,
because you'd do a single scan of the buffer pool instead of 1000?

(I'm assuming that "in groups of" means you do each group in a separate
transaction.)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services