On Thu, May 31, 2012 at 11:09 AM, Sergey Koposov <kopo...@ast.cam.ac.uk> wrote:
> On Thu, 31 May 2012, Simon Riggs wrote:
>>
>> That struck me as a safe and easy optimisation. This was a problem I'd
>> been trying to optimise for 9.2, so I've written a patch that appears
>> simple and clean enough to be applied directly.
>
> Thanks! The patch indeed improved the timings. The dropping of 100 tables in
> a single commit before the patch took ~ 50 seconds; now it takes ~ 5 sec (it
> would be nice to reduce it further, though, because dropping 10000
> tables still takes ~ 10 min).
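(For reference, a minimal sketch of the kind of bulk drop being timed
above, using hypothetical throwaway tables drop_test_0 .. drop_test_99;
this is illustrative, not Sergey's actual test script. Most of the cost
should show up at COMMIT, since the dropped relations' files and buffers
are only cleaned up when the transaction commits.)

    -- Set up 100 small tables (names are made up for the example).
    DO $$
    BEGIN
      FOR i IN 0..99 LOOP
        EXECUTE format('CREATE TABLE drop_test_%s (id int)', i);
      END LOOP;
    END $$;

    -- Time dropping all of them in a single commit.
    \timing on
    BEGIN;
    DO $$
    BEGIN
      FOR i IN 0..99 LOOP
        EXECUTE format('DROP TABLE drop_test_%s', i);
      END LOOP;
    END $$;
    COMMIT;
    \timing off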
I'm surprised it helped that much. I thought the most it could
theoretically help would be a factor of 4.

I tried the initially unlocked test, and for me it cut the time by a
factor of 3. But I only have 1GB of shared_buffers at the max; I would
expect it to help more at larger sizes, because there is a constant
overhead, not related to scanning the shared buffers, which gets diluted
out as shared_buffers gets larger.

I added to that a drop-all very similar to what Simon posted and got
another factor of 3.

But if you can do this during a maintenance window, then just restarting
with a much smaller shared_buffers should give you a much larger speed-up
than either or both of these. If I can extrapolate up to 10GB from my
current curve, setting it to 8MB instead would give a speed-up of nearly
400-fold.

Cheers,

Jeff
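(A rough sketch of that maintenance-window approach, assuming a
pg_ctl-managed server; the data directory, database name, 8MB value and
drop script name are all placeholders:)

    # Stop the server, restart it with a tiny shared_buffers just for the
    # bulk drop, then restore the normal configuration.
    pg_ctl -D /path/to/data stop -m fast
    pg_ctl -D /path/to/data start -o "-c shared_buffers=8MB"

    psql -d mydb -f drop_tables.sql      # the bulk DROP TABLE script

    pg_ctl -D /path/to/data stop -m fast
    pg_ctl -D /path/to/data start        # back to the normal shared_buffers
                                         # from postgresql.conf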