On Sat, Mar 12, 2016 at 5:31 AM, Jim Nasby <jim.na...@bluetreble.com> wrote:

> FWIW, this is definitely a real possibility in any shop that has very high
> downtime costs and high transaction rates.
> I also think some kind of clamp is a good idea. It's not that uncommon to
> run max_connections significantly higher than 100, so the extension could
> be way larger than 16MB. In those cases this patch could actually make
> things far worse as everyone backs up waiting on the OS to extend many MB
> when all you actually needed were a couple dozen more pages.

I agree; we could put a maximum limit on the number of extra pages. What do
others think?

> BTW, how was *20 arrived at? ISTM that if you have a lot of concurrent
> demand for extension that means you're running lots of small DML
> operations, not really big ones. I'd think that would make *1 more
> appropriate.

*1 will not solve this problem. The main bottleneck here was that many
backends were sleeping/waking on the extension lock. If we do *1, that
satisfies only the current requesters, who have already waited on the lock;
our goal is to keep backends from having to wait on the lock in the first
place.

The idea is to use the number of requesters on the lock as a measure of
load, and extend by a multiple of that load, so that this situation is
avoided for a long time. When it happens again, we again extend by a
multiple of the current load.

How was 20 arrived at?
  I tested with multiple client counts (1..64) and multiple load sizes
(4-byte records up to 1KB records), using both COPY and INSERT, and found
that 20 works best.

Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
