On 14/03/16 03:29, Dilip Kumar wrote:

On Mon, Mar 14, 2016 at 5:02 AM, Jim Nasby <jim.na...@bluetreble.com> wrote:

    Well, 16MB is 2K pages, which is what you'd get if 100 connections
    were all blocked and we were doing 20 pages per waiter. That seems
    like a really extreme scenario, so maybe 4MB is a good compromise.
    It's unlikely to be hit in most cases, and unlikely to put a ton of
    stress on IO, even with magnetic media (assuming the whole 4MB is
    queued to write in one shot...). 4MB would still reduce the number
    of lock acquisitions by 500x.


In the performance results I posted upthread, we get maximum
performance at 32 clients, which means we are extending by at most
32*20 = 640 pages at a time. So with a 4MB limit (max 512 pages) the
results should look similar. So we need to decide whether 4MB is a good
limit; should I change it?
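

To make the arithmetic above concrete, here is a minimal self-contained
sketch of the heuristic being discussed (the function and constant names
are made up for illustration; this is not the actual patch):

/*
 * Sketch only: extend by 20 pages per backend waiting on the relation
 * extension lock, capped at 512 pages (512 * 8kB = 4MB).
 */
#define EXTEND_PAGES_PER_WAITER 20      /* pages granted per waiting backend */
#define EXTEND_CAP_PAGES        512     /* 512 * 8kB = 4MB */

static int
extra_blocks_to_add(int lock_waiters)
{
    int blocks = lock_waiters * EXTEND_PAGES_PER_WAITER;

    return (blocks < EXTEND_CAP_PAGES) ? blocks : EXTEND_CAP_PAGES;
}

/*
 * 100 waiters * 20 = 2000 pages (~16MB) -> clamped to 512 pages (4MB)
 *  32 waiters * 20 =  640 pages (~5MB)  -> clamped to 512 pages (4MB)
 *  16 waiters * 20 =  320 pages (2.5MB) -> below the cap, no clamping
 */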



Well, any value we choose will be very arbitrary. If we look at it from the point of view of the maximum absolute disk space we allocate for a relation at once, the 4MB limit represents a change of about 2.5 orders of magnitude. That sounds like enough for one release cycle; I think we can tune it further in the next one if the need arises. (With my love for round numbers I would have suggested 8MB, as that's 3 orders of magnitude, but I am fine with 4MB as well.)
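
For reference, the orders-of-magnitude figures work out as follows,
assuming the baseline is the current single 8kB-page extension:

    4MB / 8kB =  512 pages  ~ 10^2.7  (roughly the 2.5 orders of magnitude above)
    8MB / 8kB = 1024 pages  ~ 10^3.0  (3 orders of magnitude)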

--
  Petr Jelinek                  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services

