On Tue, Apr 11, 2017 at 2:59 PM, Claudio Freire <klaussfre...@gmail.com> wrote:
> On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
>> maintenance_work_mem
>> So we'll allocate 128MB+256MB+512MB+1GB+2GB+4GB which won't be quite
>> enough so we'll allocate another 8GB, for a total of 16256MB, but more
>> than three-quarters of that last allocation ends up being wasted.
>> I've been told on this list before that doubling is the one true way
>> of increasing the size of an allocated chunk of memory, but I'm still
>> a bit unconvinced.
> There you're wrong. The allocation is capped to 1GB, so wastage has an
> upper bound of 1GB.

Ah, OK.  Sorry, I didn't really look at the code.  I stand corrected,
but then it seems a bit strange to me that the largest and smallest
allocations are only 8x different.  I still don't really understand
what that buys us.  What would we lose if we just made 'em all 128MB?

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)