On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund <and...@2ndquadrant.com> wrote:
> On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
>> On Thu, Jun 28, 2012 at 7:15 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> > On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown <t...@linux.com> wrote:
>> >> On 64-bit Linux, if I allocate more shared buffers than the system is
>> >> capable of reserving, it doesn't start.  This is expected, but there's
>> >> no error logged anywhere (actually, nothing logged at all), and the
>> >> postmaster.pid file is left behind after this failure.
>> >
>> > Fixed.
>> >
>> > However, I discovered something unpleasant.  With the new code, on
>> > MacOS X, if you set shared_buffers to, say, 3200GB, the server happily
>> > starts up.  Or at least the shared memory allocation goes through just
>> > fine.  The postmaster then sits there apparently forever without
>> > emitting any log messages, which I eventually discovered was because
>> > it's busy initializing a billion or so spinlocks.
>> >
>> > I'm pretty sure that this machine does not have >3TB of virtual
>> > memory, even counting swap.  So that means that MacOS X has absolutely
>> > no common sense whatsoever as far as anonymous shared memory
>> > allocations go.  Not sure exactly what to do about that.  Linux is
>> > more sensible, at least on the system I tested, and fails cleanly.
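
For what it's worth, this looks easy to reproduce outside the server. A
minimal standalone sketch along these lines (not the patch code itself; the
3.2TB figure just mirrors your example, and it assumes a 64-bit build) gets
a successful mmap() on OS X, presumably because nothing is actually backed
until the pages are touched, whereas the Linux box refuses it up front:

#include <stdio.h>
#include <sys/mman.h>

int
main(void)
{
    size_t      size = (size_t) 3200 * 1024 * 1024 * 1024;     /* ~3.2TB */
    void       *p;

    /* anonymous shared mapping; MAP_ANON is spelled MAP_ANONYMOUS on some systems */
    p = mmap(NULL, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_ANON, -1, 0);
    if (p == MAP_FAILED)
        perror("mmap");         /* the clean failure you describe on Linux */
    else
        printf("mmap of %zu bytes succeeded at %p\n", size, p);
    return 0;
}
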
>>
>> What happens if you mlock() it into memory - does that fail quickly?
>> Is that not something we might want to do *anyway*?
> You normally can only mlock() minor amounts of memory without changing
> settings. Requiring that setting to be changed (aside from the fact that
> mlocking would be a bad idea imo) would run contrary to the point of the
> patch, wouldn't it? ;)

It would. I wasn't aware of that limitation :)
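
For the archives, here is a quick standalone sketch of that limitation (the
256MB figure is arbitrary, just comfortably above the usual 64kB default for
RLIMIT_MEMLOCK); run as an ordinary user, the mlock() call comes back with
ENOMEM or EPERM unless the limit is raised:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>

int
main(void)
{
    struct rlimit rlim;
    size_t      size = 256 * 1024 * 1024;      /* arbitrary, > default limit */
    void       *p;

    /* show the soft limit we are up against */
    if (getrlimit(RLIMIT_MEMLOCK, &rlim) == 0)
        printf("RLIMIT_MEMLOCK soft limit: %llu bytes\n",
               (unsigned long long) rlim.rlim_cur);

    /* anonymous shared mapping, then try to pin it in RAM */
    p = mmap(NULL, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_ANON, -1, 0);
    if (p == MAP_FAILED)
    {
        perror("mmap");
        return 1;
    }

    if (mlock(p, size) != 0)
        perror("mlock");        /* ENOMEM/EPERM for an unprivileged user */
    else
        printf("mlock succeeded\n");
    return 0;
}
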

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

