On Thu, Jun 28, 2012 at 7:15 PM, Robert Haas <[email protected]> wrote:
> On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown <[email protected]> wrote:
>> On 64-bit Linux, if I allocate more shared buffers than the system is
>> capable of reserving, it doesn't start. This is expected, but there's
>> no error logged anywhere (actually, nothing logged at all), and the
>> postmaster.pid file is left behind after this failure.
>
> Fixed.
>
> However, I discovered something unpleasant. With the new code, on
> MacOS X, if you set shared_buffers to say 3200GB, the server happily
> starts up. Or at least the shared memory allocation goes through just
> fine. The postmaster then sits there apparently forever without
> emitting any log messages, which I eventually discovered was because
> it's busy initializing a billion or so spinlocks.
>
> I'm pretty sure that this machine does not have >3TB of virtual
> memory, even counting swap. So that means that MacOS X has absolutely
> no common sense whatsoever as far as anonymous shared memory
> allocations go. Not sure exactly what to do about that. Linux is
> more sensible, at least on the system I tested, and fails cleanly.
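If I'm reading this right, the allocation in question boils down to
something like the sketch below (untested, and no doubt different in
detail from what the patch actually does - the 3200GB figure is just
the one from upthread):

/*
 * Untested sketch, not the code from the patch: grab a huge anonymous
 * shared mapping the way the new code presumably does, and see whether
 * the kernel sanity-checks the size up front.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

#ifndef MAP_ANONYMOUS
#define MAP_ANONYMOUS MAP_ANON      /* some platforms spell it MAP_ANON */
#endif

int
main(void)
{
    size_t      size = (size_t) 3200 * 1024 * 1024 * 1024; /* ~3200GB */
    void       *ptr;

    ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED)
    {
        /* Linux, at least with default overcommit, tends to fail here */
        fprintf(stderr, "mmap: %s\n", strerror(errno));
        return 1;
    }

    /*
     * If we get here the kernel accepted the request without checking it
     * against anything real; the pages only get backed when they are
     * first touched, which is apparently what happens on OS X.
     */
    printf("mmap of %zu bytes succeeded at %p\n", size, ptr);
    return 0;
}

Linux failing cleanly matches what you saw; OS X apparently just hands
back the mapping and defers the reckoning until the pages are touched.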
What happens if you mlock() it into memory - does that fail quickly?
Is that not something we might want to do *anyway*?

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
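To be concrete, the kind of probe I mean - again an untested sketch,
and mlock() has its own caveats (RLIMIT_MEMLOCK or privileges, and
pinning all of shared_buffers may not be what we want by default):

/*
 * Also untested: same mapping as above, but immediately try to force it
 * into RAM with mlock().  If the machine can't actually back the request
 * this ought to fail straight away with ENOMEM instead of the postmaster
 * silently grinding through spinlock initialization.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

#ifndef MAP_ANONYMOUS
#define MAP_ANONYMOUS MAP_ANON
#endif

int
main(void)
{
    size_t      size = (size_t) 3200 * 1024 * 1024 * 1024;
    void       *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (ptr == MAP_FAILED)
    {
        fprintf(stderr, "mmap: %s\n", strerror(errno));
        return 1;
    }

    if (mlock(ptr, size) != 0)
    {
        /* expected on any box without ~3TB of lockable memory */
        fprintf(stderr, "mlock: %s\n", strerror(errno));
        munmap(ptr, size);
        return 1;
    }

    printf("mlock of %zu bytes succeeded\n", size);
    munlock(ptr, size);
    munmap(ptr, size);
    return 0;
}

If that fails fast on OS X, it would at least turn the silent hang into
a clean startup error.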
