On Thu, Jun 28, 2012 at 2:51 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> I tried this.  At least on my fairly vanilla MacOS X desktop, an mlock
>> for a larger amount of memory than was conveniently on hand (4GB, on a
>> 4GB box) neither succeeded nor failed in a timely fashion but instead
>> progressively hung the machine, apparently trying to progressively
>> push every available page out to swap.  After 5 minutes or so I could
>> no longer move the mouse.  After about 20 minutes I gave up and hit
>> the reset button.  So there's apparently no value to this as a
>> diagnostic tool, at least on this platform.
> Fun.  I wonder if other BSDen are as brain-dead as OSX on this point.
> It'd probably at least be worth filing a bug report with Apple about it.

Just for fun, I tried writing a program that does power-of-two-sized
malloc requests.

The first one that failed - on my 4GB Mac, remember - was for
140737488355328 bytes.  Yeah, that's right: 128 TB.

According to the Google, there is absolutely no way of getting MacOS X
not to overcommit like crazy.  You can read the amount of system
memory by using sysctl() to fetch hw.memsize, but it's not really
clear how much that helps.  We could refuse to start up if the shared
memory allocation is >= hw.memsize, but even an amount slightly less
than that seems like enough to send the machine into a tailspin, so
I'm not sure that really gets us anywhere.
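For reference, sysctlbyname() is one way to read hw.memsize; a sketch (guarded with #ifdef since the key only exists on Mac OS X):

```c
#include <stdio.h>
#include <stdint.h>
#ifdef __APPLE__
#include <sys/types.h>
#include <sys/sysctl.h>
#endif

int main(void)
{
#ifdef __APPLE__
    /* hw.memsize reports physical RAM in bytes on Mac OS X. */
    uint64_t memsize = 0;
    size_t   len = sizeof(memsize);

    if (sysctlbyname("hw.memsize", &memsize, &len, NULL, 0) == 0)
        printf("hw.memsize = %llu bytes\n", (unsigned long long) memsize);
    else
        perror("sysctlbyname");
#else
    printf("hw.memsize is only available on Mac OS X\n");
#endif
    return 0;
}
```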

One idea I had was to LOG the size of the shared memory allocation
just before allocating it.  That way, if your system goes into the
tank, there will at least be something in the log.  But that would be
useless chatter for most users.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
