> According to the Google, there is absolutely no way of getting Mac OS X
> not to overcommit like crazy.  

Well, this is one of a long list of broken things about OSX.  If you
want to see *real* breakage, do some IO performance testing of HFS+.

FWIW, I have this issue with Mac desktop applications on my MacBook,
which will happily leak memory until I run out of swap space.

> You can read the amount of system
> memory by using sysctl() to fetch hw.memsize, but it's not really
> clear how much that helps.  We could refuse to start up if the shared
> memory allocation is >= hw.memsize, but even an amount slightly less
> than that seems like enough to send the machine into a tailspin, so
> I'm not sure that really gets us anywhere.

I still think it would help.  User errors in allocating shmem are more
likely to be order-of-magnitude errors ("I meant 500MB, not 500GB!")
than cases of going 20% over RAM.

> One idea I had was to LOG the size of the shared memory allocation
> just before allocating it.  That way, if your system goes into the
> tank, there will at least be something in the log.  But that would be
> useless chatter for most users.

Yes, but it would give the mailing lists, IRC, and Stack Exchange a quick
answer:

"I started up PostgreSQL and my MacBook crashed."

"Find the file postgres.log.  What's the last 10 lines?"

So neither of those things *fixes* the problem ... ultimately, it's
Apple's problem and we can't fix it ... but both of them make it
somewhat better.

The other thing that would avoid the problem for most Mac users is if we
simply allocate 10% of RAM at initdb time as a default.  If we do that,
then 90% of users will never touch shmem settings themselves, and so
never have the opportunity to mess up.

Josh Berkus
PostgreSQL Experts Inc.

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)