On Mon, Nov 29, 2010 at 12:12 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I would expect that you can just iterate through the size possibilities
>> pretty quickly and just use the first one that works -- no /proc
>> groveling.
>
> It's not really that easy, because (at least on the kernel version I
> tested) it's not the shmget that fails, it's the later shmat. Releasing
> and reacquiring the shm segment would require significant code
> restructuring, and at least on some platforms could produce weird
> failure cases --- I seem to recall having heard of kernels where the
> release isn't instantaneous, so that you could run up against SHMMAX
> for no apparent reason. Really you do want to scrape the value.
Couldn't we just round the shared memory allocation down to a multiple of
4MB? That would handle all older architectures, where the huge page size is
2MB or 4MB. I see online that IA64 supports larger page sizes, up to 256MB,
but in that case couldn't we make it the user's problem: if they change
their hugepagesize to a larger value, they have to pick a value of
shared_buffers that fits cleanly?

For that to work, though, we might need to rejigger things so that the
shared memory segment is exactly the size of shared_buffers, with any other
shared data structures in a separate segment.

--
greg

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers