On 22/05/13 09:13, Simon Riggs wrote:
I worked up a small patch to support Terabyte setting for memory.
Which is OK, but it only works for 1TB, not for 2TB or above.
Which highlights that since we measure things in kB, we have an
inherent limit of 2047GB for our memory settings. It isn't beyond
belief that we'll want to go that high; it may not matter by end
2014, but it will be annoying sometime before 2020.
The solution seems to be to support something potentially bigger than
INT for GUCs, so we can reclassify GUC_UNIT_MEMORY according to the
platform we're on.
Opinions?
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
I suspect it should be fixed before it starts being a problem, for two
reasons:
1. best to panic early while we have time
(or more prosaically: doing it soon gives us more time to get it
right without undue pressure)
2. not being able to cope with 2TB and above might put off companies
with seriously massive databases from moving to Postgres
It's probably also worth checking what other values should be increased.
Cheers,
Gavin