On 1/13/14, 3:04 PM, Jeff Janes wrote:

I think the above is pretty simple in terms of both the interaction (allow us to inject a 
clean page into the file page cache) and the policy (forget the page after you hand it to 
us, then remember it again when we hand it back to you clean).  And I think it 
would pretty likely be an improvement over what we currently do.  But I think 
it is probably the wrong way to get that improvement: the real problem 
is that we don't trust ourselves to manage more of the memory on our own.
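
For what it's worth, the "forget it after you hand it to us" half of that 
policy can be roughly approximated today with posix_fadvise().  A minimal 
sketch follows, with the caveat that the advice is purely advisory, the 
helper name is made up for illustration, and there is no matching call for 
the "remember it again" half:

/*
 * Sketch: after a block has been read into shared_buffers, hint the
 * kernel that its copy in the file page cache can be dropped.  The
 * kernel is free to ignore this; this is not existing backend code.
 */
#include <fcntl.h>
#include <unistd.h>

static void
drop_os_copy(int fd, off_t blockno, size_t blcksz)
{
	/* Best effort: the advice is non-binding, so ignore errors. */
	(void) posix_fadvise(fd, blockno * (off_t) blcksz, (off_t) blcksz,
						 POSIX_FADV_DONTNEED);
}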

As far as I know, we still don't have a publicly disclosable, readily 
reproducible test case for the reports of performance degradation with 
more than 8GB in shared_buffers. If we had one, we could likely reduce 
the double-buffering problem by fixing our own scalability issues and thereby 
taking responsibility for more of the data ourselves.
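
Purely to illustrate the shape such a test case would need to take (every 
detail below is a placeholder: the table, the query, and the iteration count; 
none of this is the missing reproduction), I'd expect something like a timed 
libpq loop over a data set larger than shared_buffers, run once at 8GB and 
once well above it:

/*
 * Hypothetical harness skeleton: time a fixed, read-mostly workload so
 * the same run can be compared across shared_buffers settings.
 * Compile with -lpq; connection parameters come from the PG*
 * environment variables.
 */
#include <stdio.h>
#include <time.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");
	struct timespec t0, t1;
	int			i;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < 1000; i++)
	{
		/* pgbench_accounts is just a stand-in for a large table */
		PGresult   *res = PQexec(conn,
								 "SELECT count(*) FROM pgbench_accounts");

		if (PQresultStatus(res) != PGRES_TUPLES_OK)
			fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
		PQclear(res);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.3f s for 1000 iterations\n",
		   (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	PQfinish(conn);
	return 0;
}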

While I agree we need to fix the 8GB limit, we're always going to have a 
problem here unless we add a LOT of new capabilities to our own memory management. 
Like, for example, stealing memory from shared buffers to back a sort. Or 
enforcing a system-wide limit on work_mem. Or both.
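
To make the second idea concrete, a system-wide work_mem limit presumably 
means a shared budget that a backend reserves against before allocating sort 
memory, spilling to disk when the reservation fails.  A minimal sketch, where 
every name is hypothetical and the counter would really have to live in 
shared memory rather than in a static variable:

/*
 * Hypothetical shape of a global work_mem budget.  None of this exists
 * in the backend today: reserve before allocating, release when done,
 * and fall back to a disk-based plan when the budget is exhausted.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdatomic.h>

/* In reality these would live in shared memory / be driven by a GUC. */
static _Atomic size_t global_work_mem_used;
static size_t global_work_mem_limit = (size_t) 8 << 30;	/* e.g. 8GB */

static bool
reserve_work_mem(size_t nbytes)
{
	size_t		used = atomic_load(&global_work_mem_used);

	/* CAS loop so concurrent backends can't overcommit the budget. */
	while (used + nbytes <= global_work_mem_limit)
	{
		if (atomic_compare_exchange_weak(&global_work_mem_used,
										 &used, used + nbytes))
			return true;	/* reservation granted */
	}
	return false;			/* budget exhausted: caller should spill */
}

static void
release_work_mem(size_t nbytes)
{
	atomic_fetch_sub(&global_work_mem_used, nbytes);
}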

I would much rather teach the OS and Postgres to work together on memory 
management than try to re-implement everything the OS has already 
done for us.
--
Jim C. Nasby, Data Architect                       j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net


