On 11/14/14, 5:00 PM, Mark Kirkwood wrote:

> ... as the 'rule of thumb' for setting shared_buffers. However, I was recently
> benchmarking a machine with a lot of RAM (1TB) and entirely SSD storage [1], and
> it seemed quite happy with 50GB of shared_buffers (better performance than with
> 8GB). Since shared_buffers was not the variable we were concentrating on, I
> didn't get too carried away and try much bigger than about 100GB - but this
> seems like a good thing to come up with some numbers for, i.e. pgbench
> read-write and read-only tps vs. shared_buffers from 1GB to 100GB.
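
For concreteness, a driver for that kind of sweep could look like the sketch
below. Everything in it is assumed for illustration - the data directory,
client/thread counts, run duration, and the list of sizes are placeholders -
though pgbench and pg_ctl themselves are the standard tools:

#!/usr/bin/env python3
# Hypothetical sketch: sweep shared_buffers and record pgbench tps.
# Assumptions (not from the original post): PGDATA path, client/thread
# counts, duration, and the size list are all placeholders; connection
# defaults (local socket, current user) are assumed for pgbench.
import re
import subprocess

PGDATA = "/var/lib/postgresql/data"   # assumed data directory
SIZES = ["1GB", "8GB", "16GB", "32GB", "50GB", "100GB"]

def restart_with_buffers(size):
    # shared_buffers requires a restart; pass it as a server option.
    subprocess.run(["pg_ctl", "restart", "-D", PGDATA, "-w",
                    "-o", f"-c shared_buffers={size}"], check=True)

def run_pgbench(read_only):
    args = ["pgbench", "-c", "32", "-j", "8", "-T", "300"]
    if read_only:
        args.append("-S")             # built-in select-only script
    out = subprocess.run(args, capture_output=True, text=True,
                         check=True).stdout
    m = re.search(r"tps = ([\d.]+)", out)
    return float(m.group(1)) if m else None

# The dataset (pgbench -i -s <scale>, run once beforehand) should be
# comfortably larger than the biggest shared_buffers tested, so that
# buffer eviction actually happens.
for size in SIZES:
    restart_with_buffers(size)
    rw = run_pgbench(read_only=False)
    ro = run_pgbench(read_only=True)
    print(f"{size}: read-write {rw} tps, read-only {ro} tps")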

What PG version?

One of the huge issues with large shared_buffers is the immense overhead you end
up with for running the clock sweep, and on most systems that overhead is borne
by every backend individually. You will only see that overhead if your database
is larger than shared_buffers, because you only pay it when you need to evict a
buffer. I suspect you'd actually need a database at least 2x the size of
shared_buffers for it to really start showing up.
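
To make that cost concrete, here is a toy model of the clock-sweep algorithm
PostgreSQL uses for buffer eviction (a simplification for illustration, not the
actual bufmgr code): each buffer carries a usage count, and an evicting backend
walks the ring decrementing counts until it finds a zero. The larger and hotter
the pool, the more buffers each eviction may have to touch.

# Toy model of clock-sweep buffer replacement (a simplification,
# not the real PostgreSQL bufmgr code).
class BufferPool:
    MAX_USAGE = 5                      # PostgreSQL caps usage_count at 5

    def __init__(self, nbuffers):
        self.usage = [0] * nbuffers    # usage_count per buffer
        self.hand = 0                  # the clock hand

    def pin(self, buf):
        # Each access bumps the usage count, up to the cap.
        self.usage[buf] = min(self.usage[buf] + 1, self.MAX_USAGE)

    def evict(self):
        # Sweep: decrement usage counts until a zero is found. This loop
        # is the per-eviction cost each backend pays; with a huge pool of
        # hot buffers it can touch many buffers per call.
        steps = 0
        while True:
            steps += 1
            if self.usage[self.hand] == 0:
                victim = self.hand
                self.hand = (self.hand + 1) % len(self.usage)
                return victim, steps
            self.usage[self.hand] -= 1
            self.hand = (self.hand + 1) % len(self.usage)

pool = BufferPool(nbuffers=8)
for b in range(8):
    pool.pin(b)                        # make every buffer 'hot'
victim, steps = pool.evict()
print(f"evicted buffer {victim} after sweeping {steps} buffers")

With all eight buffers hot, that single eviction sweeps the entire ring before
finding a victim - which is why a database well beyond shared_buffers in size
(forcing constant eviction) is needed before the overhead shows up.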

> [1] I may be in a position to benchmark the machines these replaced at some
> not too distant time. These are the previous generation (0.5TB RAM, 32 cores
> and all SSD storage), but probably still good for this test.

Awesome! If there's a possibility of developers getting direct access, I suspect
folks on -hackers would be interested. If not, but you're willing to run tests
for folks, they'd still be interested. :)
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


