On 02/18/2018 02:41 PM, Vitaliy Garnashevich wrote:
> 
>> I certainly wouldn't recommend using 1/2 of RAM right away. There's
>> a good chance it would be a waste of memory - for example due to
>> double buffering, which effectively reduces "total" cache hit
>> ratio.
> 
> Double buffering is often mentioned in the context of tuning shared
> buffers. Is there a tool to actually measure the amount of double
> buffering happening in the system?
> 

I'm not aware of such a tool. But I suppose it could be done by
integrating information from pg_buffercache and pgfincore [1].

[1] https://github.com/klando/pgfincore
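
A very rough sketch of the kind of query I have in mind (untested; it
assumes both extensions are installed, 'my_table' is just a placeholder,
and the pgfincore column names such as pages_mem may differ between
versions):

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;
  CREATE EXTENSION IF NOT EXISTS pgfincore;

  -- For one relation: blocks cached in shared_buffers vs. pages cached by the OS.
  SELECT c.relname,
         buf.blocks_in_shared_buffers,   -- 8kB blocks
         fin.os_pages_cached             -- OS pages (usually 4kB)
    FROM pg_class c,
         LATERAL (SELECT count(*) AS blocks_in_shared_buffers
                    FROM pg_buffercache b
                   WHERE b.relfilenode = pg_relation_filenode(c.oid)
                     AND b.reldatabase = (SELECT oid FROM pg_database
                                           WHERE datname = current_database())
                 ) buf,
         LATERAL (SELECT sum(f.pages_mem) AS os_pages_cached
                    FROM pgfincore(c.oid) f
                 ) fin
   WHERE c.relname = 'my_table';

The smaller of the two numbers (after converting to the same page size)
is only an upper bound on the double buffering for that relation - to
get the actual overlap you'd have to match individual block numbers
from pg_buffercache against the per-page map pgfincore can optionally
return.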

>> Those evictions are performed either by backends or bgwriter, both
>> of which are less efficient than checkpointer. Not only can
>> checkpointer perform various optimizations (e.g. sorting buffers to
>> make the writes more sequential), but it also writes each dirty
>> buffer just once. With smaller shared_buffers the page may have to be
>> written multiple times.
> 
> In the case when shared_buffers covers most of RAM, most writes should
> be done by the checkpointer, and the cache hit ratio should be high. So
> a hypothetical question: Could shared_buffers=200GB on a 250 GB RAM
> server ever be a reasonable setting? (assuming there are no other
> applications running except postgres, and 50GB is enough for
> allocating work_mem/maintenance_work_mem and for serving queries)
> 

It depends on how large the active part of the data set is. If it fits
into 200GB but not into a smaller shared_buffers (say, 100GB), then
using 200GB may be a win.

If the active set is much larger than RAM, smaller shared_buffers
values work better in my experience.
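
One crude way to get a feel for that is the usage count distribution
in pg_buffercache - a sketch (just a heuristic, assuming the default
8kB block size):

  SELECT usagecount,
         count(*) AS buffers,
         pg_size_pretty(count(*) * 8192) AS size
    FROM pg_buffercache
   GROUP BY usagecount
   ORDER BY usagecount;

A NULL usagecount means the buffer is currently unused. Roughly
speaking, if nearly everything sits at usagecount 4-5 and there are no
unused buffers, the active set is probably larger than shared_buffers;
plenty of low usage counts (or free buffers) suggest it still fits.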


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
