On Saturday, February 9, 2013, Scott Marlowe wrote:

> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <charle...@outlook.com> wrote:
> >>> I've benchmarked shared_buffers with high and low settings; in a server
> >>> dedicated to postgres with 48GB, my settings are:
> >>> shared_buffers = 37GB
> >>> effective_cache_size = 38GB
> >>>
> >>> Having a small number and depending on OS caching is unpredictable; if
> >>> the server is dedicated to postgres you want to make sure postgres has
> >>> the memory. A random unrelated process doing a cat /dev/sda1 should not
> >>> destroy postgres buffers.
> >>> I agree your problem is mostly related to dirty_background_ratio, where
> >>> buffers are READ only and have nothing to do with disk writes.
> >>
> >> You make an assertion here but do not tell us of your benchmarking
> >> methods.
> >
> > Well, he is not the only one committing that sin.
>
> I'm not asking for a complete low-level view.  But it would be nice to
> know if he's benchmarking heavy read or write loads, lots of users, a
> few users, something.  All we get is "I've benchmarked a lot" followed
> by "don't let the OS do the caching."  At least with my testing I was
> using a large transactional system (heavy write) and there I KNOW from
> testing that large shared_buffers do nothing but get in the way.
>

Can you see this with pgbench workloads? (It is certainly write-heavy.)

I've tried to reproduce these problems, and was never able to.
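
For reference, the kind of run I have in mind looks roughly like this (the
database name, scale factor, client count, and duration are only
illustrative; the point is repeating a write-heavy TPC-B-style load under
different shared_buffers settings):

    createdb bench                     # throwaway database, name is just for illustration
    pgbench -i -s 1000 bench           # initialize; scale 1000 is roughly a 15GB data set
    pgbench -c 32 -j 4 -T 600 bench    # 32 clients, 10 minutes of the default (write-heavy) script
    # repeat after changing shared_buffers (e.g. 2GB vs. 32GB) and restarting,
    # then compare the reported tps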


>
> All the rest of the stuff you mention is why we have effective_cache_size,
> which tells postgresql about how much of the data CAN be cached.
>

The effective_cache_size setting does not figure into any of the things I
mentioned.
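
It is only an input to the planner's cost model (roughly, how much of the
data the planner may assume is already cached somewhere), so it influences
plan choice rather than memory use. Purely as an illustration, against the
pgbench tables from the sketch above:

    # effective_cache_size can be changed per session; it reserves nothing
    psql -d bench -c "SET effective_cache_size = '512MB'; EXPLAIN SELECT abalance FROM pgbench_accounts WHERE aid BETWEEN 1 AND 500000;"
    psql -d bench -c "SET effective_cache_size = '38GB';  EXPLAIN SELECT abalance FROM pgbench_accounts WHERE aid BETWEEN 1 AND 500000;"
    # compare the estimated costs (and possibly the chosen plan) between the two runs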

Cheers,

Jeff
