Yeah, with 500K shared buffers and many backends, we could achieve
noticeable savings with this. And that is why it will be difficult to show
the performance gains by running just pgbench/dbt2 on medium-scale machines.
One way of looking at it is that the memory saved here could go to more
critical use elsewhere... But agreed, it is hard to demonstrate with just a
few performance runs.
On 3/5/07, Stefan Kaltenbrunner <[EMAIL PROTECTED]> wrote:
Tom Lane wrote:
> NikhilS <[EMAIL PROTECTED]> writes:
>> What is the opinion of the list as to the best way of measuring if the
>> following implementation is ok?
>> As mentioned in earlier mails, this will reduce the per-backend memory
>> usage by an amount which will be a fraction (single-digit percentage)
>> of (NBuffers * sizeof(int)). I have done pgbench/dbt2 runs and I do not
>> see any impact because of this.
> I find it extremely telling that you don't claim to have seen any
> positive impact either.
> I think that the original argument
> is basically bogus. At 500000 buffers (4GB in shared memory) the
> per-backend space for PrivateRefCount is still only 2MB, which is
> simply not as significant as Simon claims; a backend needs at least
> that much for catalog caches etc. There is, furthermore, no evidence
> that running shared_buffers that high is a good idea in the first
> place, or that there aren't other performance bottlenecks that will
> manifest before this one becomes interesting.
hmm - we are continually running into people with dedicated servers that
have 16GB RAM or even more available, and most tuning docs recommend
dedicating some 20-30% of system RAM to shared_buffers. So having some
500k buffers allocated does not sound so unrealistic in practice, and
combined with the fact that people often have a few hundred backends,
that could add up to some noticeable overhead.
Whether that is actually a problem, given that those people tend to have
heaps of memory, is another story - but if we can preserve some memory ...