<lots of good stuff clipped>

> If you draw a graph of speedup (y) against cache size as a 
> % of total database size, the graph looks like an upside-down "L" - i.e.
> the graph rises steeply as you give it more memory, then turns sharply at a
> particular point, after which it flattens out. The "turning point" is the
> "sweet spot" we all seek - the optimum amount of cache memory to allocate -
> but this spot depends upon the workload and database size, not on available
> RAM on the system under test.
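(A toy sketch of the curve being described, with entirely invented numbers -- the 190MB working set and 10x ceiling are assumptions for illustration, not measurements from this thread. Speedup rises steeply with cache size, then flattens once the cache covers the working set:)

```python
# Toy model (assumed numbers, not measured data) of the "upside-down L"
# speedup curve: benefit grows until the cache covers the working set,
# then flattens -- the knee is the "sweet spot".

def speedup(cache_mb, working_set_mb=190.0, max_speedup=10.0):
    """Speedup saturates once cache_mb covers the working set."""
    hit_fraction = min(cache_mb / working_set_mb, 1.0)
    # Assume time saved is proportional to the cache hit fraction.
    return 1.0 / (1.0 - hit_fraction * (1.0 - 1.0 / max_speedup))

for mb in (8, 50, 100, 190, 380):
    print("%4d MB -> %5.2fx" % (mb, speedup(mb)))
```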

Hmmm ... how do you explain, then, the "camel hump" shape of the real 
performance curve? That is, when we allocated even a few MB more than the 
"optimum" ~190MB, overall performance started to drop quickly. The result is 
that allocating 2x the optimum RAM is nearly as bad as allocating too little 
(e.g. 8MB).

The only explanation I've heard for this so far is that there is a significant 
loss of efficiency with larger caches. Or do you think the loss of 200MB out 
of 3500MB would really affect the kernel cache that much?
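(One way to make the "loss of efficiency with larger caches" idea concrete is a toy model where the benefit of shared_buffers saturates at the working-set size, while a per-MB management cost -- buffer-manager bookkeeping, double buffering against the kernel cache -- keeps growing. Every coefficient below is a guess for illustration, not a measurement; it only shows that such a model produces a hump rather than a plateau:)

```python
# Hypothetical model (coefficients invented, not measured) of a "camel
# hump": a saturating gain from caching the ~190MB working set, divided
# by a management cost that grows linearly with shared_buffers size.

def throughput(shared_buffers_mb, working_set_mb=190.0,
               max_gain=10.0, overhead_per_mb=0.002):
    hit = min(shared_buffers_mb / working_set_mb, 1.0)
    gain = 1.0 + (max_gain - 1.0) * hit                  # saturates at knee
    penalty = 1.0 + overhead_per_mb * shared_buffers_mb  # keeps growing
    return gain / penalty

for mb in (8, 100, 190, 380, 800):
    print("%4d MB -> %4.2f" % (mb, throughput(mb)))
```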

Anyway, one test of your theory that I can run immediately is to run the exact 
same workload on a bigger, faster server and see whether the desired quantity 
of shared_buffers is roughly the same. I'm hoping that you're wrong -- not 
because I don't find your argument persuasive, but because if you're right it 
leaves us without any reasonable way to recommend shared_buffers settings.


Josh Berkus
Aglio Database Solutions
San Francisco
