Rod Taylor wrote:
| ARC still helps, since it makes sure the shared_buffers don't all get
| flushed from the useful small datasets when a seq scan gets executed.

I'm still not convinced. Why does the last backend alive have to throw away
the bunch of memory copied into shared memory? And again, ARC is a
replacement policy for a cache: which cache?


As you know, ARC is a recent addition. I've not seen any benchmarks
demonstrating that the optimal SHARED_BUFFERS setting is different today
than it was in the past.

We know it's changed, but the old buffer strategy had just as hard a time
with a small buffer as with a large one. Does that mean the middle of the
curve is still at 15k buffers but the extremes are handled better? Or
something completely different?

Please feel free to benchmark 7.5 (OSDL folks should be able to help us
as well) and report back.

I know, I know.

We were discussing whether Postgres uses its own cache or not, and, for the
OP's benefit, whether it is possible to retrieve hit and miss information
from that cache.

For benchmarks, it may be better to look not at the particular implementation
done in PostgreSQL but at the general improvements that the ARC replacement
policy introduces. If I'm not wrong, until now Postgres was using an LRU;
you can find articles about it, like these:

http://www.almaden.ibm.com/StorageSystems/autonomic_storage/ARC/rj10284.pdf
http://www.almaden.ibm.com/cs/people/dmodha/arcfast.pdf

which show the improvements.
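To illustrate the point those papers make, here is a minimal, hypothetical Python sketch (not PostgreSQL's actual buffer manager): a plain LRU cache loses a small hot working set to one large sequential scan, while a simplified two-list policy in the spirit of ARC (no ghost lists, fixed T1 target, so far weaker than real ARC) keeps the hot set resident. Both caches also expose the hit/miss counters the OP asked about.

```python
from collections import OrderedDict

class LRUCache:
    """Plain LRU buffer cache with hit/miss counters."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.hits = self.misses = 0

    def access(self, page):
        if page in self.pages:
            self.hits += 1
            self.pages.move_to_end(page)        # refresh to MRU
        else:
            self.misses += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict LRU victim
            self.pages[page] = True

class TwoListCache:
    """Scan-resistant sketch: T1 holds pages seen once, T2 pages seen
    twice or more. A big scan only churns T1, so the hot set in T2
    survives. (Real ARC also keeps ghost lists and adapts the T1 target;
    this fixed split is a simplification.)"""
    def __init__(self, capacity):
        self.capacity = capacity
        self.t1_target = capacity // 2
        self.t1 = OrderedDict()
        self.t2 = OrderedDict()
        self.hits = self.misses = 0

    def access(self, page):
        if page in self.t1:
            self.hits += 1
            del self.t1[page]                   # second touch: promote
            self.t2[page] = True
        elif page in self.t2:
            self.hits += 1
            self.t2.move_to_end(page)
        else:
            self.misses += 1
            if len(self.t1) + len(self.t2) >= self.capacity:
                victim = self.t1 if len(self.t1) >= self.t1_target else self.t2
                victim.popitem(last=False)
            self.t1[page] = True

def run(cache):
    for _ in range(2):                 # warm a small hot set (two passes)
        for page in range(10):
            cache.access(page)
    for page in range(100, 1100):      # one big sequential scan
        cache.access(page)
    before = cache.hits
    for page in range(10):             # revisit the hot set
        cache.access(page)
    return cache.hits - before         # hot-set hits surviving the scan

# run(LRUCache(50)) -> 0 hot-set hits; run(TwoListCache(50)) -> 10
```

The scan wipes the LRU cache completely, while the two-list policy keeps all ten hot pages in T2, which is exactly the scan-resistance argument made for ARC over plain LRU.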

As you wrote, no one has yet run benchmarks demonstrating by "brute force"
that ARC is better, but on paper it should be.


Regards,
Gaetano Mendola
