Trying to think a little out of the box, how "common" is it in modern operating systems to be able to swap out shared memory?

Maybe we're not using the ARC algorithm correctly after all. The ARC algorithm does not consider the second-level OS buffer cache in its design. Maybe the total size of the ARC cache directory should not be 2x the size of the configured shared buffer cache, but rather 2x the effective cache size (in 8k pages).

If we assume that the job of the T1 queue is better done by the OS buffers anyway (and this is what this discussion seems to point out), we shouldn't hold those pages in shared buffers at all (only very few of them, and evict them ASAP). We just account for them in the directory and assume that the OS has cached whatever we find in our T1 directory. With the right setting for effective cache size, I think this is a fair assumption. The T2 queue then represents the frequently used blocks. If our implementation keeps the used and unused portions of the shared buffers from constantly moving around, the OS will swap out the currently unused portions of the shared buffer cache and utilize that memory as OS buffers.
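To make the proposed sizing change concrete, here is a small sketch in Python. The function and variable names are illustrative only (not PostgreSQL internals), and it assumes the default 8k page size:

```python
# Hypothetical sketch of the proposed ARC directory sizing: track 2x the
# effective cache size instead of 2x shared_buffers, on the assumption
# that T1 pages mostly live in the OS buffer cache, not in shared buffers.

PAGE_SIZE = 8192  # default PostgreSQL block size

def current_directory_size(shared_buffers_pages: int) -> int:
    """Today's behavior: directory is 2x the configured shared buffer cache."""
    return 2 * shared_buffers_pages

def proposed_directory_size(effective_cache_size_pages: int) -> int:
    """Proposed: directory is 2x the effective cache size, in 8k pages."""
    return 2 * effective_cache_size_pages

# Example: 64 MB shared buffers vs. 1 GB effective cache size.
shared    = (64 * 1024 * 1024) // PAGE_SIZE          # 8192 pages
effective = (1024 * 1024 * 1024) // PAGE_SIZE        # 131072 pages

print(current_directory_size(shared))     # 16384 directory entries today
print(proposed_directory_size(effective)) # 262144 entries proposed
```

The point of the larger directory is only bookkeeping: T1 entries would mostly be "ghost" accounting for pages we expect the OS to hold, while real shared buffers are reserved for the T2 working set.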

To verify this theory, it would be interesting to see what T2 size the ARC strategy considers good after a long DBT run with a "large" buffer cache. Enabling the strategy debug messages and running the postmaster with -d1 will show that. In theory, this size should be somewhere near the sweet spot.


# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #

