Greg Stark <[EMAIL PROTECTED]> writes:
> 8.0, on the other hand, has a new algorithm that specifically tries to
> protect against the shared buffers being blown out by a sequential
> scan. But that will only help if it's the shared buffers being
> thrashed that's hurting you, not the entire OS file system cache.
Something we ought to think about sometime: what are the performance implications of the real-world situation that we have another level of caching sitting underneath us? AFAIK all the theoretical studies we've looked at consider only a single level of caching.

For example, if our buffer management algorithm recognizes an index page as being heavily hit and therefore keeps it in cache for a long time, then when it finally does fall out of our cache you can be sure it's going to need to be read from disk on its next use, because the OS-level buffer cache has not seen a request for that page in a long time. Contrariwise, a page that we think is only on the fringe of usefulness is going to stay in the OS cache, because we repeatedly drop it and then have to ask for it again.

I have no idea how to model this situation, but it seems like it needs some careful thought.

			regards, tom lane
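A toy model makes the dynamic concrete: stack two pure-LRU caches and let the lower one see only the upper one's misses. This is a sketch under invented assumptions (neither shared buffers nor a kernel page cache is really pure LRU, and the cache sizes and page names are made up), but the effect falls out immediately:

    from collections import OrderedDict

    class LRU:
        """Pure-LRU cache: access() reports a hit, or installs the
        page on a miss, evicting the least recently used entry."""
        def __init__(self, size):
            self.size = size
            self.pages = OrderedDict()

        def access(self, page):
            if page in self.pages:
                self.pages.move_to_end(page)    # refresh recency
                return True
            if len(self.pages) >= self.size:
                self.pages.popitem(last=False)  # evict LRU entry
            self.pages[page] = True
            return False

    upper = LRU(8)      # stands in for shared buffers
    lower = LRU(64)     # stands in for the OS buffer cache
    disk_reads = 0

    def read(page):
        # The lower cache sees only the upper cache's misses;
        # upper-level hits are invisible to it.
        global disk_reads
        if upper.access(page):
            return
        if not lower.access(page):
            disk_reads += 1

    HOT = "hot-index-page"
    for i in range(10_000):
        read(HOT)                     # heavily hit: pinned up top by recency
        read("fringe-%d" % (i % 64))  # fringe pages: always evicted up top

    # After the cold start, every fringe access hits the lower cache,
    # while HOT, having generated no OS-level reads in 10,000 iterations,
    # was quietly evicted down there.  Now push HOT out of the upper cache:
    for i in range(16):
        read("burst-%d" % i)

    before = disk_reads
    read(HOT)
    print("HOT's next use went all the way to disk:", disk_reads > before)

The fringe pages stay warm below precisely because they keep missing above, while the heavily hit page pays a full disk read the moment the upper level finally lets go of it: exactly the inverted outcome described in the mail.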