> My concern is that this kind of testing has very little relevance to the 
> real world of multiuser processing where contention for the cache becomes an 
> issue.  It may be that, at least in the current situation, postgres is 
> giving too much weight to seq scans based on single user, straight line 

To be fair, a large index scan can throw the buffers out of whack just
as easily. An index scan hitting 0.1% of a table with 1 billion tuples
will have a similar impact on the buffers as a sequential scan of a
table with 1 million tuples.
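
Putting rough numbers on it (tuple counts only; how many 8kB pages
actually get touched depends on where the matching rows sit):

    index scan:  0.1% of 1,000,000,000 tuples = 1,000,000 tuples read
    seq scan:    100% of     1,000,000 tuples  = 1,000,000 tuples read

Either way a million tuples get dragged through the cache.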

Any solution to the buffer problem should probably be based not on the
access method used (do you really want to skip caching a sequential
scan of a 2-tuple table just because it didn't use an index?) but on
the volume of data involved relative to the size of the cache.
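
A minimal standalone sketch of that idea (this is not PostgreSQL code;
should_bypass_cache(), the page counts and the 25% threshold are all
made up for illustration):

#include <stdbool.h>
#include <stdio.h>

/*
 * Decide whether a scan should bypass the shared buffer cache (or use
 * only a small ring of buffers), based purely on the volume of data it
 * is expected to read relative to the cache size -- not on whether it
 * is a seq scan or an index scan.
 */
static bool
should_bypass_cache(long pages_to_read, long cache_pages)
{
    /* Arbitrary threshold: anything touching more than a quarter of
     * the cache would evict too much useful data to be worth caching. */
    return pages_to_read > cache_pages / 4;
}

int
main(void)
{
    long cache_pages = 131072;          /* 1GB of 8kB buffers */

    /* seq scan of a 2 tuple table: one page, keep it cached */
    printf("tiny seq scan: bypass=%d\n",
           should_bypass_cache(1, cache_pages));

    /* any scan expected to read ~50,000 pages (~400MB) gets bypassed,
     * whether it came through an index or not */
    printf("large scan:    bypass=%d\n",
           should_bypass_cache(50000, cache_pages));

    return 0;
}

The tiny seq scan stays cached and the big scan gets pushed out of the
way, without ever looking at which access method produced it.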

I've often wondered whether a single 1GB toasted tuple could wipe out
the buffers. I would assume TOAST doesn't bypass them.
-- 
Rod Taylor <[EMAIL PROTECTED]>

