You can always load a table into the cache by doing a seq scan on it, e.g. `SELECT count(1) FROM table` or something like that. That doesn't work for indexes, of course, but you can look in the system catalogs, find the file for the index, then just open() it from an external program and read it sequentially without caring about the data; that saves you the seeks in the index. Of course you'll have problems with file permissions, not to mention security, locking, etc., etc. Is that worth the trouble?
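The external-read trick above can be sketched roughly as follows. This is a minimal sketch, not a supported interface: the catalog query in the comment and the `$PGDATA/base/<dboid>/<relfilenode>` path layout are assumptions about how you would locate the index file, and `warm_file_cache` is a hypothetical helper name.

```python
def warm_file_cache(path, block_size=8192):
    """Read a file sequentially in 8 kB blocks (PostgreSQL's page size)
    so the OS pulls its pages into the filesystem cache.

    To find `path` for an index you would first look in the catalogs,
    e.g. something like:
        SELECT relfilenode FROM pg_class WHERE relname = 'my_index';
    and then locate that file under $PGDATA/base/<database oid>/
    (path layout is an assumption; check your cluster's data directory).
    """
    bytes_read = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:          # EOF: the whole file has been touched
                break
            bytes_read += len(chunk)
    return bytes_read
```

Note that this only warms the OS page cache, not PostgreSQL's own shared buffers, and it needs read permission on the data directory, which is exactly the permissions/security trouble mentioned above.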
On Wed, 3 Nov 2004 14:35:28 -0500, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
On Wed, Nov 03, 2004 at 12:12:43PM -0700, [EMAIL PROTECTED] wrote:
> That's correct - I'd like to be able to keep particular indexes in RAM available all the time
If these are queries that run frequently, then the relevant cache will probably remain populated. If they _don't_ run frequently, why do you want to force the memory to be used to optimise something that is uncommon? But in any case, there's no mechanism to do this.
There are in fact limits on the caching: if your data set is larger than memory, for instance, there's no way it will all stay cached. Also, VACUUM does nasty things to the cache; that nastiness is hopefully fixed in 8.0.