Jeffrey Tenny wrote:
> I have a largely table-append-only application where most transactions
> are read-intensive and many are read-only.  The transactions may span
> many tables, and in some cases might need to pull 70 MB of data out of a
> couple of the larger tables.
> In 7.3, I don't seem to see any file system or other caching that helps
> with repeated reads of the 70MB of data.  Secondary fetches are pretty
> much as slow as the first fetch. (The 70MB in this example might take
> place via 2000 calls to a parameterized statement via JDBC).
> Were there changes after 7.3 w.r.t. caching of data? I read this list
> and see people saying that 8.0 will use the native file system cache to
> good effect.  Is this true? Is it supposed to work with 7.3?  Is there
> something I need to do to get postgresql to take advantage of large ram
> systems?
> Thanks for any advice.

Well, first off, the general recommendation is that 7.3 is really quite
old; you should upgrade to at least 7.4, and preferably to 8.0.

The bigger questions: How much RAM do you have? How busy is your system?

8.0 doesn't really do anything special to make the system cache the data.
What kernel are you using?

Also, if your tables are small enough, and your RAM is big enough, you
might already have everything cached.
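On Linux you can get a rough idea of how much of your data is already sitting in the page cache by looking at /proc/meminfo (the exact field names below assume a reasonably modern Linux kernel; other systems have their own tools):

```shell
#!/bin/sh
# Rough check of page-cache usage on Linux: "Cached" is the amount of
# RAM the kernel is using to cache file data (including table files).
grep -E '^(MemTotal|MemFree|Cached):' /proc/meminfo
```

If "Cached" is comparable to the size of your hot tables, repeated reads should already be served from RAM.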

One way to flush the caches is to allocate a large chunk of memory and
scan through it, or to mmap a really big file and access every byte.
But if your kernel is smart enough, it may deallocate those pages once
you stop accessing them, so I can't guarantee that these methods will
flush the cache. Usually, though, I believe they are sufficient.

