I have a largely append-only application where most transactions are read-intensive and many are read-only. A transaction may span many tables, and in some cases needs to pull about 70 MB of data out of a couple of the larger tables.

On 7.3, I don't see any file-system or other caching helping with repeated reads of that 70 MB of data: secondary fetches are pretty much as slow as the first. (The 70 MB in this example is fetched via roughly 2000 calls to a parameterized statement over JDBC.)

Were there changes after 7.3 with respect to caching of data? I read on this list that 8.0 will use the native file-system cache to good effect. Is this true? Does it work with 7.3? Is there something I need to do to get PostgreSQL to take advantage of large-RAM systems?
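For reference, the only knobs I'm aware of are in postgresql.conf. A minimal sketch, assuming a machine with a few GB of RAM; the specific numbers here are illustrative, not recommendations:

```
# postgresql.conf -- illustrative values only
# On 7.x, shared_buffers is a count of 8 kB pages;
# 8192 pages = 64 MB of PostgreSQL's own buffer cache.
shared_buffers = 8192

# effective_cache_size tells the planner how much of the
# OS file-system cache it may assume is available, also
# in 8 kB pages here: 131072 pages = 1 GB.
effective_cache_size = 131072
```

As I understand it, effective_cache_size only influences the planner's cost estimates; shared_buffers is the memory PostgreSQL itself caches pages in, and everything beyond that is left to the operating system's file-system cache.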

Thanks for any advice.
