Hi,
I'm really stuck and I wonder if any of you could help.
I have an application which will be sitting on a quite large database
(roughly 8-16GB). The nature of the application is such that, on a
second-by-second basis, the working set of the database is likely to be a
substantial portion (e.g.
The disk cache on most operating systems is well optimized for exactly this
kind of workload. Plus, keeping shared_buffers low gives you more room to
bump up the sort memory, which will make your big queries run faster.
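To make the trade-off concrete, here is a minimal sketch of the kind of tuning being described, for a PostgreSQL setup of that era (parameter names like sort_mem reflect the 7.x series; all values are illustrative assumptions, not recommendations):

```ini
# postgresql.conf -- illustrative values only, for a box with ~16GB RAM
# Keep shared_buffers modest and let the OS page cache hold the bulk of the data.
shared_buffers = 10000          # ~80MB at 8kB pages
# Give large sorts room to run in memory instead of spilling to disk.
# Note: this is per sort, per backend, so watch your concurrency.
sort_mem = 65536                # 64MB
# Tell the planner how much data the OS is likely to be caching for you.
effective_cache_size = 1500000  # ~12GB at 8kB pages
```

The point of the split: memory handed to shared_buffers is unavailable for per-backend sort space, while data left to the OS cache is still served from RAM either way.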
Thanks merlin,
Whether the OS caches the data or PG does, you still want it cached. If your
sorting backends
Thanks, Chris.
What is it about the buffer cache that makes it so unhappy being able to
hold everything? I don't want to be seen as a cache hit fascist, but isn't
it just better if the data is just *there*, available in the postmaster's
address space, ready for each backend process to
Rogers [EMAIL PROTECTED]
To: Andy Ballingall [EMAIL PROTECTED]
Sent: Friday, July 09, 2004 10:40 PM
Subject: Re: [PERFORM] Working on huge RAM based datasets
On Fri, 2004-07-09 at 02:28, Andy Ballingall wrote:
After all, we're now seeing the first wave of 'reasonably priced' 64-bit
servers
Sorry for the late reply - I've been away.
Merlin, I'd like to come back with a few more points!
That's the whole point: memory is a limited resource. If pg is
crawling, then the problem is simple: you need more memory.
My posting only relates to the scenario where RAM is not a limiting resource.
a close coupling between the apache
server responsible for a region and the database it hits.
Any insights gratefully received!
Andy Ballingall
Why not just query adjacent databases, rather than copying the data around?
The reasons I didn't choose this way were:
1) I didn't think there's a way to write a query that can act on the data in
two databases as though it were all in one, and I didn't want to get into
merging multiple database
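For what it's worth, contrib/dblink (available in that era's PostgreSQL) can pull rows from an adjacent database inside a single query, though the planner won't treat the two databases as one unit. A hedged sketch, where the connection string and all table/column names are hypothetical:

```sql
-- Join local rows against rows fetched from an adjacent region's database
-- via contrib/dblink. dblink() returns SETOF record, so the caller must
-- declare the remote column list and types in the alias.
SELECT local.id, local.name, remote.score
FROM regions AS local
JOIN dblink('dbname=adjacent_region',
            'SELECT id, score FROM regions')
       AS remote(id integer, score integer)
  ON remote.id = local.id;
```

The remote query runs to completion on the other database and its result set is materialized before the local join happens, so this avoids copying data around permanently but does not give you cross-database planning.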