On Tue, Feb 03, 2004 at 13:54:17 +0100,
David Teran <[EMAIL PROTECTED]> wrote:
> we are trying to speed up a database which has about 3 GB of data. The
> server has 8 GB RAM and we wonder how we can ensure that the whole DB
> is read into RAM. We hope that this will speed up some queries.
The OS should do this on its own. What you don't want to do is set
shared_buffers (in postgresql.conf) too high. From what I remember
from past discussions it should be somewhere between about 1000 and
10000 (it is measured in 8 kB buffers, so that is roughly 8-80 MB).
sort_mem is trickier, since that memory is allocated per sort and a
single query can potentially run several sorts in parallel. You will
have to estimate how many sorts will run concurrently when the system
is under load, and size sort_mem so the total doesn't use too much memory.
You might also find that after a point you don't get a lot of benefit
from increasing sort_mem.
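As a rough illustration of the per-sort arithmetic (the figures below are hypothetical, not recommendations):

```
# sort_mem is in kB and is allocated per sort, not per backend.
sort_mem = 8192                 # 8 MB per sort (hypothetical)
# If a stressed system runs ~25 sorts concurrently, worst-case
# sort memory is 25 * 8 MB = 200 MB -- budget for that, not for
# a single sort.
```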
You should set effective_cache_size pretty large. Start from the 8GB
of memory and subtract the space used by shared_buffers, sort_mem
times the estimated number of concurrent sorts, and what you think is
reasonable overhead for other processes; what remains is roughly what
the OS has available for caching.
Since your DB's disk blocks will almost certainly all be in buffer cache,
you are going to want to set random_page_cost to be pretty close to 1.
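Putting it together, the relevant postgresql.conf lines might look something like this for an 8GB machine (all values are illustrative assumptions to show the arithmetic, not tested recommendations; effective_cache_size is measured in 8 kB disk pages):

```
shared_buffers = 10000          # 8 kB buffers -> ~80 MB, top of the range above
sort_mem = 8192                 # kB per sort; multiply by expected concurrent sorts
# 8 GB minus ~80 MB shared_buffers, ~200 MB worst-case sorts, and
# ~1 GB assumed for the OS and other processes leaves roughly 6.7 GB;
# 6.7 GB / 8 kB per page is about 875000 pages:
effective_cache_size = 875000
random_page_cost = 1.5          # close to 1, since the data fits in cache
```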