[EMAIL PROTECTED] wrote:
max_connections = 160
shared_buffers = 2048 [Total = 2.5 GB]
sort_mem = 8192 [Total = 1280 MB]
vacuum_mem = 16384
effective_cache_size = 128897 [= 1007 MB = ~1 GB]
Will it be more suitable for my server than before?
I would keep shared_buffers in the 10000-20000 range, as this is
allocated *once* into shared memory, so it only uses 80-160 MB in *total*.
You mean that if I increase shared_buffers to around 12000 [160
connections], this will not affect the memory usage?
shared_buffers = 12000 will use 12000 * 8192 bytes (i.e. about 96 MB). It is
shared, so no matter how many connections you have, it will only use that 96 MB.
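For reference, pulling the figures from this thread together, a postgresql.conf sketch might look like the following (these are just the numbers under discussion here, not a recommendation for any other server):

  # Sketch only, using the figures discussed in this thread
  max_connections = 160
  shared_buffers = 12000          # ~96 MB, allocated once and shared by all backends
  sort_mem = 8192                 # 8 MB per sort, per backend: worst case ~1.3 GB with 160 connections
  vacuum_mem = 16384
  effective_cache_size = 128897   # ~1 GB; a planner hint, no memory is actually allocated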
The lower sort_mem will help reduce memory pressure (as this is
allocated for every backend connection) and this will help performance -
*unless* you have lots of queries that need to sort large datasets. If
so, then these will hammer your i/o subsystem, possibly canceling any
gain from freeing up more memory. So there is a need to understand what
sort of workload you have!
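If you are unsure whether your workload contains such sorts, one way to check (the table and query below are placeholders, purely illustrative) is to EXPLAIN ANALYZE one of your heavier queries, see whether its plan contains a Sort step, and then compare timings after a per-session bump of sort_mem:

  -- Placeholder query: substitute one of your own reports that sorts many rows.
  EXPLAIN ANALYZE
  SELECT customer_id, sum(amount) AS total
  FROM orders
  GROUP BY customer_id
  ORDER BY total DESC;

  -- Give just this session more sort memory (value is in kB), then re-run the EXPLAIN ANALYZE.
  SET sort_mem TO 32768;

  -- Put it back when you are done experimenting.
  RESET sort_mem;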
Will increasing effective_cache_size to around 200000 make a little
improvement? Do you think so?
I would leave it at the figure you proposed (128897), and monitor your
performance.
(you can always increase it later and see what the effect is).
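Since effective_cache_size is only a hint to the planner (nothing is allocated), a low-risk way to test the larger figure, sketched here with a placeholder query, is to set it for one session and see whether any plans actually change:

  SHOW effective_cache_size;

  -- Baseline plan for a representative query (placeholder).
  EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

  -- Try the larger value for this session only (units are 8 kB disk pages).
  SET effective_cache_size = 200000;
  EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

  -- If the two plans are identical, the change makes no difference for this query.
  RESET effective_cache_size;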
regards
Mark