Scott Marlowe wrote:
On Fri, 2004-08-06 at 22:02, Martin Foster wrote:

Scott Marlowe wrote:
On Fri, 2004-08-06 at 17:24, Gaetano Mendola wrote:

Martin Foster wrote:

Gaetano Mendola wrote:

Let's start from your postgres configuration:

shared_buffers = 8192 <==== This is really too small for your configuration
sort_mem = 2048


wal_buffers = 128 <==== This is really too small for your configuration

effective_cache_size = 16000

change this values in:

shared_buffers = 50000
sort_mem = 16084

wal_buffers = 1500

effective_cache_size = 32000


To bump up the SHM usage you have to configure your OS so that it is allowed to use that amount of SHM.
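
For example, on Linux or FreeBSD that means raising the kernel SHM limits; a rough sketch (the numbers below are only illustrative, size shmmax to the shared memory postgres will actually request):

# Linux, /etc/sysctl.conf -- shmmax is in bytes, shmall in pages
kernel.shmmax = 500000000
kernel.shmall = 2097152

# FreeBSD, /etc/sysctl.conf
kern.ipc.shmmax=500000000
kern.ipc.shmall=131072

Then reload the settings (e.g. sysctl -p on Linux) before restarting postgres.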

These are the numbers that feel right for your HW; the second step now is
to analyze your queries


These changes have yielded some visible improvements, with load averages rarely going over anything noticeable. However, I do have a question on the matter: why do these values seem to be far higher than what a frequently pointed-to document would indicate as necessary?


http://www.varlena.com/GeneralBits/Tidbits/perf.html

I am simply curious, as this shows that my understanding of PostgreSQL is clearly lacking when it comes to tuning for the hardware.

Unfortunately there is no "tuning wizard" for postgres, so each of us has his own "school". The values I gave you are oversized, to be sure of achieving improvements. Now you can start to decrease them (starting from wal_buffers) in order to find the good compromise with your HW.


FYI, my school of tuning is to change one thing at a time by some
reasonable percentage (shared_buffers from 1000 to 2000) and measure the
change under simulated load (a sketch of one way to generate such a load
follows below).  Make another change, test it, chart the
shape of the change line.  It should look something like this for most
folks:

shared_buffers | q/s (more is better)
100 | 20
200 | 45
400 | 80
1000 | 100
... levels out here...
8000 | 110
10000 | 108
20000 | 40
30000 | 20

Note how it goes back down once we exceed our memory and start swapping
shared_buffers.  Where that happens on your machine is determined by
many things like your machine's memory, memory bandwidth, type of load,
etc... but it will happen on most machines, and when it does, it often
happens at the worst times, under heavy parallel load.

Unless testing shows it's faster, 10000 or 25% of mem (whichever is
less) is usually a pretty good setting for shared_buffers.  Large data
sets may require more than 10000, but going over 25% on machines with
large memory is usually a mistake, especially on servers that do anything
other than just PostgreSQL.
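
To put numbers on that rule of thumb with a made-up example: on a 2 GB machine, 25% of memory is 512 MB, which at 8 kB per buffer would be 65536 shared_buffers; that's more than 10000 (about 80 MB), so 10000 is the figure to use there.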

You're absolutely right about one thing, there's no automatic wizard for
tuning this stuff.


Which rather points out the crux of the problem. This is a live system, meaning changes need to be as informed as possible, and changing values just for the sake of testing can lead to problems in service.


But if you make those changes slowly, as I was describing, you should see
the small deleterious effects long before they become
catastrophic. Just jumping shared_buffers to 50000 is not a good idea,
especially if the sweet spot is likely lower than that.

As you can see, 50000 is less than 20% of his total memory (50000 buffers * 8 kB is about 400 MB), and I strongly feel that 50000 is not oversized for his hardware (as wal_buffers isn't); maybe it could be for his database activity, but for sure that value (values) cannot be a source of problems.

I'd like to have a wizard that could be run even for hours in order to find the
good compromise for all GUC parameters; maybe a genetic algorithm can help.

Regards
Gaetano Mendola
