Tom Lane wrote: <snip>
<snip>What I would really like to do is set the default shared_buffers to 1000. That would be 8 meg worth of shared buffer space. Coupled with more-realistic settings for FSM size, we'd probably be talking a shared memory request approaching 16 meg. This is not enough RAM to bother any modern machine from a performance standpoint, but there are probably quite a few platforms out there that would need an increase in their stock SHMMAX kernel setting before they'd take it.
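A quick sanity check on those figures (a hypothetical back-of-envelope sketch, not PostgreSQL's actual shared-memory formula; the helper name and the FSM/overhead parameters are made up for illustration):

```python
# Rough estimate of the shared memory request, assuming the standard
# 8 kB buffer block size. Illustrative sketch only, not the real formula.
BLOCK_SIZE = 8192  # bytes per shared buffer

def shmem_estimate(shared_buffers, fsm_bytes=0, other_bytes=0):
    """Very rough shared memory needed, in bytes (hypothetical helper)."""
    return shared_buffers * BLOCK_SIZE + fsm_bytes + other_bytes

print(shmem_estimate(1000))  # 8192000 bytes, i.e. ~8 MB of buffer space
# With roughly another 8 MB for FSM and other structures, the total
# request approaches the 16 MB figure mentioned above:
print(shmem_estimate(1000, fsm_bytes=8 * 1024 * 1024) / (1024 * 1024))
```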
Totally agree with this. We really, really, really, really need to get the default to a point where we have _decent_ default performance.
Yep.

<snip>The alternative approach is to leave the settings where they are, and to try to put more emphasis in the documentation on the fact that the factory-default settings produce a toy configuration that you *must* adjust upward for decent performance. But we've not had a lot of success spreading that word, I think. With SHMMAX too small, you do at least get a pretty specific error message telling you so. Comments?
Here's an *unfortunately very common* scenario that, again unfortunately, a _seemingly large_ number of people fall for.
a) Someone decides to "benchmark" database XYZ vs PostgreSQL vs other databases
b) Said benchmarking person knows very little about PostgreSQL, so they install the RPMs, packages, or whatever, and "it works". Then they run whatever benchmark they've downloaded or designed.
c) PostgreSQL, being practically unconfigured, runs at the pace of a slow, mostly-disabled snail.
d) Said benchmarking person gets better performance from the other databases (also set to their default settings) and thinks "PostgreSQL has lots of features, and it's free, but it's Too Slow".
Yes, this kind of testing shouldn't even _pretend_ to have any real world credibility.
e) Said benchmarking person tells everyone they know, _and_ everyone they meet, about their results. Some of them even create nice-looking or professional-looking web pages about it.
f) People who know even _less_ than the benchmarking person hear about the test, or read the results, and don't know any better than to take it at face value. So they install whatever system was recommended.
g) Over time, the benchmarking person gets the hang of their chosen database, writes further articles about it, and generally doesn't look any further afield for, say, a couple of years. By this time, they've already influenced a couple of thousand people in the non-optimal direction.
h) Arrgh. With better defaults, our next release would _appear_ to be a lot faster to quite a few people, just because they have no idea about tuning.
So, as sad as this scenario is, better defaults will probably encourage a lot more newbies to get involved, and that'll eventually translate into a lot more experienced users, and a few more coders to assist. ;-)
Personally I'd be a bunch happier if we set the buffers so high that we definitely have decent performance, and the people that want to run PostgreSQL are forced to make the choice of either:
1) Adjust their system settings to allow PostgreSQL to run properly, or
2) Manually adjust the PostgreSQL settings to run memory-constrained
This way, PostgreSQL either runs decently, or they are _aware_ that they're limiting it. That should cut down on the false benchmarks (hopefully).
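For what it's worth, the check users would face under option 1 might look something like this on Linux (a hedged sketch; the /proc path is Linux-specific and the 16 MB figure is taken from the discussion above, purely for illustration):

```shell
#!/bin/sh
# Sketch: would the kernel's SHMMAX allow a ~16 MB shared memory request?
# Assumes a Linux-style /proc interface; 16 MB is illustrative.
needed=$((16 * 1024 * 1024))
current=$(cat /proc/sys/kernel/shmmax 2>/dev/null || echo 0)
if [ "$current" -lt "$needed" ]; then
    echo "SHMMAX is $current; too small for $needed bytes."
    echo "Raise it with, e.g.: sysctl -w kernel.shmmax=$needed"
else
    echo "SHMMAX is $current; large enough for $needed bytes."
fi
```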
Regards and best wishes,
-- "My grandfather once told me that there are two kinds of people: those who work and those who take the credit. He told me to try to be in the first group; there was less competition there." - Indira Gandhi ---------------------------(end of broadcast)--------------------------- TIP 4: Don't 'kill -9' the postmaster