People seemed to like the idea:
Add a script to ask system configuration questions and tune
postgresql.conf.
---
Bruce Momjian wrote:
Peter Eisentraut wrote:
Tom Lane writes:
Well, as I commented
Nutshell:
Easy to install but is horribly slow.
or
Took a couple of minutes to configure and it rocks!
Since when is it easy to install on win32?
The easiest way I know of is through Cygwin, then you have to worry about
installing the IPC service (and getting the
Josh Berkus wrote:
Uh ... do we have a basis for recommending any particular sets of
parameters for these different scenarios? This could be a good idea
in the abstract, but I'm not sure I know enough to fill in the details.
Sure.
Mostly-Read database, few users, good hardware,
Bruce Momjian wrote:
We could prevent the postmaster from starting unless they run pg_tune or
if they have modified postgresql.conf from the default. Of course,
that's pretty drastic.
If you're going to do that, then you may as well make the defaults
something that will perform reasonably
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
So, my idea is to add a message at the end of initdb that states people
should run the pgtune script before running a production server.
Do people read what initdb has to say?
IIRC, the RPM install scripts hide initdb's output from
On Tue, Feb 11, 2003 at 05:25:29PM -0700, Rick Gigger wrote:
The type of person who can't configure it or doesn't think to try is
probably not doing a project that requires any serious performance.
I have piles of email, have fielded thousands of phone calls, and
have had many conversations
Tom Lane writes:
Well, as I commented later in that mail, I feel that 1000 buffers is
a reasonable choice --- but I have to admit that I have no hard data
to back up that feeling.
I know you like it in that range, and 4 or 8 MB of buffers by default
should not be a problem. But personally I
Peter Eisentraut [EMAIL PROTECTED] writes:
I know you like it in that range, and 4 or 8 MB of buffers by default
should not be a problem. But personally I think if the optimal buffer
size does not depend on both the physical RAM you want to dedicate to
PostgreSQL and the nature and size of
Peter Eisentraut wrote:
Tom Lane writes:
Well, as I commented later in that mail, I feel that 1000 buffers is
a reasonable choice --- but I have to admit that I have no hard data
to back up that feeling.
I know you like it in that range, and 4 or 8 MB of buffers by default
should not
On Tue, 2003-02-11 at 10:20, Tom Lane wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
May I make a suggestion that maybe it is time to start thinking about
tuning the default config file, IMHO it's just a little bit too
conservative,
It's a lot too conservative. I've been thinking for
Tom Lane wrote:
"Merlin Moncure" [EMAIL PROTECTED] writes:
May I make a suggestion that maybe it is time to start thinking about
tuning the default config file, IMHO it's just a little bit too
conservative,
It's a lot too conservative. I've been thinking for awhile
Tom Lane wrote:
snip
What I would really like to do is set the default shared_buffers to
1000. That would be 8 meg worth of shared buffer space. Coupled with
more-realistic settings for FSM size, we'd probably be talking a shared
memory request approaching 16 meg. This is not enough RAM to
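The 8 meg figure follows directly from the default block size; a quick sanity check of the arithmetic (assuming the standard 8 kB block size, which is a build-time constant):

```shell
# 1000 buffers x 8 kB per block = 8000 kB, i.e. roughly 8 MB of
# shared buffer space; FSM and other shared structures push the
# total shared memory request toward the 16 MB mentioned above.
BUFFERS=1000
BLOCK_KB=8
SHARED_KB=$(( BUFFERS * BLOCK_KB ))
echo "shared_buffers: ${SHARED_KB} kB"
```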
Greg Copeland wrote:
I'd personally rather have people stumble trying to get PostgreSQL
running, up front, rather than allowing the lowest common denominator
more easily run PostgreSQL only to be disappointed with it and move on.
After it's all said and done, I would rather someone simply
On Tue, 2003-02-11 at 12:10, Steve Crawford wrote:
A quick-'n'-dirty first step would be more comments in postgresql.conf. Most
This will not solve the issue with the large number of users who have no
interest in looking at the config file -- but are interested in
publishing their results.
What if we supplied several sample .conf files, and let the user choose
which to copy into the database directory? We could have a high read
Exactly my first thought when reading the proposal for a setting suited for
performance tests.
performance profile, and a transaction database
mlw [EMAIL PROTECTED] writes:
This attitude sucks. If you want a product to be used, you must put the
effort into making it usable.
[snip]
AFAICT, you are flaming Greg for recommending the exact same thing you
are recommending. Please calm down and read again.
On Tue, 2003-02-11 at 12:08, Justin Clift wrote:
b) Said benchmarking person knows very little about PostgreSQL, so they
install the RPM's, packages, or whatever, and it works. Then they run
whatever benchmark they've downloaded, or designed, or whatever
Out of curiosity, how feasible is
Apology
After Mark calms down and, in fact, sees that Greg was saying the right thing
after all, chagrin is the only word.
I'm sorry.
Greg Copeland wrote:
On Tue, 2003-02-11 at 11:23, mlw wrote:
Greg Copeland wrote:
I'd personally rather have people stumble
My other pet peeve is the default max connections setting. This should be
higher if possible, but of course, there's always the possibility of
running out of file descriptors.
Apache has a default max children of 150, and if using PHP or another
language that runs as an apache module, it is
scott.marlowe [EMAIL PROTECTED] writes:
Is setting the max connections to something like 200 reasonable, or likely
to cause too many problems?
That would likely run into number-of-semaphores limitations (SEMMNI,
SEMMNS). We do not seem to have as good documentation about changing
that as we
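A rough sizing sketch for those semaphore limits, based on the formula in the PostgreSQL kernel-resources documentation (one set of 17 semaphores per 16 backends); the kernel.sem values shown are illustrative, not recommendations:

```shell
# Semaphore needs for max_connections=200: PostgreSQL allocates
# semaphores in sets, one set of 17 per 16 allowed connections.
MAXCONN=200
SETS=$(( (MAXCONN + 15) / 16 ))
echo "SEMMNI >= ${SETS}"
echo "SEMMNS >= $(( SETS * 17 ))"
# On Linux the four fields of kernel.sem are SEMMSL SEMMNS SEMOPM SEMMNI,
# e.g. (as root): sysctl -w kernel.sem="250 32000 32 128"
```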
Tom Lane wrote:
I think that what this discussion is really leading up to is that we
are going to decide to apply the same principle to performance. The
out-of-the-box settings ought to give reasonable performance, and if
your system can't handle it, you should have to take explicit action
to
On Tuesday 11 February 2003 13:03, Robert Treat wrote:
On Tue, 2003-02-11 at 12:08, Justin Clift wrote:
b) Said benchmarking person knows very little about PostgreSQL, so they
install the RPM's, packages, or whatever, and it works. Then they run
whatever benchmark they've downloaded, or
On Tue, 2003-02-11 at 10:20, Tom Lane wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
May I make a suggestion that maybe it is time to start thinking about
tuning the default config file, IMHO it's just a little bit too
conservative,
It's a lot too conservative. I've been thinking
On Tue, 11 Feb 2003, Tom Lane wrote:
It's a lot too conservative. I've been thinking for awhile that we
should adjust the defaults.
Some of these issues could be made to Just Go Away with some code
changes. For example, using mmap rather than SysV shared memory
would automatically optimize
On Tue, 11 Feb 2003, Rick Gigger wrote:
The type of person who can't configure it or doesn't think to try is
probably not doing a project that requires any serious performance. As long
as you are running it on decent hardware postgres will run fantastic for
anything but a very heavy load. I
On Wed, 12 Feb 2003, Curt Sampson wrote:
On Tue, 11 Feb 2003, Tom Lane wrote:
It's a lot too conservative. I've been thinking for awhile that we
should adjust the defaults.
Some of these issues could be made to Just Go Away with some code
changes. For example, using mmap rather than
On Tue, Feb 11, 2003 at 17:42:06 -0700,
scott.marlowe [EMAIL PROTECTED] wrote:
The poor performance of Postgresql in its current default configuration
HAS cost us users, trust me, I know a few we've almost lost where I work
that I converted after some quick tweaking of their database.
After it's all said and done, I would rather someone simply say, it's
beyond my skill set, and attempt to get help or walk away. That seems
better than them being able to run it and say, it's a dog, spreading
word-of-mouth as such after they left PostgreSQL behind. Worse yet,
those that do
On Tuesday 11 Feb 2003 10:56 pm, you wrote:
Josh Berkus [EMAIL PROTECTED] writes:
What if we supplied several sample .conf files, and let the user choose
which to copy into the database directory? We could have a high read
performance profile, and a transaction database profile, and a
Tom Lane writes:
We could retarget to try to stay under SHMMAX=4M, which I think is
the next boundary that's significant in terms of real-world platforms
(isn't that the default SHMMAX on some BSDen?). That would allow us
350 or so shared_buffers, which is better, but still not really a
Tom, Justin,
What I would really like to do is set the default shared_buffers to
1000. That would be 8 meg worth of shared buffer space. Coupled with
more-realistic settings for FSM size, we'd probably be talking a shared
memory request approaching 16 meg. This is not enough RAM to
Josh Berkus [EMAIL PROTECTED] writes:
What if we supplied several sample .conf files, and let the user choose which
to copy into the database directory? We could have a high read
performance profile, and a transaction database profile, and a
workstation profile, and a low impact profile.
Josh Berkus wrote:
Tom, Justin,
snip
What if we supplied several sample .conf files, and let the user choose which
to copy into the database directory? We could have a high read
performance profile, and a transaction database profile, and a
workstation profile, and a low impact profile.
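One way such a profile file could look (parameter names are from the 7.x era; the values are purely illustrative and not taken from the thread):

```
# postgresql.conf.sample-high-read  (hypothetical profile)
shared_buffers = 4000          # ~32 MB at the default 8 kB block size
sort_mem = 8192                # kB per sort operation
effective_cache_size = 65536   # 8 kB pages expected in the OS cache
max_connections = 40
```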
Tom Lane wrote:
snip
Uh ... do we have a basis for recommending any particular sets of
parameters for these different scenarios? This could be a good idea
in the abstract, but I'm not sure I know enough to fill in the details.
A lower-tech way to accomplish the same result is to document these
Tom, Justin,
Uh ... do we have a basis for recommending any particular sets of
parameters for these different scenarios? This could be a good idea
in the abstract, but I'm not sure I know enough to fill in the details.
Sure.
Mostly-Read database, few users, good hardware, complex
Justin Clift [EMAIL PROTECTED] writes:
Tom Lane wrote:
Uh ... do we have a basis for recommending any particular sets of
parameters for these different scenarios? This could be a good idea
in the abstract, but I'm not sure I know enough to fill in the details.
Without too much hacking
On Tue, 2003-02-11 at 13:01, Tom Lane wrote:
Jon Griffin [EMAIL PROTECTED] writes:
So it appears that linux at least is way above your 8 meg point, unless I
am missing something.
Yeah, AFAIK all recent Linuxen are well above the range of parameters
that I was suggesting (and even if they
Peter Eisentraut [EMAIL PROTECTED] writes:
Tom Lane writes:
We could retarget to try to stay under SHMMAX=4M, which I think is
the next boundary that's significant in terms of real-world platforms
(isn't that the default SHMMAX on some BSDen?). That would allow us
350 or so shared_buffers,
We could retarget to try to stay under SHMMAX=4M, which I think is
the next boundary that's significant in terms of real-world platforms
(isn't that the default SHMMAX on some BSDen?). That would allow us
350 or so shared_buffers, which is better, but still not really a
serious choice
A separate line of investigation is what is the lowest common
denominator nowadays? I think we've established that SHMMAX=1M
is obsolete, but what replaces it as the next LCD? 4M seems to be
correct for some BSD flavors, and I can confirm that that's the
current default for Mac OS X --- any
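For reference, checking and raising the limit looks roughly like this (Linux shown; the sysctl name and default differ per platform, and writing it requires root, so treat this as a sketch):

```shell
# Inspect the current SHMMAX, then compute a 16 MB target to match
# the shared-memory request discussed above.
[ -r /proc/sys/kernel/shmmax ] && cat /proc/sys/kernel/shmmax
WANT=$(( 16 * 1024 * 1024 ))           # 16 MB in bytes
echo "target shmmax: ${WANT}"
# As root: sysctl -w kernel.shmmax=${WANT}
# Mac OS X and some BSDs use kern.sysv.shmmax instead.
```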