On Fri, May 27, 2011 at 7:19 PM, Jeff Davis pg...@j-davis.com wrote:
On Thu, 2011-05-26 at 09:31 -0500, Merlin Moncure wrote:
Where they are most helpful is for masking of I/O if
a page gets dirtied >1 time before it's written out to the heap
Another possible benefit of higher shared_buffers
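The I/O-masking point above can be shown with a toy model (this is just the arithmetic of write combining, not PostgreSQL internals): if a page is dirtied several times between checkpoints, a buffer cache large enough to hold it absorbs the repeated dirties into a single physical write.

```python
# Toy model of write combining in a buffer cache (illustration only).
# Each event in the list is the id of a page being dirtied.
def physical_writes_without_cache(dirty_events):
    # Worst case: every dirtying triggers a write to the heap.
    return len(dirty_events)

def physical_writes_with_cache(dirty_events):
    # With a cache big enough to hold all dirty pages until checkpoint,
    # each page is written once no matter how often it was dirtied.
    return len(set(dirty_events))

events = [7, 7, 7, 12, 7, 12]   # page 7 dirtied 4x, page 12 dirtied 2x
print(physical_writes_without_cache(events))  # 6
print(physical_writes_with_cache(events))     # 2
```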
On 05/27/2011 07:30 PM, Mark Kirkwood wrote:
Greg, having an example with some discussion like this in the docs
would probably be helpful.
If we put that example into the docs, two years from now there will be
people showing up here saying "I used the recommended configuration from
the docs"
On Thu, May 26, 2011 at 6:10 PM, Greg Smith g...@2ndquadrant.com wrote:
Merlin Moncure wrote:
So, the challenge is this: I'd like to see repeatable test cases that
demonstrate regular performance gains >20%. Double bonus points for
cases that show gains >50%.
Do I run around challenging
So how far do you go? 128MB? 32MB? 4MB?
Anecdotal and an assumption, but I'm pretty confident that on any server
with at least 1GB of dedicated RAM, setting it any lower than 200MB is not
even going to help latency (assuming checkpoint and log configuration is
in the realm of sane, and
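As a sketch of the sizing heuristics being debated here — the common "roughly 25% of RAM" rule of thumb, with the ~200MB floor argued above and the few-GB cap often reported on Linux. The exact numbers are conventions circulated on this list, not guarantees:

```python
def suggest_shared_buffers_mb(total_ram_mb, floor_mb=200, cap_mb=8192):
    """Rule of thumb: ~25% of RAM, bounded below and above.

    floor_mb reflects the latency argument above (going lower rarely helps
    on a server with >= 1GB dedicated RAM); cap_mb reflects the point of
    diminishing returns frequently reported on Linux. Both are assumptions.
    """
    return max(floor_mb, min(total_ram_mb // 4, cap_mb))

print(suggest_shared_buffers_mb(1024))    # 256  (1GB box)
print(suggest_shared_buffers_mb(65536))   # 8192 (64GB box, hits the cap)
```

As the rest of the thread makes clear, any such formula is only a starting point; the workload decides whether moving off it helps.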
Scott Carey sc...@richrelevance.com wrote:
So how far do you go? 128MB? 32MB? 4MB?
Under 8.2 we had to keep shared_buffers less than the RAM on our BBU
RAID controller, which had 256MB -- so it worked best with
shared_buffers in the 160MB to 200MB range. With 8.3 we found that
anywhere
Scott Carey wrote:
And there is an OS component to it too. You can actually get away with
shared_buffers at 90% of RAM on Solaris. Linux will explode if you try
that (unless recent kernels have fixed its shared memory accounting).
You can use much larger values for shared_buffers on
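The Linux limitation referred to here is the System V shared memory accounting that PostgreSQL (before 9.3 switched to mmap-based shared memory) ran into: kernel.shmmax/shmall had to be raised before a large shared_buffers segment could even be allocated. A hedged sketch, assuming an illustrative ~16GB shared_buffers target:

```
# /etc/sysctl.conf -- raise SysV shared memory limits so the postmaster
# can allocate a large shared_buffers segment (relevant to PostgreSQL < 9.3).
# shmmax is in bytes; shmall is in pages (typically 4kB each).
kernel.shmmax = 17716740096     # ~16.5GB: shared_buffers plus some overhead
kernel.shmall = 4325376         # shmmax / 4096
```

Apply with `sysctl -p`; the specific values are examples, not recommendations.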
Merlin Moncure wrote:
That's just plain unfair: I didn't challenge your suggestion nor give
you homework.
I was stuck either responding to your challenge, or leaving the
impression I hadn't done the research to back the suggestions I make if
I didn't. That made it a mandatory homework
On Fri, May 27, 2011 at 1:47 PM, Greg Smith g...@2ndquadrant.com wrote:
Merlin Moncure wrote:
That's just plain unfair: I didn't challenge your suggestion nor give
you homework.
I was stuck either responding to your challenge, or leaving the impression I
hadn't done the research to back the
On Fri, May 27, 2011 at 2:47 PM, Greg Smith g...@2ndquadrant.com wrote:
Any attempt to make a serious change to the documentation around performance
turns into a bikeshedding epic, where the burden of proof to make a change
is too large to be worth the trouble to me anymore. I first started
After failing to get even basic good recommendations for
checkpoint_segments into the docs, I completely gave up on focusing there as
my primary way to spread this sort of information.
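For reference, the kind of checkpoint starting points that were being proposed for the docs looked roughly like the following. These are the values commonly circulated on this list for a write-heavy server of the 8.3–9.4 era, not official recommendations (checkpoint_segments was later replaced by max_wal_size in 9.5):

```
# postgresql.conf -- illustrative checkpoint starting points (8.3-9.4 era)
checkpoint_segments = 32            # the default of 3 is far too low for write-heavy loads
checkpoint_completion_target = 0.9  # spread checkpoint writes across the interval
checkpoint_timeout = 5min
```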
Hmm. That's rather unfortunate. +1 for revisiting that topic, if you
have the energy for it.
Another
On Fri, May 27, 2011 at 9:24 PM, Maciek Sakrejda msakre...@truviso.com wrote:
Another +1. While I understand that this is not simple, many users
will not look outside of standard docs, especially when first
evaluating PostgreSQL. Merlin is right that the current wording does
not really mention
Hello performers, I've long been unhappy with the standard advice
given for setting shared buffers. This includes the stupendously
vague comments in the standard documentation, which suggest certain
settings in order to get 'good performance'. Performance of what?
Connection negotiation speed?
Merlin Moncure mmonc...@gmail.com wrote:
So, the challenge is this: I'd like to see repeatable test cases
that demonstrate regular performance gains >20%. Double bonus
points for cases that show gains >50%.
Are you talking throughput, maximum latency, or some other metric?
In our shop
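The throughput-vs-latency distinction matters because a larger shared_buffers can raise average tps while making checkpoint latency spikes worse. A minimal sketch of reducing per-transaction latencies (e.g. extracted from a pgbench per-transaction log) to the metrics one would actually compare; the two sample runs are hypothetical:

```python
def latency_summary(latencies_ms):
    """Return (mean, p95, max) for a list of per-transaction latencies in ms.

    Throughput (which tracks the mean) can look fine while the p95 and max
    reveal checkpoint stalls.
    """
    xs = sorted(latencies_ms)
    mean = sum(xs) / len(xs)
    p95 = xs[min(len(xs) - 1, int(round(0.95 * len(xs))) - 1)]
    return mean, p95, xs[-1]

# Two hypothetical runs with similar throughput but different tails:
steady = [10] * 95 + [12] * 5
spiky  = [8] * 95 + [300] * 5     # checkpoint stall in the tail
print(latency_summary(steady))
print(latency_summary(spiky))
```

The spiky run has a lower median latency, yet its worst case is 25x the steady run's: "performance gain" is meaningless until the metric is named.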
On Thu, May 26, 2011 at 10:10 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Merlin Moncure mmonc...@gmail.com wrote:
So, the challenge is this: I'd like to see repeatable test cases
that demonstrate regular performance gains >20%. Double bonus
points for cases that show gains >50%.
On Thu, May 26, 2011 at 5:36 PM, Merlin Moncure mmonc...@gmail.com wrote:
Point being: cranking buffers
may have been the bee's knees with, say, the 8.2 buffer manager, but
present and future improvements may have rendered that change moot or
even counterproductive.
I suggest you read the docs
On Thu, May 26, 2011 at 10:45 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, May 26, 2011 at 5:36 PM, Merlin Moncure mmonc...@gmail.com wrote:
Point being: cranking buffers
may have been the bee's knees with, say, the 8.2 buffer manager, but
present and future improvements may have
On Thu, May 26, 2011 at 6:02 PM, Merlin Moncure mmonc...@gmail.com wrote:
The point is what we can prove, because going through the
motions of doing that is useful.
Exactly, and whatever you can prove will be workload-dependent.
So you can't prove anything generally, since no single setting is
On Thu, May 26, 2011 at 11:37 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, May 26, 2011 at 6:02 PM, Merlin Moncure mmonc...@gmail.com wrote:
The point is what we can prove, because going through the
motions of doing that is useful.
Exactly, and whatever you can prove will be
Merlin Moncure mmonc...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Merlin Moncure mmonc...@gmail.com wrote:
So, the challenge is this: I'd like to see repeatable test cases
that demonstrate regular performance gains >20%. Double bonus
points for cases that show gains
Merlin Moncure wrote:
So, the challenge is this: I'd like to see repeatable test cases that
demonstrate regular performance gains >20%. Double bonus points for
cases that show gains >50%.
Do I run around challenging your suggestions and giving you homework?
You have no idea how much eye
On Thu, May 26, 2011 at 4:10 PM, Greg Smith g...@2ndquadrant.com wrote:
As for figuring out how this impacts more complicated cases, I hear
somebody wrote a book or something that went into pages and pages of detail
about all this. You might want to check it out.
I was just going to