Greg Smith [EMAIL PROTECTED] writes:
Tom gets credit for naming the attached patch, which is my latest attempt to
finalize what has been called the Automatic adjustment of
bgwriter_lru_maxpages patch for 8.3; that's not what it does anymore but
that's where it started.
I've applied this
On Tue, 25 Sep 2007, Tom Lane wrote:
-Heikki didn't like the way I pass information from SyncOneBuffer
back to the background writer.
I didn't either --- it was too complicated and not actually doing
anything useful.
I suspect someone (possibly me) may want to put back some of that same
It was suggested to me today that I should clarify how others should be
able to test this patch themselves by writing a sort of performance
reviewer's guide; that information has been scattered among material
covering development. That's what you'll find below. Let me know if any
of it seems
On Sat, 8 Sep 2007, Greg Smith wrote:
Here's the results I got when I pushed the time down significantly from the
defaults
info | set | tps | cleaner_pct
------+-----+-----+-------------
jit multiplier=1.0
On Fri, 7 Sep 2007, Simon Riggs wrote:
For me, the bgwriter should sleep for at most 10ms at a time.
Here's the results I got when I pushed the time down significantly from
the defaults, with some of the earlier results for comparison:
info | set
Greg Smith [EMAIL PROTECTED] writes:
If anyone has a reason why they feel the bgwriter_delay needs to be a
tunable or why the rate might need to run even faster than 10ms, now would
be a good time to say why.
You'd be hard-wiring the thing to wake up 100 times per second? Doesn't
sound like
On Sat, 8 Sep 2007, Tom Lane wrote:
I've already gotten flak about the current default of 200ms:
https://bugzilla.redhat.com/show_bug.cgi?id=252129
I can't imagine that folk with those types of goals will tolerate an
un-tunable 10ms cycle.
That's the counter-example for why lowering the
Greg Smith [EMAIL PROTECTED] writes:
On Sat, 8 Sep 2007, Tom Lane wrote:
In fact, given the numbers you show here, I'd say you should leave the
default cycle time at 200ms. The 10ms value is eating way more CPU and
producing absolutely no measured benefit relative to 200ms...
My server is
Greg Smith [EMAIL PROTECTED] writes:
On Sat, 8 Sep 2007, Tom Lane wrote:
I've already gotten flak about the current default of 200ms:
https://bugzilla.redhat.com/show_bug.cgi?id=252129
I can't imagine that folk with those types of goals will tolerate an
un-tunable 10ms cycle.
That's the
On Sat, 8 Sep 2007, Tom Lane wrote:
It might be interesting to consider making the delay auto-tune: if you
wake up and find nothing (much) to do, sleep longer the next time,
conversely shorten the delay when work picks up. Something for 8.4,
though, at this point.
I have a couple of pages of
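The auto-tuning idea Tom describes above — sleep longer after an idle round, wake sooner when work picks up — can be sketched roughly as follows. This is only an illustration of the suggested 8.4 direction, not code from the patch; the bounds and the doubling/halving steps are assumptions.

```python
# Hedged sketch of an auto-tuning bgwriter delay (illustrative only):
# back off toward the maximum sleep when a round found nothing to do,
# and shorten the sleep when a round hit its write limit.

MIN_DELAY_MS = 10     # floor discussed in this thread
MAX_DELAY_MS = 200    # current default bgwriter_delay

def next_delay(current_delay_ms, buffers_written, maxpages):
    """Return the sleep (ms) before the next background-writer round."""
    if buffers_written == 0:
        # Idle round: sleep longer next time, up to the ceiling.
        return min(current_delay_ms * 2, MAX_DELAY_MS)
    if buffers_written >= maxpages:
        # Saturated round: wake sooner, down to the floor.
        return max(current_delay_ms // 2, MIN_DELAY_MS)
    return current_delay_ms
```

The attraction is that an idle server converges on the long, cheap 200ms cycle while a busy one converges on the 10ms cycle, without exposing another tunable.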
On Thu, 6 Sep 2007, Decibel! wrote:
I don't know that there should be a direct correlation, but ISTM that
scan_whole_pool_seconds should take checkpoint intervals into account
somehow.
Any direct correlation is weak at this point. The LRU cleaner has a small
impact on checkpoints, in that
Greg Smith wrote:
On Sat, 8 Sep 2007, Tom Lane wrote:
It might be interesting to consider making the delay auto-tune: if you
wake up and find nothing (much) to do, sleep longer the next time,
conversely shorten the delay when work picks up. Something for 8.4,
though, at this point.
I have
On Fri, 2007-09-07 at 11:48 -0400, Greg Smith wrote:
On Fri, 7 Sep 2007, Simon Riggs wrote:
I think that is what we should be measuring, perhaps in a simple way
such as calculating the 90th percentile of the response time
distribution.
I do track the 90th percentile numbers, but in
On Wed, 2007-09-05 at 23:31 -0400, Greg Smith wrote:
Tom gets credit for naming the attached patch, which is my latest attempt to
finalize what has been called the Automatic adjustment of
bgwriter_lru_maxpages patch for 8.3; that's not what it does anymore but
that's where it started.
On Fri, 7 Sep 2007, Simon Riggs wrote:
I think that is what we should be measuring, perhaps in a simple way
such as calculating the 90th percentile of the response time
distribution.
I do track the 90th percentile numbers, but in these pgbench tests where
I'm writing as fast as possible
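For reference, the 90th-percentile figure Simon suggests can be computed from per-transaction latencies like so. The nearest-rank definition used here is an assumption; the thread doesn't specify which percentile method Greg's test scripts use.

```python
# Illustrative 90th-percentile latency from pgbench-style per-transaction
# timings, using the nearest-rank method (an assumption, not the thread's
# stated definition).

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100.0 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

# Hypothetical latencies (ms); the two large values stand in for the
# checkpoint-related outliers discussed in this thread.
latencies_ms = [12, 15, 11, 240, 13, 14, 16, 500, 12, 13]
p90 = percentile(latencies_ms, 90)  # -> 240
```

A percentile is more robust than a mean here because a handful of checkpoint stalls dominates the average while leaving the median untouched; the 90th percentile sits between those extremes.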
On Fri, 7 Sep 2007, Simon Riggs wrote:
I think we should do some more basic tests to see where those outliers
come from. We need to establish a clear link between number of dirty
writes and response time.
With the test I'm running, which is specifically designed to aggravate
this behavior,
On Thu, 6 Sep 2007, Kevin Grittner wrote:
If you exposed the scan_whole_pool_seconds as a tunable GUC, that would
allay all of my concerns about this patch. Basically, our problems were
resolved by getting all dirty buffers out to the OS cache within two
seconds
Unfortunately it wouldn't
On Wed, Sep 5, 2007 at 10:31 PM, in message
[EMAIL PROTECTED], Greg Smith
[EMAIL PROTECTED] wrote:
-There are two magic constants in the code:
int smoothing_samples = 16;
float scan_whole_pool_seconds = 120.0;
I personally
don't feel like these constants need
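For readers unfamiliar with the patch internals, a constant like smoothing_samples = 16 typically drives a running weighted average of recent buffer-allocation counts, so a single bursty interval doesn't whipsaw the write rate. The sketch below shows that general technique; it mirrors the idea, not the patch's code verbatim.

```python
# Hedged sketch of the role of smoothing_samples = 16: fold each new
# per-interval allocation count into a running weighted average, giving
# recent history 15/16 of the weight and the new sample 1/16.

SMOOTHING_SAMPLES = 16

def smooth(prev_estimate, new_sample):
    """Return the updated running average of buffer allocations."""
    return (prev_estimate * (SMOOTHING_SAMPLES - 1) + new_sample) / SMOOTHING_SAMPLES
```

With 16 samples, a one-interval spike moves the estimate by only 1/16 of its size, which is why the constant trades responsiveness against stability.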
On Thu, Sep 6, 2007 at 11:27 AM, in message
[EMAIL PROTECTED], Greg Smith
[EMAIL PROTECTED] wrote:
On Thu, 6 Sep 2007, Kevin Grittner wrote:
I have been staring carefully at your configuration recently, and I would
wager that you could turn off the LRU writer altogether and still meet
Kevin Grittner [EMAIL PROTECTED] writes:
On Thu, Sep 6, 2007 at 11:27 AM, in message
[EMAIL PROTECTED], Greg Smith
[EMAIL PROTECTED] wrote:
With the default delay of 200ms, this has the LRU-writer scanning the
whole pool every 1 second,
Whoa! Apparently I've totally misread the
On Thu, 6 Sep 2007, Kevin Grittner wrote:
I thought that the bgwriter_lru_percent was scanned from the lru end
each time; I would not expect that it would ever get beyond the oldest
10%.
You're correct; I stated that badly. What I should have said is that your
LRU writer could potentially
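The scan-rate arithmetic behind "scanning the whole pool every 1 second" works out as follows. At bgwriter_delay = 200ms there are 5 rounds per second, so covering the pool in one second requires 20% of it per round; the 20% figure is inferred from that arithmetic, not quoted from the thread.

```python
# Worked example: how long the LRU writer takes to sweep the entire
# buffer pool, given the sleep between rounds and the percentage of the
# pool scanned per round.

def whole_pool_seconds(delay_ms, lru_percent):
    """Seconds for the LRU writer to cover 100% of the buffer pool."""
    rounds_per_second = 1000.0 / delay_ms
    rounds_for_full_pool = 100.0 / lru_percent
    return rounds_for_full_pool / rounds_per_second

sweep = whole_pool_seconds(200, 20)   # 5 rounds/s at 20%/round -> 1.0 s
```

Note the caveat from the exchange above: each round starts from the LRU end, so unless the scan percentage is large the writer keeps revisiting the same oldest buffers rather than genuinely sweeping the whole pool.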
On Thu, Sep 06, 2007 at 09:20:31AM -0500, Kevin Grittner wrote:
On Wed, Sep 5, 2007 at 10:31 PM, in message
[EMAIL PROTECTED], Greg Smith
[EMAIL PROTECTED] wrote:
-There are two magic constants in the code:
int smoothing_samples = 16;
float