On Sat, Jun 8, 2013 at 10:00 PM, Greg Smith g...@2ndquadrant.com wrote:
But I have neither firsthand experience nor any empirical reason to presume
that the write limit needs to be lower when the read rate is high.
No argument from me that this is an uncommon issue. Before getting
On Thu, Jun 6, 2013 at 2:27 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-06-06 12:34:01 -0700, Jeff Janes wrote:
On Fri, May 24, 2013 at 11:51 AM, Greg Smith g...@2ndquadrant.com
wrote:
On 5/24/13 9:21 AM, Robert Haas wrote:
But I wonder if we wouldn't be better off
On Thu, Jun 6, 2013 at 1:02 PM, Robert Haas robertmh...@gmail.com wrote:
If we can see our way clear to ripping out the autovacuum costing
stuff and replacing it with a read rate limit and a dirty rate
limit, I'd be in favor of that. The current system limits the linear
combination of
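
For reference, the existing throttle works roughly like this: each page touched is charged vacuum_cost_page_hit, vacuum_cost_page_miss, or vacuum_cost_page_dirty units, and once the running balance passes vacuum_cost_limit the worker sleeps for the cost delay. A minimal sketch of that accounting (Python rather than the actual C, default values assumed):

    # Sketch of the existing vacuum cost accounting, not the real code.
    # Every page access adds to a balance; crossing vacuum_cost_limit
    # triggers a sleep, which is what caps the linear combination of
    # hits, misses, and dirtied pages.
    import time

    VACUUM_COST_LIMIT = 200        # default vacuum_cost_limit
    VACUUM_COST_DELAY = 0.020      # autovacuum_vacuum_cost_delay, 20ms
    COST_PAGE_HIT, COST_PAGE_MISS, COST_PAGE_DIRTY = 1, 10, 20

    balance = 0

    def charge(page_was_hit, page_dirtied):
        global balance
        balance += COST_PAGE_HIT if page_was_hit else COST_PAGE_MISS
        if page_dirtied:
            balance += COST_PAGE_DIRTY
        if balance >= VACUUM_COST_LIMIT:
            time.sleep(VACUUM_COST_DELAY)
            balance = 0
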
On 6/8/13 4:43 PM, Jeff Janes wrote:
Also, in all the anecdotes I've been hearing about autovacuum causing
problems from too much IO, in which people can identify the specific
problem, it has always been the write pressure, not the read, that
caused the problem. Should the default be to have
On Sat, Jun 8, 2013 at 1:57 PM, Greg Smith g...@2ndquadrant.com wrote:
On 6/8/13 4:43 PM, Jeff Janes wrote:
Also, in all the anecdotes I've been hearing about autovacuum causing
problems from too much IO, in which people can identify the specific
problem, it has always been the write
Greg Smith g...@2ndquadrant.com wrote:
I suspect the reason we don't see as many complaints is that a
lot more systems can handle 7.8MB/s of random reads than there
are ones that can do 3.9MB/s of random writes. If we removed
that read limit, a lot more complaints would start rolling in
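
Those figures fall straight out of the 2013 defaults (vacuum_cost_limit = 200, a 20ms autovacuum_vacuum_cost_delay, 8KB pages); a quick back-of-the-envelope check, including the 78MB/s page-hit ceiling that comes up later in the thread:

    # 200 cost units per 20ms cycle = 10,000 units/second
    units_per_sec = 200 / 0.020
    page = 8192                                    # bytes per block
    MB = 1024 * 1024
    read_limit  = units_per_sec / 10 * page / MB   # miss costs 10  -> ~7.8 MB/s
    dirty_limit = units_per_sec / 20 * page / MB   # dirty costs 20 -> ~3.9 MB/s
    hit_limit   = units_per_sec / 1  * page / MB   # hit costs 1    -> ~78 MB/s
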
On 6/8/13 5:17 PM, Jeff Janes wrote:
But my gut feeling is that if autovacuum is trying to read faster than
the hardware will support, it will just automatically get throttled, by
inherent IO waits, at a level which can be comfortably supported. And
this will cause minimal interference with
On 6/8/13 5:20 PM, Kevin Grittner wrote:
I'll believe that all of that is true, but I think there's another
reason. With multiple layers of write cache (PostgreSQL
shared_buffers, OS cache, controller or SSD cache) I think there's
a tendency for an avalanche effect. Dirty pages stick to cache
On Sat, Jun 8, 2013 at 4:43 PM, Jeff Janes jeff.ja...@gmail.com wrote:
I don't know what two independent settings would look like. Say you keep two
independent counters, where each can trigger a sleep, and the triggering of
that sleep clears only its own counter. Now you still have a limit on
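
A minimal sketch of the two-counter scheme being described, just to make the interaction concrete (the budget numbers are made up):

    # Two independent budgets, each with its own sleep trigger.
    # Sleeping for one clears only that counter, so the other keeps
    # accumulating, which is where the interaction gets murky.
    import time

    READ_BUDGET, DIRTY_BUDGET = 2000, 1000   # hypothetical per-cycle budgets
    DELAY = 0.020
    read_count = dirty_count = 0

    def count_read(pages=1):
        global read_count
        read_count += pages
        if read_count >= READ_BUDGET:
            time.sleep(DELAY)
            read_count = 0          # dirty_count is deliberately left alone

    def count_dirty(pages=1):
        global dirty_count
        dirty_count += pages
        if dirty_count >= DIRTY_BUDGET:
            time.sleep(DELAY)
            dirty_count = 0         # read_count keeps accumulating
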
On Thu, Jun 6, 2013 at 7:36 PM, Greg Smith g...@2ndquadrant.com wrote:
I have also subjected some busy sites to a field test here since the
original discussion, to try to nail down whether this is really necessary. So
far I haven't gotten any objections, and I've seen one serious improvement,
On 6/7/13 10:14 AM, Robert Haas wrote:
If the page hit limit goes away, the user with a single-core server who is used
to having autovacuum only pillage shared_buffers at 78MB/s might complain
if it became unbounded.
Except that it shouldn't become unbounded, because of the ring-buffer
On Fri, Jun 7, 2013 at 11:35 AM, Greg Smith g...@2ndquadrant.com wrote:
I wasn't talking about disruption of the data that's in the buffer cache.
The only time the scenario I was describing plays out is when the data is
already in shared_buffers. The concern is damage done to the CPU's data
On 6/7/13 12:42 PM, Robert Haas wrote:
GUCs in terms of units that are meaningful to the user. One could
have something like io_rate_limit (measured in MB/s),
io_read_multiplier = 1.0, io_dirty_multiplier = 1.0, and I think that
would be reasonably clear.
There's one other way to frame this:
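
To make the units concrete, here is one plausible reading of the io_rate_limit / multiplier knobs quoted above, with semantics assumed for illustration rather than anything agreed on in this thread:

    # Hypothetical mapping: io_rate_limit is a base MB/s budget, and the
    # multipliers weight how expensive reads and dirtied pages are against it.
    io_rate_limit       = 8.0    # MB/s, assumed default
    io_read_multiplier  = 1.0
    io_dirty_multiplier = 1.0
    PAGE_MB = 8192 / (1024 * 1024)

    def seconds_to_spread(pages_read, pages_dirtied):
        # Weighted MB of work divided by the allowed rate gives how long
        # this batch of work should be spread over (i.e. how long to sleep).
        cost_mb = (pages_read * io_read_multiplier +
                   pages_dirtied * io_dirty_multiplier) * PAGE_MB
        return cost_mb / io_rate_limit
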
On Fri, Jun 7, 2013 at 12:55 PM, Greg Smith g...@2ndquadrant.com wrote:
On 6/7/13 12:42 PM, Robert Haas wrote:
GUCs in terms of units that are meaningful to the user. One could
have something like io_rate_limit (measured in MB/s),
io_read_multiplier = 1.0, io_dirty_multiplier = 1.0, and I
On Fri, May 24, 2013 at 11:51 AM, Greg Smith g...@2ndquadrant.com wrote:
On 5/24/13 9:21 AM, Robert Haas wrote:
But I wonder if we wouldn't be better off coming up with a little more
user-friendly API. Instead of exposing a cost delay, a cost limit,
and various charges, perhaps we should
On Thu, Jun 6, 2013 at 3:34 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Fri, May 24, 2013 at 11:51 AM, Greg Smith g...@2ndquadrant.com wrote:
On 5/24/13 9:21 AM, Robert Haas wrote:
But I wonder if we wouldn't be better off coming up with a little more
user-friendly API. Instead of
On 2013-06-06 12:34:01 -0700, Jeff Janes wrote:
On Fri, May 24, 2013 at 11:51 AM, Greg Smith g...@2ndquadrant.com wrote:
On 5/24/13 9:21 AM, Robert Haas wrote:
But I wonder if we wouldn't be better off coming up with a little more
user-friendly API. Instead of exposing a cost delay, a
On Thu, May 23, 2013 at 7:27 PM, Greg Smith g...@2ndquadrant.com wrote:
I'm working on a new project here that I wanted to announce, just to keep
from duplicating effort in this area. I've started to add a cost limit
delay for regular statements. The idea is that you set a new
On 5/24/13 8:21 AM, Robert Haas wrote:
On Thu, May 23, 2013 at 7:27 PM, Greg Smith g...@2ndquadrant.com wrote:
I'm working on a new project here that I wanted to announce, just to keep
from duplicating effort in this area. I've started to add a cost limit
delay for regular statements. The
On 5/24/13 10:36 AM, Jim Nasby wrote:
Instead of KB/s, could we look at how much time one process is spending
waiting on IO vs the rest of the cluster? Is it reasonable for us to
measure IO wait time for every request, at least on the most popular OSes?
It's not just an OS specific issue. The
On 5/24/13 9:21 AM, Robert Haas wrote:
But I wonder if we wouldn't be better off coming up with a little more
user-friendly API. Instead of exposing a cost delay, a cost limit,
and various charges, perhaps we should just provide limits measured in
KB/s, like dirty_rate_limit = amount of data
On Fri, May 24, 2013 at 10:36 AM, Jim Nasby j...@nasby.net wrote:
Doesn't that hit the old issue of not knowing if a read came from FS cache
or disk? I realize that the current cost_delay mechanism suffers from that
too, but since the API is lower level that restriction is much more
apparent.
On Thu, May 23, 2013 at 8:27 PM, Greg Smith g...@2ndquadrant.com wrote:
The main unintended-consequences issue I've found so far is when a cost-delayed
statement holds a heavy lock. Autovacuum has some protection
against letting processes with an exclusive lock on a table go to sleep. It
On 5/23/13 7:34 PM, Claudio Freire wrote:
Why not make the delay conditional on the amount of concurrency, kinda
like the commit_delay? Although in this case, it should only count
unwaiting connections.
The test run by commit_delay is way too heavy to run after every block
is processed. That
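
The rough shape of that kind of test, sketched to show why it is expensive: it has to walk the list of backends and count the ones that are active (and, per the suggestion above, not already waiting), and doing that scan for every page a statement touches adds up fast. Names here are illustrative, not PostgreSQL's:

    # Only sleep if enough other backends are active and not waiting.
    # In the server this would be a scan over a shared-memory array,
    # which is the "too heavy to run after every block" part.
    def enough_active_siblings(backends, siblings_needed):
        active = 0
        for b in backends:
            if b.is_active and not b.is_waiting:
                active += 1
                if active >= siblings_needed:
                    return True
        return False
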
On Thu, May 23, 2013 at 8:46 PM, Greg Smith g...@2ndquadrant.com wrote:
On 5/23/13 7:34 PM, Claudio Freire wrote:
Why not make the delay conditional on the amount of concurrency, kinda
like the commit_delay? Although in this case, it should only count
unwaiting connections.
The test run by
On 5/23/13 7:56 PM, Claudio Freire wrote:
Besides the obvious option of making a lighter check (doesn't have
to be 100% precise), wouldn't this check be done when it would
otherwise sleep? Is it so heavy still in that context?
A commit to a typical 7200RPM disk is about 10ms, while
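
The rotational arithmetic behind that 10ms figure, roughly:

    rpm = 7200
    rotation_ms = 60000.0 / rpm      # ~8.3 ms per full platter rotation
    # waiting for the platter to come back around, plus a little seek time,
    # puts a synchronous commit in the ~10 ms range, i.e. on the order of
    # 100 unbatched commits per second
    max_commits_per_sec = 1000 / 10
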