On 17 January 2014 16:10, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Andres Freund <and...@2ndquadrant.com> writes:
>> On 2014-01-17 09:04:54 -0500, Robert Haas wrote:
>>> That having been said, I bet it could be done at the tail of
>> I don't think there are many locations where this would be ok. Sleeping
>> while holding exclusive buffer locks? Quite possibly inside a critical
> More or less by definition, you're always doing both when you call
>> Surely not.
> I agree. It's got to be somewhere further up the call stack.
> I'm inclined to think that what we ought to do is reconceptualize
> vacuum_delay_point() as something a bit more generic, and sprinkle
> calls to it in a few more places than now.
Agreed; that was the original plan, but implementation delays
prevented the full vision from being discussed and implemented.
Requirements from various areas include WAL rate limiting for
replication, I/O rate limiting, and hard CPU and I/O limits for
security and mixed workloads.

I'd still like to get something on this into 9.4 that alleviates the
replication issues, leaving wider changes for later releases.
The vacuum_* parameters don't allow any control over WAL production,
which is often the limiting factor. I could, for example, introduce a
new parameter for vacuum_cost_delay that provides a weighting for each
new BLCKSZ chunk of WAL, then rename all parameters to a more general
form. Or I could forget that and just press ahead with the patch as
is, providing a cleaner interface in next release.
> It's also interesting to wonder about the relationship to
> CHECK_FOR_INTERRUPTS --- although I think that currently, we assume
> that that's *cheap* (1 test and branch) as long as nothing is pending.
> I don't want to see a bunch of arithmetic added to it.
I'll call these new calls CHECK_FOR_RESOURCES() to allow us to
implement various kinds of resource checking in future.
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Sent via pgsql-hackers mailing list (email@example.com)