On 12/11/2013 11:37 AM, Simon Riggs wrote:
> On 11 December 2013 17:57, Robert Haas <robertmh...@gmail.com> wrote:
>> Extensive testing will be needed to prove
>> that the new algorithm doesn't perform worse than the current
>> algorithm in any important cases.
> Agreed, but the amount of testing seems equivalent in both cases,
> assuming we weren't going to skip it for this patch.
No performance testing is required for this patch. The effect of memory
limits on vacuum is already well-known and well-understood.
> With considerable regret, I don't see how this solves the problem at
> hand. We can and should do better.
I strongly disagree. The problem we are dealing with currently is that
two resource limits which should have *always* been independent of each
other are conflated into a single GUC variable. This forces users to
remember to override maintenance_work_mem interactively every time they
run a manual VACUUM, because the value in postgresql.conf has to be
sized for autovacuum instead.
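A minimal sketch of the conflation and the workaround it forces (the separate autovacuum setting shown is per the patch under discussion; the table name is hypothetical):

```
# postgresql.conf -- must be sized conservatively, since every
# autovacuum worker gets this much memory:
maintenance_work_mem = 256MB

-- Today, a DBA who wants a large manual VACUUM to go faster has to
-- remember to override the shared setting in every session:
SET maintenance_work_mem = '2GB';
VACUUM VERBOSE big_table;

# With the patch applied, autovacuum reads its own limit, so the
# two knobs can be tuned independently in postgresql.conf:
# autovacuum_work_mem = 256MB
# maintenance_work_mem = 2GB
```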
In other words, we are having an issue with *non-atomic data*, and this
patch partially fixes that.
Would it be better to have an admissions-control policy engine for
launching autovacuum which takes into account available RAM, estimated
costs of concurrent vacuums, current CPU activity, and which tables are
in cache? Yes. And if you started on that now, you might have it ready
a couple of releases from now.
And, for that matter, accepting this patch by no means blocks doing
something more sophisticated in the future.
PostgreSQL Experts Inc.
Sent via pgsql-hackers mailing list (email@example.com)