On Thu, May 15, 2014 at 8:06 AM, Bruce Momjian <br...@momjian.us> wrote:
> On Tue, May 6, 2014 at 11:15:17PM +0100, Simon Riggs wrote:
>> > Well, for what it's worth, I've encountered systems where setting
>> > effective_cache_size too low resulted in bad query plans, but I've
>> > never encountered the reverse situation.
>> I agree with that.
>> Though that misses my point, which is that you can't know that all of
>> that memory is truly available on a server with many concurrent users.
>> Choosing settings that undercost memory-intensive plans is not the
>> best default strategy for a mixed workload where cache may be better
>> used elsewhere, even if such settings make sense for some individual
>> users.
> This is the same problem we had with auto-tuning work_mem, in that we
> didn't know what other concurrent activity was happening. Seems we need
> concurrent activity detection before auto-tuning work_mem and
> effective_cache_size.
I think it's worse than that: we don't even know what else is
happening *in the same query*. For example, look at this:
That's pretty awful, and it's just one example of a broader class of
problems that we haven't even tried to solve. We really need a way to
limit memory usage on a per-query basis rather than a per-node basis.
For example, consider a query plan that needs to do four sorts. If
work_mem = 64MB, we'll happily use 256MB total, 64MB for each sort.
Now, that might cause the system to swap: since there are four sorts,
maybe we ought to have used only 16MB per sort, and switched to a heap
sort if that wasn't enough. But it's even subtler than that: if we
had known when building the query plan that we only had 16MB per sort
rather than 64MB per sort, we would potentially have estimated higher
costs for those sorts in the first place, which might have led to a
different plan that needed fewer sorts to begin with.
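The arithmetic above can be sketched in a few lines (a hypothetical
illustration in Python; the per-query budget and the way it is divided
evenly across sorts are assumptions for the sketch, not anything
PostgreSQL actually does):

```python
WORK_MEM_MB = 64       # per-node limit, as in the example above
QUERY_BUDGET_MB = 64   # a hypothetical per-query cap, for comparison

def per_node_total(n_sorts, work_mem=WORK_MEM_MB):
    # Per-node accounting: every sort may use up to work_mem,
    # so total usage scales with the number of sorts in the plan.
    return n_sorts * work_mem

def per_query_share(n_sorts, budget=QUERY_BUDGET_MB):
    # Per-query accounting: one budget, split across the sorts.
    return budget // n_sorts

# Four sorts: 4 * 64MB = 256MB under per-node limits, but only
# 16MB apiece under a 64MB per-query budget -- small enough that
# the planner might have costed those sorts quite differently.
print(per_node_total(4), "MB total;", per_query_share(4), "MB per sort")
```

The point of the sketch is only that the two accounting rules diverge
by a factor of n_sorts squared, which is why the planner's cost
estimates would need to know which rule is in force.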
When you start to try to balance memory usage across multiple
backends, things get even more complicated. If the first query that
starts up is allowed to use all the available memory, and we respond
to that by lowering the effective value of work_mem to something very
small, a second query that shows up a bit later might choose a very
inefficient plan as a result. That in turn might cause heavy I/O load
on the system for a long time, making the first query run very slowly.
We might have been better off just letting the first query finish, and
then running the second one (with a much better plan) after it was
done. Or, maybe we should have only let the first query take a
certain fraction (half? 10%?) of the available memory, so that there
was more left for the second guy. But that could be wrong too - it
might cause the first plan to be unnecessarily inefficient when nobody
was planning to run any other queries anyway. Plus, DBAs hate it when
plans change on them unexpectedly, so anything that involves a
feedback loop between current utilization and query plans will be
unpopular with some people for that reason.
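The allocation dilemma can be made concrete with a toy model (purely
illustrative Python; the total pool size and the two policies are
assumptions for the sketch, not a proposal for PostgreSQL's behavior):

```python
TOTAL_MB = 1024  # assumed pool of memory shared by all backends

def grant(arrivals, fraction):
    # Each arriving query is granted `fraction` of whatever memory
    # remains at that moment; later arrivals get what is left over.
    remaining = float(TOTAL_MB)
    grants = []
    for _ in arrivals:
        g = remaining * fraction
        grants.append(g)
        remaining -= g
    return grants

# Greedy policy: the first query takes everything, so a second query
# arriving later starves and is pushed into an inefficient plan.
print(grant(["q1", "q2"], 1.0))   # [1024.0, 0.0]
# Conservative policy: memory is held back for a second query that may
# never arrive, making the first plan needlessly pessimistic.
print(grant(["q1", "q2"], 0.5))   # [512.0, 256.0]
```

Neither policy dominates: which one wins depends on whether the second
query actually shows up, which is exactly the information the system
does not have at planning time.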
These are hard problems.