On Thu, Sep 24, 2015 at 1:58 PM, Tomas Vondra wrote:
> Meh, you're right - I got the math wrong. It's 1.3% in both cases.
> However the question still stands - why should we handle the over-estimate
> in one case and not the other? We're wasting the same fraction of memory in
> both cases.
Well, I think we're going around in circles here. It doesn't seem
likely that either of us will convince the other.
But for the record, I agree with you that in the scenario you lay out,
it's about the same problem in both cases. I could argue that
it's slightly different because of [ tedious and somewhat tenuous
argument omitted ], but I'll spare you that. However, consider the
alternative scenario where, on the same machine, perhaps even in the
same query, we perform two hash joins, one of which involves hashing a
small table (say, 2MB) and one of which involves hashing a big table
(say, 2GB). If the small join uses twice the intended amount of
memory, probably nothing bad will happen. If the big join does the
same thing, a bad outcome is much more likely. Say the machine has
16GB of RAM. Well, a 2MB over-allocation is not going to break the
world. A 2GB over-allocation very well might.
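To make the arithmetic concrete, here is a small illustrative calculation (not PostgreSQL code, just the numbers from the scenario above): the same 2x relative overrun translates into very different fractions of a 16GB machine.

```python
# Illustrative sketch: same relative over-allocation (2x the intended
# hash table size), very different absolute impact on a 16GB machine.
RAM_MB = 16 * 1024  # total machine memory in MB (the 16GB example above)

for intended_mb in (2, 2 * 1024):  # small (2MB) and big (2GB) hash tables
    actual_mb = 2 * intended_mb           # each uses twice its intended memory
    overrun_mb = actual_mb - intended_mb  # absolute over-allocation
    pct_of_ram = 100.0 * overrun_mb / RAM_MB
    print(f"intended {intended_mb} MB -> overrun {overrun_mb} MB "
          f"({pct_of_ram:.3f}% of RAM)")
```

The small join's overrun is about 0.01% of RAM; the big join's is 12.5%, which is the asymmetry being argued here.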
I really don't see why this is a controversial proposition. It seems
as clear as daylight from here.
The Enterprise PostgreSQL Company