On Sun, 2013-03-31 at 15:45 -0400, Tom Lane wrote:
> Really, when we're traipsing down a bucket list, skipping over bucket
> entries with the wrong hash code is just about free, or at least it's a
> whole lot cheaper than applying ExecQual.
>
> Perhaps what we should do is charge the hash_qual_cost only for some
> small multiple of the number of tuples that we expect will *pass* the
> hash quals, which is a number we have to compute anyway.  The multiple
> would represent the rate of hash-code collisions we expect.
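(Purely for illustration, a minimal C sketch of that charging rule; the
function name, parameter names, and the 1.05 collision factor are
assumptions made up for this example, not the planner's actual
cost_hashjoin code.)

    typedef double Cost;

    /* assumed rate of hash-code collisions; not a number from the discussion */
    #define HASH_COLLISION_FACTOR 1.05

    /*
     * Charge the hash quals only for the tuples expected to pass them,
     * scaled by the assumed collision rate, rather than for every bucket
     * entry visited.
     */
    static Cost
    sketch_hashqual_charge(double expected_matches, /* tuples expected to pass the hash quals */
                           Cost hash_qual_cost)     /* per-tuple cost of evaluating the hash quals */
    {
        return hash_qual_cost * expected_matches * HASH_COLLISION_FACTOR;
    }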
+1.

> I'd still be inclined to charge something per bucket entry, but it
> should be really small, perhaps on the order of 0.01 times
> cpu_operator_cost.
>
> Or we could just drop that term entirely.

FWIW, either of those is fine with me, based on my limited experience.

> Maybe what we should be doing with the bucketsize numbers is estimating
> peak memory consumption to gate whether we'll accept the plan at all,
> rather than adding terms to the cost estimate.

Sounds reasonable. Ideally, we'd have a way to continue executing even in
that case; but that's a project by itself, and would make it even more
difficult to cost accurately.

Regards,
	Jeff Davis
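(Likewise illustrative only, sketching the memory-gating idea quoted above:
reject a hash-join path outright when the estimated hash table would exceed
its memory budget.  The function name, parameters, and the simple sizing
estimate are assumptions for the example.)

    #include <stdbool.h>

    /*
     * Gate plan acceptance on estimated peak hash-table memory use instead
     * of folding memory consumption into the cost estimate.
     */
    static bool
    sketch_hashtable_fits(double inner_rows,       /* estimated rows on the hashed side */
                          double avg_tuple_bytes,  /* estimated tuple width plus per-entry overhead */
                          double hash_mem_bytes)   /* memory budget for the hash table */
    {
        double estimated_bytes = inner_rows * avg_tuple_bytes;

        return estimated_bytes <= hash_mem_bytes;
    }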