I'm trying to figure out why hash joins seem to be systematically underused in my hands. In the case I'm looking at right now, the planner prefers a merge join with both inputs seq scanned and sorted, even though the hash join is actually 2 to 3 times faster; the inputs and all intermediate working sets fit in memory. I normally wouldn't worry about a factor-of-3 error, but I see this a lot in many different situations. The row estimates are very close to the actuals; the error is only in the CPU cost estimates.
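(To make the comparison concrete, here is roughly what I'm doing -- the table and column names are stand-ins, not my actual schema:

    EXPLAIN ANALYZE
    SELECT * FROM fact f JOIN dim d ON f.dim_id = d.id;
    -- planner's choice: Merge Join over two Sort/Seq Scan inputs

    SET enable_mergejoin = off;   -- knock out its preferred plan
    EXPLAIN ANALYZE
    SELECT * FROM fact f JOIN dim d ON f.dim_id = d.id;
    -- now it uses the Hash Join, which runs 2-3x faster despite
    -- carrying the higher estimated cost

)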
A hash join is charged cpu_tuple_cost for each inner tuple it inserts into the hash table:

     * charge one cpu_operator_cost for each column's hash function.  Also,
     * tack on one cpu_tuple_cost per inner row, to model the costs of
     * inserting the row into the hashtable.

But a sort is not charged a similar amount for inserting a tuple into its memory pool:

     * Also charge a small amount (arbitrarily set equal to operator cost) per
     * extracted tuple.  We don't charge cpu_tuple_cost because a Sort node
     * doesn't do qual-checking or projection, so it has less overhead than
     * most plan nodes.  Note it's correct to use tuples not output_tuples

Are these operations different enough to justify this difference? The qual-checking (and, I think, projection) needed on a hash join should already have been performed by, and costed to, the seq scan feeding the hash join, right?
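To put rough numbers on it, with the default cost parameters (cpu_tuple_cost = 0.01, cpu_operator_cost = 0.0025) and a single join column, the per-tuple charges from the comments quoted above work out to:

    hash join, per inner tuple:
        hash function:        1 * 0.0025 = 0.0025
        hashtable insertion:      0.0100
        total:                    0.0125

    sort, per extracted tuple:
        "small amount":           0.0025

So on this term alone the hash join's build side is charged 5x what the sort is, before the sort's comparison costs (charged separately, at 2 * cpu_operator_cost per comparison if I'm reading cost_sort right) enter into it at all.

Cheers,

Jeff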