Updated patch attached:

1) Removes #if 0 optimizations

2) Changes #if 0 to #if NOT_USED for code that's there for completeness and to
   keep the code self-documenting, but isn't needed by anything

3) Fixes the cost model to represent bounded sorts

Attachment: sort-limit-v7.patch.gz
Description: Binary data

"Gregory Stark" <[EMAIL PROTECTED]> writes:

> "Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
>> There's a few blocks of code surrounded with "#if 0 - #endif". Are those just
>> leftovers that should be removed, or things that still need to be finished
>> and enabled?
> Uhm, I don't remember, will go look, thanks.

Ok, they were left over from an optimization that I decided wasn't very
important to pursue. The code that was ifdef'd out detected when a disk sort
could abort a merge because it had already generated enough tuples to satisfy
the limit.

But I never wrote the code to actually abort the run, and it looks a bit
tricky. In any case the disk sort use case is extremely narrow: you would need
something like "LIMIT 50000" or more, and the input table would have to be
huge enough to cause multiple rounds of merges.
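(For anyone following along: the common in-memory case — keep only the top N
tuples in a bounded heap so memory and comparisons stay proportional to the
limit rather than the input — looks roughly like this Python sketch. It's my
own illustration of the technique, not code from the patch.)

```python
import heapq
from itertools import count

def bounded_sort(rows, limit, key):
    """Return the `limit` smallest rows (by `key`) in ascending order,
    holding only `limit` rows in memory at any time."""
    tiebreak = count()          # unique counter so rows are never compared
    heap = []                   # max-heap (negated keys) of current best rows
    for row in rows:
        entry = (-key(row), next(tiebreak), row)
        if len(heap) < limit:
            heapq.heappush(heap, entry)
        elif entry[0] > heap[0][0]:
            # New row beats the current worst of the top `limit`:
            # replace it with a single sift operation.
            heapq.heapreplace(heap, entry)
    # Negated keys sorted descending == original keys ascending.
    return [row for _, _, row in sorted(heap, reverse=True)]
```

Each input tuple costs at most one heap operation on a heap of `limit`
entries, which is where the savings over a full sort come from.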

I think I've figured out how to adjust the cost model. It turns out that it
usually doesn't matter whether the cost model is correct, since any case where
the optimization kicks in is one where you're reading only a small portion of
the input, which means an index would be *much* better if one were available.
So the optimization only ever gets used when there's no index. Nonetheless
it's nice to get the estimates right so that higher levels in the plan get
reasonable values.

At least the results look reasonable, though I'm not sure I've done it the
"right" way.
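The shape of model I'd expect (my own guess at the formulas and constants, not
necessarily what the patch does): a bounded heap sort of N tuples with limit k
costs about N * log2(2k) comparisons instead of N * log2(N) for a full sort,
with each comparison charged some multiple of cpu_operator_cost:

```python
from math import log2

def sort_cost(n_tuples, limit=None, cpu_operator_cost=0.0025):
    """Rough comparison-count cost model (illustrative only).

    A full sort does about N * log2(N) comparisons; a bounded (top-k)
    heap sort does about one heap operation per input tuple on a heap
    of roughly 2*k entries, i.e. N * log2(2k) comparisons. The per-
    comparison charge of 2 * cpu_operator_cost is an assumed constant.
    """
    n = max(n_tuples, 2)        # avoid log2 of 0 or 1
    if limit is not None and limit < n:
        comparisons = n * log2(2 * limit)
    else:
        comparisons = n * log2(n)
    return 2.0 * cpu_operator_cost * comparisons
```

The point is just that the bounded cost depends on log2 of the limit rather
than of the input size, so for small limits the estimate drops sharply while
still scaling linearly in N.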

  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
