>This of course requires the VolcanoCost to be adapted.

What do you think of HepPlanner?
It uses RelOptCostImpl.FACTORY by default, which explicitly ignores the CPU
and IO cost factors :((

Regarding cost#rows, there's a problem: the cost#rows field does not add up
well when computing cumulative cost.
What if we put the number of _rejected_ rows into the cost#rows field?

Then the field would have a definite meaning:
* If the value is high, the plan is probably rejecting a lot of unrelated
rows, so it is suboptimal.
* Extra Project/Calc nodes won't artificially inflate the rows in the cost
fields. Currently each Project adds its own "rows", which is not ideal.
* It is clear what to put into the rows field: "rejected rows" is
more-or-less self-explanatory. For Project it would be 0.
* Join/Filter/Calc nodes would show "estimated number of returned rows=X
(from the metadata query), rejected rows=Y (from the cost)", which would
help in understanding where the time is spent.
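To make the arithmetic concrete, here is a minimal plain-Java sketch of the idea (not Calcite code; the class and method names are illustrative). It models each node's cost#rows contribution as the rows that node rejects, so a Filter contributes the rows it removes, a Project contributes 0, and the cumulative cost is a plain sum that extra Projects cannot inflate:

```java
// Hypothetical sketch of the "rejected rows" proposal; names are
// illustrative, not Calcite APIs.
public class RejectedRowsCostDemo {
    // A Filter's contribution: the rows it removes from its input.
    static double filterRejected(double inputRows, double selectivity) {
        return inputRows * (1.0 - selectivity);
    }

    // A Project passes every row through, so it rejects none.
    static double projectRejected() {
        return 0.0;
    }

    public static void main(String[] args) {
        double scanRows = 1_000;
        // Filter keeps 10% of the input, rejecting the other 90%.
        double filterCost = filterRejected(scanRows, 0.1);
        double projectCost = projectRejected();
        // Cumulative cost is a plain sum; adding more Projects
        // does not artificially increase it.
        double cumulative = filterCost + projectCost;
        System.out.println(cumulative); // prints 900.0
    }
}
```

Under the current scheme, each Project would instead contribute its full output row count, so the same plan with an extra Project would look "more expensive" without doing any extra filtering work.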

This is inspired by PostgreSQL's "rows removed by filter" output in EXPLAIN
ANALYZE (which executes the statement and collects statistics for each
execution plan node):
http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2#Explain_improvements

Vladimir
