[
https://issues.apache.org/jira/browse/DERBY-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16097372#comment-16097372
]
Bryan Pendleton commented on DERBY-6938:
----------------------------------------
Thank you for exploring this behavior in more detail, and for the clear
explanation. It is very helpful!
Perhaps the critical question here involves the Optimizer's prediction about
whether the intermediate results will fit in memory. That is the area where
the quality and accuracy of the cardinality and selectivity estimates are
crucial: if those estimates are poor, the Optimizer's prediction about whether
the intermediate results will fit in memory will be inaccurate. It might then
avoid a hash join in a case where that would in fact be the best approach,
or, conversely, choose a hash join in a case where the intermediate results
are actually very large, so the hash join performs very poorly.
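To make the failure mode concrete, here is a minimal sketch (purely
illustrative, not Derby's actual optimizer code; the class, method, and
thresholds are all assumptions) of how a cost-based optimizer might turn a
cardinality estimate into a join-strategy choice, and why a bad estimate
flips the decision:

```java
// Hypothetical sketch, NOT Derby's real optimizer: a cost-based chooser
// that picks a join strategy from an estimated intermediate-result size.
public class JoinStrategyChooser {

    /**
     * Decide between a hash join and a nested-loop join based on whether
     * the estimated build-side intermediate result fits in the memory
     * budget. All names and numbers are illustrative assumptions.
     */
    static String chooseJoinStrategy(double estimatedRows,
                                     double avgRowSizeBytes,
                                     long memoryBudgetBytes) {
        double estimatedBytes = estimatedRows * avgRowSizeBytes;
        // This comparison is exactly where a poor cardinality estimate
        // does damage: an underestimate picks a hash join whose build
        // table does not actually fit, and an overestimate skips a hash
        // join that would in fact have been the best approach.
        if (estimatedBytes <= memoryBudgetBytes) {
            return "hash";        // predicted to fit in memory
        }
        return "nested-loop";     // predicted too large for an in-memory hash table
    }

    public static void main(String[] args) {
        // 10,000 rows * 100 bytes = ~1 MB, within a 4 MB budget -> hash
        System.out.println(chooseJoinStrategy(10_000, 100, 4 * 1024 * 1024));
        // 1,000,000 rows * 100 bytes = ~100 MB, over budget -> nested-loop
        System.out.println(chooseJoinStrategy(1_000_000, 100, 4 * 1024 * 1024));
    }
}
```

If the true row count were 1,000,000 but the estimate said 10,000, this
chooser would confidently pick the hash join and pay for it at execution
time, which is the scenario described above.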
> Obtain cardinality estimates and true estimates for base tables as well as
> for intermediate results for queries involving multiple joins.
> -------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: DERBY-6938
> URL: https://issues.apache.org/jira/browse/DERBY-6938
> Project: Derby
> Issue Type: Sub-task
> Components: SQL
> Reporter: Harshvardhan Gupta
> Assignee: Harshvardhan Gupta
> Attachments: explain.txt, traceout.txt
>
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)