[ https://issues.apache.org/jira/browse/DERBY-1906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dag H. Wanvik updated DERBY-1906:
---------------------------------

    Derby Categories: [Performance]

> Investigate appropriateness of current optimizer timeout mechanism.
> -------------------------------------------------------------------
>
>                 Key: DERBY-1906
>                 URL: https://issues.apache.org/jira/browse/DERBY-1906
>             Project: Derby
>          Issue Type: Task
>          Components: SQL
>    Affects Versions: 10.1.3.2, 10.2.1.6, 10.3.1.4
>            Reporter: A B
>
> The Derby optimizer timeout mechanism is based on the assumption that the 
> cost estimates calculated by the optimizer represent the number of 
> milliseconds the optimizer thinks it will take for the query to execute.  
> For this to be effective the cost estimates need to be relatively accurate; 
> otherwise, as described in DERBY-1905, the timeout mechanism quickly becomes 
> ineffective (a sketch of this rule appears after the quoted description 
> below).
>
> While it is true that incorrect cost estimates are perhaps the biggest cause 
> of ineffective optimizer timeout, I can't help but wonder if there might be 
> some other way to handle optimizer timeout that is not so dependent on the 
> correctness of a costEstimate-to-milliseconds mapping.  Maybe, for example, 
> an algorithm based on deltas between query plan cost estimates (also 
> sketched below) would be more appropriate?
>
> I do not know what alternatives exist or which, if any, would be more "right" 
> for Derby than a millisecond-based approach, but I think it's worth searching 
> out and/or coming up with algorithms/ideas that might prove more effective 
> for Derby optimizer timeout--especially when dealing with very deeply-nested 
> queries that have large FROM lists, such as those attached to DERBY-1777.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
