[ 
https://issues.apache.org/jira/browse/IMPALA-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenzhe Zhou updated IMPALA-12909:
---------------------------------
    Description: 
For a query that accesses multiple JDBC tables, the Planner generates a 
single-node plan. It would be better to generate a distributed plan so that 
JDBC reads could be scheduled on executors. This restriction comes from the 
current design of the external data source framework: the scan is single 
threaded, and DataSourceScanNode cannot run on a node other than the 
coordinator. 
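As a minimal illustration (the table names are hypothetical), a query of this 
shape, where every scan is against an external JDBC table, is the case that 
currently falls back to a single-node plan:

```sql
-- jdbc_orders and jdbc_customers are hypothetical tables backed by the
-- external JDBC data source. Since all scans in this query are JDBC scans,
-- the Planner produces a single-node plan running on the coordinator.
SELECT c.name, SUM(o.amount)
FROM jdbc_orders o
JOIN jdbc_customers c ON o.customer_id = c.id
GROUP BY c.name;
```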



  was:
For a query that accesses multiple JDBC tables, the Planner generates a 
single-node plan. It would be better to generate a distributed plan so that 
Impala could open multiple JDBC connections in parallel. This restriction 
comes from the current design of the external data source framework: the scan 
is single threaded, and DataSourceScanNode cannot run on a node other than 
the coordinator. 
There is no issue for a query with a join between a JDBC table and a non-JDBC 
table. We hit this issue only when all scans are JDBC table scans.


> Generate distributed plan for query accessing multiple JDBC tables
> ------------------------------------------------------------------
>
>                 Key: IMPALA-12909
>                 URL: https://issues.apache.org/jira/browse/IMPALA-12909
>             Project: IMPALA
>          Issue Type: Sub-task
>          Components: Frontend
>            Reporter: Wenzhe Zhou
>            Assignee: Pranav Yogi Lodha
>            Priority: Major
>
> For a query that accesses multiple JDBC tables, the Planner generates a 
> single-node plan. It would be better to generate a distributed plan so that 
> JDBC reads could be scheduled on executors. This restriction comes from the 
> current design of the external data source framework: the scan is single 
> threaded, and DataSourceScanNode cannot run on a node other than the 
> coordinator. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
