[ https://issues.apache.org/jira/browse/HIVE-12951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120629#comment-15120629 ]

Rui Li commented on HIVE-12951:
-------------------------------

Thanks Xuefu for the clarifications!
I think "expected resources" means something like {{spark.executor.instances}}
(not considering dynamic allocation). These Spark configurations are intended
for job execution, i.e. pre-warming executors before any tasks are scheduled.
But they won't help decide the parallelism on the Hive side. Therefore what we
do here still makes sense.
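
To illustrate the distinction, here is a minimal hedged sketch (not Hive's actual code): the executor count comes from cluster-side Spark settings such as {{spark.executor.instances}}, while the Hive-side reduce parallelism is derived from input size via {{hive.exec.reducers.bytes.per.reducer}} and capped by {{hive.exec.reducers.max}}. The {{estimateReducers}} helper below is hypothetical and only mirrors that heuristic.

{code:java}
// Hedged sketch: executor count and Hive-side parallelism are decided
// independently. estimateReducers() is a hypothetical helper, not Hive's API.
public class ParallelismSketch {

  // Cluster-side resources: fixed by Spark configs such as
  // spark.executor.instances (ignoring dynamic allocation here).
  static int configuredExecutors(java.util.Map<String, String> sparkConf) {
    return Integer.parseInt(sparkConf.getOrDefault("spark.executor.instances", "2"));
  }

  // Hive-side parallelism: derived from input size, not from executor count.
  // Mirrors the bytes-per-reducer heuristic controlled by
  // hive.exec.reducers.bytes.per.reducer and hive.exec.reducers.max.
  static int estimateReducers(long totalInputBytes, long bytesPerReducer, int maxReducers) {
    int reducers = (int) Math.ceil((double) totalInputBytes / bytesPerReducer);
    return Math.max(1, Math.min(reducers, maxReducers));
  }

  public static void main(String[] args) {
    java.util.Map<String, String> conf = new java.util.HashMap<>();
    conf.put("spark.executor.instances", "10");
    System.out.println("executors requested: " + configuredExecutors(conf));
    // 100 GB of input at 256 MB per reducer, capped at 1009 reducers.
    System.out.println("reducers estimated:  "
        + estimateReducers(100L << 30, 256L << 20, 1009));
  }
}
{code}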

The patch LGTM. My one concern is that it may cause some test diffs. Otherwise +1.

> Reduce Spark executor prewarm timeout to 5s
> -------------------------------------------
>
>                 Key: HIVE-12951
>                 URL: https://issues.apache.org/jira/browse/HIVE-12951
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>         Attachments: HIVE-12951.patch
>
>
> Currently it's set to 30s, which tends to be longer than needed. Reduce it to 
> 5s, considering only JVM startup time. (Eventually, we may want to make this 
> configurable.)
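
For context, the prewarm behavior being tuned is essentially a bounded, best-effort wait for executors to register before the first job is submitted. The snippet below is a hypothetical sketch of that idea, not the actual HIVE-12951 patch; {{getRegisteredExecutors()}} is a stand-in for however the client counts live executors.

{code:java}
// Hypothetical sketch of a bounded executor-prewarm wait; not the actual
// HIVE-12951 patch.
public class PrewarmSketch {

  // Stub: in a real client this would query the number of live executors.
  static int getRegisteredExecutors() {
    return 0;
  }

  // Wait until enough executors have registered or the timeout expires
  // (e.g. 5000 ms instead of the previous 30000 ms). Prewarm is best-effort,
  // so hitting the timeout is not treated as an error.
  static void waitForPrewarm(int expectedExecutors, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline
        && getRegisteredExecutors() < expectedExecutors) {
      Thread.sleep(100);  // poll until executors register or time runs out
    }
  }
}
{code}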



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
