[ https://issues.apache.org/jira/browse/HIVE-12951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120795#comment-15120795 ]

Rui Li commented on HIVE-12951:
-------------------------------

Generally speaking, I think we have a better chance of getting more reducers with 
the expected resources, because the RM won't allocate more resources than 
requested, right? But #reducers is not determined by #executors alone, and like 
you said we will need to handle dynamic allocation differently. So I agree we can 
decide this later when there's a concrete need for it.
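For what it's worth, here is a minimal sketch of the kind of estimate I mean. It is illustrative only, not the actual SetSparkReducerParallelism logic, and the parameter names are made up for the example: the reducer count is capped by both the input size and the available cores, so requesting more executors only helps when the data-driven bound isn't already the limiting factor.

{code:java}
// Rough sketch only (names are illustrative, not Hive's actual code):
// #reducers depends on input size as well as cluster capacity, so more
// executors alone don't guarantee more reducers.
public class ReducerEstimateSketch {
  static int estimateReducers(long totalInputBytes, long bytesPerReducer,
                              int executors, int coresPerExecutor, int maxReducers) {
    // Upper bound from the data: roughly one reducer per bytesPerReducer of input.
    int byData = (int) Math.max(1, totalInputBytes / bytesPerReducer);
    // Upper bound from the cluster: total cores currently available.
    int byCluster = Math.max(1, executors * coresPerExecutor);
    // With dynamic allocation, 'executors' is a moving target, which is why that
    // case would need to be handled differently.
    return Math.min(maxReducers, Math.min(byData, byCluster));
  }
}
{code}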

> Reduce Spark executor prewarm timeout to 5s
> -------------------------------------------
>
>                 Key: HIVE-12951
>                 URL: https://issues.apache.org/jira/browse/HIVE-12951
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>         Attachments: HIVE-12951.patch
>
>
> Currently it's set to 30s, which tends to be longer than needed. Reduce it to 
> 5s, considering only the JVM startup time. (Eventually, we may want to make 
> this configurable.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)