[ 
https://issues.apache.org/jira/browse/HIVE-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15126866#comment-15126866
 ] 

Xuefu Zhang commented on HIVE-12650:
------------------------------------

Thanks for the clarification, [~vanzin]. I agree with you. Do you know what 
factors (such as a lack of available executors) might make the Spark AM wait a 
longer period of time (say, a minute) for the SparkContext to be initialized? 
The problem seems to be that Hive times out first while the AM still appears to 
be running, waiting for the context to be initialized. It will eventually fail 
when either the context gets initialized or the timeout occurs, which might look 
a bit confusing. I'm thinking that if we make Hive wait longer than that, we can 
avoid this scenario. Any further thoughts?


> Increase default value of hive.spark.client.server.connect.timeout to exceed 
> spark.yarn.am.waitTime
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-12650
>                 URL: https://issues.apache.org/jira/browse/HIVE-12650
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Xuefu Zhang
>
> I think hive.spark.client.server.connect.timeout should be set greater than 
> spark.yarn.am.waitTime. The default value for spark.yarn.am.waitTime is 100s, 
> while the default value for hive.spark.client.server.connect.timeout is only 
> 90s, so Hive gives up before the AM does. We can increase it to a larger value 
> such as 120s.
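
For illustration only (not part of the original report): a minimal sketch of 
what the proposed change could look like in hive-site.xml, assuming the 
property accepts a millisecond value with a unit suffix, as its 90000ms default 
suggests, so that 120s comfortably exceeds the 100s default of 
spark.yarn.am.waitTime:

    <!-- hive-site.xml: keep the Hive-side connect timeout above
         spark.yarn.am.waitTime (default 100s) -->
    <property>
      <name>hive.spark.client.server.connect.timeout</name>
      <value>120000ms</value>
    </property>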



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
