[ 
https://issues.apache.org/jira/browse/HIVE-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129392#comment-15129392
 ] 

Xuefu Zhang commented on HIVE-12650:
------------------------------------

Thanks, [~vanzin].

If there is no timeout in spark-submit (it waits indefinitely), I'm wondering 
what happens when the cluster is busy. Here is my speculation: Hive will time 
out first (which also matches Rui's observation), but spark-submit will 
continue to run. Once a container becomes available, the Spark AM will start 
and try to connect to Hive. Hive, of course, will refuse the connection, and 
the AM will then error out.

I'm not sure if this is what the user experienced. It would be good if we 
could cancel the submit. However, it doesn't look too bad even if we decide to 
live with it.

Unless [[email protected]] can provide more info, it doesn't seem we can 
do much here.
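
For reference, the change proposed in this ticket amounts to raising the 
Hive-side timeout above Spark's AM wait time. A minimal sketch of the 
hive-site.xml change (the property name is real; the 120s value is the 
suggestion from this ticket, and the accepted unit syntax may vary by Hive 
version):

```xml
<!-- hive-site.xml: raise the Hive Spark client server timeout above
     spark.yarn.am.waitTime (Spark default: 100s), per this ticket -->
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <!-- default is 90000ms; unit suffix support depends on Hive version -->
  <value>120000ms</value>
</property>
```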

> Increase default value of hive.spark.client.server.connect.timeout to exceed 
> spark.yarn.am.waitTime
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-12650
>                 URL: https://issues.apache.org/jira/browse/HIVE-12650
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Xuefu Zhang
>
> I think hive.spark.client.server.connect.timeout should be set greater than 
> spark.yarn.am.waitTime. The default value for spark.yarn.am.waitTime is 
> 100s, while the default value for hive.spark.client.server.connect.timeout 
> is only 90s, so Hive can time out before the Spark AM has even started. We 
> can increase it to a larger value such as 120s.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
