[
https://issues.apache.org/jira/browse/HIVE-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rui Li updated HIVE-12650:
--------------------------
Attachment: HIVE-12650.1.patch
Assigned this to myself and uploaded a patch.
The main change in the patch is that we no longer error out when pre-warming
times out. I think this makes sense because pre-warming is already allowed to
time out, so a timeout shouldn't be treated as a fatal error.
The patch also adds some explanations to the error messages, so they should be
more user friendly.
One thing I noticed while testing the patch is that, in yarn-client mode, we
may end up with a hanging Spark AM trying to connect to a driver that has
already timed out. But I think we have to live with that; the AM will
eventually give up and exit.
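For reference, the mismatch described below can be worked around today by
raising the Hive-side timeout above Spark's AM wait time. A minimal sketch,
assuming the properties are set in hive-site.xml (the 120s value is the one
suggested in the issue description; adjust for your cluster):

```xml
<!-- hive-site.xml: keep hive.spark.client.server.connect.timeout
     above spark.yarn.am.waitTime (Spark default: 100s) so Hive does
     not kill spark-submit before the AM can connect back. -->
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>120000ms</value> <!-- default is 90000ms -->
</property>
```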
> Spark-submit is killed when Hive times out. Killing spark-submit doesn't
> cancel AM request. When AM is finally launched, it tries to connect back to
> Hive and gets refused.
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-12650
> URL: https://issues.apache.org/jira/browse/HIVE-12650
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.1.1, 1.2.1
> Reporter: JoneZhang
> Assignee: Rui Li
> Attachments: HIVE-12650.1.patch
>
>
> I think hive.spark.client.server.connect.timeout should be set greater than
> spark.yarn.am.waitTime. The default value of
> spark.yarn.am.waitTime is 100s, while the default value of
> hive.spark.client.server.connect.timeout is 90s, which is not good. We could
> increase it to a larger value such as 120s.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)