[ https://issues.apache.org/jira/browse/HIVE-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15127507#comment-15127507 ]
Xuefu Zhang commented on HIVE-12650:
------------------------------------
Here is the log that the JIRA creator provided:
{code}
Logs of Application_1448873753366_121022 as follows (same as application_1448873753366_121055):
Container: container_1448873753366_121022_03_000001 on 10.226.136.122_8041
============================================================================
LogType: stderr
LogLength: 4664
Log Contents:
Please use CMSClassUnloadingEnabled in place of CMSPermGenSweepingEnabled in the future
Please use CMSClassUnloadingEnabled in place of CMSPermGenSweepingEnabled in the future
15/12/09 16:29:45 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/12/09 16:29:46 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1448873753366_121022_000003
15/12/09 16:29:47 INFO spark.SecurityManager: Changing view acls to: mqq
15/12/09 16:29:47 INFO spark.SecurityManager: Changing modify acls to: mqq
15/12/09 16:29:47 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mqq); users with modify permissions: Set(mqq)
15/12/09 16:29:47 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
15/12/09 16:29:47 INFO yarn.ApplicationMaster: Waiting for spark context initialization
15/12/09 16:29:47 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
15/12/09 16:29:47 INFO client.RemoteDriver: Connecting to: 10.179.12.140:38842
15/12/09 16:29:48 WARN rpc.Rpc: Invalid log level null, reverting to default.
15/12/09 16:29:48 ERROR yarn.ApplicationMaster: User class threw exception: java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
    at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:37)
    at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:156)
    at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:556)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:483)
Caused by: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
    at org.apache.hive.spark.client.rpc.Rpc$SaslClientHandler.dispose(Rpc.java:449)
    at org.apache.hive.spark.client.rpc.SaslHandler.channelInactive(SaslHandler.java:90)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:219)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
    at org.apache.hive.spark.client.rpc.KryoMessageCodec.channelInactive(KryoMessageCodec.java:127)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:219)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:219)
    at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:769)
    at io.netty.channel.AbstractChannel$AbstractUnsafe$5.run(AbstractChannel.java:567)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at java.lang.Thread.run(Thread.java:745)
15/12/09 16:29:48 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.)
15/12/09 16:29:57 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 150000 ms. Please check earlier log output for errors. Failing the application.
15/12/09 16:29:57 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.)
15/12/09 16:29:57 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1448873753366_121022
15/12/09 16:29:57 INFO util.Utils: Shutdown hook called
LogType: stdout
LogLength: 216
Log Contents:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseCompressedStrings; support was removed in 7.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseCompressedStrings; support was removed in 7.0
{code}
The interesting part of the log is:
{code}
15/12/09 16:29:57 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 150000 ms. Please check earlier log output for errors. Failing the application.
{code}
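For context, the 150000 ms in that message appears to be spark.yarn.am.waitTime as configured on this cluster, while HiveServer2 stops waiting for the remote driver after hive.spark.client.server.connect.timeout (90s by default), which would explain the "Client closed before SASL negotiation finished" error above. A minimal sketch of the kind of alignment being proposed, with purely illustrative values (not the actual patch):
{code}
-- Hive side (hive-site.xml or per session); keep this timeout above
-- spark.yarn.am.waitTime so the Hive client does not give up on the
-- remote driver first. The 120000ms value is only an example.
set hive.spark.client.server.connect.timeout=120000ms;

-- Spark side (spark-defaults.conf, or passed through as a spark.* property):
-- spark.yarn.am.waitTime=100s
{code}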
I have also shared this with the thread on the user mailing list via email, for reference.
> Increase default value of hive.spark.client.server.connect.timeout to exceed spark.yarn.am.waitTime
> ----------------------------------------------------------------------------------------------------
>
> Key: HIVE-12650
> URL: https://issues.apache.org/jira/browse/HIVE-12650
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.1.1, 1.2.1
> Reporter: JoneZhang
> Assignee: Xuefu Zhang
>
> I think hive.spark.client.server.connect.timeout should be set greater than
> spark.yarn.am.waitTime. The default value for spark.yarn.am.waitTime is 100s,
> while the default value for hive.spark.client.server.connect.timeout is only
> 90s, so the Hive side gives up before the Spark AM's wait time has elapsed.
> We can increase it to a larger value such as 120s.