[jira] [Commented] (HIVE-15859) Hive client side shows Spark Driver disconnected while Spark Driver side could not get RPC header
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885126#comment-15885126 ]

Xuefu Zhang commented on HIVE-15859:
------------------------------------

+1

> Hive client side shows Spark Driver disconnected while Spark Driver side
> could not get RPC header
> --
>
> Key: HIVE-15859
> URL: https://issues.apache.org/jira/browse/HIVE-15859
> Project: Hive
> Issue Type: Bug
> Components: Hive, Spark
> Affects Versions: 2.2.0
> Environment: hadoop2.7.1
>              spark1.6.2
>              hive2.2
> Reporter: KaiXu
> Assignee: Rui Li
> Attachments: HIVE-15859.1.patch, HIVE-15859.2.patch, HIVE-15859.3.patch
>
> Hive on Spark, failed with error:
> {noformat}
> 2017-02-08 09:50:59,331 Stage-2_0: 1039(+2)/1041 Stage-3_0: 796(+456)/1520 Stage-4_0: 0/2021 Stage-5_0: 0/1009 Stage-6_0: 0/1
> 2017-02-08 09:51:00,335 Stage-2_0: 1040(+1)/1041 Stage-3_0: 914(+398)/1520 Stage-4_0: 0/2021 Stage-5_0: 0/1009 Stage-6_0: 0/1
> 2017-02-08 09:51:01,338 Stage-2_0: 1041/1041 Finished Stage-3_0: 961(+383)/1520 Stage-4_0: 0/2021 Stage-5_0: 0/1009 Stage-6_0: 0/1
> Failed to monitor Job[ 2] with exception 'java.lang.IllegalStateException(RPC channel is closed.)'
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
> {noformat}
> The application log shows the driver commanded a shutdown for some unknown reason, but Hive's log shows the driver could not get the RPC header (Expected RPC header, got org.apache.hive.spark.client.rpc.Rpc$NullMessage instead).
> {noformat}
> 17/02/08 09:51:04 INFO exec.Utilities: PLAN PATH = hdfs://hsx-node1:8020/tmp/hive/root/b723c85d-2a7b-469e-bab1-9c165b25e656/hive_2017-02-08_09-49-37_890_6267025825539539056-1/-mr-10006/71a9dacb-a463-40ef-9e86-78d3b8e3738d/map.xml
> 17/02/08 09:51:04 INFO executor.Executor: Executor killed task 1169.0 in stage 3.0 (TID 2519)
> 17/02/08 09:51:04 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown
> 17/02/08 09:51:04 INFO storage.MemoryStore: MemoryStore cleared
> 17/02/08 09:51:04 INFO storage.BlockManager: BlockManager stopped
> 17/02/08 09:51:04 INFO exec.Utilities: PLAN PATH = hdfs://hsx-node1:8020/tmp/hive/root/b723c85d-2a7b-469e-bab1-9c165b25e656/hive_2017-02-08_09-49-37_890_6267025825539539056-1/-mr-10006/71a9dacb-a463-40ef-9e86-78d3b8e3738d/map.xml
> 17/02/08 09:51:04 WARN executor.CoarseGrainedExecutorBackend: An unknown (hsx-node1:42777) driver disconnected.
> 17/02/08 09:51:04 ERROR executor.CoarseGrainedExecutorBackend: Driver 192.168.1.1:42777 disassociated! Shutting down.
> 17/02/08 09:51:04 INFO executor.Executor: Executor killed task 1105.0 in stage 3.0 (TID 2511)
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Shutdown hook called
> 17/02/08 09:51:04 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Deleting directory /mnt/disk6/yarn/nm/usercache/root/appcache/application_1486453422616_0150/spark-71da1dfc-99bd-4687-bc2f-33452db8de3d
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Deleting directory /mnt/disk2/yarn/nm/usercache/root/appcache/application_1486453422616_0150/spark-7f134d81-e77e-4b92-bd99-0a51d0962c14
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Deleting directory /mnt/disk5/yarn/nm/usercache/root/appcache/application_1486453422616_0150/spark-77a90d63-fb05-4bc6-8d5e-1562cc502e6c
> 17/02/08 09:51:04 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Deleting directory /mnt/disk4/yarn/nm/usercache/root/appcache/application_1486453422616_0150/spark-91f8b91a-114d-4340-8560-d3cd085c1cd4
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Deleting directory /mnt/disk1/yarn/nm/usercache/root/appcache/application_1486453422616_0150/spark-a3c24f9e-8609-48f0-9d37-0de7ae06682a
> 17/02/08 09:51:04 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Deleting directory /mnt/disk7/yarn/nm/usercache/root/appcache/application_1486453422616_0150/spark-f6120a43-2158-4780-927c-c5786b78f53e
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Deleting directory /mnt/disk3/yarn/nm/usercache/root/appcache/application_1486453422616_0150/spark-e17931ad-9e8a-45da-86f8-9a0fdca0fad1
> 17/02/08 09:51:04 INFO util.ShutdownHookManager: Deleting directory /mnt/disk8/yarn/nm/usercache/root/appcache/application_1486453422616_0150/spark-4de34175-f871-4c28-8ec0-d2fc0020c5c3
> 17/02/08 09:51:04 INFO executor.Executor: Executor killed task 1137.0 in stage 3.0 (TID 2515)
> 17/02/08 09:51:04 INFO executor.Executor:
[jira] [Commented] (HIVE-15859) Hive client side shows Spark Driver disconnected while Spark Driver side could not get RPC header
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885092#comment-15885092 ]

Hive QA commented on HIVE-15859:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12854804/HIVE-15859.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10266 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=223)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3801/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3801/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3801/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12854804 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-15859) Hive client side shows Spark Driver disconnected while Spark Driver side could not get RPC header
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885007#comment-15885007 ]

KaiXu commented on HIVE-15859:
------------------------------

Hi [~xuefuz] and [~lirui], I have run the job 3 times with the patch and the issue has not occurred again. Previously it was random but could be reproduced frequently, so I think the patch solves the issue. Thanks for all your efforts!
[jira] [Commented] (HIVE-15859) Hive client side shows Spark Driver disconnected while Spark Driver side could not get RPC header
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15880098#comment-15880098 ]

Rui Li commented on HIVE-15859:
-------------------------------

I think I have managed to reproduce the issue by introducing some sleep between the message header and the payload, and with that the issue happens consistently:
{code}
synchronized (channelLock) {
  channel.write(new MessageHeader(id, Rpc.MessageType.CALL)).addListener(listener);
  Thread.sleep(5000);
  channel.writeAndFlush(msg).addListener(listener);
}
{code}
Also answering my own question in the previous [comment|https://issues.apache.org/jira/browse/HIVE-15859?focusedCommentId=15865239=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15865239]: the two successive message headers do cause trouble. However, we didn't log the full stack trace of the error [here|https://github.com/apache/hive/blob/master/spark-client/src/main/java/org/apache/hive/spark/client/rpc/RpcDispatcher.java#L158]. I modified that and found the actual error is a {{java.lang.NoSuchMethodException}}. This makes sense because we don't have a handle method for a message header (we're expecting a payload).

To conclude: the receiver receives two successive headers, hits the NoSuchMethodException, then receives two successive payloads and logs the warning {{Expected RPC header, got XXX instead}}. This is in line with the Hive log in the description.

I verified the patch here can solve the issue (with the extra sleep). [~KaiXu] it'd be great if you can help verify it too. Thanks.
[jira] [Commented] (HIVE-15859) Hive client side shows Spark Driver disconnected while Spark Driver side could not get RPC header
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879995#comment-15879995 ]

Rui Li commented on HIVE-15859:
-------------------------------

Hi [~xuefuz], netty's channel is thread-safe, so we can write to it concurrently from multiple threads. The problem is that we divide each message into a header and a payload and write them to the channel separately, and thus the order can get messed up on the receiver side. If we combine them into one message, I don't think we need to force all the writes through the event loop. I suppose we can try this way if the current approach doesn't solve the issue.
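To make the race concrete, here is a minimal, netty-free sketch (plain Java; the frame strings and class name are illustrative, not Hive's actual wire format). It shows one possible schedule when two calls each write their header and payload as separate operations: call-2's header lands between call-1's header and payload, and a receiver that expects strict header/payload alternation produces exactly the two kinds of errors discussed in this thread.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the RPC wire: each call is framed as a
// header followed by a payload, written as two separate operations.
public class InterleaveDemo {

    // Receiver side: expects header, payload, header, payload, ...
    // Returns the error messages produced for out-of-order frames.
    static List<String> deliver(List<String> frames) {
        List<String> errors = new ArrayList<>();
        boolean expectHeader = true;
        for (String frame : frames) {
            boolean isHeader = frame.startsWith("HEADER");
            if (isHeader != expectHeader) {
                errors.add("Expected " + (expectHeader ? "header" : "payload")
                        + ", got " + frame + " instead");
            }
            expectHeader = !expectHeader;
        }
        return errors;
    }

    public static void main(String[] args) {
        // A possible interleaved schedule when two threads write header
        // and payload as separate channel operations.
        List<String> wire = List.of(
                "HEADER(call-1)",
                "HEADER(call-2)",   // slips in before call-1's payload
                "PAYLOAD(call-1)",
                "PAYLOAD(call-2)");
        deliver(wire).forEach(System.out::println);
        // First error: a header arrives where a payload was expected
        // (the NoSuchMethodException case); second error: a payload
        // arrives where a header was expected ("Expected RPC header,
        // got ... instead").
    }
}
```

Combining header and payload into a single frame per call, as proposed in this thread, removes the window between the two writes and makes this schedule impossible.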
[jira] [Commented] (HIVE-15859) Hive client side shows Spark Driver disconnected while Spark Driver side could not get RPC header
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879467#comment-15879467 ]

Xuefu Zhang commented on HIVE-15859:
------------------------------------

It seems to me that option #2 can go on top of option #1, because we may want to let all messages go through the event loop. If that's the case, we can implement option #2 as a followup. Thoughts? In the meantime, I'm very eager to know whether this has addressed [~KaiXu]'s problem.
[jira] [Commented] (HIVE-15859) Hive client side shows Spark Driver disconnected while Spark Driver side could not get RPC header
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878013#comment-15878013 ]

Rui Li commented on HIVE-15859:
-------------------------------

Another way to fix this (also mentioned in the Livy PR) is to combine the message header with the payload. I think we can have some class like
{code}
class RpcMessage {
  MessageHeader header;
  Object payload;
}
{code}
so that we can send/receive the header and payload as a whole. It may be a more thorough way to avoid potential race conditions. [~xuefuz], [~vanzin], what do you think about it?
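For illustration, a self-contained sketch of what such a combined message could look like (plain Java; the field names and the MessageType enum are assumptions modeled on the discussion above, not Hive's actual spark-client classes). The point is that one immutable object carries both the header fields and the payload, so the sender issues a single write per call and frames from different calls can no longer interleave.

```java
// Illustrative sketch only, not the real Hive spark-client API.
public class RpcMessage {
    enum MessageType { CALL, REPLY, ERROR }

    final long id;            // call id, formerly carried by the header frame
    final MessageType type;   // formerly Rpc.MessageType in the header frame
    final Object payload;     // formerly written as a separate second frame

    RpcMessage(long id, MessageType type, Object payload) {
        this.id = id;
        this.type = type;
        this.payload = payload;
    }
}
```

With this shape, the sender would do a single `channel.writeAndFlush(new RpcMessage(id, RpcMessage.MessageType.CALL, msg))` instead of two separate writes, and the codec would encode/decode header and payload as one unit.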
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877560#comment-15877560 ] KaiXu commented on HIVE-15859:
---
Thanks all for the efforts, I will try the patch.
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876485#comment-15876485 ] Xuefu Zhang commented on HIVE-15859:
---
[~KaiXu], please confirm whether the patch here fixes the problem you reported. Thanks.
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867735#comment-15867735 ] Hive QA commented on HIVE-15859:
---
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12852760/HIVE-15859.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10219 tests executed

*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_auto_join1] (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join31] (batchId=81)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_join_with_different_encryption_keys] (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multiMapJoin2] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver (batchId=160)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=223)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join31] (batchId=133)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3566/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3566/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3566/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12852760 - PreCommit-HIVE-Build
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865696#comment-15865696 ] KaiXu commented on HIVE-15859:
---
Thanks [~lirui] for your work. I found a similar issue and log in HIVE-15912, could you help review it? I will run a test after that.
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865680#comment-15865680 ] Hive QA commented on HIVE-15859:
---
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12852521/HIVE-15859.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10238 tests executed

*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_join_with_different_encryption_keys] (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_update] (batchId=151)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=223)
org.apache.hive.spark.client.rpc.TestRpc.testRpcDispatcher (batchId=274)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3536/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3536/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3536/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12852521 - PreCommit-HIVE-Build
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865239#comment-15865239 ] Rui Li commented on HIVE-15859:
---
Hi [~vanzin], thanks for providing the Livy PR. That's exactly the issue I meant in my last comment. I'm wondering, though: if the message order gets messed up as the PR describes:
* Send call header
* Send reply header
* Send reply payload
* Send call payload
then the receiver will see two successive message headers, and that should happen before we hit the issue here. I'm not sure why it doesn't cause any trouble. Anyway, I think Hive needs the same fix, and we can test whether it fixes this one.
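To make the interleaving concrete: the sketch below is a minimal, hedged illustration (the class and method names are invented for this example, not Hive's actual Rpc or RpcDispatcher code) of a writer that holds one lock across both writes, so a call's header and payload always land adjacently on the channel and the "two successive headers" ordering listed above cannot occur.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch, not Hive's actual Rpc code. If header and payload
// were written in two separate unsynchronized steps, another thread could
// slip its own header in between them; making the pair one critical
// section keeps them adjacent on the wire.
class ChannelWriter {
    // Stands in for the ordered byte stream of the channel.
    private final List<String> wire = Collections.synchronizedList(new ArrayList<>());

    // Header and payload are enqueued under one lock, so no concurrent
    // send() can interleave between them.
    synchronized void send(String header, String payload) {
        wire.add(header);
        wire.add(payload);
    }

    // Snapshot of everything written so far, in wire order.
    List<String> contents() {
        return new ArrayList<>(wire);
    }
}
```

With this shape, every header in the stream is immediately followed by its own payload even under concurrent senders; dropping the synchronized keyword reintroduces exactly the reordering in the bullet list above.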
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864063#comment-15864063 ] Marcelo Vanzin commented on HIVE-15859:
---
{{RpcDispatcher::handleCall}} is on the read side, which is single-threaded in netty, so there's no need for synchronization there. The write side is multi-threaded, so it needs to be thread-safe; maybe there's a problem there. I'd take a look at this: https://github.com/cloudera/livy/pull/274. Maybe the same problem exists in the Hive code.
[jira] [Commented] (HIVE-15859) Hive client side shows Spark Driver disconnected while Spark Driver side could not get RPC header
[ https://issues.apache.org/jira/browse/HIVE-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863622#comment-15863622 ] Rui Li commented on HIVE-15859: --- I noticed that writing to the channel is protected by channelLock in {{Rpc::call}}. However, there's no synchronization for writing in {{RpcDispatcher::handleCall}}. Not sure whether that can be a problem. [~vanzin], would you mind sharing your thoughts on this? Thanks.