[ https://issues.apache.org/jira/browse/SPARK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

pengyanhong updated SPARK-2989:
-------------------------------

    Description: 
Ran a simple Hive SQL Spark app via yarn-cluster and pulled the logs with the yarn logs -applicationId command line; the output contains 3 segments, with details as below (a minimal sketch of the kind of app involved follows the stack trace):
* The 1st segment is from the Driver & Application Master; everything is fine with no errors (start time 16:43:49, end time 16:44:08).
* The 2nd & 3rd segments are from the Executors; their start time is 16:43:52, and from 16:44:38 onward they repeatedly hit the error below:
{quote}
WARN org.apache.spark.Logging$class.logWarning(Logging.scala:91): Error sending message to BlockManagerMaster in 1 attempts
java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
        at org.apache.spark.storage.BlockManagerMaster.sendHeartBeat(BlockManagerMaster.scala:51)
        at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:113)
        at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
        at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
        at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
        at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
        at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
14/08/12 16:45:31 WARN org.apache.spark.Logging$class.logWarning(Logging.scala:91): Error sending message to BlockManagerMaster in 2 attempts
......
{quote}
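
For context, here is a minimal sketch of the kind of "simple hive sql Spark App" described above, submitted with spark-submit --master yarn-cluster. It assumes the Spark 1.x HiveContext API; the application name and table name are hypothetical and not taken from this report.
{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Hypothetical minimal Hive SQL job: run one query and print the result.
object SimpleHiveSqlApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SimpleHiveSqlApp")
    val sc = new SparkContext(conf)
    val hive = new HiveContext(sc)

    // Placeholder query; the original report does not say which Hive SQL statement was run.
    hive.sql("SELECT COUNT(*) FROM some_table").collect().foreach(println)

    sc.stop()
  }
}
{code}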

Confirmed that the clocks of the 3 nodes are in sync.
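
As a side note on the 30-second value in the trace: in Spark 1.x the Await inside BlockManagerMaster.askDriverWithReply is bounded by the Akka ask timeout, whose default is 30 seconds, so the TimeoutException above is consistent with that default being hit. Below is a hedged sketch of raising that timeout; the value is illustrative only, and this is a possible mitigation rather than a confirmed fix for the root cause.
{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: raise the Akka ask timeout that backs BlockManagerMaster.askDriverWithReply.
// Assumes Spark 1.x, where spark.akka.askTimeout defaults to 30 seconds (matching the
// "Futures timed out after [30 seconds]" message above). 120 is an arbitrary example value.
val conf = new SparkConf()
  .setAppName("SimpleHiveSqlApp")
  .set("spark.akka.askTimeout", "120") // seconds

val sc = new SparkContext(conf)
{code}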


> Error sending message to BlockManagerMaster
> -------------------------------------------
>
>                 Key: SPARK-2989
>                 URL: https://issues.apache.org/jira/browse/SPARK-2989
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Deploy, SQL
>            Reporter: pengyanhong
>            Priority: Critical
>



