[ https://issues.apache.org/jira/browse/SPARK-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reynold Xin resolved SPARK-15010.
---------------------------------
       Resolution: Fixed
    Fix Version/s: 2.0.0

> Lots of error messages about accumulator in Spark shell when a task takes 
> some time to run
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-15010
>                 URL: https://issues.apache.org/jira/browse/SPARK-15010
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, Spark Shell
>    Affects Versions: 2.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Wenchen Fan
>            Priority: Blocker
>             Fix For: 2.0.0
>
>
> I ran the following code in a Spark shell built from the latest master
> (87ac84d43729c54be100bb9ad7dc6e8fa14b8805) and got lots of error messages
> about accumulators. The job finished successfully, but the error messages
> made the shell very hard to use.
> {code}
> sc.parallelize(0 until 1, 1).foreach { _ => Thread.sleep(20000) }
> {code}
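> For context, the traces below all point at the heartbeat path: the driver's
> HeartbeatReceiver hands running-task metrics to
> TaskSchedulerImpl.executorHeartbeatReceived, which reads each accumulator's
> value, and NewAccumulator.value refuses to be read while the accumulator is
> marked as task-side. A minimal sketch of that guard (my simplification for
> illustration, not the actual Spark source):
> {code}
> abstract class NewAccumulator[IN, OUT] extends Serializable {
>   // true on the driver; flipped when the accumulator is shipped inside a task
>   protected var atDriverSide: Boolean = true
>
>   final def value: OUT = {
>     if (!atDriverSide) {
>       // the exception flooding the shell while the task sleeps
>       throw new UnsupportedOperationException("Can't read accumulator value in task")
>     }
>     localValue
>   }
>
>   // subclasses expose the current (possibly partial) value without the guard
>   protected def localValue: OUT
> }
> {code}
> Since the exception is thrown inside the driver's reply handler, the executor
> presumably never receives a heartbeat response, which would also explain the
> RpcTimeoutException retries interleaved below.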
> cc: [~rxin] [~cloud_fan]
> Error messages:
> {code:none}
> 16/04/29 11:59:23 ERROR Utils: Uncaught exception in thread heartbeat-receiver-event-loop-thread
> java.lang.UnsupportedOperationException: Can't read accumulator value in task
>       at org.apache.spark.NewAccumulator.value(NewAccumulator.scala:137)
>       at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5$$anonfun$apply$9$$anonfun$apply$10.apply(TaskSchedulerImpl.scala:394)
>       at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5$$anonfun$apply$9$$anonfun$apply$10.apply(TaskSchedulerImpl.scala:394)
>       at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>       at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>       at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>       at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>       at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>       at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>       at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5$$anonfun$apply$9.apply(TaskSchedulerImpl.scala:394)
>       at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5$$anonfun$apply$9.apply(TaskSchedulerImpl.scala:392)
>       at scala.Option.map(Option.scala:146)
>       at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5.apply(TaskSchedulerImpl.scala:392)
>       at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5.apply(TaskSchedulerImpl.scala:391)
>       at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>       at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>       at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>       at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>       at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
>       at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:186)
>       at org.apache.spark.scheduler.TaskSchedulerImpl.executorHeartbeatReceived(TaskSchedulerImpl.scala:391)
>       at org.apache.spark.HeartbeatReceiver$$anonfun$receiveAndReply$1$$anon$2$$anonfun$run$2.apply$mcV$sp(HeartbeatReceiver.scala:128)
>       at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1219)
>       at org.apache.spark.HeartbeatReceiver$$anonfun$receiveAndReply$1$$anon$2.run(HeartbeatReceiver.scala:127)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 16/04/29 11:59:33 WARN NettyRpcEndpointRef: Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@1cd9105c,BlockManagerId(driver, 192.168.99.1, 60533))] in 1 attempts
> org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
>       at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
>       at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
>       at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
>       at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
>       at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
>       at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
>       at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:494)
>       at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:523)
>       at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:523)
>       at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:523)
>       at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1793)
>       at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:523)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
>       at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>       at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>       at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
>       at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>       at scala.concurrent.Await$.result(package.scala:190)
>       at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:81)
>       ... 14 more
> [The same two traces repeat at 11:59:36, 11:59:46 ("in 2 attempts"), 11:59:49, and 11:59:59 ("in 3 attempts"); identical repeats are elided here and below.]
> 16/04/29 11:59:59 WARN Executor: Issue communicating with driver in heartbeater
> org.apache.spark.SparkException: Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@1cd9105c,BlockManagerId(driver, 192.168.99.1, 60533))]
>       at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:119)
>       at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:494)
>       at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:523)
>       at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:523)
>       at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:523)
>       at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1793)
>       at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:523)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
>       at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
>       at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
>       at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
>       at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
>       at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
>       at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
>       ... 13 more
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
>       at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>       at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>       at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
>       at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>       at scala.concurrent.Await$.result(package.scala:190)
>       at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:81)
>       ... 14 more
> 16/04/29 12:00:09 WARN NettyRpcEndpointRef: Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@18ea6519,BlockManagerId(driver, 192.168.99.1, 60533))] in 1 attempts
> org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 10 seconds. This timeout is controlled by spark.executor.heartbeatInterval
>       at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
>       at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
>       at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
>       at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>       at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
>       at scala.util.Try$.apply(Try.scala:192)
>       at scala.util.Failure.recover(Try.scala:216)
>       at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
>       at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
>       at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>       at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
>       at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
>       at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>       at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>       at scala.concurrent.Promise$class.complete(Promise.scala:55)
>       at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
>       at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
>       at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
>       at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>       at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
>       at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
>       at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
>       at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
>       at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>       at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
>       at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>       at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
>       at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>       at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>       at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>       at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
>       at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
>       at org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:205)
>       at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:239)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 10 seconds
>       ... 8 more
> [From 12:00:12 onward the same four messages (the accumulator ERROR, the two NettyRpcEndpointRef timeout WARNs, and the Executor heartbeater WARN) keep cycling in this pattern until the job finishes.]
> {code}
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl.executorHeartbeatReceived(TaskSchedulerImpl.scala:391)
>       at 
> org.apache.spark.HeartbeatReceiver$$anonfun$receiveAndReply$1$$anon$2$$anonfun$run$2.apply$mcV$sp(HeartbeatReceiver.scala:128)
>       at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1219)
>       at 
> org.apache.spark.HeartbeatReceiver$$anonfun$receiveAndReply$1$$anon$2.run(HeartbeatReceiver.scala:127)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 16/04/29 12:01:11 WARN NettyRpcEndpointRef: Error sending message [message = 
> Heartbeat(driver,[Lscala.Tuple2;@2f77feb,BlockManagerId(driver, 192.168.99.1, 
> 60533))] in 3 attempts
> org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 
> seconds]. This timeout is controlled by spark.executor.heartbeatInterval
>       at 
> org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
>       at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
>       at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
>       at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
>       at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
>       at 
> org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
>       at 
> org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:494)
>       at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:523)
>       at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:523)
>       at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:523)
>       at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1793)
>       at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:523)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 
> seconds]
>       at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>       at 
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>       at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
>       at 
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>       at scala.concurrent.Await$.result(package.scala:190)
>       at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:81)
>       ... 14 more
> 16/04/29 12:01:11 WARN Executor: Issue communicating with driver in 
> heartbeater
> org.apache.spark.SparkException: Error sending message [message = 
> Heartbeat(driver,[Lscala.Tuple2;@2f77feb,BlockManagerId(driver, 192.168.99.1, 
> 60533))]
>       at 
> org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:119)
>       at 
> org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:494)
>       at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:523)
>       at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:523)
>       at 
> org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:523)
>       at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1793)
>       at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:523)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.rpc.RpcTimeoutException: Futures timed out after 
> [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
>       at 
> org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
>       at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
>       at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
>       at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
>       at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
>       at 
> org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
>       ... 13 more
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 
> seconds]
>       at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>       at 
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>       at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
>       at 
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>       at scala.concurrent.Await$.result(package.scala:190)
>       at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:81)
>       ... 14 more
> 16/04/29 12:01:11 ERROR Utils: Uncaught exception in thread 
> heartbeat-receiver-event-loop-thread
> java.lang.UnsupportedOperationException: Can't read accumulator value in task
>       at org.apache.spark.NewAccumulator.value(NewAccumulator.scala:137)
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5$$anonfun$apply$9$$anonfun$apply$10.apply(TaskSchedulerImpl.scala:394)
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5$$anonfun$apply$9$$anonfun$apply$10.apply(TaskSchedulerImpl.scala:394)
>       at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>       at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>       at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>       at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>       at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>       at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5$$anonfun$apply$9.apply(TaskSchedulerImpl.scala:394)
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5$$anonfun$apply$9.apply(TaskSchedulerImpl.scala:392)
>       at scala.Option.map(Option.scala:146)
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5.apply(TaskSchedulerImpl.scala:392)
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$5.apply(TaskSchedulerImpl.scala:391)
>       at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>       at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>       at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>       at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>       at 
> scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
>       at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:186)
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl.executorHeartbeatReceived(TaskSchedulerImpl.scala:391)
>       at 
> org.apache.spark.HeartbeatReceiver$$anonfun$receiveAndReply$1$$anon$2$$anonfun$run$2.apply$mcV$sp(HeartbeatReceiver.scala:128)
>       at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1219)
>       at 
> org.apache.spark.HeartbeatReceiver$$anonfun$receiveAndReply$1$$anon$2.run(HeartbeatReceiver.scala:127)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 16/04/29 12:01:21 WARN NettyRpcEndpointRef: Error sending message [message = 
> Heartbeat(driver,[Lscala.Tuple2;@6837d38f,BlockManagerId(driver, 
> 192.168.99.1, 60533))] in 1 attempts
> org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 10 
> seconds. This timeout is controlled by spark.executor.heartbeatInterval
>       at 
> org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
>       at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
>       at 
> org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
>       at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>       at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
>       at scala.util.Try$.apply(Try.scala:192)
>       at scala.util.Failure.recover(Try.scala:216)
>       at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
>       at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
>       at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>       at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
>       at 
> scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
>       at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>       at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>       at scala.concurrent.Promise$class.complete(Promise.scala:55)
>       at 
> scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
>       at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
>       at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
>       at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>       at 
> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
>       at 
> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
>       at 
> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
>       at 
> scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
>       at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>       at 
> scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
>       at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>       at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
>       at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>       at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>       at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>       at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
>       at 
> scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
>       at 
> org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:205)
>       at 
> org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:239)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 
> 10 seconds
>       ... 8 more
> {code}
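> 
> Workaround note while on an affected snapshot build: the traces above suggest 
> the driver-side heartbeat handler (TaskSchedulerImpl.executorHeartbeatReceived) 
> is calling NewAccumulator.value on accumulators of tasks that are still 
> running, which throws the UnsupportedOperationException; the 
> heartbeat-receiver thread then appears to die without replying, so each 
> executor Heartbeat RPC times out after the 10-second 
> spark.executor.heartbeatInterval and is retried, producing the WARN cascade. 
> Until running a build with the fix, lengthening that interval should make the 
> noise less frequent, since fewer heartbeats arrive while a long task runs 
> (a mitigation only, not the fix). A minimal sketch, assuming a standalone 
> SparkContext rather than the shell's pre-built sc; the app name and the 30s 
> value are illustrative assumptions, not recommended settings:
> {code}
> // Mitigation sketch only: a longer heartbeat interval means fewer of the
> // ERROR/WARN lines above while a long task runs; it does not fix the
> // underlying accumulator-read bug (that fix is in 2.0.0).
> import org.apache.spark.{SparkConf, SparkContext}
> 
> val conf = new SparkConf()
>   .setAppName("heartbeat-noise-mitigation") // illustrative name
>   // Config key named in the warnings above; the default is 10s.
>   .set("spark.executor.heartbeatInterval", "30s")
> val sc = new SparkContext(conf)
> {code}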



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
