Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/4363#discussion_r25084604
--- Diff: core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala ---
@@ -17,33 +17,84 @@
 package org.apache.spark
 
-import akka.actor.Actor
+import scala.concurrent.duration._
+import scala.collection.mutable
+
+import akka.actor.{Actor, Cancellable}
+
 import org.apache.spark.executor.TaskMetrics
 import org.apache.spark.storage.BlockManagerId
-import org.apache.spark.scheduler.TaskScheduler
+import org.apache.spark.scheduler.{SlaveLost, TaskScheduler}
 import org.apache.spark.util.ActorLogReceive
 
 /**
  * A heartbeat from executors to the driver. This is a shared message used by several internal
- * components to convey liveness or execution information for in-progress tasks.
+ * components to convey liveness or execution information for in-progress tasks. It will also
+ * expire the hosts that have not heartbeated for more than spark.driver.executorTimeoutMs.
  */
 private[spark] case class Heartbeat(
     executorId: String,
     taskMetrics: Array[(Long, TaskMetrics)], // taskId -> TaskMetrics
     blockManagerId: BlockManagerId)
 
+private[spark] case object ExpireDeadHosts
+
 private[spark] case class HeartbeatResponse(reregisterBlockManager: Boolean)
 
 /**
  * Lives in the driver to receive heartbeats from executors..
  */
-private[spark] class HeartbeatReceiver(scheduler: TaskScheduler)
+private[spark] class HeartbeatReceiver(sc: SparkContext, scheduler: TaskScheduler)
   extends Actor with ActorLogReceive with Logging {
 
+  val executorLastSeen = new mutable.HashMap[String, Long]
+
+  val executorTimeout = sc.conf.getLong("spark.driver.executorTimeoutMs",
+    sc.conf.getLong("spark.storage.blockManagerSlaveTimeoutMs", 120 * 1000))
+
+  val checkTimeoutInterval = sc.conf.getLong("spark.driver.executorTimeoutIntervalMs",
--- End diff --
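For reference, the expiry machinery that this config feeds presumably works roughly like the sketch below - my paraphrase of the approach with simplified message types, not the PR's exact code:

```scala
// Paraphrased sketch of the expiry loop implied by the hunk above; the
// message types are simplified stand-ins, not the PR's definitions.
import scala.collection.mutable
import scala.concurrent.duration._

import akka.actor.{Actor, Cancellable}

case class Heartbeat(executorId: String)
case object ExpireDeadHosts

class HeartbeatExpirySketch(executorTimeoutMs: Long, checkIntervalMs: Long)
  extends Actor {

  import context.dispatcher

  private val executorLastSeen = new mutable.HashMap[String, Long]
  private var timeoutCheckingTask: Cancellable = _

  override def preStart(): Unit = {
    // Periodically remind ourselves to scan for executors that have
    // stopped heartbeating.
    timeoutCheckingTask = context.system.scheduler.schedule(
      0.millis, checkIntervalMs.millis, self, ExpireDeadHosts)
  }

  override def receive: Receive = {
    case Heartbeat(executorId) =>
      executorLastSeen(executorId) = System.currentTimeMillis()

    case ExpireDeadHosts =>
      val now = System.currentTimeMillis()
      for ((executorId, lastSeen) <- executorLastSeen.toSeq
           if now - lastSeen > executorTimeoutMs) {
        // The real receiver would call scheduler.executorLost(executorId,
        // SlaveLost(...)) here so the executor's tasks get rescheduled.
        executorLastSeen.remove(executorId)
      }
  }

  override def postStop(): Unit = {
    if (timeoutCheckingTask != null) {
      timeoutCheckingTask.cancel()
    }
  }
}
```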
A name like `spark.network.timeout` (or `spark.network.timeoutInterval`) implies that this is the maximum amount of time we're willing to wait on the network, when really it's the amount of time the driver is willing to wait on an executor, whether garbage collection, a node failure, or the network is causing the problem. We also don't necessarily want this tied to other network timeouts - I don't think it would be unusual for a user to want different timeouts for driver-executor heartbeats and for shuffle communication.
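Concretely, keeping the two knobs independent might look something like this hypothetical sketch - the key names here are placeholders for the sake of argument, not proposals:

```scala
// Hypothetical sketch of keeping the two timeouts independent; key names
// are placeholders, not the ones this PR should necessarily use.
import org.apache.spark.SparkConf

val conf = new SparkConf()

// Driver-side executor liveness covers GC pauses, node failures, and
// network problems alike, so it gets its own key ...
val executorTimeoutMs =
  conf.getLong("spark.driver.executorTimeoutMs", 120 * 1000)

// ... while purely network-bound paths (e.g. shuffle transfers) stay
// tunable on their own, without dragging the heartbeat timeout along.
val shuffleConnectionTimeoutMs =
  conf.getLong("spark.shuffle.io.connectionTimeoutMs", 120 * 1000)
```

That way a user who raises a network timeout for a slow link doesn't silently make dead-executor detection slower too.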