Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4363#discussion_r25085897
  
    --- Diff: core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala ---
    @@ -17,33 +17,84 @@
     
     package org.apache.spark
     
    -import akka.actor.Actor
    +import scala.concurrent.duration._
    +import scala.collection.mutable
    +
    +import akka.actor.{Actor, Cancellable}
    +
     import org.apache.spark.executor.TaskMetrics
     import org.apache.spark.storage.BlockManagerId
    -import org.apache.spark.scheduler.TaskScheduler
    +import org.apache.spark.scheduler.{SlaveLost, TaskScheduler}
     import org.apache.spark.util.ActorLogReceive
     
     /**
     * A heartbeat from executors to the driver. This is a shared message used by several internal
    - * components to convey liveness or execution information for in-progress tasks.
    + * components to convey liveness or execution information for in-progress tasks. It will also 
    + * expire the hosts that have not heartbeated for more than spark.driver.executorTimeoutMs.
      */
     private[spark] case class Heartbeat(
         executorId: String,
         taskMetrics: Array[(Long, TaskMetrics)], // taskId -> TaskMetrics
         blockManagerId: BlockManagerId)
     
    +private[spark] case object ExpireDeadHosts 
    +    
     private[spark] case class HeartbeatResponse(reregisterBlockManager: Boolean)
     
     /**
      * Lives in the driver to receive heartbeats from executors.
      */
    -private[spark] class HeartbeatReceiver(scheduler: TaskScheduler)
    +private[spark] class HeartbeatReceiver(sc: SparkContext, scheduler: TaskScheduler)
       extends Actor with ActorLogReceive with Logging {
     
    +  val executorLastSeen = new mutable.HashMap[String, Long]
    +  
    +  val executorTimeout = sc.conf.getLong("spark.driver.executorTimeoutMs", 
    +    sc.conf.getLong("spark.storage.blockManagerSlaveTimeoutMs", 120 * 
1000))
    +  
    +  val checkTimeoutInterval = sc.conf.getLong("spark.driver.executorTimeoutIntervalMs",
    --- End diff --
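
    For context, the new fields quoted above drive a periodic self-message: the actor schedules `ExpireDeadHosts` to itself every `checkTimeoutInterval` milliseconds and expires any executor whose last heartbeat is older than `executorTimeout`. The sketch below is a minimal standalone illustration of that pattern; the actor name and the stubbed-out lost-executor handling are assumptions, not the PR's exact code.

    ```scala
    import scala.collection.mutable
    import scala.concurrent.duration._

    import akka.actor.{Actor, Cancellable}

    // Message the actor sends itself on a timer (mirrors the case object in the diff).
    case object ExpireDeadHosts

    // Hypothetical standalone actor; the field names mirror the PR, but the
    // lost-executor handling is stubbed out for brevity.
    class ExpirySketch(checkTimeoutIntervalMs: Long, executorTimeoutMs: Long)
      extends Actor {

      import context.dispatcher

      val executorLastSeen = new mutable.HashMap[String, Long]
      private var timeoutCheckingTask: Cancellable = _

      override def preStart(): Unit = {
        // Re-deliver ExpireDeadHosts to ourselves on a fixed schedule.
        timeoutCheckingTask = context.system.scheduler.schedule(
          0.millis, checkTimeoutIntervalMs.millis, self, ExpireDeadHosts)
      }

      override def postStop(): Unit = {
        timeoutCheckingTask.cancel()
      }

      def receive = {
        // (A Heartbeat handler would do executorLastSeen(executorId) = now.)
        case ExpireDeadHosts =>
          val now = System.currentTimeMillis()
          // Collect first, then remove, to avoid mutating the map mid-iteration.
          val expired = executorLastSeen.collect {
            case (id, lastSeen) if now - lastSeen > executorTimeoutMs => id
          }
          expired.foreach { id =>
            // The real PR notifies the scheduler here, e.g.
            // scheduler.executorLost(id, SlaveLost(...)).
            executorLastSeen.remove(id)
          }
      }
    }
    ```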
    
    @sryza if you read the documentation of `spark.network.timeout`, it says it is the "default timeout for all network interactions", and one of the settings it replaces is `spark.storage.blockManagerSlaveTimeoutMs`, which is essentially the driver-executor timeout (though for some reason this was never actually implemented in the code). Everywhere else `spark.network.timeout` is used, garbage collection (e.g. between executors when fetching shuffle files) and other issues could already delay the response, so I don't see how the driver-executor case is special.
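
    A minimal sketch of the fallback chain being argued for, assuming `spark.network.timeout` is expressed in seconds (as documented) while the two `*Ms` keys are in milliseconds; note that wiring `spark.network.timeout` in as the final default is the suggestion in this thread, not what the diff above does:

    ```scala
    import org.apache.spark.SparkConf

    // Hypothetical helper: the key names come from the diff and the docs, but
    // this exact fallback order is only a proposal.
    object ExecutorTimeoutSketch {
      def resolveExecutorTimeoutMs(conf: SparkConf): Long = {
        // spark.network.timeout is documented in seconds; convert to milliseconds.
        val networkTimeoutMs = conf.getLong("spark.network.timeout", 120) * 1000
        // Most specific key first, then the legacy block manager key, then the
        // shared network timeout as the catch-all default.
        conf.getLong("spark.driver.executorTimeoutMs",
          conf.getLong("spark.storage.blockManagerSlaveTimeoutMs", networkTimeoutMs))
      }

      def main(args: Array[String]): Unit = {
        val conf = new SparkConf(loadDefaults = false).set("spark.network.timeout", "300")
        println(s"driver-executor timeout: ${resolveExecutorTimeoutMs(conf)} ms")
      }
    }
    ```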

