Github user xueyumusic commented on a diff in the pull request:
https://github.com/apache/spark/pull/21575#discussion_r196635402
--- Diff: core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala ---
@@ -75,16 +76,18 @@ private[spark] class HeartbeatReceiver(sc: SparkContext, clock: Clock)
   // "spark.network.timeout" uses "seconds", while `spark.storage.blockManagerSlaveTimeoutMs` uses
   // "milliseconds"
   private val slaveTimeoutMs =
-    sc.conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs", "120s")
+    sc.conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs",
--- End diff --
I looked at this carefully and I think you are right, thanks @jiangxb1987. One
case that is not related to this PR is the following: set
spark.storage.blockManagerSlaveTimeoutMs=900ms and leave
spark.network.timeout unconfigured; then `executorTimeoutMs` ends up as 0,
because getTimeAsSeconds loses the sub-second precision. Such a config is
probably not reasonable in the first place, but if we want to guard against it,
how about adding a check that the value is > 0, or giving `executorTimeoutMs` a
minimum value of 1? @jiangxb1987 @zsxwing
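
To make that case concrete, here is a minimal, self-contained sketch. It is not
the actual HeartbeatReceiver code: `asSeconds` just models the truncation that
getTimeAsSeconds performs on a "...ms" default, and the clamp at the end is only
an illustration of the proposed minimum-value guard.

```scala
import java.util.concurrent.TimeUnit

object TimeoutPrecisionSketch {
  // Models interpreting a millisecond value at second granularity (truncates).
  def asSeconds(ms: Long): Long = TimeUnit.MILLISECONDS.toSeconds(ms)

  def main(args: Array[String]): Unit = {
    val slaveTimeoutMs = 900L                                // blockManagerSlaveTimeoutMs=900ms
    val executorTimeoutMs = asSeconds(slaveTimeoutMs) * 1000 // 0 seconds * 1000 = 0 ms

    println(s"executorTimeoutMs = $executorTimeoutMs")       // prints 0: the problem case

    // Proposed guard: either fail fast on a non-positive value, or clamp to a
    // minimum (1 here, purely illustrative) so a zero timeout never propagates.
    val guardedTimeoutMs = math.max(executorTimeoutMs, 1L)
    require(guardedTimeoutMs > 0, "executor timeout must be positive")
    println(s"guardedTimeoutMs = $guardedTimeoutMs")         // prints 1
  }
}
```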
---