Github user BryanCutler commented on the issue:

    https://github.com/apache/spark/pull/18126
  
    No, I couldn't think of anything else without it being too long-winded.  I
    agree that the `worker` prefix gives enough meaning, plus whoever uses
    this should already know the context it's intended for.
    
    On Fri, May 26, 2017 at 4:37 PM, Shixiong Zhu <notificati...@github.com>
    wrote:
    
    > *@zsxwing* commented on this pull request.
    > ------------------------------
    >
    > In core/src/main/scala/org/apache/spark/deploy/worker/DriverRunner.scala
    > <https://github.com/apache/spark/pull/18126#discussion_r118805346>:
    >
    > > @@ -57,7 +57,8 @@ private[deploy] class DriverRunner(
    >    @volatile private[worker] var finalException: Option[Exception] = None
    >
    >    // Timeout to wait for when trying to terminate a driver.
    > -  private val DRIVER_TERMINATE_TIMEOUT_MS = 10 * 1000
    > +  private val DRIVER_TERMINATE_TIMEOUT_MS =
    > +    conf.getTimeAsMs("spark.worker.driverTerminateTimeout", "10s")
    >
    > spark.worker means this is only for Spark workers, so I think it should
    > be obvious. Do you have a better config name?
    >
    > —
    > You are receiving this because you were mentioned.
    > Reply to this email directly, view it on GitHub
    > <https://github.com/apache/spark/pull/18126#discussion_r118805346>, or mute
    > the thread
    > <https://github.com/notifications/unsubscribe-auth/AEUwdQJ6gVTbxvCqyzvN8y5l0OtfzwOEks5r92IhgaJpZM4NoEr0>
    > .
    >
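
    For context, here is a minimal sketch (not part of the PR) of how the
    proposed key could be set and read through `SparkConf.getTimeAsMs`,
    assuming the name discussed above; in practice the value would come from
    `spark-defaults.conf` rather than being set in code:

    ```scala
    import org.apache.spark.SparkConf

    object DriverTerminateTimeoutExample {
      def main(args: Array[String]): Unit = {
        // Normally supplied via spark-defaults.conf; set here only for illustration.
        val conf = new SparkConf()
          .set("spark.worker.driverTerminateTimeout", "30s")

        // getTimeAsMs parses time strings like "30s" or "5min" into milliseconds,
        // falling back to the "10s" default when the key is unset.
        val timeoutMs = conf.getTimeAsMs("spark.worker.driverTerminateTimeout", "10s")
        println(s"Driver terminate timeout: $timeoutMs ms") // prints 30000
      }
    }
    ```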


