Github user susanxhuynh commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19510#discussion_r147277408
  
    --- Diff: docs/running-on-mesos.md ---
    @@ -613,6 +621,39 @@ See the [configuration page](configuration.html) for information on Spark config
         driver disconnects, the master immediately tears down the framework.
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.mesos.rejectOfferDuration</code></td>
    +  <td><code>120s</code></td>
    +  <td>
    +    Time for which unused resources are considered declined; serves as a fallback for
    +    `spark.mesos.rejectOfferDurationForUnmetConstraints`,
    +    `spark.mesos.rejectOfferDurationForReachedMaxCores`,
    +    `spark.mesos.rejectOfferDurationForReachedMaxMem`
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.mesos.rejectOfferDurationForUnmetConstraints</code></td>
    +  <td><code>spark.mesos.rejectOfferDuration</code></td>
    +  <td>
    +    Time to consider unused resources refused with unmet constraints
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.mesos.rejectOfferDurationForReachedMaxCores</code></td>
    +  <td><code>spark.mesos.rejectOfferDuration</code></td>
    +  <td>
    +    Time to consider unused resources refused when maximum number of cores
    +    <code>spark.cores.max</code> is reached
    --- End diff --
    
    My suggestion: "Duration for which unused resources are considered declined, when maximum number of cores spark.cores.max has been reached."
    @ArtRand Is this the documentation you had in mind in https://issues.apache.org/jira/browse/SPARK-22133 ? Is this enough information for a non-Mesos expert to set this?
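    For a non-Mesos expert, a concrete invocation may help. A minimal sketch of setting these options at submit time via `--conf` (the master URL, values, and application jar below are illustrative placeholders, not from the PR):
    
    ```shell
    # Sketch: overriding the offer-rejection durations when submitting to a
    # Mesos master. Offers left unused are declined for the given duration;
    # the ForReachedMaxCores setting falls back to spark.mesos.rejectOfferDuration
    # when unset.
    spark-submit \
      --master mesos://zk://zk1.example.com:2181/mesos \
      --conf spark.cores.max=8 \
      --conf spark.mesos.rejectOfferDuration=120s \
      --conf spark.mesos.rejectOfferDurationForReachedMaxCores=300s \
      my-app.jar
    ```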


---
