[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-11-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/19555


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-10-23 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146273016
  
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,41 @@ See the [configuration page](configuration.html) for information on Spark config
 driver disconnects, the master immediately tears down the framework.
   
 
+
+  spark.mesos.rejectOfferDuration
+  120s
+  
+The amount of time that the master will reject offer after declining 
--- End diff --

Thanks for the comment, I will update it.
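
For what it's worth, a minimal sketch of how this option is set from application code (illustration only, assuming a standard `SparkConf`; the master URL and app name are placeholders):

```scala
import org.apache.spark.SparkConf

// Illustrative: configure how long declined offers are filtered
// before Mesos re-offers the resources (the exact semantics are
// what this review thread is pinning down).
val conf = new SparkConf()
  .setMaster("mesos://zk://host:2181/mesos") // placeholder master URL
  .setAppName("reject-offer-example")        // placeholder app name
  .set("spark.mesos.rejectOfferDuration", "120s")
```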


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-10-23 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146272841
  
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
 
 * Executor memory: `spark.executor.memory`
 * Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
 
 Please see the [Spark Configuration](configuration.html) page for
 details and default values.
 
 Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached.  If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached.  If you don't set 
+`spark.cores.max` and `spark.mem.max`, the Spark application will 
+reserve all resources offered to it by Mesos, so we of course urge 
--- End diff --

Agree. I will update it later on
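
For concreteness, a quick worked example of the quoted formula (a sketch with illustrative values, not defaults; `spark.mem.max` is the setting proposed in #19510):

```scala
// Illustrative budget:
//   spark.cores.max = 24, spark.executor.cores = 4
//   spark.mem.max = 20g, spark.executor.memory = 4g,
//   spark.mesos.executor.memoryOverhead = 1g
val coreBound = 24 / 4                            // 6 executors by CPU budget
val memBound  = 20 / (4 + 1)                      // 4 executors by memory budget
val numExecutors = math.min(coreBound, memBound)  // min(6, 4) = 4
```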


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-10-23 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146272708
  
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
 
 * Executor memory: `spark.executor.memory`
 * Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
 
 Please see the [Spark Configuration](configuration.html) page for
 details and default values.
 
 Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached.  If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached.  If you don't set 
--- End diff --

@ArtRand Sure, I will move the documentation to #19510


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-10-23 Thread ArtRand
Github user ArtRand commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146244212
  
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,41 @@ See the [configuration page](configuration.html) for information on Spark config
 driver disconnects, the master immediately tears down the framework.
   
 
+
+  spark.mesos.rejectOfferDuration
+  120s
+  
+The amount of time that the master will reject offer after declining 
--- End diff --

This doesn't sound correct. The mesos.proto (https://github.com/apache/mesos/blob/master/include/mesos/mesos.proto#L2310) states:
```
Time to consider unused resources refused. Note that all unused
resources will be considered refused and use the default value
(below) regardless of whether Filters was passed to
SchedulerDriver::launchTasks. You MUST pass Filters with this
field set to change this behavior (i.e., get another offer which
includes unused resources sooner or later than the default).
``` 
some simple word-smithing or a link should make it clearer. 
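
For reference, this is the mechanism at the scheduler API level (a sketch against the Mesos Java bindings; `launchWithFilter` and the 120-second value are illustrative, not necessarily what Spark does internally):

```scala
import java.util.{Collection => JCollection}
import org.apache.mesos.Protos.{Filters, OfferID, TaskInfo}
import org.apache.mesos.SchedulerDriver

// Launch tasks and mark the unused remainder of the offers as
// refused for 120 seconds. Without an explicit Filters message,
// Mesos applies its default refusal timeout instead.
def launchWithFilter(driver: SchedulerDriver,
                     offerIds: JCollection[OfferID],
                     tasks: JCollection[TaskInfo]): Unit = {
  val filters = Filters.newBuilder().setRefuseSeconds(120.0).build()
  driver.launchTasks(offerIds, tasks, filters)
}
```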



---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-10-23 Thread ArtRand
Github user ArtRand commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146242613
  
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
 
 * Executor memory: `spark.executor.memory`
 * Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
 
 Please see the [Spark Configuration](configuration.html) page for
 details and default values.
 
 Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached.  If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached.  If you don't set 
--- End diff --

Could you please add these changes only in 
https://github.com/apache/spark/pull/19510/? 


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-10-23 Thread ArtRand
Github user ArtRand commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146242931
  
--- Diff: docs/running-on-mesos.md ---
@@ -344,6 +345,13 @@ See the [configuration page](configuration.html) for information on Spark config
   
 
 
+  spark.mem.max
--- End diff --

As above, please add this in the separate PR. 


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-10-23 Thread ArtRand
Github user ArtRand commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146242882
  
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
 
 * Executor memory: `spark.executor.memory`
 * Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`,
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
 
 Please see the [Spark Configuration](configuration.html) page for
 details and default values.
 
 Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached.  If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached.  If you don't set 
+`spark.cores.max` and `spark.mem.max`, the Spark application will 
+reserve all resources offered to it by Mesos, so we of course urge 
--- End diff --

`reserve` is probably not the correct term to use here. I would use `consume`, as Spark does not actually make resource reservations (see http://mesos.apache.org/documentation/latest/reservation/).


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

2017-10-23 Thread windkit
GitHub user windkit opened a pull request:

https://github.com/apache/spark/pull/19555

 [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

## What changes were proposed in this pull request?
Adds documentation for the Mesos reject offer configurations.

## Related PR
https://github.com/apache/spark/pull/19510 for `spark.mem.max`


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/windkit/spark spark_22133

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/19555.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19555


commit 6c738fea83965a9c2a2448e0e42292d6c034cdf2
Author: Li, YanKit | Wilson | RIT 
Date:   2017-10-23T06:55:24Z

[SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

commit 614a4e0a741d96b0e96541d9afb6a72e53cc1d43
Author: Li, YanKit | Wilson | RIT 
Date:   2017-10-23T06:59:15Z

[SPARK-22133][DOCS] Mesos Reject Offer Configurations Documentation change for spark.mem.max




---
