[GitHub] spark issue #19510: [SPARK-22292][Mesos] Added spark.mem.max support for Mes...

2017-10-31 Thread windkit
Github user windkit commented on the issue:

https://github.com/apache/spark/pull/19510
  
@susanxhuynh Thanks for reviewing.
I want to use both `spark.mem.max` and `spark.cores.max` to limit the resources a single application can use within the cluster.
I am currently setting up a shared cluster for several users, who are allowed to configure `spark.executor.cores` and `spark.executor.memory` according to their needs. I therefore need a limit on both CPU cores and memory.
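
For illustration, here is a minimal sketch of the intended setup, assuming the `spark.mem.max` option proposed in #19510 is available (all values are hypothetical):

    import org.apache.spark.SparkConf

    // Hypothetical per-user job configuration on a shared Mesos cluster.
    // spark.mem.max is the option proposed in this PR; the other keys are
    // existing Spark configuration settings.
    val conf = new SparkConf()
      .setMaster("mesos://zk://master:2181/mesos")
      .setAppName("shared-cluster-job")
      .set("spark.executor.cores", "4")    // chosen per user
      .set("spark.executor.memory", "8g")  // chosen per user
      .set("spark.cores.max", "32")        // cap on total CPU cores per application
      .set("spark.mem.max", "64g")         // cap on total memory per application (this PR)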


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...

2017-10-23 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146273016
  
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,41 @@ See the [configuration page](configuration.html) for information on Spark config
 driver disconnects, the master immediately tears down the framework.
   
 
+<tr>
+  <td><code>spark.mesos.rejectOfferDuration</code></td>
+  <td><code>120s</code></td>
+  <td>
+    The amount of time that the master will reject offer after declining 
--- End diff --

Thanks for the comment, I will update it.
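
For context, a minimal sketch of how the setting being documented here would be overridden; the value is illustrative and `120s` is the documented default:

    import org.apache.spark.SparkConf

    // Illustrative only: reject declined offers for five minutes
    // instead of the documented 120s default.
    val conf = new SparkConf()
      .set("spark.mesos.rejectOfferDuration", "300s")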


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...

2017-10-23 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146272841
  
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
 
 * Executor memory: `spark.executor.memory`
 * Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`, 
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
 
 Please see the [Spark Configuration](configuration.html) page for
 details and default values.
 
 Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached.  If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached.  If you don't set 
+`spark.cores.max` and `spark.mem.max`, the Spark application will 
+reserve all resources offered to it by Mesos, so we of course urge 
--- End diff --

Agreed. I will update it later on.
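
To make the min(...) formula above concrete, here is a worked example with assumed values (`spark.cores.max`=32, `spark.executor.cores`=4, `spark.mem.max`=64g, `spark.executor.memory`=8g, `spark.mesos.executor.memoryOverhead`=1g, all hypothetical):

    // Worked example of the executor-count formula; memory terms in MB.
    val byCores = 32 / 4                           // CPU cap allows 8 executors
    val byMem   = (64 * 1024) / (8 * 1024 + 1024)  // memory cap allows 7 (65536 / 9216)
    val numExecutors = math.min(byCores, byMem)    // min(8, 7) = 7 executors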


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...

2017-10-23 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19555#discussion_r146272708
  
--- Diff: docs/running-on-mesos.md ---
@@ -196,17 +196,18 @@ configuration variables:
 
 * Executor memory: `spark.executor.memory`
 * Executor cores: `spark.executor.cores`
-* Number of executors: `spark.cores.max`/`spark.executor.cores`
+* Number of executors: min(`spark.cores.max`/`spark.executor.cores`, 
+`spark.mem.max`/(`spark.executor.memory`+`spark.mesos.executor.memoryOverhead`))
 
 Please see the [Spark Configuration](configuration.html) page for
 details and default values.
 
 Executors are brought up eagerly when the application starts, until
-`spark.cores.max` is reached.  If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
-so we of course urge you to set this variable in any sort of
-multi-tenant cluster, including one which runs multiple concurrent
-Spark applications.
+`spark.cores.max` or `spark.mem.max` is reached.  If you don't set 
--- End diff --

@ArtRand Sure, I will move the documentation to #19510.


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-23 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r146180171
  
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
@@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
   private val MAX_SLAVE_FAILURES = 2
 
   private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
+  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
--- End diff --

Should I add the check in this PR, or in a separate one?
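
As a side note on units, my understanding is that `Utils.memoryStringToMb` converts a memory string into megabytes, so the parsed cap is an `Option[Int]` in MB (a sketch, not verified against this patch):

    // e.g. "64g" -> 65536, "512m" -> 512 (values in megabytes)
    val maxMemOption: Option[Int] = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)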


---




[GitHub] spark pull request #19555: [SPARK-22133][DOCS] Documentation for Mesos Rejec...

2017-10-23 Thread windkit
GitHub user windkit opened a pull request:

https://github.com/apache/spark/pull/19555

 [SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

## What changes were proposed in this pull request?
Adds documentation for the Mesos reject offer configurations.

## Related PR
https://github.com/apache/spark/pull/19510 for `spark.mem.max`


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/windkit/spark spark_22133

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/19555.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19555


commit 6c738fea83965a9c2a2448e0e42292d6c034cdf2
Author: Li, YanKit | Wilson | RIT <yankit...@rakuten.com>
Date:   2017-10-23T06:55:24Z

[SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

commit 614a4e0a741d96b0e96541d9afb6a72e53cc1d43
Author: Li, YanKit | Wilson | RIT <yankit...@rakuten.com>
Date:   2017-10-23T06:59:15Z

[SPARK-22133][DOCS] Mesos Reject Offer Configurations Documentation change for spark.mem.max




---




[GitHub] spark issue #19510: [SPARK-22292][Mesos] Added spark.mem.max support for Mes...

2017-10-20 Thread windkit
Github user windkit commented on the issue:

https://github.com/apache/spark/pull/19510
  
@skonto Sure, I can add those in. Can you point me to where the documentation source is?


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-20 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r145890559
  
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
@@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
   private val MAX_SLAVE_FAILURES = 2
 
   private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
+  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
--- End diff --

@skonto 
For CPUs, I think we can compare against `minCoresPerExecutor`.
For memory, we can call `MesosSchedulerUtils.executorMemory` to get the minimum requirement.

Then, right here, we parse the option, check it against the minimum, and throw an exception if it is too small? A rough sketch follows.
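
Here is a rough sketch of what I have in mind; the names, parameters, and call sites are assumptions based on this discussion, not the actual patch:

    // Sketch only, not the actual patch: validate the caps against
    // per-executor minimums once, when the backend starts.
    def validateMaxResources(
        maxMemOption: Option[Int],    // parsed spark.mem.max, in MB
        maxCoresOption: Option[Int],  // parsed spark.cores.max
        minMemPerExecutorMb: Int,     // e.g. MesosSchedulerUtils.executorMemory(sc)
        minCoresPerExecutor: Int): Unit = {
      maxMemOption.foreach { maxMem =>
        require(maxMem >= minMemPerExecutorMb,
          s"spark.mem.max ($maxMem MB) is smaller than the memory required " +
          s"by a single executor ($minMemPerExecutorMb MB)")
      }
      maxCoresOption.foreach { maxCores =>
        require(maxCores >= minCoresPerExecutor,
          s"spark.cores.max ($maxCores) is smaller than " +
          s"spark.executor.cores ($minCoresPerExecutor)")
      }
    }

`require` throws an `IllegalArgumentException` on failure, which matches the fail-fast behavior suggested above.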


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-17 Thread windkit
GitHub user windkit opened a pull request:

https://github.com/apache/spark/pull/19510

[SPARK-22292][Mesos] Added spark.mem.max support for Mesos

## What changes were proposed in this pull request?

Currently, the only way to limit the amount of resources a Spark job accepts from Mesos is `spark.cores.max`, which caps CPU cores only. When executors have large memory, a job can still consume all of the cluster's memory.

This PR adds a `spark.mem.max` option for Mesos.

## How was this patch tested?

Added unit tests.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/windkit/spark mem_max

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/19510.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19510


commit 7c9a1610291f5a98cc47447028d5378caffd3c51
Author: Li, YanKit | Wilson | RIT <yankit...@rakuten.com>
Date:   2017-10-17T05:54:06Z

[SPARK-22292][Mesos] Added spark.mem.max support for Mesos




---
