[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2018-07-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/19510


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-26 Thread susanxhuynh
Github user susanxhuynh commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r147265329
  
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,39 @@ See the [configuration page](configuration.html) for information on Spark config
 driver disconnects, the master immediately tears down the framework.
   
 
+
+  spark.mesos.rejectOfferDuration
+  120s
+  
+Time to consider unused resources refused, serves as a fallback of
+`spark.mesos.rejectOfferDurationForUnmetConstraints`,
+`spark.mesos.rejectOfferDurationForReachedMaxCores`,
+`spark.mesos.rejectOfferDurationForReachedMaxMem`
+  
+
+
+  spark.mesos.rejectOfferDurationForUnmetConstraints
+  park.mesos.rejectOfferDuration
+  
+Time to consider unused resources refused with unmet constraints
+  
+
+
+  spark.mesos.rejectOfferDurationForReachedMaxCores
+  park.mesos.rejectOfferDuration
--- End diff --

Typo: 'park'


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-26 Thread susanxhuynh
Github user susanxhuynh commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r147265261
  
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,39 @@ See the [configuration page](configuration.html) for information on Spark config
 driver disconnects, the master immediately tears down the framework.
   
 
+
+  spark.mesos.rejectOfferDuration
+  120s
+  
+Time to consider unused resources refused, serves as a fallback of
+`spark.mesos.rejectOfferDurationForUnmetConstraints`,
+`spark.mesos.rejectOfferDurationForReachedMaxCores`,
+`spark.mesos.rejectOfferDurationForReachedMaxMem`
+  
+
+
+  spark.mesos.rejectOfferDurationForUnmetConstraints
+  park.mesos.rejectOfferDuration
--- End diff --

Typo: 'park'


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-26 Thread susanxhuynh
Github user susanxhuynh commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r147274681
  
--- Diff: docs/running-on-mesos.md ---
@@ -344,6 +345,13 @@ See the [configuration page](configuration.html) for information on Spark config
   
 
 
+  spark.mem.max
+  (none)
+  
+Maximum amount of memory Spark accepts from Mesos launching executor.
--- End diff --

Maybe add "across the cluster (not from each machine)". And add something 
about there being no maximum if this property is not set.
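
For illustration (numbers assumed, not from the PR): with 
`spark.executor.memory=4g`, each executor's Mesos footprint is roughly 4.4g 
once the default overhead (the larger of 384MB and 10% of the heap) is added, 
so `spark.mem.max=16g` would cap the job at three such executors across the 
whole cluster; leaving the property unset means no memory cap at all.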


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-26 Thread susanxhuynh
Github user susanxhuynh commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r147277408
  
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,39 @@ See the [configuration page](configuration.html) for information on Spark config
 driver disconnects, the master immediately tears down the framework.
   
 
+
+  spark.mesos.rejectOfferDuration
+  120s
+  
+Time to consider unused resources refused, serves as a fallback of
+`spark.mesos.rejectOfferDurationForUnmetConstraints`,
+`spark.mesos.rejectOfferDurationForReachedMaxCores`,
+`spark.mesos.rejectOfferDurationForReachedMaxMem`
+  
+
+
+  spark.mesos.rejectOfferDurationForUnmetConstraints
+  park.mesos.rejectOfferDuration
+  
+Time to consider unused resources refused with unmet constraints
+  
+
+
+  spark.mesos.rejectOfferDurationForReachedMaxCores
+  park.mesos.rejectOfferDuration
+  
+Time to consider unused resources refused when maximum number of cores
+spark.cores.max is reached
--- End diff --

My suggestion: "Duration for which unused resources are considered declined, 
when maximum number of cores spark.cores.max has been reached."
@ArtRand Is this the documentation you had in mind in 
https://issues.apache.org/jira/browse/SPARK-22133 ? Is this enough information 
for a non-Mesos expert to set this?
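
A minimal sketch of how these knobs would be set (values are arbitrary 
examples; per the quoted docs, the general `spark.mesos.rejectOfferDuration` 
is the fallback for any specific duration left unset):

    import org.apache.spark.SparkConf

    // Example values only; the property names come from the quoted diff.
    val conf = new SparkConf()
      .set("spark.mesos.rejectOfferDuration", "120s")                   // fallback for all three
      .set("spark.mesos.rejectOfferDurationForReachedMaxCores", "300s") // once spark.cores.max is hit
    // spark.mesos.rejectOfferDurationForUnmetConstraints is unset here,
    // so it falls back to the 120s above.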


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-26 Thread susanxhuynh
Github user susanxhuynh commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r147265373
  
--- Diff: docs/running-on-mesos.md ---
@@ -613,6 +621,39 @@ See the [configuration page](configuration.html) for information on Spark config
 driver disconnects, the master immediately tears down the framework.
   
 
+
+  spark.mesos.rejectOfferDuration
+  120s
+  
+Time to consider unused resources refused, serves as a fallback of
+`spark.mesos.rejectOfferDurationForUnmetConstraints`,
+`spark.mesos.rejectOfferDurationForReachedMaxCores`,
+`spark.mesos.rejectOfferDurationForReachedMaxMem`
+  
+
+
+  spark.mesos.rejectOfferDurationForUnmetConstraints
+  park.mesos.rejectOfferDuration
+  
+Time to consider unused resources refused with unmet constraints
+  
+
+
+  spark.mesos.rejectOfferDurationForReachedMaxCores
+  park.mesos.rejectOfferDuration
+  
+Time to consider unused resources refused when maximum number of cores
+spark.cores.max is reached
+  
+
+
+  spark.mesos.rejectOfferDurationForReachedMaxMem
+  park.mesos.rejectOfferDuration
--- End diff --

Typo: 'park'


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-26 Thread ArtRand
Github user ArtRand commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r147121858
  
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
@@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
   private val MAX_SLAVE_FAILURES = 2
 
   private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
+  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
--- End diff --

Do we need to have a check similar to 
https://github.com/apache/spark/blob/06df34d35ec088277445ef09cfb24bfe996f072e/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L73 
but for memory, so that we know we'll "land" on the maximum?
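
A rough sketch of such a memory analogue (not the PR's code; the referenced 
cores check rounds `spark.cores.max` down to a multiple of minCoresPerExecutor 
so the cap is exactly reachable, and this assumes `executorMemory(sc)` from 
MesosSchedulerUtils returns the per-executor requirement in MB):

    // Sketch only: round the configured maximum down to a whole number of
    // executors, mirroring the maxCores rounding.
    private val maxMem: Int = {
      val memPerExecutor = executorMemory(sc)        // per-executor MB, heap + overhead
      val mem = maxMemOption.getOrElse(Int.MaxValue)
      mem - (mem % memPerExecutor)                   // makes the cap exactly "landable"
    }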


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-23 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r146180171
  
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
@@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
   private val MAX_SLAVE_FAILURES = 2
 
   private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
+  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
--- End diff --

Should I add the check with this PR or a separate one?


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-20 Thread skonto
Github user skonto commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r145936265
  
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
@@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
   private val MAX_SLAVE_FAILURES = 2
 
   private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
+  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
--- End diff --

An exception, I think, would be OK. The idea is that if something is never 
going to work, we should let the user know, especially the novice user. At 
minimum, log a warning if we don't want an exception thrown.


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-20 Thread windkit
Github user windkit commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r145890559
  
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
@@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
   private val MAX_SLAVE_FAILURES = 2
 
   private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
+  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
--- End diff --

@skonto 
For cpus, I think we can compare with minCoresPerExecutor.
For mem, we can call MesosSchedulerUtils.executorMemory to get the minimum 
requirement.

Then here, we parse the option, check it against the minimum, and throw an 
exception if it is too small?
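
A minimal sketch of that validation (not the PR's actual code; assumes 
`executorMemory(sc)` from MesosSchedulerUtils returns the per-executor 
requirement, heap plus overhead, in MB):

    // Fail fast at startup if the cluster-wide cap cannot fit even one executor.
    maxMemOption.foreach { maxMem =>
      val minMem = executorMemory(sc)
      require(maxMem >= minMem,
        s"spark.mem.max ($maxMem MB) must be at least the memory required " +
        s"to launch a single executor ($minMem MB)")
    }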


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-18 Thread skonto
Github user skonto commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r145389586
  
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
@@ -64,6 +64,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
   private val MAX_SLAVE_FAILURES = 2
 
   private val maxCoresOption = conf.getOption("spark.cores.max").map(_.toInt)
+  private val maxMemOption = conf.getOption("spark.mem.max").map(Utils.memoryStringToMb)
--- End diff --

Can we defend against minimum values? For example, the default per-executor 
memory footprint is about 1.4GB (the 1g default heap plus 384MB of overhead). 
We could compare against the value returned by 
MesosSchedulerUtils.executorMemory. I don't think these values calculated in 
canLaunchTask ever change.


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-18 Thread skonto
Github user skonto commented on a diff in the pull request:

https://github.com/apache/spark/pull/19510#discussion_r145383146
  
--- Diff: resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackendSuite.scala ---
@@ -152,6 +152,23 @@ class MesosCoarseGrainedSchedulerBackendSuite extends SparkFunSuite
 assert(cpus == maxCores)
   }
 
+  test("mesos does not acquire more than spark.mem.max") {
+setBackend(Map("spark.mem.max" -> "2g",
+   "spark.executor.memory" -> "1g",
+   "spark.executor.cores" -> "1"))
+
+val executorMemory = backend.executorMemory(sc)
+
--- End diff --

remove space


---




[GitHub] spark pull request #19510: [SPARK-22292][Mesos] Added spark.mem.max support ...

2017-10-17 Thread windkit
GitHub user windkit opened a pull request:

https://github.com/apache/spark/pull/19510

[SPARK-22292][Mesos] Added spark.mem.max support for Mesos

## What changes were proposed in this pull request?

To limit the amount of resources a Spark job accepts from Mesos, we can 
currently only use `spark.cores.max`, which caps CPU cores. However, with 
big-memory executors, a job can still consume all of the cluster's memory.

This PR adds a `spark.mem.max` option for Mesos.

## How was this patch tested?

Added Unit Test
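
A usage sketch (not from the PR; all values illustrative), showing the new 
property set like any other Spark configuration entry alongside the existing 
`spark.cores.max` cap:

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative configuration; spark.mem.max is the property added by this PR.
    val conf = new SparkConf()
      .setAppName("mem-max-demo")
      .setMaster("mesos://zk://master:2181/mesos") // hypothetical Mesos master URL
      .set("spark.executor.memory", "4g")          // per-executor heap
      .set("spark.cores.max", "16")                // existing cluster-wide CPU cap
      .set("spark.mem.max", "32g")                 // new cluster-wide memory cap
    val sc = new SparkContext(conf)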


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/windkit/spark mem_max

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/19510.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19510


commit 7c9a1610291f5a98cc47447028d5378caffd3c51
Author: Li, YanKit | Wilson | RIT 
Date:   2017-10-17T05:54:06Z

[SPARK-22292][Mesos] Added spark.mem.max support for Mesos




---
