GitHub user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/731#discussion_r12511528
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala ---
@@ -466,30 +466,14 @@ private[spark] class Master(
    * launched an executor for the app on it (right now the standalone backend doesn't like having
    * two executors on the same worker).
    */
-  def canUse(app: ApplicationInfo, worker: WorkerInfo): Boolean = {
-    worker.memoryFree >= app.desc.memoryPerSlave && !worker.hasExecutor(app)
+  private def canUse(app: ApplicationInfo, worker: WorkerInfo): Boolean = {
+    worker.memoryFree >= app.desc.memoryPerExecutor && !worker.hasExecutor(app) &&
+      worker.coresFree > 0
--- End diff ---
I am not sure about this, but does the above mean that an application can have at most one executor on a given worker at any point in time?
So even if the worker has multiple free cores, different partitions can't be executed in parallel for an app on that worker?
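
To make the concern concrete, here is a minimal, self-contained sketch of how I read the check. The case classes, field values, and launchExecutor helper are hypothetical stand-ins for illustration, not the actual Master internals:

    import scala.collection.mutable

    // Hypothetical stand-ins for the real ApplicationInfo/WorkerInfo.
    case class AppDesc(memoryPerExecutor: Int)
    case class ApplicationInfo(id: String, desc: AppDesc)

    class WorkerInfo(var memoryFree: Int, var coresFree: Int) {
      private val executorsFor = mutable.Set.empty[String]
      def hasExecutor(app: ApplicationInfo): Boolean = executorsFor.contains(app.id)
      def launchExecutor(app: ApplicationInfo): Unit = executorsFor += app.id
    }

    // The check as written in this diff.
    def canUse(app: ApplicationInfo, worker: WorkerInfo): Boolean =
      worker.memoryFree >= app.desc.memoryPerExecutor &&
      !worker.hasExecutor(app) &&
      worker.coresFree > 0

    val app = ApplicationInfo("app-1", AppDesc(512))
    val worker = new WorkerInfo(memoryFree = 4096, coresFree = 8)

    println(canUse(app, worker)) // true: no executor for app-1 on this worker yet
    worker.launchExecutor(app)
    worker.coresFree -= 1        // suppose that executor was granted one core
    println(canUse(app, worker)) // false: refused even with 7 cores still free

If that reading is right, the only way the app can use the remaining cores on that worker is through the single executor it already has there.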