[ https://issues.apache.org/jira/browse/SPARK-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-6920:
--------------------------------
    Labels: bulk-closed  (was: )

> Be more explicit about references to "executor" and "task" in Spark on Mesos
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-6920
>                 URL: https://issues.apache.org/jira/browse/SPARK-6920
>             Project: Spark
>          Issue Type: Bug
>          Components: Documentation, Mesos
>    Affects Versions: 1.0.0
>            Reporter: Andrew Or
>            Priority: Major
>              Labels: bulk-closed
>
> In both Spark and Mesos, the terms "executor" and "task" mean different 
> things depending on the context. In the past, this has caused a great deal of 
> confusion for both users and developers of Spark.
> The consequences are real. For instance, the fine-grained 
> `MesosSchedulerBackend` code incorrectly uses `spark.task.cpus` as the 
> number of cores to grant the Mesos executor [0]. This is a result of 
> conflating the Mesos executor with the Spark executor, and the confusion is 
> reflected in the following comment, where "executor" refers to the Mesos 
> executor, not the Spark executor:
> {code}
> // If the executor doesn't exist yet, subtract CPU for executor
> {code}
> We need to be explicit about which meaning of these terms we intend, so that 
> this part of the Spark code base can progress as quickly as other modules.
> [0] 
> https://github.com/apache/spark/blob/6de282e2de3cb69f9b746d03fde581429248824a/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala#L238
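To make the conflation concrete, here is a minimal Scala sketch (with hypothetical names; it is not the actual `MesosSchedulerBackend` code) of the two resource-accounting views. In fine-grained mode one Mesos executor hosts many Spark tasks, so the cores reserved for the Mesos executor process are a distinct quantity from `spark.task.cpus`, the per-Spark-task CPU requirement:

```scala
// Hypothetical sketch: distinguishing the Mesos executor's own CPU
// reservation from the per-Spark-task CPU setting (spark.task.cpus).
object ExecutorTermsSketch {
  val sparkTaskCpus = 2      // value of spark.task.cpus: cores per *Spark task*
  val mesosExecutorCores = 1 // cores the *Mesos executor* process reserves (hypothetical setting)

  // Conflated accounting: subtracts the per-task amount when launching the
  // Mesos executor, which is the mistake described above.
  def coresLeftConflated(offerCores: Int): Int = offerCores - sparkTaskCpus

  // Explicit accounting: the Mesos executor's reservation is its own quantity,
  // independent of how many cores each Spark task will later claim.
  def coresLeftExplicit(offerCores: Int): Int = offerCores - mesosExecutorCores

  def main(args: Array[String]): Unit = {
    println(coresLeftConflated(8)) // subtracts 2: wrong quantity deducted
    println(coresLeftExplicit(8))  // subtracts 1: the executor's own cores
  }
}
```

With unambiguous names like these, a reader can tell at a glance which "executor" a given subtraction refers to.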



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
