[ https://issues.apache.org/jira/browse/SPARK-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Timothy Chen updated SPARK-5095:
--------------------------------
    Description: 
Currently in coarse-grained Mesos mode, it's expected that we only launch one 
Mesos executor that launches one JVM process to launch multiple Spark executors.

However, this becomes a problem when the JVM process launched is larger than 
the ideal size (30 GB is the value recommended by Databricks), which causes 
the GC problems reported on the mailing list.

We should support launching multiple executors when resources large enough 
for Spark to use are available and still under the configured limit.

This is also applicable when users want to specify the number of executors to 
be launched on each node.
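As a sketch of what this could look like from the user's side, assuming a 
`spark.executor.cores`-style per-executor cap (the actual setting is not 
decided in this ticket), the scheduler would split a large Mesos offer into 
several smaller executors instead of one oversized JVM:

```shell
# Hypothetical usage once multiple executors per node are supported.
# The flag names below are existing Spark configs, but their interaction
# with coarse-grained Mesos mode here is an assumption, not the final design.
./bin/spark-submit \
  --master mesos://zk://master:2181/mesos \
  --conf spark.cores.max=96 \
  --conf spark.executor.cores=8 \
  --conf spark.executor.memory=24g \
  my-app.jar
```

With a cap like this, a 32-core offer from a single agent could be carved into 
four 8-core executors, each with a heap well under the ~30 GB threshold, 
rather than one 32-core JVM.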

  was:
Currently in coarse-grained Mesos mode, it's expected that we only launch one 
Mesos executor that launches one JVM process to launch multiple Spark executors.

However, this becomes a problem when the JVM process launched is larger than 
the ideal size (30 GB is the value recommended by Databricks), which causes 
the GC problems reported on the mailing list.

We should support launching multiple executors when resources large enough 
for Spark to use are available and still under the configured limit.


> Support launching multiple mesos executors in coarse grained mesos mode
> -----------------------------------------------------------------------
>
>                 Key: SPARK-5095
>                 URL: https://issues.apache.org/jira/browse/SPARK-5095
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Timothy Chen
>
> Currently in coarse-grained Mesos mode, it's expected that we only launch 
> one Mesos executor that launches one JVM process to launch multiple Spark 
> executors.
> However, this becomes a problem when the JVM process launched is larger 
> than the ideal size (30 GB is the value recommended by Databricks), which 
> causes the GC problems reported on the mailing list.
> We should support launching multiple executors when resources large enough 
> for Spark to use are available and still under the configured limit.
> This is also applicable when users want to specify the number of executors 
> to be launched on each node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
