[ 
https://issues.apache.org/jira/browse/SPARK-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206288#comment-14206288
 ] 

Sean Owen commented on SPARK-4341:
----------------------------------

Yes, but the executors live as long as the app does. The app may invoke lots 
of operations, large and small, with different numbers of partitions each. It 
is not like MR, where one MR job executes one map and one reduce. Too many 
splits does not waste resources; it means you incur the overhead of launching 
more tasks, but that overhead is relatively small.

Concretely, how do you propose to set this automatically?

> Spark need to set num-executors automatically
> ---------------------------------------------
>
>                 Key: SPARK-4341
>                 URL: https://issues.apache.org/jira/browse/SPARK-4341
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.1.0
>            Reporter: Hong Shen
>
> A MapReduce job can set the number of map tasks automatically, but in Spark 
> we have to set num-executors, executor memory and cores. It's difficult for 
> users to set these args, especially for users who want to use Spark SQL. So 
> when a user hasn't set num-executors, Spark should set num-executors 
> automatically according to the input partitions.
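[Editor's note: a minimal sketch of the three resource flags the report refers to, as passed to spark-submit. The values, the class name com.example.App, and app.jar are purely illustrative, not recommendations or part of the original report.]

```shell
# Illustrative only -- the settings the reporter says users must pick by hand:
#   --num-executors   how many executors to launch for the app
#   --executor-memory heap size per executor
#   --executor-cores  concurrent task slots per executor
spark-submit \
  --num-executors 4 \
  --executor-memory 2g \
  --executor-cores 2 \
  --class com.example.App \
  app.jar
```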



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
