[ https://issues.apache.org/jira/browse/SPARK-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207899#comment-14207899 ]

Hong Shen commented on SPARK-4341:
----------------------------------

My main point is that when running Spark (especially Spark SQL), not every user 
wants to tune the parallelism to match the executors; we can provide an easier 
way for them to use Spark.
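
For context, this is the manual sizing users have to do today. A minimal sketch 
of the equivalent configuration (YARN mode; the app name and values below are 
only placeholders, not from the issue):

    import org.apache.spark.{SparkConf, SparkContext}

    // What users must specify by hand today, matching the spark-submit flags
    // --num-executors, --executor-memory and --executor-cores on YARN.
    val conf = new SparkConf()
      .setAppName("manual-sizing-example")      // placeholder app name
      .set("spark.executor.instances", "8")     // --num-executors
      .set("spark.executor.memory", "4g")       // --executor-memory
      .set("spark.executor.cores", "2")         // --executor-cores
    val sc = new SparkContext(conf)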

> Spark need to set num-executors automatically
> ---------------------------------------------
>
>                 Key: SPARK-4341
>                 URL: https://issues.apache.org/jira/browse/SPARK-4341
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.1.0
>            Reporter: Hong Shen
>
> A MapReduce job can set the number of map tasks automatically, but in Spark we 
> have to set num-executors, executor memory, and cores. It's difficult for users 
> to set these arguments, especially for users who want to use Spark SQL. So when 
> the user hasn't set num-executors, Spark should set num-executors automatically 
> according to the input partitions.



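Purely as an illustration of the proposal (not code from the issue), deriving an 
executor count from the input's partition count might look like the sketch below; 
the input path and the cap of 50 are hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("auto-sizing-sketch"))
    // Hypothetical input path; the partition count comes from the input splits.
    val input = sc.textFile("hdfs:///tmp/example-input")
    // Sketch: request roughly one executor per input partition, capped at an
    // arbitrary limit of 50. The actual heuristic is what this issue asks for.
    val desiredExecutors = math.min(input.partitions.length, 50)
    println(s"would request $desiredExecutors executors")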