Is it possible to extend this PR further (or create another PR) to allow for
per-node configuration of workers?
There have been many discussions about heterogeneous Spark clusters. Currently
the configuration on the master overrides the configuration on the workers.
Many Spark users need to mix machines with different CPU/memory capacities in
the same cluster.
Du 
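
(For illustration only: the kind of per-node settings being asked about would
look roughly like the following in conf/spark-env.sh on each worker machine;
the hostless sizes below are made-up examples, not a description of how the
feature currently behaves.)

    # conf/spark-env.sh on a 16-core / 64g machine
    export SPARK_WORKER_CORES=16
    export SPARK_WORKER_MEMORY=60g

    # conf/spark-env.sh on a 4-core / 8g machine
    export SPARK_WORKER_CORES=4
    export SPARK_WORKER_MEMORY=6g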

On Wednesday, January 21, 2015 3:59 PM, Nan Zhu <zhunanmcg...@gmail.com> wrote:

…not sure when it will be reviewed…
but for now you can work around it by running multiple worker instances on a
single machine:
http://spark.apache.org/docs/latest/spark-standalone.html
(search for SPARK_WORKER_INSTANCES)
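
(A minimal sketch of that workaround, assuming the settings go into
conf/spark-env.sh on the worker machine; the values are illustrative.)

    # conf/spark-env.sh: run three worker daemons on this machine,
    # each offering 2 cores and 4g of memory to applications
    export SPARK_WORKER_INSTANCES=3
    export SPARK_WORKER_CORES=2
    export SPARK_WORKER_MEMORY=4g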
Best,
--
Nan Zhu
http://codingcat.me

On Wednesday, January 21, 2015 at 6:50 PM, Larry Liu wrote:
Will SPARK-1706 be included in the next release?
On Wed, Jan 21, 2015 at 2:50 PM, Ted Yu <yuzhih...@gmail.com> wrote:

Please see SPARK-1706
On Wed, Jan 21, 2015 at 2:43 PM, Larry Liu <larryli...@gmail.com> wrote:

I tried to submit a job with --conf "spark.cores.max=6" or
--total-executor-cores 6 on a standalone cluster, but I don't see more than one
executor on each worker. I am wondering how to use multiple executors when
submitting jobs.

Thanks,
Larry
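
(For reference, a sketch of the kind of submission described above; the master
URL, memory size, and application jar are placeholders.)

    # --total-executor-cores 6 is equivalent to --conf "spark.cores.max=6"
    spark-submit \
      --master spark://master-host:7077 \
      --total-executor-cores 6 \
      --executor-memory 2g \
      myapp.jar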