Hi Divya,

I think you are trying to run Spark jobs on YARN and would like to submit each job to a different queue so they can run at the same time. By default, every application goes to the same queue, and your first job already asks for more vcores (10 executors x 5 cores = 50) than the 16 your cluster has, so the second job waits in the ACCEPTED state until the first finishes. You may need to prepare queues on YARN by configuring the scheduler, and size each job so that both fit at once.
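For example (a minimal sketch assuming EMR's default CapacityScheduler; the queue names q1 and q2 and the 50/50 split are hypothetical), capacity-scheduler.xml could split the cluster into two queues:

  <!-- declare two child queues under root (names are just examples) -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>q1,q2</value>
  </property>
  <!-- give each queue half of the cluster capacity -->
  <property>
    <name>yarn.scheduler.capacity.root.q1.capacity</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.q2.capacity</name>
    <value>50</value>
  </property>

Each job can then be sent to its own queue at submit time, with executors sized so that both jobs together stay within the 16 available vcores, something like:

  spark-submit --queue q1 \
    --num-executors 3 \
    --executor-cores 2 \
    --executor-memory 2G \
    ...

and the second job submitted the same way with --queue q2. The exact executor numbers above are only illustrative; they depend on how much memory and how many cores you want each job to hold.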

Best regards,
KyeongHee


--------- Original Message ---------
Sender : Divya Gehlot <divya.htco...@gmail.com>
Date : 2016-09-14 15:08 (GMT+9)
Title : how to specify cores and executors to run Spark jobs simultaneously

Hi,

I am on an EMR cluster, and my cluster configuration is as below:
Number of nodes (including master node): 3
Memory: 22.50 GB
VCores Total: 16
Active Nodes: 2
Spark version: 1.6.1

Parameters set in spark-defaults.conf:

spark.executor.instances         2
spark.executor.cores             8
spark.driver.memory              10473M
spark.executor.memory            9658M
spark.default.parallelism        32

Let me know if you need any other info regarding the cluster.

The current configuration for spark-submit is:
--driver-memory 5G \
--executor-memory 2G \
--executor-cores 5 \
--num-executors 10 \


Currently, with the above job configuration, if I try to run another Spark job it stays in the ACCEPTED state till the first one finishes.
How do I optimize or update the above spark-submit configuration so that more Spark jobs can run simultaneously?

Would really appreciate the help.

Thanks,
Divya 
