When you set .setMaster to local[4], you are allocating 4 worker threads on your local machine. Change it to local[1] to run on a single thread.
If you are submitting the job to a standalone Spark cluster and you want to limit the number of cores for your job, then you can do it like this:

    sparkConf.set("spark.cores.max", "224")

Thanks
Best Regards

On Wed, Aug 26, 2015 at 7:26 PM, anshu shukla <anshushuk...@gmail.com> wrote:

> Hey,
>
> I need to set the number of cores from inside the topology. It works
> fine when set in spark-env.sh, but I am unable to do it by setting a
> key/value pair on the conf.
>
>     SparkConf sparkConf = new SparkConf()
>             .setAppName("JavaCustomReceiver")
>             .setMaster("local[4]");
>
>     if (toponame.equals("IdentityTopology")) {
>         sparkConf.setExecutorEnv("SPARK_WORKER_CORES", "1");
>     }
>
> --
> Thanks & Regards,
> Anshu Shukla
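Putting the two approaches side by side, a minimal sketch might look like the following. The master URL and the core counts here are illustrative placeholders, not values from the original thread; this is a configuration fragment and assumes a Spark dependency on the classpath.

```java
import org.apache.spark.SparkConf;

public class CoreLimitExample {
    public static void main(String[] args) {
        // Local mode: local[N] caps the number of worker threads
        // used by the driver on this machine.
        SparkConf localConf = new SparkConf()
                .setAppName("JavaCustomReceiver")
                .setMaster("local[1]"); // single thread

        // Standalone cluster: spark.cores.max caps the total number of
        // cores the application may claim across the whole cluster.
        SparkConf clusterConf = new SparkConf()
                .setAppName("JavaCustomReceiver")
                .setMaster("spark://master-host:7077") // hypothetical master URL
                .set("spark.cores.max", "4");          // illustrative limit
    }
}
```

Note that setExecutorEnv("SPARK_WORKER_CORES", "1") sets an environment variable on the executors; it does not limit the cores the scheduler assigns to the application, which is why spark.cores.max is the setting to use here.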