Hi,

Configuration: standalone cluster, Java, Spark 1.6.2, 24 cores

My process uses all the cores of my server (good), but I am trying to limit it 
so I can actually submit a second job.

I tried

                SparkConf conf = new SparkConf().setAppName("NC Eatery 
app").set("spark.executor.memory", "4g")
                                .setMaster("spark://10.0.100.120:7077");
                if (restId == 0) {
                        conf = conf.set("spark.executor.cores", "22");
                } else {
                        conf = conf.set("spark.executor.cores", "2");
                }
                JavaSparkContext javaSparkContext = new JavaSparkContext(conf);

and

                SparkConf conf = new SparkConf().setAppName("NC Eatery 
app").set("spark.executor.memory", "4g")
                                .setMaster("spark://10.0.100.120:7077");
                if (restId == 0) {
                        conf.set("spark.executor.cores", "22");
                } else {
                        conf.set("spark.executor.cores", "2");
                }
                JavaSparkContext javaSparkContext = new JavaSparkContext(conf);

but neither seems to take effect: the job still grabs all 24 cores. Any hint?
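
Or should I be looking at spark.cores.max instead? If I read the docs correctly, in standalone mode it caps the total number of cores an application requests from the cluster, which sounds closer to what I want. I have not tried it yet, but I was thinking of something like:

        SparkConf conf = new SparkConf().setAppName("NC Eatery app")
                .set("spark.executor.memory", "4g")
                .setMaster("spark://10.0.100.120:7077");
        if (restId == 0) {
                // cap this application at 22 cores total across the standalone cluster
                conf.set("spark.cores.max", "22");
        } else {
                // take only 2 cores, leaving the rest free for the other job
                conf.set("spark.cores.max", "2");
        }
        JavaSparkContext javaSparkContext = new JavaSparkContext(conf);

Is that the right knob, or is there something else I am missing?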

jg
