can you try with:

SparkConf conf = new SparkConf().setAppName("NC Eatery app")
        .set("spark.executor.memory", "4g")
        .setMaster("spark://10.0.100.120:7077");
if (restId == 0) {
    conf = conf.set("spark.executor.cores", "22");
} else {
    conf = conf.set("spark.executor.cores", "2");
}
JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
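If that still does not limit the job, here is a minimal, self-contained sketch (master URL, memory, and the "22"/"2" values taken from the thread; the class name and restId handling are placeholders) that caps the application's total cores with spark.cores.max instead. In standalone mode spark.cores.max bounds how many cores the whole application may take from the cluster, while spark.executor.cores only sets the cores per executor:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class LimitCoresSketch {
    public static void main(String[] args) {
        // restId stands in for the flag used in the thread (hypothetical here)
        int restId = args.length > 0 ? Integer.parseInt(args[0]) : 0;

        SparkConf conf = new SparkConf()
                .setAppName("NC Eatery app")
                .set("spark.executor.memory", "4g")
                .setMaster("spark://10.0.100.120:7077");

        // spark.cores.max caps the total cores this application can claim
        // on a standalone cluster, leaving the rest free for a second job.
        conf.set("spark.cores.max", restId == 0 ? "22" : "2");

        JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
        // ... job code ...
        javaSparkContext.stop();
    }
}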
On Fri, Jul 15, 2016 at 2:31 PM, Jean Georges Perrin <j...@jgp.net> wrote:

> Hi,
>
> Configuration: standalone cluster, Java, Spark 1.6.2, 24 cores
>
> My process uses all the cores of my server (good), but I am trying to
> limit it so I can actually submit a second job.
>
> I tried
>
> SparkConf conf = new SparkConf().setAppName("NC Eatery app")
>         .set("spark.executor.memory", "4g")
>         .setMaster("spark://10.0.100.120:7077");
> if (restId == 0) {
>     conf = conf.set("spark.executor.cores", "22");
> } else {
>     conf = conf.set("spark.executor.cores", "2");
> }
> JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
>
> and
>
> SparkConf conf = new SparkConf().setAppName("NC Eatery app")
>         .set("spark.executor.memory", "4g")
>         .setMaster("spark://10.0.100.120:7077");
> if (restId == 0) {
>     conf.set("spark.executor.cores", "22");
> } else {
>     conf.set("spark.executor.cores", "2");
> }
> JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
>
> but it does not seem to take it. Any hint?
>
> jg

--
M'BAREK Med Nihed,
Fedora Ambassador, TUNISIA, Northern Africa
http://www.nihed.com
<http://tn.linkedin.com/in/nihed>