Date: Fri, 9 Oct 2015 10:26:08 +0530
From: praag...@gmail.com
To: users@zeppelin.incubator.apache.org
Subject: Re: how to speed up zeppelin spark job?

+1 for spark.executor.instances; see http://spark.apache.org/docs/latest/running-on-yarn.html
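(For reference, a minimal sketch of how these two properties might look when set on the spark interpreter in Zeppelin's Interpreter tab; the values below are illustrative, not from the thread:)

    # Interpreter tab -> spark -> edit (example values)
    spark.executor.instances    15      # number of executors to request on YARN
    spark.executor.memory       4096m   # memory per executor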
Try spark.executor.instances=N, and to increase the memory per instance try spark.executor.memory=Nmb.

Regards,
-Pranav.

On 08/10/15 12:13 pm, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:
Is this the number of cores per executor? I would like to increase the number of executors from 2 to a high value like 300, as I have a 300-node cluster.

On Wed, Oct 7, 2015 at 9:24 PM Mina Lee <mina...@nflabs.com> wrote:
You can change the number of executors by modifying your spark interpreter property `spark.cores.max` in the Interpreter tab.

On Thu, Oct 8, 2015 at 2:22 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com> wrote:
Any suggestions?

On Sun, Oct 4, 2015 at 9:26 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com> wrote:
Any suggestions?

On Fri, Oct 2, 2015 at 3:40 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com> wrote:
It always gets three executors: one for the driver and the other two for execution. I have 15 data nodes that could be used as executors. I have these in my Zeppelin conf:

    export JAVA_HOME=/usr/src/jdk1.7.0_79/
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.1.0-2574"
    export SPARK_SUBMIT_OPTIONS="--num-executors 15 --driver-memory 14g --driver-java-options -XX:MaxPermSize=512M -Xmx4096M -Xms4096M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps --executor-memory 14g --executor-cores 1"

On Fri, Oct 2, 2015 at 3:32 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com> wrote:
How do I increase the number of Spark executors started by Zeppelin?

--
Deepak
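One caveat about the SPARK_SUBMIT_OPTIONS line above: the flags after --driver-java-options are unquoted, so spark-submit sees -Xmx4096M, -Xms4096M, and the GC flags as separate top-level arguments rather than as driver JVM options. Below is a cleaned-up sketch with the same values, assuming your Zeppelin version preserves the inner quoting when it expands SPARK_SUBMIT_OPTIONS (if it does not, setting spark.driver.extraJavaOptions in the interpreter properties is an alternative for the GC flags):

    # conf/zeppelin-env.sh -- values taken from the thread above
    export JAVA_HOME=/usr/src/jdk1.7.0_79/
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.1.0-2574"
    # Quote the driver JVM flags so spark-submit receives them as one argument.
    export SPARK_SUBMIT_OPTIONS="--num-executors 15 --driver-memory 14g \
      --driver-java-options '-XX:MaxPermSize=512M -Xmx4096M -Xms4096M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps' \
      --executor-memory 14g --executor-cores 1"

Also note that --num-executors (and spark.executor.instances) only takes effect when running on YARN; in standalone mode the closest knob is the total-cores cap spark.cores.max, which is what Mina's suggestion controls.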