You can change the number of executors by modifying the spark interpreter
property `spark.cores.max` in the Interpreter tab (it caps the total cores
the application may use).
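For example, properties along these lines in the Interpreter tab (a sketch with illustrative values; `spark.cores.max` applies to standalone and Mesos clusters, while on YARN the equivalent knob is `spark.executor.instances` / `--num-executors`):

```properties
# Set under the spark interpreter in the Interpreter tab
# (illustrative values -- tune to your cluster)
spark.cores.max=30
spark.executor.cores=2
spark.executor.memory=14g
```

With these settings a standalone cluster would schedule up to
spark.cores.max / spark.executor.cores = 15 executors, assuming enough
workers are available.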

On Thu, Oct 8, 2015 at 2:22 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com> wrote:

> Any suggestions ?
>
> On Sun, Oct 4, 2015 at 9:26 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com> wrote:
>
>> Any suggestions ?
>>
>> On Fri, Oct 2, 2015 at 3:40 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com>
>> wrote:
>>
>>> It always gets three executors: one for the driver and the other two for
>>> execution. I have 15 data nodes that can be used as executors.
>>>
>>> I have these in zeppelin-conf
>>>
>>> export JAVA_HOME=/usr/src/jdk1.7.0_79/
>>>
>>> export HADOOP_CONF_DIR=/etc/hadoop/conf
>>>
>>> export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.1.0-2574"
>>>
>>> export SPARK_SUBMIT_OPTIONS="--num-executors 15 --driver-memory 14g
>>> --driver-java-options \"-XX:MaxPermSize=512M -Xmx4096M -Xms4096M -verbose:gc
>>> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps\" --executor-memory 14g
>>> --executor-cores 1"
>>>
>>> On Fri, Oct 2, 2015 at 3:32 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com>
>>> wrote:
>>>
>>>> How do I increase the number of Spark executors started by Zeppelin?
>>>>
>>>> --
>>>> Deepak
>>>>
>>>>
>>>
>>>
>>
>>
>
>
