The following thread may help:

http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Can-not-configure-driver-memory-size-td1513.html
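
In short (assuming the default setup): Zeppelin runs Spark in local mode
unless MASTER/SPARK_HOME say otherwise, and in local mode the executors
live inside the driver JVM, so spark.executor.memory has essentially no
effect; the driver heap is the only heap there is. A minimal
zeppelin-env.sh sketch (the 8g value is just illustrative):

export SPARK_SUBMIT_OPTIONS="--driver-memory 8g"  # applies when SPARK_HOME is set
export ZEPPELIN_MEM="-Xmx8g"                      # embedded interpreter JVM otherwise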

Thanks,
moon

On Thu, Dec 10, 2015 at 6:47 PM Fengdong Yu <fengdo...@everstring.com>
wrote:

> Confirmed.
>
> spark.executor.memory doesn’t change.
>
> @Moon ?
>
>
>
> On Dec 10, 2015, at 4:52 PM, Florian Leitner <
> florian.leit...@seleritycorp.com> wrote:
>
> Hi there,
>
> Yes, I restart Zeppelin each time. But the environment panel only shows 8g
> for the executors, no matter what value I define in the settings. As I'm
> getting OOM errors and am actually trying to set it *lower*, it hardly
> matters, I guess, although it's annoying that my settings don't seem to be
> accepted.
>
> However, that means the heap-space (and sometimes GC) OutOfMemory errors
> that occur (only) with Zeppelin the moment I try to run SQL queries on my
> DataFrames remain a mystery to me. I don't get them using the exact same
> code/data/content with any other Scala/Spark notebook, so it's something
> specific about Zeppelin's default settings that creates the problem.
>
> I tried keeping the defaults except for the executor memory (using 4g
> instead), as that works for me with the IBM-Spark Jupyter kernel and the
> Spark-Notebook:
>
> $ grep -v "^#" conf/zeppelin-env.sh | grep -v "^$"
> export ZEPPELIN_JAVA_OPTS="-Dspark.storage.memoryFraction=0.6
> -Dspark.executor.memory=4g -Dspark.driver.memory=1g
> -Dspark.driver.maxResultSize=1g -Dspark.cores.max=4"
> export ZEPPELIN_MEM="-Xmx2g"
> export SPARK_SUBMIT_OPTIONS="--driver-memory 1g --executor-memory 4g"
>
> (Note that I'm using Java 8, so MaxPermSize is no longer relevant.)
> I checked both logs for any issues with the config settings, but there
> are no suspicious messages either.
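>
> For reference, one way to double-check the heap the interpreter JVM
> actually received (the grep pattern is only an example):
>
> $ ps aux | grep -i [z]eppelin | grep -o '\-Xmx[0-9]*[gmk]'
>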
> So I'm at a complete loss as to why Zeppelin errors out on a simple dataset
> where all other notebooks work fine (and my dataset is tiny).
> Any ideas which Zeppelin defaults differ from the other notebooks' that I
> am either not aware of or not affecting with these settings?
>
> Regards,
> Florian
>
> On Thu, Dec 10, 2015 at 6:30 AM, Fengdong Yu <fengdo...@everstring.com>
> wrote:
>
>>
>> Did you restart Zeppelin after you exported the OPTS in zeppelin-env.sh?
>>
>> > On Dec 10, 2015, at 7:42 AM, Florian Leitner <florian.leit...@seleritycorp.com> wrote:
>> >
>> > Hi there, I need some help, please.
>> >
>> > I'm using Zeppelin 0.5.5 (locally), and am trying to increase my
>> > executors' memory sizes. They only get the default 1G according to the
>> > web panel. In conf/zeppelin-env.sh, I've configured the following:
>> >
>> > export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=10g
>> > -Dspark.driver.memory=5g -Dspark.cores.max=8"
>> >
>> > However, if, in a notebook, I then run sc.getConf.toDebugString, I only
>> > see the driver memory change to whatever I set here. Neither the executor
>> > memory nor the max. number of cores changes with this setting. Also, if I
>> > look at running Executors on the web panel, they are always provisioned
>> > with 1g. So downstream, as soon as I do DataFrame work even on just
>> > modestly sized datasets, I immediately run into
>> > java.lang.OutOfMemoryError: Java heap space errors...
>> >
>> > How/where do I then correctly define how much memory Spark executors
>> > get when running via a local Zeppelin instance?
>> >
>> > Regards,
>> > Florian
>>
>>
>
>
