Hi there, I need some help, please.

I'm running Zeppelin 0.5.5 locally and am trying to increase my executors'
memory: according to the web UI they only get the default 1g. In
conf/zeppelin-env.sh I've configured the following:

export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=10g
-Dspark.driver.memory=5g -Dspark.cores.max=8"
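
For reference, here is how I check the effective configuration from a
notebook paragraph (Scala). The per-key lookups are just a debugging sketch
I added on top of the usual toDebugString dump, with "unset" as my own
fallback label:

// Print the full effective Spark configuration
println(sc.getConf.toDebugString)

// Check the relevant keys individually
println(sc.getConf.get("spark.executor.memory", "unset"))
println(sc.getConf.get("spark.driver.memory", "unset"))
println(sc.getConf.get("spark.cores.max", "unset"))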

However, when I run this check, only the driver memory reflects what I set
here; neither the executor memory nor the maximum number of cores changes.
Likewise, the executors listed in the web UI are always provisioned with 1g.
So downstream, as soon as I do any DataFrame work, even on modestly sized
datasets, I immediately hit java.lang.OutOfMemoryError: Java heap space.
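
To illustrate, even a trivial aggregation like the sketch below fails; the
path and column name are placeholders standing in for my actual data:

// Hypothetical repro on a modestly sized JSON dataset
val df = sqlContext.read.json("/path/to/events.json")
df.groupBy("someColumn").count().show()
// => java.lang.OutOfMemoryError: Java heap space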

How/where do I correctly define how much memory Spark executors get when
running through a local Zeppelin instance?

Regards,
Florian
