This is a bug in Zeppelin: spark.driver.memory won't take effect. As of now it 
isn't passed to Spark through the --conf parameter. See 
https://issues.apache.org/jira/browse/ZEPPELIN-1263
The workaround is to specify SPARK_DRIVER_MEMORY on the interpreter settings page.
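For example, the workaround would look roughly like this on the Spark interpreter settings page (Interpreter menu, spark interpreter, Properties section); the 4g value is only an illustration, pick whatever your driver actually needs:

```
# Hypothetical interpreter property (name = value), value is an example
SPARK_DRIVER_MEMORY = 4g
```

Once ZEPPELIN-1263 is fixed, setting spark.driver.memory there should work the same way.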



Best Regards,
Jeff Zhang


From: RUSHIKESH RAUT <rushikeshraut...@gmail.com>
Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
Date: Sunday, March 26, 2017 at 5:03 PM
To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
Subject: Re: Zeppelin out of memory issue - (GC overhead limit exceeded)

ZEPPELIN_INTP_JAVA_OPTS
