Hi,

Sorry for the late response. I got rid of the error after setting the fields below. I suspect it was a cluster-specific issue, nothing on the Zeppelin side.

Thanks for the reply.

spark.dynamicAllocation.initialExecutors                   45
spark.dynamicAllocation.maxExecutors                       60
spark.dynamicAllocation.minExecutors                       5
spark.dynamicAllocation.schedulerBacklogTimeout            600
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout   600
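
For reference, a minimal sketch of how the same properties could be set in conf/spark-defaults.conf (they can equally go into Zeppelin's Spark interpreter properties). The enabled/shuffle-service lines are my assumption of a standard Spark 1.3 on YARN setup, since dynamic allocation requires the external shuffle service; the numeric values are simply the ones from my cluster:

  # conf/spark-defaults.conf -- sketch; adjust values to your workload
  spark.dynamicAllocation.enabled                            true
  # dynamic allocation on YARN requires the external shuffle service
  spark.shuffle.service.enabled                              true
  spark.dynamicAllocation.initialExecutors                   45
  spark.dynamicAllocation.minExecutors                       5
  spark.dynamicAllocation.maxExecutors                       60
  # seconds of task backlog before requesting new executors
  spark.dynamicAllocation.schedulerBacklogTimeout            600
  spark.dynamicAllocation.sustainedSchedulerBacklogTimeout   600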



From: Jongyoul Lee [mailto:jongy...@gmail.com]
Sent: Sunday, July 05, 2015 6:45 AM
To: users@zeppelin.incubator.apache.org
Subject: Re: Spark Context time out on Yarn cluster

Hi,

My YARN cluster also has a dynamic allocation setting configured, and I've tested 
it too. However, I'm not sure whether that setting works correctly. 
Have you already tested dynamic allocation? If you don't mind, please share your 
zeppelin-env.sh and interpreter settings.

Regards,
Jongyoul Lee

On Fri, Jun 19, 2015 at 8:45 AM, Sambit Tripathy (RBEI/EDS1) 
<sambit.tripa...@in.bosch.com> wrote:
Hi,

Recently, YARN's dynamic allocation feature was enabled on our cluster due to an 
increase in workload. At the same time, I upgraded Zeppelin to work with 
Spark 1.3.1.

Now the Spark context created in the notebook is short-lived. Every time I run a 
command, it throws an error saying the Spark context has been stopped.

Do I have to provide any configuration in zeppelin-env.sh or the interpreter 
settings to make this work with YARN dynamic allocation?



Regards,
Sambit.




--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
