Hi, 

I know you can configure Zeppelin to either use a local copy of Spark or point at the IP
address of a Spark master in the configuration files.

I was wondering: can this be done from within the notebook itself?

That way I could use the same notebook to run one paragraph on the local machine,
have the next paragraph run on a different cluster, and maybe have a third run
elsewhere too.
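
To illustrate what I'm after, a paragraph like the sketch below (a plain %spark
Scala paragraph) just prints the master that the current SparkContext is bound
to; I'd like a way to change that binding between paragraphs:

    %spark
    // show which Spark master this paragraph's context is bound to,
    // e.g. local[*] or spark://host:7077
    println(sc.master)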


A related question: I have a paragraph that handles all of my dependencies for
setting up my notebook (%spark.dep). Since this has to run before the Spark
interpreter starts, is there a way to shut Spark down, run the dependency
paragraph, and start Spark again, all from within the paragraph/notebook?
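
For reference, the dependency paragraph is along these lines (a minimal sketch;
the artifact coordinate below is just a placeholder, not my real list):

    %spark.dep
    z.reset()                                      // clear previously loaded artifacts
    z.load("com.databricks:spark-csv_2.10:1.4.0")  // placeholder coordinate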

TIA

-Mike
