Re: Configuring Zeppelin spark interpreter to work with different hadoop clusters

2017-06-30 Thread Serega Sheypak
Hi, thanks for your reply! What do you mean by that? I can have only one env variable HADOOP_CONF_DIR... And how can a user pick which env to run? Or do you mean I have to create three Spark interpreters, each with its own HADOOP_CONF_DIR pointing to a single cluster config?

Re: Configuring Zeppelin spark interpreter to work with different hadoop clusters

2017-06-30 Thread Jeff Zhang
Try setting HADOOP_CONF_DIR for each YARN conf in the interpreter setting. Serega Sheypak wrote on Fri, Jun 30, 2017 at 10:11 PM: > Hi I have several different hadoop clusters, each of them has its own > YARN. > Is it possible to configure a single Zeppelin instance to work with > different clusters? > I want to run spa
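
A minimal sketch of the approach Jeff suggests, assuming one Spark interpreter setting per cluster (names and paths are illustrative):

    # In the interpreter tab, create one Spark interpreter per cluster,
    # e.g. spark_clusterA and spark_clusterB, and add to each setting:
    #   spark_clusterA:  HADOOP_CONF_DIR = /etc/hadoop/conf-clusterA
    #   spark_clusterB:  HADOOP_CONF_DIR = /etc/hadoop/conf-clusterB
    # Each notebook then binds to the interpreter of the cluster holding its data.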

Configuring Zeppelin spark interpreter to work with different hadoop clusters

2017-06-30 Thread Serega Sheypak
Hi I have several different hadoop clusters, each of them has its own YARN. Is it possible to configure a single Zeppelin instance to work with different clusters? I want to run spark on cluster A if the data is there. Right now my Zeppelin runs on a single cluster and it sucks data from remote clusters w

Re: java.lang.NullPointerException on adding local jar as dependency to the spark interpreter

2017-05-14 Thread shyla deshpande
I added my local jar to my local Maven repo and added the dependency by filling in groupId:artifactId:version, and then it worked. On Tue, May 9, 2017 at 10:19 AM, Jongyoul Lee wrote: > Can you add your spark interpreter's log file? > > On Sat, May 6, 2017 at 12:53 AM, shyla deshpande > wrote: > >>
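
A sketch of that workaround, with illustrative jar path and coordinates:

    # install the local jar into the local Maven repository
    mvn install:install-file -Dfile=/path/to/my-lib.jar \
        -DgroupId=com.example -DartifactId=my-lib -Dversion=1.0 -Dpackaging=jar
    # then, in the interpreter dependency field, reference it as:
    #   com.example:my-lib:1.0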

restarting spark interpreter while it's running results

2017-05-12 Thread Ruslan Dautkhanov
Restarting the spark interpreter while a spark paragraph is running results in a broken Zeppelin state: - the popup window showing that the spark interpreter is restarting never closes (keeps spinning); - refreshing the browser window shows [2]; - all interpreters "disappear"; - an attempt to start any spark

Re: java.lang.NullPointerException on adding local jar as dependency to the spark interpreter

2017-05-09 Thread Jongyoul Lee
Can you add your spark interpreter's log file? On Sat, May 6, 2017 at 12:53 AM, shyla deshpande wrote: > Also, my local jar file that I want to add as dependency is a fat jar with > dependencies. Nothing works after I add my local fat jar, I get > *java.lang.NullPointerException > for everythi

Re: java.lang.NullPointerException on adding local jar as dependency to the spark interpreter

2017-05-05 Thread shyla deshpande
Also, my local jar file that I want to add as a dependency is a fat jar with dependencies. Nothing works after I add my local fat jar; I get java.lang.NullPointerException for everything. Please help. On Thu, May 4, 2017 at 10:18 PM, shyla deshpande wrote: > Adding the dependency by filling grou

Re: Preconfigure Spark interpreter

2017-04-22 Thread Paul Brenner
2017 at 3:13 PM Serega Sheypak wrote: Aha, thanks. I'm building Zeppelin from source, so I can put my custom settings directly? BTW, why doesn't the interpreter-list file contain the spark int

Re: Preconfigure Spark interpreter

2017-04-22 Thread Serega Sheypak
Aha, thanks. I'm building Zeppelin from source, so I can put my custom settings directly? BTW, why doesn't the interpreter-list file contain the spark interpreter? 2017-04-22 13:33 GMT+02:00 Fabian Böhnlein: > Do it via the UI once and you'll see how interpreter.json of the Zeppe

Re: Preconfigure Spark interpreter

2017-04-22 Thread Fabian Böhnlein
Do it via the UI once and you'll see how the interpreter.json of the Zeppelin installation is changed. On Sat, Apr 22, 2017, 11:35 Serega Sheypak wrote: > Hi, I need to pre-configure the spark interpreter with my own artifacts and > internal repositories. How can I do it? >
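
After saving the setting once in the UI, conf/interpreter.json gains an entry along these lines (the exact shape varies by Zeppelin version; values illustrative):

    "dependencies": [
      { "groupArtifactVersion": "com.example:my-lib:1.0", "local": false }
    ],
    "properties": { ... }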

Preconfigure Spark interpreter

2017-04-22 Thread Serega Sheypak
Hi, I need to pre-configure spark interpreter with my own artifacts and internal repositories. How can I do it?

Re: Spark Interpreter: Change default scheduler pool

2017-04-17 Thread Fabian Böhnlein
Hi moon, exactly, thanks for the pointer. Added the issue: https://issues.apache.org/jira/browse/ZEPPELIN-2413 Best, Fabian On Tue, 28 Mar 2017 at 15:48 moon soo Lee wrote: > Hi Fabian, > > Thanks for sharing the issue. > SparkSqlInterpreter set scheduler to "fair" depends on interpreter > p

Re: Spark Interpreter: Change default scheduler pool

2017-03-28 Thread moon soo Lee
Hi Fabian, Thanks for sharing the issue. SparkSqlInterpreter sets the scheduler to "fair" depending on an interpreter property [1]. I think we can do something similar for SparkInterpreter. Do you mind filing a new JIRA issue for it? Regards, moon [1] https://github.com/apache/zeppelin/blob/0e1964877654c56c72473a

Spark Interpreter: Change default scheduler pool

2017-03-28 Thread Fabian Böhnlein
Hi all, how can I change, globally for Zeppelin, the default scheduler pool which SparkInterpreter submits jobs to? Currently all jobs go into the pool 'default', but I want them to go into the pool 'fair'. We use "Per Note" and "scoped" processes for best resource sharing. "spark.scheduler.pool"
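
For reference, the underlying Spark settings involved (standard Spark properties; a sketch, assuming a 'fair' pool defined in fairscheduler.xml):

    # spark-defaults.conf, or properties in the Spark interpreter setting
    spark.scheduler.mode             FAIR
    spark.scheduler.allocation.file  /path/to/fairscheduler.xml
    # Spark selects a pool per thread via:
    #   sc.setLocalProperty("spark.scheduler.pool", "fair")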

Re: "spark ui" button in spark interpreter does not show Spark web-ui

2017-03-13 Thread Hyung Sung Shim
Hello. Thank you for sharing the problem. Could you file a jira issue for this? On Mon, Mar 13, 2017 at 3:18 PM, Meethu Mathew wrote: > Hi, > > I have noticed the same problem > > Regards, > > > Meethu Mathew > > > On Mon, Mar 13, 2017 at 9:56 AM, Xiaohui Liu wrote: > > Hi, > > We used 0.7.1-snapshot w

Re: "spark ui" button in spark interpreter does not show Spark web-ui

2017-03-12 Thread Meethu Mathew
Hi, I have noticed the same problem Regards, Meethu Mathew On Mon, Mar 13, 2017 at 9:56 AM, Xiaohui Liu wrote: > Hi, > > We used 0.7.1-snapshot with our Mesos cluster, almost all our needed > features (ldap login, notebook acl control, livy/pyspark/rspark/scala, > etc.) work pretty well. > >

"spark ui" button in spark interpreter does not show Spark web-ui

2017-03-12 Thread Xiaohui Liu
Hi, We used 0.7.1-snapshot with our Mesos cluster; almost all the features we need (ldap login, notebook acl control, livy/pyspark/rspark/scala, etc.) work pretty well. But one thing that does not work for us is the 'spark ui' button, which does not respond to user clicks. No errors on the browser side. Anyone

RE: Unable to connect with Spark Interpreter

2016-11-30 Thread Jan Botorek
I finally decided to move the solution to an Ubuntu machine where everything works fine. I really don't know the fundamental problem of why Windows and Zeppelin do not work together. It is certain that there is a problem in the Spark Interpreter and Zeppelin Engine communication. Unfortunately, I cannot

Re: Unable to connect with Spark Interpreter

2016-11-29 Thread Felix Cheung
Hmm possibly with the classpath. These might be Windows specific issues. We probably need to debug to fix these. From: Jan Botorek Sent: Tuesday, November 29, 2016 4:01:43 AM To: users@zeppelin.apache.org Subject: RE: Unable to connect with Spark Interpreter

RE: Unable to connect with Spark Interpreter

2016-11-29 Thread Jan Botorek
Your last advice helped me to progress a little bit: - I started the spark interpreter manually: c:\zepp\\bin\interpreter.cmd, -d, c:\zepp\interpreter\spark\, -p, 61176, -l, c:\zepp\/local-repo/2C2ZNEH5W - I needed to add a '\' into the -d attribute and make the path shorter

RE: Unable to connect with Spark Interpreter

2016-11-29 Thread Jan Botorek
[mailto:zjf...@gmail.com] Sent: Tuesday, November 29, 2016 10:45 AM To: users@zeppelin.apache.org Subject: Re: Unable to connect with Spark Interpreter I still don't see much useful info. Could you try run the following interpreter command directly ? c:\_libs\zeppelin-0.6.2-bin-all\\bin\interprete

Re: Unable to connect with Spark Interpreter

2016-11-29 Thread Jeff Zhang
wrote on Tue, Nov 29 at 5:26 PM: > I attach the log file after debugging turned on. > > > > *From:* Jeff Zhang [mailto:zjf...@gmail.com] > *Sent:* Tuesday, November 29, 2016 10:04 AM > > > *To:* users@zeppelin.apache.org > *Subject:* Re: Unable to connect with Spark Interpreter >

Re: Unable to connect with Spark Interpreter

2016-11-29 Thread Jeff Zhang
Then I guess the spark process failed to start, so there are no logs for the spark interpreter. Can you use the following log4j.properties? This log4j properties file prints more error info for further diagnosis. log4j.rootLogger = INFO, dailyfile log4j.appender.stdout = org.apache.log4j.ConsoleAppender
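
The quoted file is cut off by the archive; a plausible completion of such a configuration, modeled on the log4j.properties Zeppelin ships (appender details assumed, not recovered from the original mail):

    log4j.rootLogger = INFO, dailyfile
    log4j.appender.stdout = org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern = %5p [%d] ({%t} %F[%M]:%L) - %m%n
    log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
    log4j.appender.dailyfile.DatePattern = .yyyy-MM-dd
    log4j.appender.dailyfile.File = ${zeppelin.log.file}
    log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
    log4j.appender.dailyfile.layout.ConversionPattern = %5p [%d] ({%t} %F[%M]:%L) - %m%n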

RE: Unable to connect with Spark Interpreter

2016-11-29 Thread Jan Botorek
] ({Thread-0} RemoteInterpreterServer.java[run]:81) - Starting remote interpreter server on port 55492 From: Jeff Zhang [mailto:zjf...@gmail.com] Sent: Tuesday, November 29, 2016 9:48 AM To: users@zeppelin.apache.org Subject: Re: Unable to connect with Spark Interpreter According your log, the spark

Re: Unable to connect with Spark Interpreter

2016-11-29 Thread Jeff Zhang
According to your log, the spark interpreter fails to start. Do you see any spark interpreter log? Jan Botorek wrote on Tue, Nov 29, 2016 at 4:08 PM: > Hello, > > Thanks for the advice, but it doesn't seem that anything is wrong when I > start the interpreter manually. I attach logs from interpre

RE: Unable to connect with Spark Interpreter

2016-11-29 Thread Jan Botorek
ory] Could you, please, think of any possible next steps? Best regards, Jan From: moon soo Lee [mailto:m...@apache.org] Sent: Monday, November 28, 2016 5:36 PM To: users@zeppelin.apache.org Subject: Re: Unable to connect with Spark Interpreter According to your log, your interpreter process seems fai

Re: Unable to connect with Spark Interpreter

2016-11-28 Thread moon soo Lee
ase, don’t you have any idea what to check or repair, please? > > > > Best regards, > > Jan > > *From:* Jan Botorek [mailto:jan.boto...@infor.com] > *Sent:* Wednesday, November 16, 2016 12:54 PM > *To:* users@zeppelin.apache.org > *Subject:* RE: Unable to connec

RE: Unable to connect with Spark Interpreter

2016-11-28 Thread Jan Botorek
. Please, don’t you have any idea what to check or repair, please? Best regards, Jan From: Jan Botorek [mailto:jan.boto...@infor.com] Sent: Wednesday, November 16, 2016 12:54 PM To: users@zeppelin.apache.org Subject: RE: Unable to connect with Spark Interpreter Hello Alexander, Thank you for a quick

RE: Unable to connect with Spark Interpreter

2016-11-16 Thread Jan Botorek
if the interpreter is re-started -- Jan From: Alexander Bezzubov [mailto:b...@apache.org] Sent: Wednesday, November 16, 2016 12:47 PM To: users@zeppelin.apache.org Subject: Re: Unable to connect with Spark Interpreter Hi Jan, this is rather generic error saying that ZeppelinServer somehow could

Re: Unable to connect with Spark Interpreter

2016-11-16 Thread Alexander Bezzubov
identify the reason. Two more questions: - does this happen on every paragraph run, if you try to click Run multiple times in a row? - does it still happen if you re-start the Spark interpreter manually from the GUI? ("Anonymous"->Interpreters->Spark->restart) -- Alex On Wed, Nov 16, 2016

Unable to connect with Spark Interpreter

2016-11-16 Thread Jan Botorek
ssumptions, there is something wrong with the spark interpreter in relation to the Zeppelin. I also tried to connect the Spark interpreter to Spark running externally (in interpreter settings of Zeppelin), but it didn't work. Do you have any ideas about what could possibly be wrong? Thank you for

RE: Netty error with spark interpreter

2016-10-19 Thread Vikash Kumar
From: Vikash Kumar [mailto:vikash.ku...@resilinc.com] Sent: Wednesday, October 19, 2016 12:11 PM To: users@zeppelin.apache.org Subject: Netty error with spark interpreter Hi all, I am trying zeppelin with spark which is throwing me the following error related to netty jar conflicts. I che

Netty error with spark interpreter

2016-10-18 Thread Vikash Kumar
Hi all, I am trying zeppelin with spark, which is throwing me the following error related to netty jar conflicts. I checked my classpath properly; there is only a single version each of the netty-3.8.0 and netty-all-4.0.29-Final jars. Other information: Spark 2.0.0
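
A quick way to double-check for conflicting netty jars across the Zeppelin and Spark installs (paths illustrative):

    find /opt/zeppelin /opt/spark -name 'netty*.jar' | sort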

Re: Restart zeppelin spark interpreter

2016-10-03 Thread Jung, Soonoh
is > automatically enabled, as it allows longer running apps like Zeppelin's > Spark interpreter to continue running in the background without taking up > resources for any executors unless Spark jobs are actively running. > > However, if you are seeing resources still being us

Re: Restart zeppelin spark interpreter

2016-10-03 Thread Jonathan Kelly
On the most recent several releases of EMR, Spark dynamicAllocation is automatically enabled, as it allows longer running apps like Zeppelin's Spark interpreter to continue running in the background without taking up resources for any executors unless Spark jobs are actively running. Howeve
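
For reference, the properties behind this behavior (standard Spark settings; values illustrative):

    spark.dynamicAllocation.enabled              true
    spark.shuffle.service.enabled                true
    spark.dynamicAllocation.executorIdleTimeout  60s
    spark.dynamicAllocation.minExecutors         0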

Restart zeppelin spark interpreter

2016-10-03 Thread Jung, Soonoh
Hi everyone, I am using Zeppelin in AWS EMR (Zeppelin 0.6.1, spark 2.0 on Yarn RM). Basically the Zeppelin spark interpreter's spark job is not finishing after executing a notebook. It looks like the spark job is still occupying a lot of memory in my Yarn cluster. Is there a way to restart the spark interp

RE: Config for spark interpreter

2016-09-07 Thread Polina Marasanova
olina From: Polina Marasanova [polina.marasan...@quantium.com.au] Sent: Thursday, 8 September 2016 1:46 PM To: users@zeppelin.apache.org Subject: RE: Config for spark interpreter Hi Mina, Thank you for your response. I double checked approach 1 and 3, still no luck. Probably the point is tha

RE: Config for spark interpreter

2016-09-07 Thread Polina Marasanova
ght missed, did you create new Spark Interpreter after restarting zeppelin?" I don't create any new interpreters. I just want to use the default one with my custom settings. Regards, Polina From: mina lee [mina...@apache.org] Sent: Friday, 12 August 20

Re: Can I run different versions of spark interpreter in one zeppelin build ?

2016-08-16 Thread Jeff Zhang
>> Best, >> moon >> >> On Wed, Aug 3, 2016 at 6:42 PM Jeff Zhang wrote: >> >>> I build zeppelin with spark-2.0 profile enabled, and it seems I can >>> also run spark 1.6 interpreter. But I a

Re: Config for spark interpreter

2016-08-12 Thread mina lee
Hi Polina, I tried the first/third approach with zeppelin-0.6.0-bin-netinst.tgz and both seem to work for me. > 3. restart zeppelin, check interpreter tab. Here is one suspicious part you might have missed: did you create a new Spark Interpreter after restarting zeppelin? One thing you need to keep

RE: Config for spark interpreter

2016-08-08 Thread Polina Marasanova
Hi, I think I have hit a bug in Zeppelin 0.6.0. What happened: I'm not able to overwrite spark interpreter properties from config, only via the GUI. What I've tried: first approach, which worked on the previous version: 1. export SPARK_CONF_DIR=/zeppelin/conf/spark 2. add my "spark-default
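
A sketch of the first approach as described (paths and values illustrative):

    # conf/zeppelin-env.sh
    export SPARK_CONF_DIR=/zeppelin/conf/spark

    # /zeppelin/conf/spark/spark-defaults.conf
    spark.master         yarn-client
    spark.driver.memory  4g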

Re: HTML output truncated in spark interpreter

2016-08-08 Thread Ben
I determined that the output was not actually being truncated. Chrome's dev tools truncate the content of very long script tags, so it only looked to be the case. On Mon, Aug 8, 2016 at 1:51 PM, Ben wrote: > I am building a D3-based visualizer for GraphX graphs in Zeppelin, based > on this ex

HTML output truncated in spark interpreter

2016-08-08 Thread Ben
I am building a D3-based visualizer for GraphX graphs in Zeppelin, based on this example of using requirejs to pull in D3: https://github.com/lockwobr/zeppelin-examples/blob/master/requirejs/requirejs.md When I have a large amount of data to visualize (and thus a large HTML string), the HTML

Re: Config for spark interpreter

2016-08-07 Thread ndjido
Hi Polina, You can just define SPARK_HOME in conf/zeppelin-env.sh and get rid of any other Spark configuration from that file, otherwise Zeppelin will just overwrite them. Once this is done, you can define the Spark default configurations in its config file living in conf/spark.def

Config for spark interpreter

2016-08-07 Thread Polina Marasanova
Hi everyone, I have a question: in previous versions of Zeppelin, all settings for interpreters were stored in a file called "interpreter.json". It was very convenient to provide some default spark settings there, such as spark.master, default driver memory, etc. What is the best way for versi

Re: Can I run different versions of spark interpreter in one zeppelin build ?

2016-08-03 Thread Jeff Zhang
' from conf/zeppelin-env.sh and add >>> 'SPARK_HOME' property (in different version of spark directory) in each >>> individual spark interpreter setting on GUI? This should work. >>> >>> Best, >>> moon >>> >>> On Wed,

Re: Can I run different versions of spark interpreter in one zeppelin build ?

2016-08-03 Thread Vinay Shukla
soo Lee wrote: > >> Hi, >> >> Could you try remove 'SPARK_HOME' from conf/zeppelin-env.sh and add >> 'SPARK_HOME' property (in different version of spark directory) in each >> individual spark interpreter setting on GUI? This should work. >

Re: Can I run different versions of spark interpreter in one zeppelin build ?

2016-08-03 Thread Jeff Zhang
elin-env.sh and add > 'SPARK_HOME' property (in different version of spark directory) in each > individual spark interpreter setting on GUI? This should work. > > Best, > moon > > On Wed, Aug 3, 2016 at 6:42 PM Jeff Zhang wrote: > >> I build zeppelin with spark-2.

Re: Can I run different versions of spark interpreter in one zeppelin build ?

2016-08-03 Thread moon soo Lee
Hi, Could you try removing 'SPARK_HOME' from conf/zeppelin-env.sh and adding a 'SPARK_HOME' property (pointing to a different version of the spark directory) in each individual spark interpreter setting on the GUI? This should work. Best, moon On Wed, Aug 3, 2016 at 6:42 PM Jeff Zhang wrote: >
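
A sketch of that setup (paths illustrative):

    # conf/zeppelin-env.sh: leave SPARK_HOME unset
    # In the GUI, add a property to each Spark interpreter setting:
    #   spark16 interpreter:  SPARK_HOME = /opt/spark-1.6.2
    #   spark20 interpreter:  SPARK_HOME = /opt/spark-2.0.0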

Can I run different versions of spark interpreter in one zeppelin build ?

2016-08-03 Thread Jeff Zhang
I build zeppelin with the spark-2.0 profile enabled, and it seems I can also run the spark 1.6 interpreter. But I am not sure whether it is officially supported to run different versions of the spark interpreter in one zeppelin build? My guess is that it is not; otherwise we wouldn't need profiles for diff

Re: Run zeppelin spark interpreter in kerberos

2016-07-20 Thread Felix Cheung
hen.song...@gmail.com>> wrote: I have a question on running the Zeppelin Spark interpreter in a Kerberized environment. Spark comes with a runtime conf that allows you to specify the keytab and principal. My questions are: 1. When using Livy, does it rely on the same mechanism when starting Spa

Re: Run zeppelin spark interpreter in kerberos

2016-07-20 Thread Jeff Zhang
>> On Wed, Jul 20, 2016 at 4:55 AM, Chen Song >> wrote: >> >>> I have a question on running Zeppelin Spark interpreter in a Kerberized >>> environment. >>> >>> Spark comes with a runtime conf that allows you to specify the keytab >>> an

Re: Run zeppelin spark interpreter in kerberos

2016-07-20 Thread Chen Song
o refresh ticket in zeppelin side, it depends on zeppelin > side implementation. > > On Wed, Jul 20, 2016 at 4:55 AM, Chen Song wrote: > >> I have a question on running Zeppelin Spark interpreter in a Kerberized >> environment. >> >> Spark comes with a runtime c

Re: Run zeppelin spark interpreter in kerberos

2016-07-19 Thread Jeff Zhang
e, it depends on zeppelin side implementation. On Wed, Jul 20, 2016 at 4:55 AM, Chen Song wrote: > I have a question on running Zeppelin Spark interpreter in a Kerberized > environment. > > Spark comes with a runtime conf that allows you to specify the keytab > and principal. >

Run zeppelin spark interpreter in kerberos

2016-07-19 Thread Chen Song
I have a question on running the Zeppelin Spark interpreter in a Kerberized environment. Spark comes with a runtime conf that allows you to specify the keytab and principal. My questions are: 1. When using Livy, does it rely on the same mechanism when starting Spark? 2. Whether to use Livy or not
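
The runtime conf referred to, for reference (standard Spark-on-YARN options; values illustrative):

    # as spark-submit flags
    --principal zeppelin@EXAMPLE.COM --keytab /etc/security/keytabs/zeppelin.keytab
    # or as properties
    spark.yarn.principal  zeppelin@EXAMPLE.COM
    spark.yarn.keytab     /etc/security/keytabs/zeppelin.keytab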

Re: spark interpreter

2016-07-03 Thread Benjamin Kim
505.html >> >> Thanks, >> moon >> >> On Thu, Jun 30, 2016 at 1:54 PM Leon Katsnelson wrote: >> >>> What is the expected day for v0.6? >>> >>> From: moon soo Lee >>

Re: spark interpreter

2016-07-02 Thread moon soo Lee
he-Zeppelin-release-0-6-0-rc1-tp11505.html > > Thanks, > moon > > On Thu, Jun 30, 2016 at 1:54 PM Leon Katsnelson wrote: > >> What is the expected day for v0.6? >> >> >> >> >> From: moon soo Lee >> To:users@zeppelin.apac

Re: spark interpreter

2016-07-01 Thread Benjamin Kim
users@zeppelin.apache.org > Date: 2016/06/30 11:36 AM > Subject: Re: spark interpreter > > > > Hi Ben, > > Livy interpreter is included in 0.6.0. If it is not listed when you create > interpreter setting, could

Re: spark interpreter

2016-07-01 Thread moon soo Lee
moon soo Lee > To: users@zeppelin.apache.org > Date: 2016/06/30 11:36 AM > Subject: Re: spark interpreter > -- > > > > Hi Ben, > > Livy interpreter is included in 0.6.0. If it is not listed when you creat

Re: spark interpreter

2016-06-30 Thread Leon Katsnelson
What is the expected day for v0.6? From: moon soo Lee To: users@zeppelin.apache.org Date: 2016/06/30 11:36 AM Subject: Re: spark interpreter Hi Ben, Livy interpreter is included in 0.6.0. If it is not listed when you create interpreter setting, could you check if your

Re: spark interpreter

2016-06-30 Thread Benjamin Kim
same interpreter >> instance/process, for now. I think supporting per user interpreter >> instance/process would be future work. >> >> Thanks, >> moon >> >> On Wed, Jun 29, 2016 at 7:57 AM Chen Song > > wrote: >> Thanks for your explanation, Mo

Re: spark interpreter

2016-06-30 Thread Jongyoul Lee
Jun 29, 2016 at 7:57 AM Chen Song > > wrote: >> >>> Thanks for your explanation, Moon. >>> >>> Following up on this, I can see the difference in terms of single or >>> multiple interpreter processes. >>> >>> With respect to spark dri

Re: spark interpreter

2016-06-30 Thread Benjamin Kim
ion, when a notebook is shared among users, will they always use >> the same interpreter instance/process already created? >> >> Thanks >> Chen >> >> On Fri, Jun 24, 2016 at 11:51 AM moon soo Lee wrote: >

Re: spark interpreter

2016-06-30 Thread moon soo Lee
9, 2016 at 7:57 AM Chen Song wrote: > >> Thanks for your explanation, Moon. >> >> Following up on this, I can see the difference in terms of single or >> multiple interpreter processes. >> >> With respect to spark drivers, since each interpreter spawns a separa

Re: spark interpreter

2016-06-29 Thread Benjamin Kim
g <mailto:chen.song...@gmail.com>> wrote: > Thanks for your explanation, Moon. > > Following up on this, I can see the difference in terms of single or multiple > interpreter processes. > > With respect to spark drivers, since each interpreter spawns a separate Spark > d

Re: spark interpreter

2016-06-29 Thread moon soo Lee
. > > Following up on this, I can see the difference in terms of single or > multiple interpreter processes. > > With respect to spark drivers, since each interpreter spawns a separate > Spark driver in regular Spark interpreter setting, it is clear to me the > different implicati

Is there a way to configure -Xmx for the spark interpreter only?

2016-06-29 Thread Mitko
Hello, I have configured the interpreter memory in conf/zeppelin-env.sh with the setting export ZEPPELIN_INTP_MEM="-Xmx6048m -XX:MaxPermSize=512m" I see that this setting is used for all interpreters. Is there any way to have only the spark interpreter run with this setting, a
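
One commonly used alternative for the spark interpreter specifically, since it is launched through spark-submit when SPARK_HOME is set (a sketch; values illustrative):

    # conf/zeppelin-env.sh
    export SPARK_SUBMIT_OPTIONS="--driver-memory 6g"
    # or set spark.driver.memory in the Spark interpreter setting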

Re: spark interpreter

2016-06-28 Thread Chen Song
Thanks for your explanation, Moon. Following up on this, I can see the difference in terms of single or multiple interpreter processes. With respect to spark drivers, since each interpreter spawns a separate Spark driver in regular Spark interpreter setting, it is clear to me the different

Re: spark interpreter

2016-06-24 Thread moon soo Lee
Hi, Thanks for asking the question. It's not a dumb question at all; the Zeppelin docs do not explain this very well. For the Spark Interpreter in 'shared' mode, a spark interpreter setting spawns a single interpreter process to serve all notebooks bound to this interpreter setting. 'scoped'

spark interpreter

2016-06-21 Thread Chen Song
Zeppelin provides 3 binding modes for each interpreter. With a `scoped` or `shared` Spark interpreter, every user shares the same SparkContext. Sorry for the dumb question, but how does it differ from Spark via Livy Server? -- Chen Song
