Hi, thanks for your reply!
What do you mean by that?
I can have only one env variable HADOOP_CONF_DIR...
And how can a user pick which env to run?
Or do you mean I have to create three Spark interpreters, each of which
would have its own HADOOP_CONF_DIR pointing to a single cluster's config?
2017-06-30
Try setting HADOOP_CONF_DIR for each YARN conf in the interpreter setting.
Serega Sheypak wrote on Fri, Jun 30, 2017 at 10:11 PM:
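One way to set this up, assuming three clusters, is to clone the Spark interpreter setting once per cluster on the Interpreter page and give each clone its own HADOOP_CONF_DIR property. A sketch; the setting names and paths below are made-up examples:

```
# interpreter setting "spark_clusterA" - Properties table
HADOOP_CONF_DIR = /etc/hadoop/conf-clusterA

# interpreter setting "spark_clusterB" - Properties table
HADOOP_CONF_DIR = /etc/hadoop/conf-clusterB
```

A notebook then binds to whichever setting points at the cluster holding its data.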
Hi, I have several different Hadoop clusters, and each of them has its own
YARN.
Is it possible to configure a single Zeppelin instance to work with
different clusters?
I want to run Spark on cluster A if the data is there. Right now my Zeppelin
runs on a single cluster and it sucks data from remote clusters w
I added my local jar to my local Maven repo, added the dependency by
filling in groupId:artifactId:version, and then it worked.
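For reference, installing a local jar into the local Maven repository is done with `mvn install:install-file`; the coordinates and paths below are made-up examples, not the original poster's values:

```shell
# Hypothetical coordinates - substitute your own
GROUP_ID=com.example
ARTIFACT_ID=my-lib
VERSION=1.0.0

# Install the jar into ~/.m2 (requires Maven; shown as a comment so the
# sketch stays self-contained):
# mvn install:install-file -Dfile=/path/to/my-lib.jar \
#     -DgroupId="$GROUP_ID" -DartifactId="$ARTIFACT_ID" \
#     -Dversion="$VERSION" -Dpackaging=jar

# This is the string to paste into the interpreter's Dependencies "artifact" field:
COORDINATE="$GROUP_ID:$ARTIFACT_ID:$VERSION"
echo "$COORDINATE"
```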
On Tue, May 9, 2017 at 10:19 AM, Jongyoul Lee wrote:
> Can you add your spark interpreter's log file?
Restarting the Spark interpreter while a Spark paragraph is running leaves
Zeppelin in a broken state:
- the popup window showing that the Spark interpreter is restarting never
closes (keeps spinning);
- refreshing the browser window shows [2] - all interpreters "disappear";
- an attempt to start any spark
Can you add your spark interpreter's log file?
On Sat, May 6, 2017 at 12:53 AM, shyla deshpande
wrote:
Also, my local jar file that I want to add as a dependency is a fat jar with
dependencies. Nothing works after I add my local fat jar; I get
java.lang.NullPointerException
for everything. Please help.
On Thu, May 4, 2017 at 10:18 PM, shyla deshpande
wrote:
Aha, thanks. I'm building Zeppelin from source, so I can put my custom
settings in directly?
BTW, why doesn't the interpreter-list file contain the Spark interpreter?
2017-04-22 13:33 GMT+02:00 Fabian Böhnlein:
Do it via the UI once and you'll see how the interpreter.json of the Zeppelin
installation changes.
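After doing it once via the UI, the relevant part of conf/interpreter.json ends up looking roughly like this (a sketch only; the repository URL and artifact are invented examples, and the exact field layout varies by Zeppelin version):

```
"dependencies": [
  { "groupArtifactVersion": "com.example:my-lib:1.0.0", "local": false }
],
...
"interpreterRepositories": [
  { "id": "internal", "url": "https://repo.example.com/maven", "snapshot": false }
]
```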
On Sat, Apr 22, 2017, 11:35 Serega Sheypak wrote:
Hi, I need to pre-configure spark interpreter with my own artifacts and
internal repositories. How can I do it?
Hi moon,
exactly, thanks for the pointer.
Added the issue: https://issues.apache.org/jira/browse/ZEPPELIN-2413
Best,
Fabian
On Tue, 28 Mar 2017 at 15:48 moon soo Lee wrote:
Hi Fabian,
Thanks for sharing the issue.
SparkSqlInterpreter sets the scheduler to "fair" depending on an interpreter
property [1]. I think we can do something similar for SparkInterpreter.
Do you mind filing a new JIRA issue for it?
Regards,
moon
[1]
https://github.com/apache/zeppelin/blob/0e1964877654c56c72473a
Hi all,
how can I change (globally, for Zeppelin) the default scheduler pool which
SparkInterpreter submits jobs to. Currently all jobs go into the pool
'default' but I want them to go into the pool 'fair'.
We use "Per Note" and "scoped" processes for best resource sharing.
"spark.scheduler.pool"
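Until there is a global setting for this, the pool can be selected from a notebook paragraph with Spark's own API. A sketch; "fair" must be a pool actually defined in the fair-scheduler XML referenced by spark.scheduler.allocation.file:

```
%spark
// jobs submitted from this interpreter after this call go to the "fair" pool
sc.setLocalProperty("spark.scheduler.pool", "fair")
```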
Hello.
Thank you for sharing the problem.
Could you file a JIRA issue for this?
On Mon, Mar 13, 2017 at 3:18 PM, Meethu Mathew wrote:
Hi,
I have noticed the same problem
Regards,
Meethu Mathew
On Mon, Mar 13, 2017 at 9:56 AM, Xiaohui Liu wrote:
> Hi,
>
> We used 0.7.1-snapshot with our Mesos cluster, almost all our needed
> features (ldap login, notebook acl control, livy/pyspark/rspark/scala,
> etc.) work pretty well.
>
>
Hi,
We used 0.7.1-snapshot with our Mesos cluster, almost all our needed
features (ldap login, notebook acl control, livy/pyspark/rspark/scala,
etc.) work pretty well.
But one thing that does not work for us is the 'spark ui' button: it does
not respond to user clicks. No errors on the browser side.
Anyone
I finally decided to move the solution to an Ubuntu machine where everything
works fine.
I really don't know the fundamental problem - why Windows and Zeppelin don't
work together. It is certain that there is a problem in the Spark interpreter
and Zeppelin engine communication. Unfortunately, I cannot
Hmm, possibly something with the classpath. These might be Windows-specific
issues. We probably need to debug to fix them.
From: Jan Botorek
Sent: Tuesday, November 29, 2016 4:01:43 AM
To: users@zeppelin.apache.org
Subject: RE: Unable to connect with Spark Interpreter
Your last advice helped me to progress a little bit:
- I started the spark interpreter manually:
  c:\zepp\\bin\interpreter.cmd, -d, c:\zepp\interpreter\spark\, -p, 61176,
  -l, c:\zepp\/local-repo/2C2ZNEH5W
- I needed to add a '\' into the -d attribute and make the path shorter
From: Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Tuesday, November 29, 2016 10:45 AM
To: users@zeppelin.apache.org
Subject: Re: Unable to connect with Spark Interpreter
I still don't see much useful info. Could you try running the following
interpreter command directly?
c:\_libs\zeppelin-0.6.2-bin-all\\bin\interprete
Jan Botorek wrote on Tue, Nov 29 at 5:26 PM:
> I attach the log file after debugging turned on.
>
> From: Jeff Zhang [mailto:zjf...@gmail.com]
> Sent: Tuesday, November 29, 2016 10:04 AM
> To: users@zeppelin.apache.org
> Subject: Re: Unable to connect with Spark Interpreter
Then I guess the spark process failed to start, so there are no logs for the
spark interpreter.
Can you use the following log4j.properties? This log4j properties file
prints more error info for further diagnosis.
log4j.rootLogger = INFO, dailyfile
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
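The properties snippet above is cut off after the stdout appender; a complete file along the same lines looks like this (a sketch modeled on Zeppelin's stock conf/log4j.properties, so treat the file name variable and patterns as assumptions):

```
log4j.rootLogger = INFO, dailyfile

log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = %5p [%d] ({%t} %F[%M]:%L) - %m%n

log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.DatePattern = .yyyy-MM-dd
log4j.appender.dailyfile.File = ${zeppelin.log.file}
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern = %5p [%d] ({%t} %F[%M]:%L) - %m%n
```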
] ({Thread-0}
RemoteInterpreterServer.java[run]:81) - Starting remote interpreter server on
port 55492
From: Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Tuesday, November 29, 2016 9:48 AM
To: users@zeppelin.apache.org
Subject: Re: Unable to connect with Spark Interpreter
According to your log, the spark interpreter failed to start. Do you see any
spark interpreter logs?
Jan Botorek wrote on Tue, Nov 29, 2016 at 4:08 PM:
> Hello,
>
> Thanks for the advice, but it doesn’t seem that anything is wrong when I
> start the interpreter manually. I attach logs from interpre
Could you, please, think of any possible next steps?
Best regards,
Jan
From: moon soo Lee [mailto:m...@apache.org]
Sent: Monday, November 28, 2016 5:36 PM
To: users@zeppelin.apache.org
Subject: Re: Unable to connect with Spark Interpreter
According to your log, your interpreter process seems fai
Please, do you have any idea what to check or repair?
Best regards,
Jan
From: Jan Botorek [mailto:jan.boto...@infor.com]
Sent: Wednesday, November 16, 2016 12:54 PM
To: users@zeppelin.apache.org
Subject: RE: Unable to connect with Spark Interpreter
Hello Alexander,
Thank you for a quick
if the interpreter is re-started
--
Jan
From: Alexander Bezzubov [mailto:b...@apache.org]
Sent: Wednesday, November 16, 2016 12:47 PM
To: users@zeppelin.apache.org
Subject: Re: Unable to connect with Spark Interpreter
Hi Jan,
this is a rather generic error saying that ZeppelinServer somehow could not
identify the reason.
Two more questions:
- does this happen on every paragraph run, if you click Run multiple
times in a row?
- does it still happen if you restart the Spark interpreter manually from
the GUI? ("Anonymous"->Interpreters->Spark->restart)
--
Alex
On Wed, Nov 16, 2016
ssumptions, there is something wrong with the spark
interpreter in relation to Zeppelin.
I also tried to connect the Spark interpreter to Spark running externally (in
the interpreter settings of Zeppelin), but it didn't work.
Do you have any ideas about what could possibly be wrong?
Thank you for
From: Vikash Kumar [mailto:vikash.ku...@resilinc.com]
Sent: Wednesday, October 19, 2016 12:11 PM
To: users@zeppelin.apache.org
Subject: Netty error with spark interpreter
Hi all,
I am trying Zeppelin with Spark, which is throwing the following error
related to netty jar conflicts. I checked my classpath properly; there are
only single versions of the netty-3.8.0 and netty-all-4.0.29-Final jars.
Other information:
Spark 2.0.0
On the most recent several releases of EMR, Spark dynamicAllocation is
automatically enabled, as it allows longer running apps like Zeppelin's
Spark interpreter to continue running in the background without taking up
resources for any executors unless Spark jobs are actively running.
Howeve
Hi everyone,
I am using Zeppelin on AWS EMR (Zeppelin 0.6.1, Spark 2.0 on YARN RM).
Basically, the Zeppelin Spark interpreter's Spark job is not finishing after
executing a notebook.
It looks like the Spark job is still occupying a lot of memory in my YARN
cluster.
Is there a way to restart the Spark interp
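Zeppelin does expose an interpreter-restart endpoint in its REST API, which can be scripted. A sketch; the host, port, and setting id below are made-up (the real id is visible via GET /api/interpreter/setting), and the actual curl call is left as a comment so the sketch stays self-contained:

```shell
# Hypothetical Zeppelin host and interpreter setting id
ZEPPELIN_URL="http://localhost:8080"
SETTING_ID="2ANGGHHMQ"

# Restarting the setting kills the interpreter process and frees its YARN resources:
RESTART_URL="$ZEPPELIN_URL/api/interpreter/setting/restart/$SETTING_ID"
echo "$RESTART_URL"
# curl -X PUT "$RESTART_URL"
```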
From: Polina Marasanova [polina.marasan...@quantium.com.au]
Sent: Thursday, 8 September 2016 1:46 PM
To: users@zeppelin.apache.org
Subject: RE: Config for spark interpreter
Hi Mina,
Thank you for your response.
I double checked approach 1 and 3, still no luck.
Probably the point is tha
I don't create any new interpreters. I just want to use the default one with
my custom settings.
Regards,
Polina
From: mina lee [mina...@apache.org]
Sent: Friday, 12 August 20
Hi Polina,
I tried the first/third approach with zeppelin-0.6.0-bin-netinst.tgz and both
seem to work for me.
> 3. restart zeppelin, check interpreter tab.
Here is one suspicious part you might have missed: did you create a new Spark
interpreter after restarting Zeppelin?
One thing you need to keep
Hi,
I think I hit a bug in Zeppelin 0.6.0.
What happened: I'm not able to overwrite Spark interpreter properties from
config, only via the GUI.
What I've tried:
First approach (worked on a previous version):
1. export SPARK_CONF_DIR=/zeppelin/conf/spark
2. add my "spark-default
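Spelled out, the first approach looks like this (the paths and the override values are examples, not requirements):

```shell
# conf/zeppelin-env.sh - point Zeppelin's Spark at a dedicated conf dir
export SPARK_CONF_DIR=/zeppelin/conf/spark

# then place the overrides in /zeppelin/conf/spark/spark-defaults.conf, e.g.:
#   spark.master          yarn-client
#   spark.driver.memory   4g
```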
I determined that the output was not actually being truncated. Chrome's dev
tools truncate the content of very long script tags so it only looked to be
the case.
On Mon, Aug 8, 2016 at 1:51 PM, Ben wrote:
> I am building a D3 based visualizer for GraphX graphs in Zeppelin, based
> off of this ex
I am building a D3 based visualizer for GraphX graphs in Zeppelin, based
off of this example of using requirejs to pull in D3:
https://github.com/lockwobr/zeppelin-examples/blob/master/requirejs/requirejs.md
When I have a large amount of data to visualize (and thus a large HTML
string), the HTML
Hi Polina,
You can just define SPARK_HOME in conf/zeppelin-env.sh and get rid of any
other Spark configuration from that file, otherwise Zeppelin will just
overwrite them. Once this is done, you can define the Spark default
configurations in its config file living in conf/spark.def
Hi everyone,
I have a question: in previous versions of Zeppelin, all settings for
interpreters were stored in a file called "interpreter.json". It was very
convenient to provide some default Spark settings there, such as
spark.master, default driver memory, etc.
What is the best way for versi
Hi,
Could you try removing 'SPARK_HOME' from conf/zeppelin-env.sh and adding a
'SPARK_HOME' property (pointing to a different Spark version's directory) in
each individual Spark interpreter setting in the GUI? This should work.
Best,
moon
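In the GUI that advice translates to a property row per Spark interpreter setting, roughly like this (the setting names and installation paths are assumptions):

```
# interpreter setting "spark16" - Properties
SPARK_HOME = /opt/spark-1.6.3

# interpreter setting "spark20" - Properties
SPARK_HOME = /opt/spark-2.0.0
```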
On Wed, Aug 3, 2016 at 6:42 PM Jeff Zhang wrote:
I built Zeppelin with the spark-2.0 profile enabled, and it seems I can also
run the Spark 1.6 interpreter. But I am not sure whether it is officially
supported to run different versions of the Spark interpreter in one Zeppelin
build? My guess is that it is not, otherwise we wouldn't need profiles for
diff
o refresh ticket in zeppelin side, it depends on zeppelin
side implementation.
On Wed, Jul 20, 2016 at 4:55 AM, Chen Song wrote:
I have a question on running the Zeppelin Spark interpreter in a Kerberized
environment.
Spark comes with a runtime conf that allows you to specify the keytab
and principal.
My questions are:
1. When using Livy, does it rely on the same mechanism when starting Spark
2. Whether to use Livy or not
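For the regular (non-Livy) Spark interpreter, the runtime conf in question is spark-submit's --keytab/--principal pair (equivalently spark.yarn.keytab / spark.yarn.principal), which Zeppelin can pass through SPARK_SUBMIT_OPTIONS. A sketch; the keytab path and principal are made-up examples, and this applies only when SPARK_HOME is set:

```shell
# conf/zeppelin-env.sh - values are illustrative only
export SPARK_SUBMIT_OPTIONS="--keytab /etc/security/keytabs/zeppelin.keytab --principal zeppelin@EXAMPLE.COM"
```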
he-Zeppelin-release-0-6-0-rc1-tp11505.html
Thanks,
moon
On Thu, Jun 30, 2016 at 1:54 PM Leon Katsnelson wrote:
> What is the expected day for v0.6?
What is the expected day for v0.6?
From: moon soo Lee
To: users@zeppelin.apache.org
Date: 2016/06/30 11:36 AM
Subject: Re: spark interpreter
Hi Ben,
Livy interpreter is included in 0.6.0. If it is not listed when you create
an interpreter setting, could you check if your
same interpreter
instance/process, for now. I think supporting per-user interpreter
instance/process would be future work.
Thanks,
moon
On Wed, Jun 29, 2016 at 7:57 AM Chen Song wrote:
> Thanks for your explanation, Moon.
ion, when a notebook is shared among users, will they always use
the same interpreter instance/process already created?
Thanks
Chen
On Fri, Jun 24, 2016 at 11:51 AM moon soo Lee wrote:
Hello,
I have configured the interpreter memory in conf/zeppelin-env.sh with the
setting
export ZEPPELIN_INTP_MEM="-Xmx6048m -XX:MaxPermSize=512m"
I see that this setting is used for all interpreters.
Is there any way to have only the Spark interpreter run with this setting,
a
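One way to give only the Spark interpreter the larger heap (a sketch; exact behavior varies by Zeppelin version): leave ZEPPELIN_INTP_MEM at a modest default, which applies to every interpreter, and raise just the Spark driver via SPARK_SUBMIT_OPTIONS (or the spark.driver.memory interpreter property). The values below are examples:

```shell
# conf/zeppelin-env.sh - values are examples
export ZEPPELIN_INTP_MEM="-Xmx1024m"              # default for all interpreters
export SPARK_SUBMIT_OPTIONS="--driver-memory 6g"  # affects only spark-submit, i.e. the Spark interpreter
```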
Thanks for your explanation, Moon.
Following up on this, I can see the difference in terms of single or
multiple interpreter processes.
With respect to spark drivers, since each interpreter spawns a separate
Spark driver in regular Spark interpreter setting, it is clear to me the
different
Hi,
Thanks for asking the question. It's not a dumb question at all; the
Zeppelin docs do not explain this very well.
Spark interpreter:
In 'shared' mode, a Spark interpreter setting spawns one interpreter process
to serve all notebooks bound to that interpreter setting.
In 'scoped'
Zeppelin provides 3 binding modes for each interpreter. With a `scoped` or
`shared` Spark interpreter, every user shares the same SparkContext. Sorry
for the dumb question, but how does it differ from Spark via Livy Server?
--
Chen Song