No, you don't need to create that directory; it should be in
$ZEPPELIN_HOME/interpreter/spark
Ruslan Dautkhanov wrote on Wed, Nov 30, 2016 at 12:12 PM:
Thank you Jeff.
Do I have to create the interpreter/spark directory in $ZEPPELIN_HOME/conf
or in the $ZEPPELIN_HOME directory?
So zeppelin.interpreters in zeppelin-site.xml is deprecated in 0.7?
Thanks!
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 6:54 PM, Jeff Zhang wrote:
The default interpreter is now defined in interpreter-setting.json.
You can update the following file to make pyspark the default
interpreter and then copy it to the folder interpreter/spark:
https://github.com/apache/zeppelin/blob/master/spark/src/main/resources/interpreter-setting.json
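For reference, a sketch of what the relevant part of interpreter-setting.json might look like with pyspark moved to the top. In this era the ordering of entries appears to determine the group default (that reading, and the abbreviated schema below, are assumptions; the exact fields may differ between versions):

```json
[
  {
    "group": "spark",
    "name": "pyspark",
    "className": "org.apache.zeppelin.spark.PySparkInterpreter",
    "properties": {}
  },
  {
    "group": "spark",
    "name": "spark",
    "className": "org.apache.zeppelin.spark.SparkInterpreter",
    "properties": {}
  }
]
```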
Ruslan D
After the 0.6.2 -> 0.7 upgrade, pySpark isn't the default Spark interpreter,
even though we have org.apache.zeppelin.spark.*PySparkInterpreter*
listed first in zeppelin.interpreters.
zeppelin.interpreters in zeppelin-site.xml:
> zeppelin.interpreters
>
> org.apache.zeppelin.spark.PySparkInterpreter,org.
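The quoted value is cut off above; the pre-0.7 property presumably looked along these lines (the trailing class names are my assumption, shown only for illustration):

```xml
<property>
  <name>zeppelin.interpreters</name>
  <!-- the first class in the list was treated as the default interpreter -->
  <value>org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkInterpreter</value>
</property>
```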
Thank you Khalid.
That was it. I was able to start 0.7.0 with the LDAP Shiro config now.
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 12:15 AM, Khalid Huseynov wrote:
> I think during refactoring LdapGroupRealm has moved into a different package,
> so could you try in your shiro config with:
>
>
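The quoted suggestion was lost in the snippet above; presumably it pointed at the relocated class, along these lines (the post-refactoring package name is my assumption):

```ini
[main]
# LdapGroupRealm after the package move (assumed location)
ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm
```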
Thanks a lot, moon!
> Interpreter Impersonation [1] was recently introduced and there is further
> improvement in progress [2].
Very cool. Please also consider checking
https://issues.apache.org/jira/browse/ZEPPELIN-1660, as we
always run into this when trying to make Zeppelin not have any user-specific
p
Good to know, great job
On Nov 29, 2016 at 23:30 GMT+01:00, moon soo Lee wrote:
Interpreter Impersonation [1] was recently introduced and there is further
improvement in progress [2].
I didn't see any issue about impersonating the spark interpreter using
--proxy-user. Do you mind creating one?
Thanks,
moon
[1]
http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/manual/userimpersonation.h
That's what we will have to do. It's hard to explain to users, though, that
in Zeppelin you can assign HiveContext
to a variable only once. We didn't have this problem in Jupyter. Is this hard
to fix? Created https://issues.apache.org/jira/browse/ZEPPELIN-1728
If somebody forgets about this rule, it's
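The workaround under discussion amounts to create-once-and-reuse. A minimal pure-Python sketch of that assign-once pattern (no Spark required; names and the HiveContext usage line are hypothetical):

```python
# Generic create-once cache: the factory runs only on the first call,
# every later call returns the same cached instance -- the same idea as
# creating one HiveContext per notebook and reusing it everywhere.
_cached = {}

def get_or_create(key, factory):
    if key not in _cached:
        _cached[key] = factory()
    return _cached[key]

# Hypothetical Zeppelin usage: ctx = get_or_create("hive", lambda: HiveContext(sc))
a = get_or_create("ctx", lambda: object())
b = get_or_create("ctx", lambda: object())
assert a is b  # the second call reuses the first instance
```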
Hmm, possibly with the classpath. These might be Windows-specific issues. We
probably need to debug to fix these.
From: Jan Botorek
Sent: Tuesday, November 29, 2016 4:01:43 AM
To: users@zeppelin.apache.org
Subject: RE: Unable to connect with Spark Interpreter
Yo
Can you reuse the HiveContext instead of making new ones with HiveContext(sc)?
From: Ruslan Dautkhanov
Sent: Sunday, November 27, 2016 8:07:41 AM
To: users
Subject: Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"
Also, to get rid of this problem (
It has been asked many times. For now only Livy can impersonate the Spark
user. For other interpreters it's not possible, as far as I know...
Ruslan Dautkhanov wrote on Nov 29, 2016 at 7:44 PM:
Hello everyone,
I'm trying to bind a variable from javascript to the spark context when I
click on an object, so that I can use the variable in another Zeppelin
paragraph. I have the following code in one particular paragraph:
In another file I build the "data" variable used by /plot/. The
*node
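For reference, the usual route for this in Zeppelin is the Angular display system: bind the click to an Angular scope variable in an %angular paragraph, then read it back from an interpreter paragraph. A minimal sketch (variable names are hypothetical, and this assumes the 0.6/0.7-era front-end Angular API):

```
%angular
<!-- clicking the element stores the clicked value in the Angular scope -->
<div ng-click="selectedNode = 'node-42'">Click a node</div>
<p>Selected: {{selectedNode}}</p>
```

In a later interpreter paragraph, z.angular("selectedNode") would then return the bound value via the ZeppelinContext, while z.angularBind / z.angularWatch cover the reverse direction and change notifications.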
What's the best way to have a multi-tenant Zeppelin notebook?
It seems we currently will have to ask users to run their own Zeppelin
instances,
since each user's authentication & authorization are based on the user who
runs the
Zeppelin server.
I see the best solution could probably be to have --keyt
> -LDAP and Notebook level permissions worked great.
Would you mind sharing details on this?
Mohit Jaggi
Founder,
Data Orchard LLC
www.dataorchardllc.com
> On Nov 29, 2016, at 9:52 AM, Kevin Niemann wrote:
I can comment on the reasons I use Zeppelin, though I haven't used Jupyter
extensively. This is for a Fortune 500 company, shared by many users.
-Easy to write a new Interpreter for organization-specific requirements (e.g.
authentication, query limits, etc.).
-Already using Java and AngularJS extensively s
Your last advice helped me to progress a little bit:
- I started the spark interpreter manually
o c:\zepp\\bin\interpreter.cmd, -d, c:\zepp\interpreter\spark\, -p, 61176,
-l, c:\zepp\/local-repo/2C2ZNEH5W
o I needed to add a '\' into the -d attribute and make the path shorter -->
mov
I am sorry, but the directory local-repo is not present in the zeppelin
folder. I use the newest binary
version from https://zeppelin.apache.org/download.html.
Unfortunately, the local-repo folder doesn't exist either in the 0.6 version
downloaded and built from GitHub.
From: Jeff Zhang [mailto
I still don't see much useful info. Could you try running the following
interpreter command directly?
c:\_libs\zeppelin-0.6.2-bin-all\\bin\interpreter.cmd -d
c:\_libs\zeppelin-0.6.2-bin-all\interpreter\spark -p 53099 -l
c:\_libs\zeppelin-0.6.2-bin-all\/local-repo/2C2ZNEH5W
Jan Botorek wrote on Nov 29, 2016:
Hi,
I have an Ignite cluster which is based on ZooKeeper discovery.
I can see only the following parameters in the Ignite interpreter of Zeppelin:
1. ignite.addresses
2. ignite.clientMode
3. ignite.config.url
4. ignite.jdbc.url
5. ignite.peerClassLoadingEnabled
6. zeppelin.interpreter.localRepo
where should
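The question is cut off above, but one plausible route, given that the interpreter exposes ignite.config.url, is to point that property at a Spring XML file that configures ZooKeeper discovery. A sketch using class names from Ignite's ignite-zookeeper module (the file name and connection string are hypothetical):

```xml
<!-- hypothetical ignite-zk.xml, referenced from the ignite.config.url property -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder">
          <property name="zkConnectionString" value="zk1:2181,zk2:2181"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```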
Then I guess the spark process failed to start, so there are no logs for the
spark interpreter.
Can you use the following log4j.properties? This log4j properties file
prints more error info for further diagnosis.
log4j.rootLogger = INFO, dailyfile
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j
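The properties file is cut off above; a typical completion, assuming the standard log4j 1.x DailyRollingFileAppender (the log file path and pattern are hypothetical):

```properties
log4j.rootLogger = INFO, dailyfile

log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout

log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.DatePattern = .yyyy-MM-dd
log4j.appender.dailyfile.File = logs/zeppelin-interpreter.log
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern = %d{ISO8601} %-5p [%t] %c{2}: %m%n
```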
If I start Zeppelin by zeppelin.cmd, only the zeppelin log appears.
The interpreter log is created only when I manually start the interpreter, but
the log contains only the information that the interpreter was started (see my
preceding mail with attachment).
- INFO [2016-11-29 08:43:59,757] ({Threa
According to your log, the spark interpreter fails to start. Do you see any
spark interpreter log?
Jan Botorek wrote on Tue, Nov 29, 2016 at 4:08 PM:
Hello,
Thanks for the advice, but it doesn't seem that anything is wrong when I start
the interpreter manually. I attach logs from the interpreter and from zeppelin.
This is the cmd output from interpreter launched manually:
D:\zeppelin-0.6.2\bin> interpreter.cmd -d D:\zeppelin-0.6.2\interpreter\spar