Ideally each user should use the /home//zeppelin/notebooks folder.
Is there a way to do this?
thank you
From: Manuel Sopena Ballesteros
Sent: Wednesday, 17 June 2020 1:32:16 AM
To: users
Subject: Re: how to setup notebook storage path
thank you Jeff,
do we need
Sopena Ballesteros <manuel...@garvan.org.au> wrote on Tue, Jun 16, 2020 at 2:43 PM:
Dear Zeppelin community,
I am using zeppelin 0.8.0 deployed by HDP/ambari, by default it uses
FileSystemNotebookRepo as a notebook storage with path /user/.
I would like to change it to VFSNotebookRepo instead of hadoop.
I can change the zeppelin.notebook.storage in zeppelin-site configurati
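For reference, switching the storage class and path is done in zeppelin-site.xml (or the equivalent Ambari config). A minimal sketch, assuming the stock VFSNotebookRepo class shipped with 0.8:

```xml
<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.VFSNotebookRepo</value>
</property>
<property>
  <name>zeppelin.notebook.dir</name>
  <value>/home/zeppelin/notebooks</value>
</property>
```

As far as I know, 0.8 does not expand a per-user path here; all users' notes land under the one directory, with per-note permissions handled by Zeppelin itself.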
is a bug of 0.8, but
is fixed in 0.8.2
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Wed, May 20, 2020 at 9:28 AM:
this is what I can see from the zeppelin logs
DEBUG [2020-05-20 11:25:01,509] ({Exec Stream Pumper}
RemoteInterpreterManagedProcess.java[processLine]:298) - 20/05/20
endOutputRunner.java[run]:107) - Processing size for append-output is 39
characters
INFO [2020-05-20 11:25:01,911] ({pool-2-thread-74}
SchedulerFactory.java[jobFinished]:115) - Job 20160223-144701_1698149301
finished by scheduler
org.apache.zeppelin.interpreter.remote.RemoteInterpreter-anaconda3:mansop:-shared_session
DEBUG [
Dear Zeppelin community,
For some reason my Zeppelin is not aware of the Zeppelin context
paragraph
%spark2.spark
z.input("name", "sun")
output
:24: error: not found: value z
z.input("name", "sun")
^
Any thoughts?
thank you very much
Manuel
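If z never becomes available, a workaround that does not depend on ZeppelinContext is Zeppelin's template form syntax, which creates the same text input directly from the paragraph text; a sketch:

```
%spark2.spark
println("Hello ${name=sun}")
```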
NOTICE
Please consider the environment before printing this email. This message and
any attachments are intended for the addressee named and may contai
Dear Zeppelin community,
We are using zeppelin through Hortonworks Data Platform. We realised that
Zeppelin provides a set of predefined tutorial notes (e.g. Getting Started,
Apache Spark in 5 Minutes) that are available to all new users.
We would like to:
- Delete those notes.
- Create ne
ng the first attempt but
second attempt/click will work.
Regards,
Tom
Sent: Wednesday, 29 April 2020 at 04:44
From: "Manuel Sopena Ballesteros"
To: "users"
Subject: error restarting interpreter if shiro [url] /api/interpreter/** =
authc is commented
I have restricted access to the interpreter configuration page by editing the
shiro [url] section as follows
[urls]
# This section is used for url-based security.
# You can secure interpreter, configuration and credential information by urls.
Comment or uncomment the below urls that you want to
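For context, a typical restricted [urls] section from the Zeppelin shiro documentation looks like the sketch below; the "admin" role name is a placeholder for whatever role exists in your [users]/[roles] sections:

```ini
[urls]
# REST endpoints for interpreter, configuration and credential settings
# are limited to the "admin" role; everything else only needs a login.
/api/interpreter/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/** = authc
```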
Hi,
Sometimes when a user tries to log in (to Zeppelin) it takes a few minutes... is
there a way to speed this up?
Thank you
Manuel Sopena Ballesteros
Big Data Engineer | Kinghorn Centre for Clinical Genomics
<https://www.garvan.org.au/>
a: 384 Victoria
-alone mode.
On Wed, Nov 20, 2019, 6:25 PM Manuel Sopena Ballesteros
<manuel...@garvan.org.au> wrote:
Hi Tony,
Are you running a yarn cluster?
thanks
Manuel
From: Tony Primerano [mailto:primer...@tonycode.com]
Sent: Thursday, November 21, 2019 9:08 AM
To: users@zeppelin.apache.org
Subject: Hiding shiro.ini and other sensitive files from end users
Is there a recommended way to hide secrets cont
Rather than an exception, I get an HTTP ERROR 503 when I hardcode a user in the
shiro config
[inline screenshot attachment]
Manuel
From: Manuel Sopena Ballesteros
Sent: Wednesday, November 20, 2019 11:37 AM
To: users@zeppelin.apache.org
Subject: RE: restrict interpreters to users
Unfortunately
permissions and the documentation needs to provide
more details. Just to be clear, if the configuration above is used, role1,
role2, role3 have the same permissions as admin does.
Please let me know if it works.
On 11/19/2019 13:17, Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote:
authentication can change this ? Please refer to
https://zeppelin.apache.org/docs/0.8.2/setup/security/shiro_authentication.html
On 11/19/2019 09:28, Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote:
Dear Zeppelin community,
By default, interpreter configuration can be changed by any user. Is there a
way to avoid this? I would like to hide some interpreters so people can't
change them.
Thank you very much
Manuel Sopena Ballesteros
Big Data Engineer | Kinghorn Centre for Clinical Gen
Thank you very much, that worked
What about passing the --conf flag to pyspark?
Manuel
From: Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Friday, November 15, 2019 12:35 PM
To: users
Subject: Re: send parameters to pyspark
you can set property spark.jars
Manuel Sopena Ballesteros
mailto:manuel
Dear zeppelin community,
I need to send some parameters to pyspark so it can find extra jars.
This is an example of the parameters I need to send to pyspark:
pyspark \
--jars
/share/ClusterShare/anaconda3/envs/python37/lib/python3.7/site-packages/hail/hail-all-spark.jar
\
--conf
spark.dri
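Command-line pyspark flags map onto Spark properties of the same name, so in Zeppelin they can be set on the spark interpreter in the Interpreter menu instead. A sketch (the value is the jar path from the flags above; any truncated --conf key would be filled in the same way as a property):

```
spark.jars  /share/ClusterShare/anaconda3/envs/python37/lib/python3.7/site-packages/hail/hail-all-spark.jar
```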
g in the same interpreter process?
Thank you
Manuel
From: Manuel Sopena Ballesteros [mailto:manuel...@garvan.org.au]
Sent: Wednesday, November 13, 2019 2:32 PM
To: users@zeppelin.apache.org
Subject: spark r interpreter resets working directory
Dear Zeppelin community,
I am testing spark r interpre
process. In your second note,
the current working directory is the yarn container location, which is expected.
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Wed, Nov 13, 2019 at 1:50 PM:
Yarn cluster using impersonate (per user + isolated)
I guess that means each note use dif
notes share the same interpreter? I suspect you are using
per-note isolated or scoped mode.
It looks like you are using local or yarn-client mode for the first note, but
yarn-cluster mode for the second note.
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Wed, Nov 13, 2019 at 11:31 AM:
Dear Zeppelin community,
I am testing the spark r interpreter and realised it does not keep the working
directory across notes.
[inline screenshot attachment]
What is the reason behind this behavior?
Thank you very much
Hi,
For some reason python interpreter is missing from the interpreter list so I am
trying to reinstall it.
$ sudo /usr/hdp/3.1.0.0-78/zeppelin/bin/install-interpreter.sh -n python
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m;
support was removed in 8.0
SLF4J: Cla
sage, you are still using python instead of ipython. It
is hard to tell what's wrong.
One suggestion is to try 0.8.2 which is the latest release.
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Wed, Oct 30, 2019 at 9:47 AM:
Didn’t like %matplotlib inline
Traceback (most recent call last):
File
"/d1/hadoop/yarn/local/usercache/mansop/appcache/application_1570749574365_0083/container_e15_1
ark
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
plt.plot([1, 2, 3])
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Wed, Oct 30, 2019 at 9:39 AM:
Another example:
%pyspark
import matplotlib.pyplot as plt
plt.plot([1, 2, 3])
z.show(plt)
plt.close()
Acco
From: Manuel Sopena Ballesteros [mailto:manuel...@garvan.org.au]
Sent: Wednesday, October 30, 2019 12:12 PM
To: users@zeppelin.apache.org
Subject: can't plot
Dear Zeppelin user community,
I am running Zeppelin 0.8.0 and I am not able to print a plot using pyspark
interpreter:
This is my notebook:
%pyspark
import matplotlib.pyplot as plt
plt.figure()
plt.plot([1, 2, 3])
And this is the output:
[]
Any idea?
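For what it's worth, the usual fix on 0.8's %pyspark is to render through the display system rather than rely on an interactive backend. The sketch below does by hand roughly what Zeppelin's z.show(plt) helper does: draw on the headless Agg backend and print the figure as an %html image payload (assumes matplotlib is installed in the interpreter's Python):

```python
import base64
import io

import matplotlib
matplotlib.use('Agg')  # headless backend; yarn containers have no display
import matplotlib.pyplot as plt

plt.figure()
plt.plot([1, 2, 3])

# Encode the figure as PNG and emit it through Zeppelin's display system.
buf = io.BytesIO()
plt.savefig(buf, format='png')
plt.close()
payload = ('%html <img src="data:image/png;base64,'
           + base64.b64encode(buf.getvalue()).decode('ascii') + '">')
print(payload)
```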
version of zeppelin do you use? Did you
make any changes to the source code?
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Fri, Oct 18, 2019 at 2:36 PM:
Dear Zeppelin community,
I am running the script below in Zeppelin yarn cluster mode:
%pyspark
print("Hello world!")
output:
:5: error: object zeppelin is not a member of package org.apache
var value: org.apache.zeppelin.spark.SparkZeppelinContext = _
^
:6: error: object zeppelin is not a mem
ache.zeppelin.interpreter=DEBUG
And try it again, this time you will get more log info, I suspect the python
process fail to start
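The truncated logging line above is the standard log4j logger setting; in full, the entry added to conf/log4j.properties would be:

```
log4j.logger.org.apache.zeppelin.interpreter=DEBUG
```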
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Fri, Oct 4, 2019 at 9:09 AM:
Sorry for the late response,
Yes, I have successfully run a few
It looks like you are using pyspark, could you try just starting the scala spark
interpreter via `%spark`? First let's figure out whether it is related to
pyspark.
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Tue, Oct 1, 2019 at 3:29 PM:
Dear Zeppelin community,
I would like to ask for advice regarding an error I am having with thrift.
I am getting quite a lot of these errors while running my notebooks:
org.apache.thrift.transport.TTransportException at
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java
Dear Zeppelin user community,
I have a situation where I can't install R packages through zeppelin because:
1. R expects me to give some feedback, like choosing a repository or
agreeing to compile and install a package from source code.
2. Be able to create multiple environments to kee
Dear Zeppelin community,
I am trying to install the following library
[inline image attachment]
However, when I run the command above, `install.packages('Seurat')`, in a
zeppelin notebook, it freezes, I guess because R is waiting for the user to
select an option.
I know this is a silly examp
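One way to avoid the repository prompt, assuming the freeze really is CRAN's mirror-selection menu, is to set a default mirror before installing; a sketch for an R paragraph:

```r
# pick a CRAN mirror up front so install.packages() never prompts
options(repos = c(CRAN = "https://cloud.r-project.org"))
install.packages('Seurat')
```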
/interpreter/python.html#conda
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Thu, Aug 22, 2019 at 9:57 AM:
Hi,
Is there a way to integrate conda with the pyspark interpreter so users can
create, list, and activate environments?
Thank you very much
Manuel
Dear Zeppelin user community,
I have a Zeppelin installation with Spark integration, and the "master"
parameter in the Spark interpreter configuration always resets its value from
"yarn" to "yarn-client" after a Zeppelin service restart.
How can I stop that?
Thank you
Dear Zeppelin user community,
I have a zeppelin installation connected to a Spark cluster. I setup Zeppelin
to submit jobs in yarn cluster mode, and impersonation is enabled. Now I
would like to be able to use a python virtual environment instead of the
system one.
Is there a way I could specify
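A common pattern for this on YARN (a sketch, not HDP-specific; the archive name and path are hypothetical) is to ship a packed virtualenv/conda environment with the job and point the Python properties at it in the spark interpreter settings:

```
spark.yarn.dist.archives   /path/to/myenv.tar.gz#env
spark.pyspark.python       ./env/bin/python
```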
Dear Zeppelin user community,
I am trying to setup python and R to submit jobs through Spark cluster. This is
already done but now I need to enable the users to install their own libraries.
I was thinking to ask the users to setup conda in their home directory and
modify the `zeppelin.pyspark.p
",
        "type": "checkbox"
      }
    },
    "editor": {
      "language": "python",
      "editOnDblClick": false,
      "completionKey": "TAB",
      "completionSupport": true
    }
  },
…
Thank you
Manu
Dear Zeppelin community,
I have a Zeppelin installation connected to Spark. I realized that zeppelin
runs a spark job when it starts, but I can't see the individual jobs submitted
through zeppelin notebooks.
Is this the expected behavior by design? Is there a way I can see in spark
history ser
Dear Zeppelin community,
I have a zeppelin installation and a spark cluster. I need to provide options
for users to run either python2 or 3 code using pyspark. At the moment the only
way of doing this is by editing the spark interpreter and changing the
`zeppelin.pyspark.python` from python to
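One workaround, rather than editing the single setting back and forth, is to clone the spark interpreter in the Interpreter menu into two settings that differ only in this property (the setting names below are hypothetical):

```
# interpreter setting "spark2"     -> zeppelin.pyspark.python = python
# interpreter setting "spark2py3"  -> zeppelin.pyspark.python = /usr/bin/python3
```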
correct
Manuel
From: Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Friday, June 28, 2019 12:41 PM
To: users
Subject: Re: can't use @spark2.r interpreter
Are you using HDP ?
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Fri, Jun 28, 2019 at 10:32 AM:
Dear Zeppelin community,
without an open graphics device
Any idea?
Thank you
pache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledTh
hon interpreter not working
Which zeppelin version do you use? Does it work without impersonation?
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Wed, Jun 5, 2019 at 10:38 AM:
Dear Zeppelin community,
I am trying to set up the python interpreter. Installation is successful,
however I can't make any python code run.
This is what I can see from the logs:
INFO [2019-06-05 12:35:07,788] ({pool-2-thread-2}
SchedulerFactory.java[jobStarted]:109) - Job 20190605-122140_1966
Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Friday, June 8, 2018 2:54 PM
To: users@zeppelin.apache.org
Subject: Re: how to load pandas into pyspark (centos 6 with python 2.6)
Just find pip in your python 3.6 folder, and run pip using full path. e.g.
/tmp/Python-3.6.5/pip install pandas
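The same "use the interpreter's own pip" idea can be written inside a paragraph without hunting for the folder, by going through sys.executable (a Python 3.5+ sketch; the thread's Python 2.6 would need the full-path form above instead):

```python
import subprocess
import sys

# Invoke pip via the interpreter's own Python so packages land in the same
# environment pyspark actually imports from. Querying the version here as a
# sanity check; replace ['--version'] with ['install', 'pandas'] to install.
result = subprocess.run(
    [sys.executable, '-m', 'pip', '--version'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout.decode().strip())
```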
Manuel Sope
@zeppelin.apache.org
Subject: Re: how to load pandas into pyspark (centos 6 with python 2.6)
pip should be available under your python3.6.5, you can use that to install
pandas
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Fri, Jun 8, 2018 at 11:40 AM:
Hi Jeff,
Thank you very much for your
nstall pandas system wide). Do you mean you are not root and don't have
permission to install python packages?
Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote on Fri, Jun 8, 2018 at 9:26 AM:
Dear Zeppelin community,
I am trying to load pandas into my zeppelin %spark2.pyspark inter
park2.pyspark interpreter?
Thank you very much
Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E:
manuel...@garvan.org.au<mailto:manuel...@garva