Re: Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread moon soo Lee
When the property key in the interpreter configuration screen matches a certain condition [1], it'll be treated as an environment variable. You can remove PYSPARK_PYTHON from conf/zeppelin-env.sh and place it in the interpreter configuration. Thanks, moon [1]
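Moon's point is that property keys which look like environment-variable names (the matching condition referenced as [1]) are exported to the interpreter process as environment variables. A hedged sketch of what such a property might look like in the Spark interpreter settings page (the python path is a placeholder, not a value from this thread):

```
# Spark interpreter settings (web UI) — an all-caps key that matches the
# env-var naming condition is exported to the interpreter process:
#   name:  PYSPARK_PYTHON
#   value: /opt/conda/envs/myenv/bin/python   (placeholder path)
```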

Re: Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread William Markito Oliveira
Ah! Thanks Ruslan! I'm still using 0.7.0 - let me update to 0.8.0 and I'll come back and update this thread with the results. On Mon, Mar 20, 2017 at 3:10 PM, William Markito Oliveira < william.mark...@gmail.com> wrote: > Hi moon, thanks for the tip. Here to summarize my current settings are the >

Re: Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread Ruslan Dautkhanov
> from pyspark.conf import SparkConf > ImportError: No module named *pyspark.conf* William, you probably meant from pyspark import SparkConf ? -- Ruslan Dautkhanov On Mon, Mar 20, 2017 at 2:12 PM, William Markito Oliveira < william.mark...@gmail.com> wrote: > Ah! Thanks Ruslan! I'm still

Re: Roadmap for 0.8.0

2017-03-20 Thread moon soo Lee
Great to see discussion for 0.8.0. The list of features for 0.8.0 looks really good. *Interpreter factory refactoring* The interpreter layer supports various behaviors depending on the combination of PerNote,PerUser / Shared,Scoped,Isolated. We'll need strong test cases for each combination as a first step.

Re: Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread William Markito Oliveira
Hi moon, thanks for the tip. To summarize, my current settings are the following: conf/zeppelin-env.sh has only the SPARK_HOME setting: export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/ Then in the configuration of the interpreter through the web interface I have:
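The actual interpreter properties were truncated from this archived message. For illustration only, a setup of the shape William describes might look like this (all values below are hypothetical placeholders, not his real settings):

```
# conf/zeppelin-env.sh — only SPARK_HOME set:
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/

# Spark interpreter settings in the web UI (illustrative values only):
#   PYSPARK_PYTHON           /path/to/conda/env/bin/python
#   zeppelin.pyspark.python  /path/to/conda/env/bin/python
```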

Pyspark failing in 0.7.0

2017-03-20 Thread Anandha L Ranganathan
zeppelin: 0.7.0 Spark: 1.6.0 (HDP 2.4)
*Command in the notebook*
%pyspark
2+2
*Error*
Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-5483459839514814481.py", line 22, in
    from pyspark.conf import SparkConf
ImportError: No module named pyspark.conf
Traceback (most recent call
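This ImportError means the pyspark package itself was not importable when Zeppelin's bootstrap script ran, which usually comes down to $SPARK_HOME/python (and Spark's bundled py4j zip) missing from sys.path. A minimal sketch of what such a bootstrap has to do before the import can succeed; the paths and default below are illustrative assumptions, not Zeppelin's exact logic:

```python
import os
import sys

# Resolve Spark's Python sources; "/opt/spark" is a placeholder default.
spark_home = os.environ.get("SPARK_HOME", "/opt/spark")
pyspark_path = os.path.join(spark_home, "python")
# The py4j version inside the zip name varies by Spark release.
py4j_zip = os.path.join(pyspark_path, "lib", "py4j-0.9-src.zip")

# Prepend both so "from pyspark.conf import SparkConf" can resolve.
for p in (pyspark_path, py4j_zip):
    if p not in sys.path:
        sys.path.insert(0, p)

# If these paths are missing or wrong, importing anything from the pyspark
# package fails with exactly the reported "No module named pyspark.conf".
```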

Re: How to bind angular object with backend when write Helium Application

2017-03-20 Thread fish fish
Thank you Lee for such a prompt response! Will check the code and get back if there are any further problems. Thank you again! 2017-03-21 3:23 GMT+08:00 moon soo Lee : > Hi Hishfish, > > If you take a look Clock example [1], you'll see how it creates angular > objects and update every

Re: Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread Ruslan Dautkhanov
You're right - it will not be dynamic. You may want to check https://issues.apache.org/jira/browse/ZEPPELIN-2195 https://github.com/apache/zeppelin/pull/2079 It seems this is fixed in the current snapshot of Zeppelin (committed 3 weeks ago). -- Ruslan Dautkhanov On Mon, Mar 20, 2017 at 1:21

Re: Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread Jianfeng (Jeff) Zhang
It is dynamic; you can set environment variables in the interpreter setting page. Best Regard, Jeff Zhang From: Ruslan Dautkhanov > Reply-To: "users@zeppelin.apache.org"

Re: Roadmap for 0.8.0

2017-03-20 Thread Jianfeng (Jeff) Zhang
Strongly +1 for adding system tests for the different interpreter modes and focusing on bug fixing rather than new features. I have heard some users complain about bugs in Zeppelin major releases. A stabilized release is very necessary for the community. Best Regard, Jeff Zhang From: moon soo Lee

Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread William Markito Oliveira
I'm trying to use zeppelin.pyspark.python as the variable to set the python that Spark worker nodes should use for my job, but it doesn't seem to be working. Am I missing something, or does this variable not do that? My goal is to change that variable to point to different conda environments.

Re: Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread Ruslan Dautkhanov
You can set the PYSPARK_PYTHON environment variable for that. Not sure about zeppelin.pyspark.python; I think it does not work. See comments in https://issues.apache.org/jira/browse/ZEPPELIN-1265 Eventually, I think we can remove zeppelin.pyspark.python and use only PYSPARK_PYTHON instead to avoid
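PYSPARK_PYTHON is a standard Spark environment variable that selects the python executable for pyspark. Set in conf/zeppelin-env.sh it applies process-wide, which is why (as discussed below in the thread) changing it requires a Zeppelin restart. A sketch, with a placeholder path:

```
# conf/zeppelin-env.sh — applies to every pyspark interpreter launch;
# the Zeppelin process must be restarted to pick up a new value.
export PYSPARK_PYTHON=/opt/conda/envs/myenv/bin/python   # placeholder path
```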

Re: Should zeppelin.pyspark.python be used on the worker nodes ?

2017-03-20 Thread William Markito Oliveira
Thanks for the quick response, Ruslan. But given that it's an environment variable, I can't quickly change that value and point to a different python environment without restarting the Zeppelin process, can I? I mean, is there a way to set the value for PYSPARK_PYTHON from the Interpreter

Re: How to bind angular object with backend when write Helium Application

2017-03-20 Thread moon soo Lee
Hi Hishfish, If you take a look at the Clock example [1], you'll see how it creates angular objects and updates them every second from the backend, so the front-end can be updated accordingly. After you add your object into AngularObjectRegistry, you can get the AngularObject and add a watcher [2]. Then any changes of
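Moon's answer concerns the backend AngularObjectRegistry used by Helium applications. For ordinary notebook paragraphs, Zeppelin also exposes angular binding through the ZeppelinContext object `z`; a sketch below, which only runs inside a Zeppelin paragraph (method names follow Zeppelin's back-end Angular API documentation; availability in %pyspark may vary by version):

```python
# Inside a Zeppelin %pyspark paragraph — "z" is the ZeppelinContext
# injected by Zeppelin, not a regular Python import.
z.angularBind("clock", "12:00:00")   # create/update an angular object
# a front-end template can now render it with {{clock}}

z.angularUnbind("clock")             # remove the binding
```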

Auto completion for defined variable names

2017-03-20 Thread Meethu Mathew
Hi, Is there any way to get auto-completion or suggestions for defined variable names? In Jupyter notebooks, variables, once defined, will show up under suggestions. Ctrl+. gives awkward suggestions even for related functions. For a Spark data frame, it won't show the relevant functions.

Re: Zeppelin should support standard protocols for authN and AuthZ

2017-03-20 Thread Jongyoul Lee
Hi, Can you explain it in more detail or give me an idea? On Mon, Mar 20, 2017 at 7:02 PM, mbatista wrote: > In order to make Zeppelin more easy to integrate in the modern cloud > environments where authentication and authorization are done by having a > centralized

Re: How can I backup Interpreter setting?

2017-03-20 Thread Jeff Zhang
ZEPPELIN_HOME/conf/interpreter.json JongOk Kim wrote on Mon, Mar 20, 2017 at 4:15 PM: > If when I create custom interpreter or edit interpreters setting in > 'manage interpreters setting page', then Where can I find this changes. > > I want to backup all interpreter settings and recover
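Since the settings live in a single JSON file, backing them up is just a file copy. A minimal, self-contained sketch; the `zeppelin-demo` directory and the stand-in settings file are created here purely so the snippet runs anywhere, and in a real deployment the path would be your actual ZEPPELIN_HOME:

```python
import json
import shutil
from pathlib import Path

# Stand-in for $ZEPPELIN_HOME; point this at the real installation directory.
zeppelin_home = Path("zeppelin-demo")
conf_dir = zeppelin_home / "conf"
conf_dir.mkdir(parents=True, exist_ok=True)

# Create a minimal stand-in interpreter.json so this example is runnable.
settings_file = conf_dir / "interpreter.json"
settings_file.write_text(json.dumps({"interpreterSettings": {}}))

# Back up: copy the file somewhere safe. Restoring is the reverse copy;
# restart Zeppelin afterwards so it rereads the file.
backup = zeppelin_home / "interpreter.json.bak"
shutil.copy2(settings_file, backup)
```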

Zeppelin should support standard protocols for authN and AuthZ

2017-03-20 Thread mbatista
In order to make Zeppelin easier to integrate into modern cloud environments, where authentication and authorization are done by having a centralized server for all the apps, Zeppelin should support standard protocols for IAM purposes. Regarding authentication - the OpenID Connect protocol

Re: Roadmap for 0.8.0

2017-03-20 Thread Jongyoul Lee
Thanks for letting me know. I agree with almost all the things we should develop. Personally, concerning refactoring, I'm doing a bit with several PRs, but we need to restructure InterpreterFactory. At first, list up all issues, make some groups, and handle them. What do you think? On Mon, Mar 20, 2017 at

Re: Roadmap for 0.8.0

2017-03-20 Thread Felix Cheung
There are several pending visualization improvements/PRs that it would be very good to get in as well. From: Jongyoul Lee Sent: Sunday, March 19, 2017 9:03:24 PM To: dev; users@zeppelin.apache.org Subject: Roadmap for 0.8.0 Hi dev &

How to bind angular object with backend when write Helium Application

2017-03-20 Thread fish fish
Hi Group, Recently we have been exploring building a data analysis application based on Zeppelin. We checked the Helium document and think it could be an appropriate way to customize both the frontend and backend in Zeppelin. However, we did not find a way to bind an angular object with backend data when extending

Re: Roadmap for 0.8.0

2017-03-20 Thread Jeff Zhang
Yeah, makes sense. Jongyoul Lee wrote on Mon, Mar 20, 2017 at 7:21 PM: > Thanks for letting me know. I agree almost things we should develop. > Personally, concerning refactoring it, I'm doing a bit with several PRs but > we need to restructure InterpreterFactory. At first, list up all