Date: Tuesday, March 21, 2017 at 3:27 AM
To: users <users@zeppelin.apache.org>
Subject: Re: Should zeppelin.pyspark.python be used on the worker nodes?
You're right - it will not be dynamic.
> from pyspark.conf import SparkConf
> ImportError: No module named *pyspark.conf*
William, you probably meant
from pyspark import SparkConf
?
--
Ruslan Dautkhanov
On Mon, Mar 20, 2017 at 2:12 PM, William Markito Oliveira <william.mark...@gmail.com> wrote:
> Ah! Thanks Ruslan! I'm still
Hi moon, thanks for the tip. To summarize, my current settings are the
following:
conf/zeppelin-env.sh has only SPARK_HOME setting:
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/
Then, in the interpreter configuration through the web interface, I
have:
Ah! Thanks Ruslan! I'm still using 0.7.0 - let me update to 0.8.0 and I'll
come back and update this thread with the results.
On Mon, Mar 20, 2017 at 3:10 PM, William Markito Oliveira <william.mark...@gmail.com> wrote:
> Hi moon, thanks for the tip. Here to summarize my current settings are the
>
You're right - it will not be dynamic.
You may want to check
https://issues.apache.org/jira/browse/ZEPPELIN-2195
https://github.com/apache/zeppelin/pull/2079
it seems it is fixed in the current snapshot of Zeppelin (committed 3 weeks
ago).
--
Ruslan Dautkhanov
On Mon, Mar 20, 2017 at 1:21
When a property key in the interpreter configuration screen matches a certain
condition [1], it'll be treated as an environment variable.
You can remove PYSPARK_PYTHON from conf/zeppelin-env.sh and place it in
interpreter configuration.
Thanks,
moon
[1]
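As a rough sketch of the condition moon mentions — an assumption on my part, since the exact check lives in the Zeppelin source behind [1] — property keys consisting only of uppercase letters, digits, and underscores get exported as environment variables, while dotted lowercase keys stay interpreter properties:

```python
import re

# Assumed approximation of the condition in [1]: keys made up only of
# uppercase letters, digits, and underscores are treated as environment
# variables rather than ordinary interpreter properties.
ENV_KEY = re.compile(r"^[A-Z_][A-Z0-9_]*$")

def is_env_var_key(key):
    # True if Zeppelin would export this property as an env var
    return ENV_KEY.match(key) is not None

print(is_env_var_key("PYSPARK_PYTHON"))           # True
print(is_env_var_key("zeppelin.pyspark.python"))  # False
```

So a PYSPARK_PYTHON entry on the interpreter screen would be picked up as an environment variable, while zeppelin.pyspark.python would not.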
Thanks for the quick response Ruslan.
But given that it's an environment variable, I can't quickly change that
value and point to a different Python environment without restarting the
Zeppelin process, can I? I mean, is there a way to set the value for
PYSPARK_PYTHON from the Interpreter
You can set PYSPARK_PYTHON environment variable for that.
Not sure about zeppelin.pyspark.python. I think it does not work.
See comments in https://issues.apache.org/jira/browse/ZEPPELIN-1265
Eventually, I think we can remove zeppelin.pyspark.python and use only
PYSPARK_PYTHON instead to avoid
I'm trying to use zeppelin.pyspark.python as the variable to set the python
that Spark worker nodes should use for my job, but it doesn't seem to be
working.
Am I missing something, or does this variable not do that?
My goal is to change that variable to point to different conda
environments.
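For illustration, switching workers between conda environments would then come down to what PYSPARK_PYTHON points at before the interpreter process starts — a minimal sketch, where the env names and paths are made up and would need to match your actual conda installation:

```python
import os

# Hypothetical conda environment paths -- adjust to your installation.
CONDA_ENVS = {
    "py27": "/opt/conda/envs/py27/bin/python",
    "science": "/opt/conda/envs/science/bin/python",
}

def select_worker_python(env_name):
    """Point PYSPARK_PYTHON at the chosen conda env's interpreter.

    Note: Spark reads this variable when the pyspark driver launches,
    so in Zeppelin it must be in effect before the interpreter process
    starts (e.g. set in the interpreter settings, per this thread).
    """
    os.environ["PYSPARK_PYTHON"] = CONDA_ENVS[env_name]
    return os.environ["PYSPARK_PYTHON"]

print(select_worker_python("science"))  # /opt/conda/envs/science/bin/python
```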