Hi,

About Python: it's %pyspark. Using it requires extra configuration,
such as setting the spark.home property and the PYTHONPATH environment variable.
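For example, the relevant entries in conf/zeppelin-env.sh might look roughly like this (the paths and the py4j version below are placeholders for your own installation; Spark 1.4.0 ships py4j 0.8.2.1, but check the file name under $SPARK_HOME/python/lib):

```shell
# conf/zeppelin-env.sh -- sketch only; adjust paths to your installation
export SPARK_HOME=/opt/spark-1.4.0   # placeholder; or set the spark.home interpreter property instead
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH
```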

FYI, a pull request that is about to be merged,
https://github.com/apache/incubator-zeppelin/pull/118, will greatly simplify
the configuration for pyspark.

Best,
moon

On Thu, Jun 25, 2015 at 8:18 PM Nihal Bhagchandani <
nihal_bhagchand...@yahoo.com> wrote:

> Hi Nirmal,
>
> By default, Zeppelin uses a local[*] Spark context. If you want to point it
> to your own cluster, you can configure it under the "Interpreter" menu in
> the Zeppelin UI.
>
> Interpreter --> spark -->master
>
> and for Hive
> Interpreter --> hive -->hive.hiveserver2.url
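>
> For example, on a Mesos cluster those properties might look like this
> (host names and ports are placeholders for your environment):
>
>   master                = mesos://<mesos-master-host>:5050
>   hive.hiveserver2.url  = jdbc:hive2://<hive-server-host>:10000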
>
> hope this helps...
>
> -Nihal
>
>
>
>
>   On Friday, 26 June 2015 4:44 AM, Nirmal Sharma <sharma.nir...@gmail.com>
> wrote:
>
>
> Hi,
>
> I am trying to use zeppelin and have couple of questions.
>
> 1. We have a MapR cluster (Hadoop 1.0.3), and Spark 1.4.0 is also running
> on it via Apache Mesos. I tried installing Zeppelin and was able to build
> it successfully.
> (I used this command to build it: mvn install -DskipTests -Dspark.version=
> 1.4.0 -Dhadoop.version=1.0.3 )
>
> But when I use it, instead of connecting to my Spark and Hadoop cluster,
> it connects to its own bundled Spark instance and gives me errors.
> Also, the %sql interpreter is not connecting to my Hive DB.
>
> 2. Also, with %python, it gives a "no python interpreter found" error.
>
>
> Please let me know if I need to reinstall it in a different way.
>
> Thanks
> Nirmal
>
>
>
