Following up on this to find out whether it is a bug or the feature has been removed.
Thanks
Anand
On Thu, Dec 7, 2017 at 9:38 AM, Anandha L Ranganathan wrote:
> Hi,
> I am using the Hive interpreter and upgraded to version 0.8. We used to run
> multiple queries in a single paragraph, and after the upgrade
Hi,
I am using the Hive interpreter and upgraded to version 0.8. We used to run
multiple queries in a single paragraph, and after the upgrade it seems to be
broken. Is there something I am missing?
I tried with and without a semicolon after each statement and still have
the same problem.
%hive
use default;
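For reference, a multi-statement paragraph of the kind described might look like the following (the table name is hypothetical; only `use default;` appears in the original):

```sql
%hive
-- several semicolon-terminated statements in one paragraph
use default;
show tables;
select count(*) from my_table;
```

The expectation, as in 0.7, is that each semicolon-terminated statement is executed in turn within the single paragraph.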
I noticed some inconsistency in Hive UDF creation. I am using Zeppelin 0.7
in production and am currently testing zeppelin-0.8-SNAPSHOT.
This command works perfectly fine under zeppelin-0.7 but fails in
zeppelin-0.8.
add jar hdfs://dfs-nameservices/user/anand.ranganathan/hiveGdUDF-current.jar
...does not exist, please check the jar file
>
> Anandha L Ranganathan wrote on Sun, Nov 26, 2017, at 1:27 PM:
>
>> I have created a new interpreter called "hive" in zeppelin using jdbc
>> interpreter. Using that interpreter, I can run queries on Hive.
>>
>> I am trying to create a UDF but it is failing.
Thanks Jeff.
We will add dependencies through livy.spark.jars.packages.
Thanks
Anand
On Wed, Nov 22, 2017 at 4:29 PM, Jeff Zhang wrote:
>
> Livy doesn't support adding dependencies in a note via %spark.dep; you
> have to do it in the interpreter setting.
>
>
> Anandha L R
I have created a new interpreter called "hive" in zeppelin using jdbc
interpreter. Using that interpreter, I can run queries on Hive.
I am trying to create a UDF but it is failing.
%hive
ADD JAR /mnt/data/apps/hiveUDF/hiveGdUDF-current.jar;
org.apache.hive.service.cli.HiveSQLException: Error
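For context, the usual Hive UDF registration sequence is sketched below; the function name, class name, and query are hypothetical, since only the ADD JAR line appears in the original:

```sql
%hive
-- make the jar visible to the Hive session
ADD JAR /mnt/data/apps/hiveUDF/hiveGdUDF-current.jar;
-- register a function backed by a class inside that jar (class name is an assumption)
CREATE TEMPORARY FUNCTION my_udf AS 'com.example.MyUdf';
-- use the function
SELECT my_udf(some_column) FROM some_table;
```

Per the report, under zeppelin-0.8 the failure already occurs at the ADD JAR step.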
...in the interpreter setting.
>
> Here are two configurations that can help you add external jars and
> external packages:
>
> livy.spark.jars
> livy.spark.jars.packages
>
> And this is the configuration for the queue name:
>
> livy.spark.yarn.queue
>
>
> Anandha L Ranganathan wrote in November 2017:
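Put together, the interpreter-setting approach described in the reply would look roughly like this among the Livy interpreter's properties; the jar path, artifact coordinate, and queue name are only examples:

```properties
# Livy interpreter properties in Zeppelin (values are illustrative)
livy.spark.jars           hdfs:///user/shared/libs/custom-udfs.jar
livy.spark.jars.packages  com.databricks:spark-csv_2.10:1.5.0
livy.spark.yarn.queue     analytics
```

These are ordinary Spark configuration keys with a `livy.` prefix, so they are passed through to the Spark session that Livy starts.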
We are using the Livy interpreter from Zeppelin to connect to Spark.
We want to give users an option to download external
libraries.
By default we have added some basic libraries in the interpreter setting.
In the Spark interpreter, users can download the external libraries they
want using %spark.dep
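For comparison, the in-note dependency mechanism available in the plain Spark interpreter (which, per the reply above, Livy does not support) looks like this; the artifact coordinate is just an example:

```
%spark.dep
z.reset()
z.load("com.databricks:spark-csv_2.10:1.5.0")
```

The %spark.dep paragraph must run before the Spark context is first used in the note.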
n, you still need to manually
> delete the interpreter in the UI.
>
>
>
> Anandha L Ranganathan wrote on Wed, Nov 8, 2017, at 11:55 AM:
>
>> We have Zeppelin running in production, but when someone creates a new
>> notebook, it displays all the interpreters in the dropdown. We are using
>>
We have Zeppelin running in production, but when someone creates a new
notebook, it displays all the interpreters in the dropdown. We are using
only 2 or 3 interpreters; the others are not in use. Is there a way
to delete/disable the interpreters that are not being used?
I tried to delete
interpreter_2059763487
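One hedged possibility, assuming Zeppelin 0.7.x: the `zeppelin.interpreters` property in `conf/zeppelin-site.xml` lists the interpreter classes Zeppelin loads at startup, so trimming it to the two or three you actually use keeps the rest out of the dropdown (the class list below is illustrative):

```xml
<!-- conf/zeppelin-site.xml: only load the interpreters you use -->
<property>
  <name>zeppelin.interpreters</name>
  <value>org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.jdbc.JDBCInterpreter</value>
</property>
```

As noted in the reply above, interpreter settings that were already created still need to be deleted manually in the UI.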
On Tue, Mar 21, 2017 at 5:30 AM, Ahyoung Ryu wrote:
> Hi Anand,
>
> Can you share your log files in here?
> You can find them under ZEPPELIN_HOME/logs/
>
> Thanks
> Ahyoung
>
> On Tue, Mar 21, 2017 at 6:35 AM, Anandha L Ranganathan <
Zeppelin: 0.7.0
Spark: 1.6.0 (HDP 2.4)
*Command in the notebook*
%pyspark
2+2
*Error*
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-5483459839514814481.py", line 22, in
from pyspark.conf import SparkConf
ImportError: No module named pyspark.conf
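A common cause of `ImportError: No module named pyspark.conf` is that the interpreter process cannot find the PySpark sources on its Python path. A minimal sketch of the usual `conf/zeppelin-env.sh` fix, assuming standard HDP 2.4 paths and the py4j version shipped with Spark 1.6 (both are assumptions, not taken from the original thread):

```shell
# conf/zeppelin-env.sh -- paths below are assumptions for HDP 2.4 / Spark 1.6
export SPARK_HOME=/usr/hdp/current/spark-client
export PYTHONPATH="$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.9-src.zip"
```

Restart Zeppelin after changing the file so the %pyspark interpreter picks up the new environment.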