Re: UDAF "not found: type UserDefinedAggregateFunction" in zeppelin 0.6.2

2016-11-02 Thread moon soo Lee
Tried the sample code in both Zeppelin and spark-shell, and got the same error. Please try the following code as a workaround:

    import org.apache.spark.sql.expressions.MutableAggregationBuffer
    import org.apache.spark.sql.expressions.UserDefinedAggregateFunction

    class GeometricMean extends org.apache. …
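The message is truncated above; a minimal sketch of how the workaround plausibly continues, based on the well-known GeometricMean UDAF example (the essential point is the fully qualified superclass name, which sidesteps the "not found: type" error in Zeppelin):

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.expressions.MutableAggregationBuffer
    import org.apache.spark.sql.types._

    // Geometric mean over n values: the n-th root of their product.
    class GeometricMean extends org.apache.spark.sql.expressions.UserDefinedAggregateFunction {
      // One input column of doubles
      def inputSchema: StructType = StructType(StructField("value", DoubleType) :: Nil)
      // Intermediate state: running count and running product
      def bufferSchema: StructType = StructType(
        StructField("count", LongType) :: StructField("product", DoubleType) :: Nil)
      def dataType: DataType = DoubleType
      def deterministic: Boolean = true

      def initialize(buffer: MutableAggregationBuffer): Unit = {
        buffer(0) = 0L
        buffer(1) = 1.0
      }
      def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
        buffer(0) = buffer.getLong(0) + 1
        buffer(1) = buffer.getDouble(1) * input.getDouble(0)
      }
      def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
        buffer1(0) = buffer1.getLong(0) + buffer2.getLong(0)
        buffer1(1) = buffer1.getDouble(1) * buffer2.getDouble(1)
      }
      def evaluate(buffer: Row): Any =
        math.pow(buffer.getDouble(1), 1.0 / buffer.getLong(0))
    }

    // Usage: sqlContext.udf.register("gm", new GeometricMean)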

Re: Use user id dynamically in JDBC interpreter

2016-11-02 Thread moon soo Lee
Hi,

This feature is a work in progress here: https://github.com/apache/zeppelin/pull/1539
Hope we can see this feature in master soon.

Thanks,
moon

On Wed, Nov 2, 2016 at 1:07 PM Chen Song wrote:
> Hello
>
> Is there a way to configure a JDBC interpreter to use the user id logged
> in instead o…

Use user id dynamically in JDBC interpreter

2016-11-02 Thread Chen Song
Hello,

Is there a way to configure a JDBC interpreter to use the user id of the logged-in user instead of a static value? Something like the change shown below:

    default.user -> jdbc_user

to

    default.user -> ${user_id}

Chen

Re: Zeppelin in local computer using yarn on distant cluster

2016-11-02 Thread Benoit Hanotte
I am pointing to the dirs on my local machine; what I want is simply for my jobs to be submitted to the distant YARN cluster.

Thanks

On Wed, Nov 2, 2016 at 4:00 PM, Abhi Basu <9000r...@gmail.com> wrote:
> I am assuming you are pointing to hadoop/spark on remote host, right? Can
> you not point ha…

Re: Zeppelin in local computer using yarn on distant cluster

2016-11-02 Thread Abhi Basu
I am assuming you are pointing to hadoop/spark on the remote host, right? Can you not point the hadoop conf and spark dirs to the remote machine? Not sure if this works; just a suggestion, others may have tried it.

On Wed, Nov 2, 2016 at 9:58 AM, Hyung Sung Shim wrote:
> Hello.
> You don't need to install hadoop…

Re: Zeppelin in local computer using yarn on distant cluster

2016-11-02 Thread Hyung Sung Shim
Hello.
You don't need to install Hadoop on your machine, but you do need a proper version of Spark [0] to use spark-submit. Then set [1] SPARK_HOME to where that Spark lives, set HADOOP_CONF_DIR, and set master to yarn-client on your Spark interpreter in the interpreter menu.

[0] http://spark.apache.or…
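Putting those pieces together, a minimal sketch of what conf/zeppelin-env.sh might contain (the paths below are placeholders; master is then set to yarn-client on the Spark interpreter in the interpreter menu, not in this file):

    #!/bin/bash
    # Local Spark install whose version matches the cluster
    export SPARK_HOME=/usr/local/lib/spark
    # Directory holding the cluster's client config files
    # (core-site.xml, hdfs-site.xml, yarn-site.xml)
    export HADOOP_CONF_DIR=/usr/local/lib/hadoop/etc/hadoop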

Re: Zeppelin in local computer using yarn on distant cluster

2016-11-02 Thread Benoit Hanotte
I have only set HADOOP_CONF_DIR, as follows (my hadoop conf files are in /usr/local/lib/hadoop/etc/hadoop/, e.g. /usr/local/lib/hadoop/etc/hadoop/yarn-site.xml):

    #!/bin/bash
    #
    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements. See…
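The file is cut off at the Apache license header above; given the path stated in the message, the operative line further down is presumably just:

    export HADOOP_CONF_DIR=/usr/local/lib/hadoop/etc/hadoop/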

Re: Zeppelin in local computer using yarn on distant cluster

2016-11-02 Thread Hyung Sung Shim
Could you share your zeppelin-env.sh?

On Wed, Nov 2, 2016 at 4:57 PM, Benoit Hanotte wrote:
> Thanks for your reply,
> I have tried setting it within zeppelin-env.sh but it doesn't work any
> better.
>
> Thanks
>
> On Wed, Nov 2, 2016 at 2:13 AM, Hyung Sung Shim wrote:
> Hello.
> You should set the…

Re: spark streaming with Kafka

2016-11-02 Thread Mich Talebzadeh
This is a good question. Normally I create a streaming app (in Scala) using mvn or sbt with an uber jar file and run that with its dependencies. I tried to run the source code in Zeppelin after adding /home/hduser/jars/spark-streaming-kafka-assembly_2.10-1.6.1.jar to the dependencies, but it did not work. Th…
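For context, a minimal sketch of the kind of streaming code in question, using the Kafka direct-stream API that the spark-streaming-kafka-assembly_2.10-1.6.1 jar above provides (the broker address and topic name are placeholders):

    import kafka.serializer.StringDecoder
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    // In a Zeppelin paragraph the SparkContext is already available as `sc`
    val ssc = new StreamingContext(sc, Seconds(10))

    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092") // placeholder broker
    val topics = Set("test")                                          // placeholder topic

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // Print the number of messages received in each batch
    stream.map(_._2).count().print()

    ssc.start()
    ssc.awaitTermination()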

Re: Zeppelin in local computer using yarn on distant cluster

2016-11-02 Thread Benoit Hanotte
Thanks for your reply,
I have tried setting it within zeppelin-env.sh but it doesn't work any better.

Thanks

On Wed, Nov 2, 2016 at 2:13 AM, Hyung Sung Shim wrote:
> Hello.
> You should set the HADOOP_CONF_DIR to /usr/local/lib/hadoop/etc/hadoop/
> in the conf/zeppelin-env.sh.
> Thanks.
> 2016…