Tried the sample code in both Zeppelin and spark-shell, and got the same
error.
Please try following code as a workaround.
import org.apache.spark.sql.expressions.MutableAggregationBuffer
import org.apache.spark.sql.expressions.UserDefinedAggregateFunction
class GeometricMean extends UserDefinedAggregateFunction
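For reference, here is one way the truncated class might be completed — a sketch of a geometric-mean UDAF against the Spark 1.6/2.x UserDefinedAggregateFunction API, not necessarily the exact code from the original thread:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class GeometricMean extends UserDefinedAggregateFunction {
  // Input: a single Double column.
  def inputSchema: StructType = StructType(StructField("value", DoubleType) :: Nil)
  // Buffer: running count and running product.
  def bufferSchema: StructType = StructType(
    StructField("count", LongType) :: StructField("product", DoubleType) :: Nil)
  def dataType: DataType = DoubleType
  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = 0L
    buffer(1) = 1.0
  }
  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    buffer(0) = buffer.getLong(0) + 1
    buffer(1) = buffer.getDouble(1) * input.getDouble(0)
  }
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0) = buffer1.getLong(0) + buffer2.getLong(0)
    buffer1(1) = buffer1.getDouble(1) * buffer2.getDouble(1)
  }
  // Geometric mean: nth root of the product of n values.
  def evaluate(buffer: Row): Any =
    math.pow(buffer.getDouble(1), 1.0 / buffer.getLong(0))
}
```

Registered with sqlContext.udf.register("gm", new GeometricMean), it can then be used like any built-in aggregate in %sql or DataFrame code.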
Hi,
This feature is a work in progress here:
https://github.com/apache/zeppelin/pull/1539
Hopefully we can see this feature in master soon.
Thanks,
moon
On Wed, Nov 2, 2016 at 1:07 PM Chen Song wrote:
> Hello
>
> Is there a way to configure a JDBC interpreter to use the user id logged
> in instead of a static value?
Hello
Is there a way to configure a JDBC interpreter to use the user id logged in
instead of a static value? Something like shown below:
default.user -> jdbc_user
to
default.user -> ${user_id}
Chen
I am pointing to the dirs on my local machine; what I want is simply for my
jobs to be submitted to the remote YARN cluster.
Thanks
On Wed, Nov 2, 2016 at 4:00 PM, Abhi Basu <9000r...@gmail.com> wrote:
> I am assuming you are pointing to hadoop/spark on remote host, right? Can
> you not point hadoop conf and spark dirs to remote machine?
I am assuming you are pointing to hadoop/spark on remote host, right? Can
you not point hadoop conf and spark dirs to remote machine? Not sure if
this works, just suggesting, others may have tried.
On Wed, Nov 2, 2016 at 9:58 AM, Hyung Sung Shim wrote:
> Hello.
> You don't need to install hadoop on your machine but you need a proper
> version of spark to use spark-submit.
Hello.
You don't need to install hadoop on your machine, but you need a proper
version of spark[0] to use spark-submit.
Then you can set[1] SPARK_HOME to where Spark is installed, set
HADOOP_CONF_DIR, and set master to yarn-client for your Spark interpreter
in the interpreter menu.
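Concretely, the settings described above might look like this (the install paths are assumptions; adjust them to your machine):

```shell
# conf/zeppelin-env.sh
# SPARK_HOME points at the Spark installation; HADOOP_CONF_DIR at the
# directory containing yarn-site.xml, core-site.xml, etc.
export SPARK_HOME=/usr/local/spark        # assumed install path
export HADOOP_CONF_DIR=/etc/hadoop/conf   # assumed conf path

# Then in the Zeppelin UI, under Interpreter -> spark, set:
#   master = yarn-client
```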
[0]
http://spark.apache.org
I have only set HADOOP_CONF_DIR as following (my hadoop conf files are in
/usr/local/lib/hadoop/etc/hadoop/, eg
/usr/local/lib/hadoop/etc/hadoop/yarn-site.xml):
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
Could you share your zeppelin-env.sh ?
On Wed, Nov 2, 2016 at 4:57 PM, Benoit Hanotte wrote:
> Thanks for your reply,
> I have tried setting it within zeppelin-env.sh but it doesn't work any
> better.
>
> Thanks
>
> On Wed, Nov 2, 2016 at 2:13 AM, Hyung Sung Shim wrote:
>
> Hello.
> You should set the HADOOP_CONF_DIR to /usr/local/lib/hadoop/etc/hadoop/
> in the conf/zeppelin-env.sh.
This is a good question.
Normally I create a streaming app (in Scala) using mvn or sbt, build an uber
jar with the dependencies, and run that. I tried to run the source code in
Zeppelin after adding
/home/hduser/jars/spark-streaming-kafka-assembly_2.10-1.6.1.jar
to the dependencies, but it did not work.
Thanks
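For what it's worth, one common way to hand such a jar to the Spark interpreter is via SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh — a sketch using the jar path mentioned above:

```shell
# conf/zeppelin-env.sh
# Extra options passed to spark-submit when Zeppelin launches the
# Spark interpreter; --jars ships the Kafka assembly to the executors.
export SPARK_SUBMIT_OPTIONS="--jars /home/hduser/jars/spark-streaming-kafka-assembly_2.10-1.6.1.jar"
```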
Thanks for your reply,
I have tried setting it within zeppelin-env.sh but it doesn't work any
better.
Thanks
On Wed, Nov 2, 2016 at 2:13 AM, Hyung Sung Shim wrote:
> Hello.
> You should set the HADOOP_CONF_DIR to /usr/local/lib/hadoop/etc/hadoop/
> in the conf/zeppelin-env.sh.
> Thanks.
> 2016
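Put another way, the quoted advice amounts to a single line in conf/zeppelin-env.sh (path taken from the message above):

```shell
# conf/zeppelin-env.sh
# Tell the Spark interpreter where the Hadoop client configs live so
# spark-submit can locate the YARN ResourceManager.
export HADOOP_CONF_DIR=/usr/local/lib/hadoop/etc/hadoop/
```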