Hi,
I am facing some issues while running the Zeppelin Spark interpreter in
yarn-client mode. I get the following error in the
zeppelin-interpreter-spark*.log:
ERROR [2015-06-25 22:17:56,374] ({pool-1-thread-4}
ProcessFunction.java[process]:41) - Internal error processing getProgress
org.apache.z
> Caused by: java.lang.UnsupportedOperationException: Not implemented by the
> TFS FileSystem implementation
>
> It looks like you need to add libraries for the TFS FileSystem to the
> classpath of Zeppelin's Spark interpreter.
>
> Best,
> moon
>
>
> On Thu,
Could this be related: https://issues.apache.org/jira/browse/SPARK-8385 ?
I'm not sure, since I'm building with the spark-1.2 profile.
On Thu, Jun 25, 2015 at 11:22 PM, Udit Mehta wrote:
> Hi,
>
> Thanks for the reply, moon. I don't see a reason to add TFS-based libraries
> when I am not using TFS.
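For reference, if the libraries do turn out to be needed, one way to get the
missing classes onto the Spark interpreter's classpath is Zeppelin's
dependency loader. This is only a sketch: it assumes the TFS FileSystem ships
in the Tachyon client artifact, and the coordinates below are illustrative,
so match them to the Tachyon version your Spark build expects.

%dep
// Clear previously loaded artifacts, then pull the Tachyon client (which
// would provide the TFS FileSystem implementation) from Maven Central.
// Version is illustrative; align it with your Spark/Tachyon build.
z.reset()
z.load("org.tachyonproject:tachyon-client:0.6.4")

Run this in its own paragraph before the first %spark paragraph, since
dependencies must be loaded before the Spark interpreter starts.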
Hi,
I am running Spark on Zeppelin and trying to create some temp tables to
run SQL queries on.
I have JSON data on HDFS which I am trying to load as a jsonRDD.
Here are my commands:
val data = sc.sequenceFile("/user/ds=01-02-2015/hour=2/*", classOf[Null],
classOf[org.apache.hadoop.io.Text]).map
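The paragraph is cut off above, but a minimal sketch of the intended flow
might look like the following, assuming each record's value holds one JSON
document. Note that classOf[Null] will not compile in Scala; NullWritable is
presumably what was meant, and the "events" table name is hypothetical.

val data = sc.sequenceFile("/user/ds=01-02-2015/hour=2/*",
    classOf[org.apache.hadoop.io.NullWritable],
    classOf[org.apache.hadoop.io.Text])
  .map(_._2.toString) // copy out the JSON text; Hadoop reuses Text objects
val events = sqlContext.jsonRDD(data) // infer a schema (Spark 1.2 SQL API)
events.registerTempTable("events")    // hypothetical table name
sqlContext.sql("SELECT COUNT(*) FROM events").collect()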
Hi,
I was trying to figure out if it's possible to update the Zeppelin
interpreter settings using a REST endpoint. Is this possible, and is there
any documentation around it?
Thanks in advance,
Udit
>>> (GET) http://localhost:8080/api/interpreter/setting will list
>>> interpreter setting
>>> (PUT) http://localhost:8080/api/interpreter/setting/{settingId} will
>>> update setting
>>> (DELETE) http://localhost:8080/api/interpreter/setting/{settingId} will
>>> delete the setting
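To make the update endpoint concrete, here is a minimal sketch of a PUT call
from Scala; the setting id and the property being changed are hypothetical,
and the request body is assumed to mirror the JSON that the GET listing
returns for that setting.

import java.net.{HttpURLConnection, URL}

val settingId = "2AXXXXXXXX" // hypothetical id copied from the GET listing
val url = new URL(s"http://localhost:8080/api/interpreter/setting/$settingId")
val conn = url.openConnection().asInstanceOf[HttpURLConnection]
conn.setRequestMethod("PUT")
conn.setDoOutput(true)
conn.setRequestProperty("Content-Type", "application/json")
// Illustrative property only; send back the full properties map you GET.
val body = """{"properties": {"spark.executor.memory": "2g"}}"""
conn.getOutputStream.write(body.getBytes("UTF-8"))
println("HTTP " + conn.getResponseCode)
conn.disconnect()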
Hi,
Is it possible to run multiple Zeppelin instances from the same Zeppelin
installation? I want to start multiple daemons on different ports, all from
a single installation.
Thanks in advance,
Udit
Hi,
I am trying to load a Spark dependency using the following in my Zeppelin
notebook:
%dep
z.load("com.databricks:spark-csv_2.10:1.0.3")
Doing this I get the following error:
ERROR [2015-08-19 21:11:52,511] ({pool-2-thread-2} Job.java[run]:183) - Job
failed
org.apache.zeppelin.interpreter.I
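For reference, a hedged sketch of how the dependency-loading paragraph is
usually written: %dep runs in its own paragraph, before any %spark paragraph
has started the interpreter, so ordering is one possible source of such
failures (an assumption; the follow-ups below settle on a permission issue).

%dep
// Clear previously loaded artifacts, then fetch spark-csv from Maven
// Central. Restart the Spark interpreter first if it is already running.
z.reset()
z.load("com.databricks:spark-csv_2.10:1.0.3")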
For some more context, I started the zeppelin daemon using:
$ZEPPELIN_HOME/bin/zeppelin-daemon.sh --config $HOME/zeppelin_conf start
I can run spark code in the notebook but the dependency loader throws
these errors.
On Wed, Aug 19, 2015 at 2:18 PM, Udit Mehta wrote:
> Hi,
>
> I am
Nevermind. Seems like it was a permission issue. Resolved now.
On Wed, Aug 19, 2015 at 2:27 PM, Udit Mehta wrote:
> For some more context, I started the zeppelin daemon using:
> $ZEPPELIN_HOME/bin/zeppelin-daemon.sh --config $HOME/zeppelin_conf start
>
> I can run spark code in
ies.
> Could you please let me know how exactly you resolved it?
>
> Thanks,
> Ashwin
>
> On Wed, Aug 19, 2015 at 4:26 PM, Udit Mehta wrote:
>
>> Nevermind. Seems like it was a permission issue. Resolved now.
>>
>> On Wed, Aug 19, 2015 at 2:27 PM, Udit Me
Hi All,
I keep getting this error when trying to run PySpark on *Zeppelin 0.5.6*:
Py4JJavaError: An error occurred while calling
> z:org.apache.spark.api.python.PythonRDD.runJob. :
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
> in stage 0.0 failed 4 times, most rece
Hi Udit,
>
> Seems like you are trying to import pyspark. What was the code you tried
> to execute? Could you share it?
>
> Paul
>
>
> On Thu, May 19, 2016 at 7:46 PM Udit Mehta wrote:
>
>> Hi All,
>>
>> I keep getting this error when trying to run Pysp