Hi Jeff

I've found another issue in both rc1 & rc2: if you don't specify SPARK_HOME,
the default Spark interpreter doesn't start, and I get the following error
when I execute the code for reading from Cassandra:

%spark

import org.apache.spark.sql.cassandra._
// read table "test" from keyspace "test" via the Spark Cassandra Connector
// and render it in the notebook
val data = spark.read.cassandraFormat("test", "test").load()
z.show(data)



org.apache.zeppelin.interpreter.InterpreterException: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
  at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:76)
  at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:760)
  at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
  at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
  at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
  at org.apache.zeppelin.scheduler.FIFOScheduler.lambda$runJobInScheduler$0(FIFOScheduler.java:39)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
  at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:76)
  at org.apache.spark.SparkConf.<init>(SparkConf.scala:71)
  at org.apache.spark.SparkConf.<init>(SparkConf.scala:58)
  at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:80)
  at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
  ... 8 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  ... 13 more
ERROR

This works just fine in preview1, without any additional configuration.
I remember that something like this was already reported, but I can't
find the JIRA.
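
As a workaround, specifying SPARK_HOME so that Zeppelin uses an external
Spark distribution avoids the problem. The missing class,
org.apache.hadoop.fs.FSDataInputStream, lives in hadoop-common, so it looks
like there are no Hadoop jars on the interpreter classpath at all. A minimal
sketch, assuming Spark is unpacked under /opt/spark (the path is just an
example):

# conf/zeppelin-env.sh - example path, adjust to your installation
export SPARK_HOME=/opt/spark

Once the interpreter starts, a quick check like this shows which Spark and
Hadoop versions it actually picked up:

%spark
// print the Spark version and the Hadoop version visible to the interpreter
println(spark.version)
println(org.apache.hadoop.util.VersionInfo.getVersion)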

What do you think?



On Mon, Jul 20, 2020 at 3:54 PM Jeff Zhang <zjf...@gmail.com> wrote:

> Sorry, the blocker issue in the Spark interpreter is this one:
> https://issues.apache.org/jira/browse/ZEPPELIN-4962
>
>
> Jeff Zhang <zjf...@gmail.com> wrote on Mon, Jul 20, 2020 at 9:53 PM:
>
>> Thanks Alex, I also found another blocker issue in the Spark interpreter:
>> https://issues.apache.org/jira/browse/ZEPPELIN-4912
>>
>> Folks,
>> I'd like to cancel this RC and will prepare another RC after these 2
>> blocker issues are fixed.
>>
>>
>>
>> Alex Ott <alex...@gmail.com> wrote on Mon, Jul 20, 2020 at 6:44 PM:
>>
>>> Hi Jeff
>>>
>>> I didn't identify the root cause (I'm not an HTML/JavaScript developer),
>>> but I fixed the issue. The PR is open:
>>> https://github.com/apache/zeppelin/pull/3858 - it's primarily HTML
>>> template changes, so it could be merged quite quickly, and then we can cut
>>> a new RC.
>>>
>>>
>>> On Sat, Jul 18, 2020 at 3:43 PM Jeff Zhang <zjf...@gmail.com> wrote:
>>>
>>>> Thanks for the feedback, @Alex Ott <alex...@gmail.com>. We can wait
>>>> for your fix, and everyone else can continue to test preview2.
>>>>
>>>>
>>>> Alex Ott <alex...@gmail.com> wrote on Sat, Jul 18, 2020 at 6:13 PM:
>>>>
>>>>> I'm hitting https://issues.apache.org/jira/browse/ZEPPELIN-4787 in the
>>>>> Cassandra interpreter and need to investigate why this happens - is it a
>>>>> result of the Cassandra interpreter refactoring, or something else...
>>>>> Everything was fine in preview1, and many templates that weren't
>>>>> affected by the refactoring are broken now.
>>>>>
>>>>> I would prefer to fix this for the preview, but I need 1-2 days for the
>>>>> investigation.
>>>>>
>>>>>
>>>>> On Fri, Jul 17, 2020 at 5:25 PM Jeff Zhang <zjf...@gmail.com> wrote:
>>>>>
>>>>> >
>>>>> > Hi folks,
>>>>> >
>>>>> > I propose the following RC to be released for the Apache Zeppelin
>>>>> 0.9.0-preview2 release.
>>>>> >
>>>>> >
>>>>> > The commit id is a74365c0813b451db1bc78def7d1ad1279429224 :
>>>>> https://gitbox.apache.org/repos/asf?p=zeppelin.git;a=commit;h=dd2058395ad4cf08fb6bdc901ec0c426c5095a94
>>>>> >
>>>>> > This corresponds to the tag v0.9.0-preview2-rc1:
>>>>> https://gitbox.apache.org/repos/asf?p=zeppelin.git;a=shortlog;h=refs/tags/v0.9.0-preview2-rc1
>>>>> >
>>>>> > The release archives (tgz), signature, and checksums are here
>>>>> https://dist.apache.org/repos/dist/dev/zeppelin/zeppelin-0.9.0-preview2-rc1/
>>>>> >
>>>>> > The release candidate consists of the following source distribution
>>>>> archive zeppelin-v0.9.0-preview2.tgz
>>>>> >
>>>>> > In addition, the following supplementary binary distributions are
>>>>> provided
>>>>> > for user convenience at the same location
>>>>> zeppelin-0.9.0-preview2-bin-all.tgz
>>>>> >
>>>>> >
>>>>> > The maven artifacts are here
>>>>> https://repository.apache.org/content/repositories/orgapachezeppelin-1279/org/apache/zeppelin/
>>>>> >
>>>>> > You can find the KEYS file here:
>>>>> https://dist.apache.org/repos/dist/release/zeppelin/KEYS
>>>>> >
>>>>> > Release notes available at
>>>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12342692&styleName=&projectId=12316221
>>>>> >
>>>>> > The vote will be open for the next 72 hours (closing at 8am PDT on 20 July).
>>>>> >
>>>>> > [ ] +1 approve
>>>>> > [ ] 0 no opinion
>>>>> > [ ] -1 disapprove (and reason why)
>>>>> >
>>>>> >
>>>>> > --
>>>>> > Best Regards
>>>>> >
>>>>> > Jeff Zhang
>>>>> >
>>>>>
>>>>>
>>>>> --
>>>>> With best wishes,                    Alex Ott
>>>>> http://alexott.net/
>>>>> Twitter: alexott_en (English), alexott (Russian)
>>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards
>>>>
>>>> Jeff Zhang
>>>>
>>>
>>>
>>> --
>>> With best wishes,                    Alex Ott
>>> http://alexott.net/
>>> Twitter: alexott_en (English), alexott (Russian)
>>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
>
>
> --
> Best Regards
>
> Jeff Zhang
>


-- 
With best wishes,                    Alex Ott
http://alexott.net/
Twitter: alexott_en (English), alexott (Russian)
