This issue was fixed a few days ago in the master branch.

Here is the PR:
https://github.com/apache/incubator-zeppelin/pull/673

And here are the related issues filed in JIRA earlier:
https://issues.apache.org/jira/browse/ZEPPELIN-194
https://issues.apache.org/jira/browse/ZEPPELIN-381

With the latest master branch, we recommend loading dependencies via the
interpreter setting menu instead of the %dep interpreter.

If you want to know how to set dependencies with the latest master branch,
please check the doc at
<https://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/manual/dependencymanagement.html>
and let me know if it works.
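
For example (just a sketch, and the exact artifact version here is my
assumption), for spark-csv you could edit the spark interpreter setting and add
the Maven coordinate

  com.databricks:spark-csv_2.10:1.3.0

in the interpreter's dependency section (or give the path to a local jar).
Transitive dependencies such as commons-csv and univocity-parsers should then
be resolved automatically, and the artifacts should be visible to %sql as well.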

Cheers,
Mina

On Tue, Feb 2, 2016 at 12:50 PM, Lin, Yunfeng <yunfeng....@citi.com> wrote:

> I’ve created an issue in JIRA:
>
>
>
> https://issues.apache.org/jira/browse/ZEPPELIN-648
>
>
>
> *From:* Benjamin Kim [mailto:bbuil...@gmail.com]
> *Sent:* Tuesday, February 02, 2016 3:34 PM
> *To:* us...@zeppelin.incubator.apache.org
> *Cc:* dev@zeppelin.incubator.apache.org
> *Subject:* Re: csv dependencies loaded in %spark but not %sql in spark
> 1.6/zeppelin 0.5.6
>
>
>
> Same here. I want to know the answer too.
>
>
>
>
>
> On Feb 2, 2016, at 12:32 PM, Jonathan Kelly <jonathaka...@gmail.com>
> wrote:
>
>
>
> Hey, I just ran into that same exact issue yesterday and wasn't sure if I
> was doing something wrong or what. Glad to know it's not just me!
> Unfortunately I have not yet had the time to look any deeper into it. Would
> you mind filing a JIRA if there isn't already one?
>
>
>
> On Tue, Feb 2, 2016 at 12:29 PM Lin, Yunfeng <yunfeng....@citi.com> wrote:
>
> Hi guys,
>
>
>
> I load the spark-csv dependencies and they are picked up in %spark, but not
> in %sql, using Apache Zeppelin 0.5.6 with Spark 1.6.0. Everything works fine
> in Zeppelin 0.5.5 with Spark 1.5, though.
>
>
>
> Do you have similar problems?
>
>
>
> I am loading the spark-csv dependencies
> (https://github.com/databricks/spark-csv).
>
>
>
> Using:
>
> %dep
> z.load("PATH/commons-csv-1.1.jar")
> z.load("PATH/spark-csv_2.10-1.3.0.jar")
> z.load("PATH/univocity-parsers-1.5.1.jar")
> z.load("PATH/scala-library-2.10.5.jar")
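>
> (I believe %dep can also resolve a Maven coordinate instead of individual jar
> paths, which would pull the transitive dependencies automatically; just a
> sketch, not what I actually ran:)
>
> %dep
> z.reset()
> z.load("com.databricks:spark-csv_2.10:1.3.0")
>
> In my case, though, I loaded the local jars directly as shown above.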
>
>
>
> I am able to load a CSV from HDFS using the DataFrame API in Spark. It runs
> perfectly fine.
>
> %spark
>
> val df = sqlContext.read
>     .format("com.databricks.spark.csv")
>     .option("header", "false")       // Use first line of all files as header
>     .option("inferSchema", "true")   // Automatically infer data types
>     .load("hdfs://sd-6f48-7fe6:8020/tmp/people.txt")  // this is a file in HDFS
>
> df.registerTempTable("people")
> df.show()
>
>
>
> This also works:
>
> %spark
>
> val df2 = sqlContext.sql("select * from people")
>
> df2.show()
>
>
>
> But this doesn’t work….
>
> %sql
>
> select * from people
>
>
>
> java.lang.ClassNotFoundException: com.databricks.spark.csv.CsvRelation$$anonfun$1$$anonfun$2
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:270)
>   at org.apache.spark.util.InnerClosureFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:435)
>   at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>   at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>   at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>   at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>   at org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:84)
>   at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:187)
>   at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
>   at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
>   at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:707)
>   at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:706)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
>   at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:706)
>   at com.databricks.spark.csv.CsvRelation.tokenRdd(CsvRelation.scala:90)
>   at com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:104)
>   at com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:152)
>   at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$4.apply(DataSourceStrategy.scala:64)
>   at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$4.apply(DataSourceStrategy.scala:64)
>   at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:274)
>   at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:273)
>   at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProjectRaw(DataSourceStrategy.scala:352)
>   at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProject(DataSourceStrategy.scala:269)
>   at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:60)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>   at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
>   at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:349)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>   at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
>   at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:47)
>   at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:45)
>   at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:52)
>   at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:52)
>   at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2134)
>   at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1413)
>   at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1495)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:297)
>   at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:144)
>   at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
>   at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
>   at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
>   at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
>   at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
>
>
>
>
>
>
>
> I notice that in the source code of the Spark interpreter (class
> org.apache.zeppelin.spark.ZeppelinContext) there is a difference between
> Spark 1.5 and Spark 1.6: Spark 1.5 uses SQLContext while Spark 1.6 uses
> HiveContext. Unfortunately, no matter whether zeppelin.spark.useHiveContext
> is set to true or false, %sql just can't find the csv dependencies ...
>
>
>
>
>
> try {
>   // Use reflection because the classname returned by queryExecution changes between versions:
>   //   Spark <1.5.2  org.apache.spark.sql.SQLContext$QueryExecution
>   //   Spark 1.6.0+  org.apache.spark.sql.hive.HiveContext$QueryExecution
>   Object qe = df.getClass().getMethod("queryExecution").invoke(df);
>   Object a = qe.getClass().getMethod("analyzed").invoke(qe);
>   scala.collection.Seq seq =
>       (scala.collection.Seq) a.getClass().getMethod("output").invoke(a);
>
>   columns = (List<Attribute>) scala.collection.JavaConverters
>       .seqAsJavaListConverter(seq)
>       .asJava();
> } catch (NoSuchMethodException | SecurityException | IllegalAccessException
>     | IllegalArgumentException | InvocationTargetException e) {
>   throw new InterpreterException(e);
> }
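>
> As a quick sanity check (just a sketch on my side, not verified), this shows
> which context class the notebook paragraph is actually bound to:
>
> %spark
> // prints org.apache.spark.sql.SQLContext or org.apache.spark.sql.hive.HiveContext
> println(sqlContext.getClass.getName)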
>
>
>
> Yunfeng
>
>
>
