Hi Damien,

There are similar issues reported in JIRA (ZEPPELIN-381
<https://issues.apache.org/jira/browse/ZEPPELIN-381> and ZEPPELIN-194
<https://issues.apache.org/jira/browse/ZEPPELIN-194>) where Spark SQL
doesn't work in Zeppelin when the table was created with an external library.
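
From your stack trace it looks like the ClassNotFoundException is thrown
while SparkSqlInterpreter resolves the table's SerDe, i.e. the %sql
interpreter's classloader doesn't see your jar even though the Spark driver
does. That would explain why the %spark paragraph works while %sql fails.
You can check this from a notebook paragraph; this is only a diagnostic
sketch, with the class name taken from your stack trace:

    %spark

    // throws ClassNotFoundException when the SerDe jar is not visible
    // to this interpreter's context classloader
    Class.forName("ibp.big.hive.serde.CSVSerde", true,
      Thread.currentThread().getContextClassLoader)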

I will look into this issue and ping you back when I submit the patch.
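
In the meantime, a workaround you could try (a sketch only, I haven't
verified it against your setup) is to make the SerDe jar visible to the
interpreter process itself, for example by loading it with the %dep
interpreter before the Spark context starts:

    %dep

    // example path only -- point this at the jar that contains
    // ibp.big.hive.serde.CSVSerde
    z.load("/path/to/csv-serde.jar")

or by adding it to SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh (again, the
path is an example):

    export SPARK_SUBMIT_OPTIONS="--jars /path/to/csv-serde.jar"

Note that %dep has to run before the first %spark or %sql paragraph, so
restart the interpreter if it is already running.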



On Mon, Jan 11, 2016 at 9:19 AM, COUERON Damien (i-BP - MICROPOLE) <
damien.coueron_s...@i-bp.fr> wrote:

> Hi,
>
>
>
> I’m trying to run SQL queries with Spark on tables that were created with
> a custom SerDe.
>
> Spark is configured to load our custom jar, but in some cases Zeppelin
> does not seem to be able to do the same.
>
>
>
> For example, the following query works in Zeppelin AND in the spark-shell
> command-line client:
>
>
>
>                 %spark
>
>                 sqlContext.sql("select * from dcou.storage_wsg_webapi limit 10")
>                   .collect().foreach(println)
>
>
>
> But the following query works in the spark-sql command-line client but
> does not work in Zeppelin:
>
>
>
>                 %sql
>
>                 select * from dcou.storage_wsg_webapi limit 10
>
>
>
> Could you please help me understand?
>
>
>
> Here is the error I get in Zeppelin:
>
>
>
> java.lang.ClassNotFoundException: ibp.big.hive.serde.CSVSerde
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>        at java.lang.Class.forName0(Native Method)
>        at java.lang.Class.forName(Class.java:348)
>        at org.apache.spark.sql.hive.MetastoreRelation.<init>(HiveMetastoreCatalog.scala:701)
>        at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:248)
>        at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:373)
>        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:165)
>        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:165)
>        at scala.Option.getOrElse(Option.scala:120)
>        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:165)
>        at org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:373)
>        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:222)
>        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$7.applyOrElse(Analyzer.scala:233)
>        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$7.applyOrElse(Analyzer.scala:229)
>        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
>        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
>        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
>        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:221)
>        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:242)
>        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>        at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:272)
>        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:227)
>        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:242)
>        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>        at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:272)
>        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:227)
>        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:212)
>        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:229)
>        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:219)
>        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:61)
>        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:59)
>        at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
>        at scala.collection.immutable.List.foldLeft(List.scala:84)
>        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:59)
>        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:51)
>        at scala.collection.immutable.List.foreach(List.scala:318)
>        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:51)
>        at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:933)
>        at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:933)
>        at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:931)
>        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
>        at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
>        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>        at java.lang.reflect.Method.invoke(Method.java:497)
>        at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:136)
>        at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
>        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
>        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
>        at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
>        at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
>        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>        at java.lang.Thread.run(Thread.java:745)
>
>
>
> Best regards,
>
> Damien
>
