[ https://issues.apache.org/jira/browse/CALCITE-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16392184#comment-16392184 ]

ASF GitHub Bot commented on CALCITE-1806:
-----------------------------------------

GitHub user risdenk opened a pull request:

    https://github.com/apache/calcite-avatica/pull/28

    [WIP][CALCITE-1806] Add Apache Spark JDBC test to Avatica server

    This branch is a work in progress showing that Apache Spark and Avatica do not currently play together nicely: a Spark JDBC read against Avatica returns an empty result even though it resolves the correct schema.
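
    For context, a minimal sketch of the kind of check this branch exercises. This is not the PR's actual test code; the endpoint URL and table name are placeholders, and a real test would start an in-process Avatica server first.

    {noformat}
    import org.apache.spark.sql.SparkSession

    object AvaticaSparkSmokeTest {
      def main(args: Array[String]): Unit = {
        // Local Spark session just for the smoke test.
        val spark = SparkSession.builder()
          .appName("avatica-spark-smoke")
          .master("local[*]")
          .getOrCreate()

        // Placeholder URL and table name, not the PR's real fixtures.
        val df = spark.read.format("jdbc")
          .option("url", "jdbc:avatica:remote:url=http://localhost:8765")
          .option("dbtable", "TEST_TABLE")
          .option("driver", "org.apache.calcite.avatica.remote.Driver")
          .load()

        df.printSchema()                  // schema resolution succeeds
        println(s"rows = ${df.count()}")  // the reported bug: this comes back empty
        spark.stop()
      }
    }
    {noformat}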

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/risdenk/calcite-avatica CALCITE-1806

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/calcite-avatica/pull/28.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #28
    
----
commit 311ed286cdf2807e8daf84b52b5956a4b54d0a74
Author: Kevin Risden <krisden@...>
Date:   2018-03-09T00:41:54Z

    [CALCITE-1806] Add Apache Spark JDBC test to Avatica server

----


> UnsupportedOperationException accessing Druid through Spark JDBC
> ----------------------------------------------------------------
>
>                 Key: CALCITE-1806
>                 URL: https://issues.apache.org/jira/browse/CALCITE-1806
>             Project: Calcite
>          Issue Type: Bug
>          Components: avatica
>    Affects Versions: avatica-1.9.0
>            Environment: Spark 1.6, Scala 2.10, CDH 5.7
>            Reporter: Benjamin Vogan
>            Priority: Major
>
> I am interested in querying Druid via Spark. I realize that there are several mechanisms for doing this, but I was curious about using the JDBC interface offered by the latest release, as it is familiar to our analysts and seems like it should be a well-supported path going forward.
> My first attempt failed with an UnsupportedOperationException. I ran spark-shell with the --jars option to add the Avatica 1.9.0 JDBC driver jar.
> {noformat}
> scala> val dw2 = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:avatica:remote:url=http://jarvis-druid-query002:8082/druid/v2/sql/avatica/", "dbtable" -> "sor_business_events_all", "driver" -> "org.apache.calcite.avatica.remote.Driver", "fetchSize"->"10000")).load()
> java.lang.UnsupportedOperationException
>       at org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:275)
>       at org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:121)
>       at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:122)
>       at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>       at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>       at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>       at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>       at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>       at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>       at $iwC$$iwC$$iwC.<init>(<console>:38)
>       at $iwC$$iwC.<init>(<console>:40)
>       at $iwC.<init>(<console>:42)
>       at <init>(<console>:44)
>       at .<init>(<console>:48)
>       at .<clinit>(<console>)
>       at .<init>(<console>:7)
>       at .<clinit>(<console>)
>       at $print(<console>)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
>       at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
>       at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
>       at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
>       at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
>       at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>       at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>       at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>       at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>       at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>       at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>       at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>       at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>       at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1064)
>       at org.apache.spark.repl.Main$.main(Main.scala:31)
>       at org.apache.spark.repl.Main.main(Main.scala)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>       at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>       at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>       at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>       at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {noformat}
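
The trace shows the failure starting in Spark's JDBCRDD.resolveTable, which probes the table schema through Connection.prepareStatement on the Avatica connection. A minimal sketch that exercises the same path without Spark might look like the following; the host is a placeholder (the table name is taken from the command above), and this is only an isolation aid, not a confirmed reproduction.

{noformat}
import java.sql.DriverManager

object AvaticaPrepareRepro {
  def main(args: Array[String]): Unit = {
    // Placeholder endpoint: substitute the real Druid/Avatica URL.
    val url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/"
    Class.forName("org.apache.calcite.avatica.remote.Driver")
    val conn = DriverManager.getConnection(url)
    try {
      // Mirrors Spark's schema probe: prepare a query that returns no rows
      // and inspect its metadata. If Avatica's prepareStatement path is the
      // problem, this should throw the same UnsupportedOperationException.
      val stmt = conn.prepareStatement("SELECT * FROM sor_business_events_all WHERE 1 = 0")
      val md = stmt.executeQuery().getMetaData
      for (i <- 1 to md.getColumnCount) {
        println(s"${md.getColumnName(i)}: ${md.getColumnTypeName(i)}")
      }
      stmt.close()
    } finally {
      conn.close()
    }
  }
}
{noformat}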


