[ https://issues.apache.org/jira/browse/CALCITE-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394529#comment-16394529 ]

ASF GitHub Bot commented on CALCITE-1806:
-----------------------------------------

Github user risdenk commented on the issue:

    https://github.com/apache/calcite-avatica/pull/28
  
    So it looks like Spark is setting the fetch size to 0 on the prepared
    statement, which causes maxRows to be set to 0 during the request.

    For standard JDBC, the execute request contains `"maxRowCount":100`, which
    results in at least 100 rows.
    For Spark, the execute request contains `"maxRowCount":0`, which results
    in 0 rows.

    I can reproduce this by calling `setFetchSize(0)` on the PreparedStatement
    before executing it. Using `setMaxRows(n)` seems to have no effect. It
    looks like the two (fetch size and max rows) are being confused somewhere
    in Avatica.
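
For reference, a minimal plain-JDBC sketch of the behaviour described in the comment above is shown below. The endpoint URL is a placeholder and the table name is taken from the issue description; the point of the sketch is that calling `setFetchSize(0)` before execution is what yields `"maxRowCount":0` (and zero rows), while `setMaxRows(n)` appears to have no effect.

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeRepro {
  public static void main(String[] args) throws Exception {
    // The Avatica remote driver registers itself via the JDBC service loader
    // when its jar is on the classpath; loading it explicitly does not hurt.
    Class.forName("org.apache.calcite.avatica.remote.Driver");

    // Placeholder endpoint; the table name is the one from the issue description.
    String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
    try (Connection conn = DriverManager.getConnection(url);
         PreparedStatement ps =
             conn.prepareStatement("SELECT * FROM sor_business_events_all")) {
      ps.setFetchSize(0);  // mirrors what Spark does; observed to send "maxRowCount":0
      ps.setMaxRows(100);  // observed to have no effect on the request
      int count = 0;
      try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
          count++;
        }
      }
      // With the behaviour described in the comment, this prints "rows: 0".
      System.out.println("rows: " + count);
    }
  }
}
{noformat}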


> UnsupportedOperationException accessing Druid through Spark JDBC
> ----------------------------------------------------------------
>
>                 Key: CALCITE-1806
>                 URL: https://issues.apache.org/jira/browse/CALCITE-1806
>             Project: Calcite
>          Issue Type: Bug
>          Components: avatica
>    Affects Versions: avatica-1.9.0
>         Environment: Spark 1.6, scala 10, CDH 5.7
>            Reporter: Benjamin Vogan
>            Priority: Major
>
> I am interested in querying Druid via Spark.  I realize that there are 
> several mechanisms for doing this, but I was curious about using the JDBC 
> driver offered by the latest release as it is familiar to our analysts and 
> seems like it should be a well-supported path going forward.
> My first attempt has failed with an UnsupportedOperationException.  I ran 
> spark-shell with the --jars option to add the avatica 1.9.0 jdbc driver jar.
> {noformat}
> scala> val dw2 = sqlContext.read.format("jdbc").options(Map("url" -> 
> "jdbc:avatica:remote:url=http://jarvis-druid-query002:8082/druid/v2/sql/avatica/",
>  "dbtable" -> "sor_business_events_all", "driver" -> 
> "org.apache.calcite.avatica.remote.Driver", "fetchSize" -> "10000")).load()
> java.lang.UnsupportedOperationException
>       at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:275)
>       at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:121)
>       at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:122)
>       at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>       at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>       at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>       at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>       at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>       at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>       at $iwC$$iwC$$iwC.<init>(<console>:38)
>       at $iwC$$iwC.<init>(<console>:40)
>       at $iwC.<init>(<console>:42)
>       at <init>(<console>:44)
>       at .<init>(<console>:48)
>       at .<clinit>(<console>)
>       at .<init>(<console>:7)
>       at .<clinit>(<console>)
>       at $print(<console>)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
>       at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
>       at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
>       at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
>       at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
>       at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>       at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>       at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>       at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>       at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>       at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>       at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>       at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>       at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>       at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>       at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>       at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1064)
>       at org.apache.spark.repl.Main$.main(Main.scala:31)
>       at org.apache.spark.repl.Main.main(Main.scala)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at 
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>       at 
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>       at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>       at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>       at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
