[ https://issues.apache.org/jira/browse/CALCITE-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16396109#comment-16396109 ]

ASF GitHub Bot commented on CALCITE-1806:
-----------------------------------------

Github user joshelser commented on the issue:

    https://github.com/apache/calcite-avatica/pull/28
  
    > I can reproduce this by setFetchSize(0) on the PreparedStatement before executing it.
    
    Oof. Getting fetchSize and maxResultSize mixed up is bad.
    
    > I found a solution to the setFetchSize issue. However, I'm not entirely
    > sure it is correct. It looks like fetchSize, maxRowsInFirstFrame, and
    > maxRowCount could be misused throughout Avatica. It looks like at some
    > point maxRowsInFirstFrame was added but not all the variable names were
    > updated? I don't think I understand the difference between fetchSize and
    > maxRowsInFirstFrame, since they should be similar if not the same?
    
    `maxRowsInFirstFrame` and `maxRowCount` are irritating. They came out of
    fixing an old bug in a backwards-compatible manner. IIRC, `maxRowCount` is
    supposed to be a global "I want no more than $maxRowCount rows from this
    query", whereas `maxRowsInFirstFrame` is a protocol-specific limit on the
    number of rows we'll pack into a single ResultSetResponse message from
    server->client.
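
    To make that distinction concrete, here is a small sketch (my own
    illustration, not Avatica's actual code) of how the two limits would
    interact for a single query:

    ```java
    // Illustration only, not Avatica's actual code: how the two limits interact.
    // maxRowCount caps the total rows a query may return; maxRowsInFirstFrame caps
    // only how many of those rows ride along in the initial ResultSetResponse.
    public class FrameLimitSketch {
      public static void main(String[] args) {
        long totalRowsAvailable = 5_000;  // hypothetical size of the full result
        long maxRowCount = 1_000;         // global cap: at most 1000 rows from this query
        int maxRowsInFirstFrame = 100;    // protocol cap: rows in the first server->client message

        long capped = Math.min(totalRowsAvailable, maxRowCount);
        int firstFrame = (int) Math.min(capped, maxRowsInFirstFrame);
        long laterFetches = capped - firstFrame;  // remainder arrives via subsequent fetches

        System.out.println("rows in first frame:  " + firstFrame);    // 100
        System.out.println("rows in later frames: " + laterFetches);  // 900
      }
    }
    ```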
    
    I don't recall how fetchSize fits into this, but it seems to be getting
    used somewhere inside Avatica instead of just being set onto the "real"
    Statement inside the server. (Avatica could try to do something smart with
    the fetchSize, but an acceptable starting point would be to just pass the
    value along; see the sketch below.)
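
    For the "just pass it along" option, the change would look roughly like
    this (a sketch against a hypothetical server-side handler, not Avatica's
    actual JdbcMeta code; the class and method names are made up):

    ```java
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch of the pass-through approach: hand the client's fetchSize straight
    // to the backing JDBC Statement instead of using it anywhere in Avatica's
    // own row-limiting logic.
    public class FetchSizePassThrough {
      static Statement createBackingStatement(Connection realConnection,
          int clientFetchSize) throws SQLException {
        Statement stmt = realConnection.createStatement();
        if (clientFetchSize > 0) {  // 0 means "let the driver decide" per the JDBC spec
          stmt.setFetchSize(clientFetchSize);
        }
        return stmt;
      }
    }
    ```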
    
    Does that help? If you have a unit test you can share, that would help me
    get up to speed on debugging this.
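
    In case it's useful, something along these lines should be enough to
    demonstrate the setFetchSize(0) path described above (a sketch; the URL is
    a placeholder for any reachable Avatica server):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Minimal reproduction sketch for the setFetchSize(0) failure.
    public class FetchSizeRepro {
      public static void main(String[] args) throws Exception {
        // Placeholder endpoint; point this at any running Avatica server.
        String url = "jdbc:avatica:remote:url=http://localhost:8765";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement("SELECT 1")) {
          ps.setFetchSize(0);  // legal per JDBC: 0 means "let the driver decide"
          try (ResultSet rs = ps.executeQuery()) {  // fails when fetchSize is misused
            while (rs.next()) {
              System.out.println(rs.getInt(1));
            }
          }
        }
      }
    }
    ```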


> UnsupportedOperationException accessing Druid through Spark JDBC
> ----------------------------------------------------------------
>
>                 Key: CALCITE-1806
>                 URL: https://issues.apache.org/jira/browse/CALCITE-1806
>             Project: Calcite
>          Issue Type: Bug
>          Components: avatica
>    Affects Versions: avatica-1.9.0
>         Environment: Spark 1.6, scala 10, CDH 5.7
>            Reporter: Benjamin Vogan
>            Priority: Major
>
> I am interested in querying Druid via Spark. I realize that there are
> several mechanisms for doing this, but I was curious about using the JDBC
> batch offered by the latest release, as it is familiar to our analysts and
> seems like it should be a well-supported path going forward.
> My first attempt failed with an UnsupportedOperationException. I ran
> spark-shell with the --jars option to add the avatica 1.9.0 JDBC driver jar.
> {noformat}
> scala> val dw2 = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:avatica:remote:url=http://jarvis-druid-query002:8082/druid/v2/sql/avatica/", "dbtable" -> "sor_business_events_all", "driver" -> "org.apache.calcite.avatica.remote.Driver", "fetchSize" -> "10000")).load()
> java.lang.UnsupportedOperationException
>       at org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:275)
>       at org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:121)
>       at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:122)
>       at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>       at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>       at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>       at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>       at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>       at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>       at $iwC$$iwC$$iwC.<init>(<console>:38)
>       at $iwC$$iwC.<init>(<console>:40)
>       at $iwC.<init>(<console>:42)
>       at <init>(<console>:44)
>       at .<init>(<console>:48)
>       at .<clinit>(<console>)
>       at .<init>(<console>:7)
>       at .<clinit>(<console>)
>       at $print(<console>)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
>       at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
>       at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
>       at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
>       at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
>       at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>       at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>       at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>       at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>       at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>       at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>       at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>       at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>       at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1064)
>       at org.apache.spark.repl.Main$.main(Main.scala:31)
>       at org.apache.spark.repl.Main.main(Main.scala)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>       at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>       at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>       at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>       at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {noformat}


