juliuszsompolski commented on a change in pull request #25721:
[WIP][SPARK-29018][SQL] Implement Spark Thrift Server with it's own code base
on PROTOCOL_VERSION_V9
URL: https://github.com/apache/spark/pull/25721#discussion_r338120177
##########
File path:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/cli/operation/SparkExecuteStatementOperation.scala
##########
@@ -134,31 +140,36 @@ private[hive] class SparkExecuteStatementOperation(
resultRowSet
} else {
// maxRowsL here typically maps to java.sql.Statement.getFetchSize, which is an int
- val maxRows = maxRowsL.toInt
+ val maxRows = maxRowsL
var curRow = 0
while (curRow < maxRows && iter.hasNext) {
val sparkRow = iter.next()
- val row = ArrayBuffer[Any]()
- var curCol = 0
- while (curCol < sparkRow.length) {
- if (sparkRow.isNullAt(curCol)) {
- row += null
- } else {
- addNonNullColumnValue(sparkRow, row, curCol)
Review comment:
Using RowSet with Spark Rows is fine, but if you now just call
resultRowSet.addRow(sparkRow), it will no longer convert Array, Map, Struct,
or Interval values to String the way addNonNullColumnValue did, so you still
need to add those conversions somewhere. The easiest fix would probably be a
Project with casts added on top of the query where needed.
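
As a rough illustration of the suggestion above, one could walk the result schema and cast only the complex-typed columns to String before execution. This is a hedged sketch, not the PR's actual code; the helper name `castComplexToString` and the exact set of types are assumptions based on what addNonNullColumnValue stringified:

```scala
// Hypothetical sketch: add a Project (via select) with casts on top of the
// query result so complex types reach the RowSet as strings. Assumes a
// org.apache.spark.sql.DataFrame `df` holding the statement's result.
import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{ArrayType, CalendarIntervalType, MapType, StringType, StructType}

def castComplexToString(df: DataFrame): DataFrame = {
  val projected: Seq[Column] = df.schema.fields.map { f =>
    f.dataType match {
      // Types that addNonNullColumnValue rendered as strings before this change
      case _: ArrayType | _: MapType | _: StructType | CalendarIntervalType =>
        col(f.name).cast(StringType).as(f.name)
      case _ =>
        col(f.name) // leave primitive columns untouched
    }
  }
  df.select(projected: _*)
}
```

Rows from the projected DataFrame could then be passed to resultRowSet.addRow directly, since every remaining column type is one the RowSet can serialize as-is.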
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services