MaxGekk commented on a change in pull request #32121:
URL: https://github.com/apache/spark/pull/32121#discussion_r611148067



##########
File path: sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
##########
@@ -122,6 +124,12 @@ private[hive] class SparkExecuteStatementOperation(
           timeFormatters)
       case _: ArrayType | _: StructType | _: MapType | _: UserDefinedType[_] =>
        to += toHiveString((from.get(ordinal), dataTypes(ordinal)), false, timeFormatters)
+      case YearMonthIntervalType =>

Review comment:
       If you mean the array and map types: they are converted to strings, so they don't have a structure that the Hive lib could recognize. I don't think I can do anything about that in this PR; it is a restriction of the current implementation and should be solved in a general way.
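
       For illustration only, here is a minimal, self-contained sketch (not the PR's actual code) of how a year-month interval value could be rendered as a plain string before being sent to a Thrift client, assuming the external value is a `java.time.Period`; the object and helper name below are hypothetical:

```scala
import java.time.Period

// Hypothetical sketch: render a year-month interval as an ANSI "Y-M" string,
// similar in spirit to how toHiveString turns other values into plain text.
object YearMonthIntervalSketch {
  def toYearMonthIntervalString(p: Period): String = {
    // Normalize to total months, then split into years and remaining months.
    val totalMonths = p.toTotalMonths
    val sign = if (totalMonths < 0) "-" else ""
    val abs = math.abs(totalMonths)
    s"$sign${abs / 12}-${abs % 12}"
  }

  def main(args: Array[String]): Unit = {
    println(toYearMonthIntervalString(Period.of(1, 2, 0)))   // prints 1-2
    println(toYearMonthIntervalString(Period.of(-1, -2, 0))) // prints -1-2
  }
}
```

       Once a value has been flattened to a string like this, any internal structure is gone, which is the same limitation described above for arrays and maps.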



