yaooqinn commented on a change in pull request #32121:
URL: https://github.com/apache/spark/pull/32121#discussion_r611188787



##########
File path: sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
##########
@@ -122,6 +124,12 @@ private[hive] class SparkExecuteStatementOperation(
           timeFormatters)
       case _: ArrayType | _: StructType | _: MapType | _: UserDefinedType[_] =>
        to += toHiveString((from.get(ordinal), dataTypes(ordinal)), false, timeFormatters)
+      case YearMonthIntervalType =>

Review comment:
       I mean the client side gets the plain YearMonthIntervalType/DayTimeIntervalType values via `HiveIntervalYearMonth/DayTimeIntervalType`.toString, while the ones inside nested types are handled by `toHiveString`. The results look the same, but these branches seem redundant and may cause inconsistency if the underlying Hive implementation changes.
   
   Would
   ```scala
   case YearMonthIntervalType | DayTimeIntervalType | _: ArrayType | _: StructType | _: MapType | _: UserDefinedType[_] =>
   ```
   be better, so that we handle them consistently on the Spark side?
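   
   For reference, a minimal sketch of what the consolidated branch would look like; the body line is taken from the diff context above, and the enclosing `addNonNullColumnValue` method is assumed from the file rather than shown in this hunk:
   ```scala
   // Sketch: interval types share the nested-type branch, so every such value
   // is rendered via Spark's own toHiveString instead of Hive's interval classes.
   case YearMonthIntervalType | DayTimeIntervalType | _: ArrayType | _: StructType |
       _: MapType | _: UserDefinedType[_] =>
     to += toHiveString((from.get(ordinal), dataTypes(ordinal)), false, timeFormatters)
   ```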
   



