cloud-fan commented on a change in pull request #32452:
URL: https://github.com/apache/spark/pull/32452#discussion_r628193572



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/TypeUtils.scala
##########
@@ -116,7 +116,7 @@ object TypeUtils {
 
   def invokeOnceForInterval(dataType: DataType)(f: => Unit): Unit = {
     def isInterval(dataType: DataType): Boolean = dataType match {
-      case CalendarIntervalType | DayTimeIntervalType | YearMonthIntervalType => true

Review comment:
       This is to allow writing out interval values. Is it really related to columnar execution?
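
       For context, the pattern arm shown in the diff relies on CalendarIntervalType, DayTimeIntervalType and YearMonthIntervalType all being singleton DataType objects in this branch, so a single pattern alternative covers every interval type. A minimal standalone sketch of the same check (hypothetical helper name, not part of the PR):

           import org.apache.spark.sql.types._

           // Mirrors the match arm from TypeUtils.invokeOnceForInterval: the three
           // interval types are singleton objects, so one alternative matches them all.
           def isIntervalType(dataType: DataType): Boolean = dataType match {
             case CalendarIntervalType | DayTimeIntervalType | YearMonthIntervalType => true
             case _ => false
           }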

##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedColumnReader.java
##########
@@ -575,7 +575,8 @@ private void readIntBatch(int rowId, int num, WritableColumnVector column) throw
     // This is where we implement support for the valid type conversions.
     // TODO: implement remaining type conversions
     if (column.dataType() == DataTypes.IntegerType ||
-        canReadAsIntDecimal(column.dataType())) {
+        canReadAsIntDecimal(column.dataType()) ||
+        column.dataType() == DataTypes.YearMonthIntervalType) {

Review comment:
       ditto
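
       For reference, a year-month interval is physically a single 4-byte count of months in Spark's internal representation, which is why the vectorized Parquet reader can treat a YearMonthIntervalType column like any other INT32-backed column here. A minimal sketch of that encoding (hypothetical helper names, assuming java.time.Period as the external type):

           import java.time.Period

           // A year-month interval collapses to one Int: the total number of months.
           def periodToMonths(p: Period): Int = Math.toIntExact(p.toTotalMonths)

           // And back: normalized() re-splits the month count into years + months.
           def monthsToPeriod(months: Int): Period = Period.ofMonths(months).normalized()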



