zeruibao commented on code in PR #41521:
URL: https://github.com/apache/spark/pull/41521#discussion_r1223944833


##########
connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroDeserializer.scala:
##########
@@ -160,6 +160,14 @@ private[sql] class AvroDeserializer(
         (logicalDataType, catalystType) match {
           case (LongType, LongType) => (updater, ordinal, value) =>
             updater.setLong(ordinal, value.asInstanceOf[Long])
+          case (_, LongType) => avroType.getLogicalType match {
+            case _: TimestampMicros | _: TimestampMillis |

Review Comment:
   We have always read them as longs with `updater.setLong(ordinal, value.asInstanceOf[Long])`. In the past, we didn't care about the LogicalType at all; the code looked like this:
   ```
   case (LONG, LongType) => (updater, ordinal, value) =>
           updater.setLong(ordinal, value.asInstanceOf[Long])
   ```
   As long as the encoded type is Long, we always handle it in the same way.
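   To make that concrete, below is a minimal standalone sketch (not the actual Spark code) of the behaviour described above: any Avro LONG is written through `setLong`, no matter which logical type annotates the schema. The `RowUpdater` trait and the `writerFor` helper are hypothetical stand-ins for Spark's internal `CatalystDataUpdater` and `newWriter`.
   ```scala
   import org.apache.avro.{LogicalTypes, Schema, SchemaBuilder}

   object AvroLongSketch {
     // Hypothetical minimal updater, only what this sketch needs.
     trait RowUpdater {
       def setLong(ordinal: Int, value: Long): Unit
     }

     // The pre-logical-type behaviour described in the comment: every Avro LONG
     // is stored as a plain long, regardless of its logical type.
     def writerFor(avroType: Schema): (RowUpdater, Int, Any) => Unit =
       avroType.getType match {
         case Schema.Type.LONG => (updater, ordinal, value) =>
           updater.setLong(ordinal, value.asInstanceOf[Long])
         case other =>
           throw new UnsupportedOperationException(s"Unsupported Avro type: $other")
       }

     def main(args: Array[String]): Unit = {
       // A LONG schema annotated with timestamp-micros is still read as a Long.
       val tsMicros = LogicalTypes.timestampMicros()
         .addToSchema(SchemaBuilder.builder().longType())
       val write = writerFor(tsMicros)

       var captured = 0L
       val updater = new RowUpdater {
         def setLong(ordinal: Int, value: Long): Unit = captured = value
       }
       write(updater, 0, 1686600000000000L)
       println(s"stored long: $captured") // stored long: 1686600000000000
     }
   }
   ```
   The new branch in the diff above would instead inspect `avroType.getLogicalType` before deciding how to store the value, which is the change under discussion.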


