Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/21935#discussion_r206747243
--- Diff: external/avro/src/main/scala/org/apache/spark/sql/avro/AvroDeserializer.scala ---
@@ -71,7 +72,15 @@ class AvroDeserializer(rootAvroType: Schema, rootCatalystType: DataType) {
   private def newWriter(
       avroType: Schema,
       catalystType: DataType,
-      path: List[String]): (CatalystDataUpdater, Int, Any) => Unit =
+      path: List[String]): (CatalystDataUpdater, Int, Any) => Unit = {
+    (avroType.getLogicalType, catalystType) match {
--- End diff ---
Can we do this like:
```scala
case (LONG, TimestampType) => avroType.getLogicalType match {
  case _: TimestampMillis => (updater, ordinal, value) =>
    updater.setLong(ordinal, value.asInstanceOf[Long] * 1000)
  case _: TimestampMicros => (updater, ordinal, value) =>
    updater.setLong(ordinal, value.asInstanceOf[Long])
  case _ => (updater, ordinal, value) =>
    updater.setLong(ordinal, value.asInstanceOf[Long] * 1000)
}
```
? Looks like they have the Avro long type anyway. I thought this is easier to read, and actually safer and more correct.
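
To illustrate the conversion that nested match performs, here is a minimal, standalone sketch (not code from this PR; the `toMicros` helper and demo object are made up for illustration). Spark SQL stores `TimestampType` values as microseconds since the epoch, so `timestamp-millis` longs need the `* 1000` scaling while `timestamp-micros` longs pass through unchanged:

```scala
import org.apache.avro.{LogicalTypes, Schema}
import org.apache.avro.LogicalTypes.{TimestampMicros, TimestampMillis}

object TimestampLogicalTypeDemo {
  // Map an Avro long to Spark's internal microsecond representation,
  // based on the logical type attached to the Avro schema.
  def toMicros(avroType: Schema, value: Long): Long = avroType.getLogicalType match {
    case _: TimestampMillis => value * 1000L  // millis -> micros
    case _: TimestampMicros => value          // already micros
    case _ => value * 1000L                   // plain long: treated as millis, as before
  }

  def main(args: Array[String]): Unit = {
    val millisSchema = LogicalTypes.timestampMillis().addToSchema(Schema.create(Schema.Type.LONG))
    val microsSchema = LogicalTypes.timestampMicros().addToSchema(Schema.create(Schema.Type.LONG))

    println(toMicros(millisSchema, 1533000000000L))     // prints 1533000000000000
    println(toMicros(microsSchema, 1533000000000000L))  // prints 1533000000000000
  }
}
```

The default branch keeps reading a plain Avro long as millisecond timestamps, which is why matching on `(LONG, TimestampType)` first and only then dispatching on the logical type stays backward compatible.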
---