Github user gengliangwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/22037#discussion_r208655309
--- Diff: external/avro/src/main/scala/org/apache/spark/sql/avro/AvroDeserializer.scala ---
@@ -138,10 +138,24 @@ class AvroDeserializer(rootAvroType: Schema, rootCatalystType: DataType) {
             bytes
           case b: Array[Byte] => b
           case other => throw new RuntimeException(s"$other is not a valid avro binary.")
-
         }
         updater.set(ordinal, bytes)
+    case (FIXED, _: DecimalType) => (updater, ordinal, value) =>
+      val decimal = Decimal(value.asInstanceOf[GenericFixed].bytes())
--- End diff --
No, I don't think we need to. Normally, Avro records are validated against the
schema before being written, so the precision of the input here is supposed to be valid.
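
To illustrate the point, here is a minimal sketch (not the PR's code) of the
contract being relied on: the decimal logical type pins precision and scale in
the Avro schema, write-side validation enforces them, and the reader can rebuild
the Decimal straight from the FIXED bytes. The names amountSchema and
fixedToDecimal are hypothetical.

    // Minimal sketch, assuming Avro's logical types API and Spark's Decimal;
    // names here are hypothetical, not from the PR.
    import java.math.BigInteger

    import org.apache.avro.{LogicalTypes, SchemaBuilder}
    import org.apache.avro.generic.GenericFixed
    import org.apache.spark.sql.types.{Decimal, DecimalType}

    // Write side: decimal(18, 2) stored in a fixed(8) field. Schema validation
    // at write time guarantees the bytes fit this precision and scale.
    val amountSchema =
      LogicalTypes.decimal(18, 2).addToSchema(SchemaBuilder.fixed("amount").size(8))

    // Read side: the fixed bytes are the big-endian two's-complement unscaled
    // value (per the Avro spec), so the Decimal is rebuilt directly from them.
    def fixedToDecimal(value: GenericFixed, d: DecimalType): Decimal = {
      val unscaled = new BigInteger(value.bytes())
      Decimal(BigDecimal(unscaled, d.scale), d.precision, d.scale)
    }

Under that assumption, a precision check on the read path would only re-reject
data that a schema-validating writer could not have produced.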
---