GitHub user jamesthomp commented on the issue: https://github.com/apache/spark/pull/20580

@kiszk - I've changed the implementation to no longer use `column.isArray()` and instead inline the decimal type check directly, so no changes are needed to the public API. I don't think this codepath is actually reachable with ArrayType, so that part was unnecessary. As for testing, it might be easiest to check in a Parquet file that stores decimals in the binary format and then verify that Spark can read it?
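A minimal sketch of what that test could look like, assuming a pre-generated Parquet file is checked in as a test resource (the file name `binary-decimal-data.parquet` and the object name are hypothetical, not from the PR):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.DecimalType

object BinaryDecimalReadCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("binary-decimal-read-check")
      .getOrCreate()
    try {
      // The checked-in file is assumed to store its decimal column using the
      // Parquet BINARY physical type (rather than FIXED_LEN_BYTE_ARRAY or
      // INT32/INT64), which is the case this change is meant to handle.
      val df = spark.read.parquet("src/test/resources/binary-decimal-data.parquet")

      // Sanity checks: the column comes back as a DecimalType and the rows
      // materialize without the reader throwing on the binary encoding.
      assert(df.schema.head.dataType.isInstanceOf[DecimalType])
      assert(df.collect().nonEmpty)
    } finally {
      spark.stop()
    }
  }
}
```

Checking in a small fixed file (rather than generating one at test time) pins the exact binary encoding under test, since a file written by the current Spark version might not reproduce the problematic layout.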