Github user a10y commented on the issue:

    https://github.com/apache/spark/pull/20580
  
    As far as we can tell this is an accidental breaking change: dropping 
support for binary columns with the DECIMAL logical type in the vectorized 
Parquet reader was never called out. We have Parquet datasets with such 
columns that were loadable before this change and have since become 
unloadable, throwing an exception in `readBinaryBatch`.
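
    As a hedged aside (not something verified against this PR), one possible 
short-term mitigation, assuming the non-vectorized parquet-mr read path still 
handles these columns, would be to disable the vectorized reader via the 
existing `spark.sql.parquet.enableVectorizedReader` conf:

    ```
    // falls back to the parquet-mr record reader for Parquet scans
    spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
    ```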
