Hi, I am getting an exception in Spark 2.1 when reading Parquet files in which
some columns are DELTA_BYTE_ARRAY encoded.

java.lang.UnsupportedOperationException: Unsupported encoding:
DELTA_BYTE_ARRAY

Is this exception by design, or am I missing something?

If I turn off the vectorized reader, reading these files works fine.
Is the vectorized reader limited to a subset of Parquet encodings?
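For reference, here is how I am disabling the vectorized reader as a workaround; a minimal sketch, assuming an existing Spark 2.1 session named `spark` and a hypothetical file path:

```scala
// Workaround: disable the vectorized Parquet reader so Spark falls back to
// the parquet-mr record reader, which handles DELTA_BYTE_ARRAY-encoded columns.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")

// With the fallback reader, this no longer throws UnsupportedOperationException.
// (The path below is a placeholder.)
val df = spark.read.parquet("/path/to/delta-byte-array-encoded.parquet")
df.show()
```

The same flag can also be set once for the whole application via `--conf spark.sql.parquet.enableVectorizedReader=false` on spark-submit.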

AndreiL



--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/Parquet-vectorized-reader-DELTA-BYTE-ARRAY-tp21538.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
