Hi AndreiL,

Were these files written with the Parquet V2 writer? DELTA_BYTE_ARRAY is a
Parquet V2 encoding, and the Spark 2.1 vectorized reader does not appear to
support it.
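In the meantime, falling back to the non-vectorized (parquet-mr) reader, as
you found, can also be forced explicitly. A minimal sketch, assuming the
standard Spark 2.1 config key:

```
# spark-defaults.conf (or --conf on spark-submit, or spark.conf.set in a session)
spark.sql.parquet.enableVectorizedReader  false
```

This trades the vectorized reader's performance for compatibility until the
encoding is supported.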

Michael


> On May 9, 2017, at 11:04 AM, andreiL <leibov...@rogers.com> wrote:
> 
> Hi, I am getting an exception in Spark 2.1 when reading Parquet files in
> which some columns are DELTA_BYTE_ARRAY encoded.
> 
> java.lang.UnsupportedOperationException: Unsupported encoding:
> DELTA_BYTE_ARRAY
> 
> Is this exception by design, or am I missing something?
> 
> If I turn off the vectorized reader, reading these files works fine.
> 
> AndreiL
> 
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
> 

