Hi,
The error you see when reading the Parquet data in Spark 3.2.1 is caused by
a schema mismatch between the Parquet file and the Spark schema: the file
stores the ss_sold_time_sk column as INT32, while the Spark schema expects
BIGINT.
Hello spark-dev
I have loaded TPC-DS data in Parquet format using Spark *3.0.2*, and while
reading it from Spark *3.2.1*, my query is failing with the error below.
Later I set spark.sql.parquet.enableVectorizedReader=false, but it
resulted in a different error. I am also providing output of