AngersZhuuuu commented on a change in pull request #34308:
URL: https://github.com/apache/spark/pull/34308#discussion_r745244458



##########
File path: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala
##########
@@ -1084,6 +1087,32 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession
       }
     }
   }
+
+  test("SPARK-37035: Improve error message when use parquet vectorized reader") {

Review comment:
      > I spent some time looking for an e2e test but couldn't figure one out 🤷. The errors with a corrupted Parquet file would either be caught when initializing the Parquet dictionary (for instance, when trying to initialize a Parquet INT64 dictionary with INT32 type), or in Spark when checking the Spark schema against the Parquet schema.
   > 
   > It'd be interesting to know how you encountered this error originally @AngersZhuuuu. Which Spark version were you using?
   
   I have run into this situation several times. Each time, I re-ran the test to work out which partition's file had corrupted data, then rewrote that data to get past it, which is why I opened this PR.
   
   I don't know why this happens. Unfortunately, the last time I wanted to save a file with bad data, the data had already been replaced by our user, so I can't provide such a file right now. Maybe next time ==
   The Spark version is spark-3.1.2.
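   For reference, one way an e2e test could reproduce this class of failure is to write a valid Parquet file and then flip a few bytes in it before reading it back with the vectorized reader. A minimal sketch of the byte-flipping step, using plain JDK I/O with no Spark involved (the helper name `corruptByteAt` and the choice of offset are hypothetical; any offset past the 4-byte `PAR1` magic header lands in column-chunk data rather than the footer):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class CorruptFile {
    // Invert one byte at the given offset, simulating on-disk corruption
    // of a Parquet data page. The file is modified in place.
    public static void corruptByteAt(String path, long offset) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(path, "rw")) {
            f.seek(offset);
            int b = f.read();        // read the original byte (0..255)
            f.seek(offset);
            f.write(b ^ 0xFF);       // write back its bitwise inverse
        }
    }
}
```

   Whether the resulting read fails in dictionary initialization, in the schema check, or elsewhere would depend on which bytes get hit, which may be why a deterministic e2e test is hard to pin down.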




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


