revans2 commented on a change in pull request #31284:
URL: https://github.com/apache/spark/pull/31284#discussion_r569525495



##########
File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala
##########
@@ -1205,6 +1205,32 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession
       }
     }
   }
+
+  test("SPARK-34167: read LongDecimals with precision < 10, VectorizedReader off") {
+    // decimal32-written-as-64-bit.snappy.parquet was generated using a 3rd-party library. It has
+    // 10 rows of Decimal(9, 1) written as LongDecimal instead of an IntDecimal
+    readParquetFile(testFile("test-data/decimal32-written-as-64-bit.snappy.parquet"), false) {
+      df => assert(10 == df.collect().length)
+    }
+    // decimal32-written-as-64-bit-dict.snappy.parquet was generated using a 3rd-party library. It
+    // has 2048 rows of Decimal(3, 1) written as LongDecimal instead of an IntDecimal
+    readParquetFile(testFile("test-data/decimal32-written-as-64-bit-dict.snappy.parquet"), false) {
+      df => assert(2048 == df.collect().length)
+    }
+  }
+
+  test("SPARK-34167: read LongDecimals with precision < 10, VectorizedReader on") {
+    // decimal32-written-as-64-bit.snappy.parquet was generated using a 3rd-party library. It has
+    // 10 rows of Decimal(9, 1) written as LongDecimal instead of an IntDecimal
+    readParquetFile(testFile("test-data/decimal32-written-as-64-bit.snappy.parquet")) { df =>

Review comment:
       We found this while adding Parquet write support for decimal values to https://github.com/NVIDIA/spark-rapids/.
   The file has the Spark metadata because it was written by an incomplete version of the plugin running in Apache Spark. We have since fixed the issue in the plugin, so it will no longer write out files like this; it now matches what Spark does. But because these files are not technically a violation of the Parquet specification, we decided to fix Spark so it can read them correctly. We also know that the Python API to RAPIDS is going to produce Parquet files (without the Spark metadata) that exhibit the same encoding, because it lets the user control how the data is written.
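To illustrate why a reader can safely accept the wider encoding, here is a minimal standalone sketch (not Spark's actual implementation; `LongDecimalDecode` and `decode` are hypothetical names). A decimal with precision <= 9 always fits in 32 bits, so an unscaled value stored as INT64 can still be decoded losslessly as long as it is range-checked against the declared precision:

```scala
// Minimal sketch: decoding an INT64-encoded unscaled decimal value whose
// declared precision would normally imply INT32 storage. This mirrors the
// idea behind the fix, not Spark's internal reader code.
object LongDecimalDecode {
  // Decode an unscaled value into a scaled BigDecimal, verifying that it
  // actually fits within the declared precision.
  def decode(unscaled: Long, precision: Int, scale: Int): BigDecimal = {
    val bound = math.pow(10, precision).toLong
    require(math.abs(unscaled) < bound,
      s"unscaled value $unscaled overflows precision $precision")
    BigDecimal(unscaled) / BigDecimal(10).pow(scale)
  }

  def main(args: Array[String]): Unit = {
    // A Decimal(9, 1) value stored as a 64-bit integer still decodes exactly.
    println(decode(1234567L, 9, 1)) // 123456.7
    // Likewise for a Decimal(3, 1) value, as in the dictionary-encoded file.
    println(decode(123L, 3, 1))     // 12.3
  }
}
```

The range check is the key step: the wider physical type loses nothing as long as the unscaled value stays within the bounds implied by the logical precision, which is exactly what the Parquet specification permits.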




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


