rdblue commented on a change in pull request #1184:
URL: https://github.com/apache/iceberg/pull/1184#discussion_r454036152



##########
File path: spark/src/test/java/org/apache/iceberg/spark/data/TestSparkParquetReader.java
##########
@@ -67,4 +78,49 @@ protected void writeAndValidate(Schema schema) throws IOException {
       Assert.assertFalse("Should not have extra rows", rows.hasNext());
     }
   }
+
+  protected List<InternalRow> rowsFromFile(InputFile inputFile, Schema schema) throws IOException {
+    try (CloseableIterable<InternalRow> reader =
+        Parquet.read(inputFile)
+            .project(schema)
+            .createReaderFunc(type -> SparkParquetReaders.buildReader(schema, type))
+            .build()) {
+      return Lists.newArrayList(reader);
+    }
+  }
+
+  @Test
+  public void testInt96TimestampProducedBySparkIsReadCorrectly() throws IOException {
+    final SparkSession spark =
+        SparkSession.builder()
+            .master("local[2]")
+            .config("spark.sql.parquet.int96AsTimestamp", "false")
+            .getOrCreate();
+
+    final String parquetPath = temp.getRoot().getAbsolutePath() + "/parquet_int96";
+    final java.sql.Timestamp ts = java.sql.Timestamp.valueOf("2014-01-01 23:00:01");
+    spark.createDataset(ImmutableList.of(ts), Encoders.TIMESTAMP()).write().parquet(parquetPath);

Review comment:
       Using Spark's `FileFormat` would also make this test easier. You'd be able to pass in a value in micros and validate that you get the same value back, unmodified. You'd also not need to locate the Parquet file using `find`.
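       For illustration only, here is a rough sketch of that shape (not code from this PR): write a single `InternalRow` that already carries the timestamp in microseconds through Spark's Parquet write support with `spark.sql.parquet.outputTimestampType=INT96`, read it back with the `rowsFromFile` helper above, and assert the micros come back unmodified. The `NativeSparkWriterBuilder` helper and the test method name below are made up for the sketch, and it assumes Spark's internal `ParquetWriteSupport` and parquet-hadoop's `ParquetWriter.Builder` are usable from the test classpath.

```java
  // Sketch only: write one InternalRow whose timestamp value is already in
  // microseconds through Spark's native Parquet write support, forcing INT96
  // output, then read it back with the Iceberg reader added above and expect
  // the exact same micros value. Relies on org.apache.parquet.hadoop.ParquetWriter,
  // org.apache.parquet.hadoop.util.HadoopOutputFile and Spark's internal
  // org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.
  @Test
  public void testInt96TimestampRoundTripSketch() throws IOException {
    Schema schema = new Schema(
        Types.NestedField.required(1, "ts", Types.TimestampType.withZone()));
    StructType sparkSchema = new StructType(new StructField[] {
        new StructField("ts", DataTypes.TimestampType, false, Metadata.empty())
    });

    long expectedMicros = 1388617201000000L; // 2014-01-01T23:00:01Z in micros since epoch
    InternalRow row = new GenericInternalRow(new Object[] { expectedMicros });

    String path = temp.getRoot().getAbsolutePath() + "/parquet_int96.parquet";
    try (ParquetWriter<InternalRow> writer = new NativeSparkWriterBuilder(
            HadoopOutputFile.fromPath(new org.apache.hadoop.fs.Path(path), new Configuration()))
        // ParquetWriteSupport picks up the Spark schema and timestamp type from the Hadoop conf
        .set("org.apache.spark.sql.parquet.row.attributes", sparkSchema.json())
        .set("spark.sql.parquet.writeLegacyFormat", "false")
        .set("spark.sql.parquet.outputTimestampType", "INT96")
        .build()) {
      writer.write(row);
    }

    List<InternalRow> rows = rowsFromFile(Files.localInput(path), schema);
    Assert.assertEquals("Should contain exactly one row", 1, rows.size());
    Assert.assertEquals("INT96 timestamp should round-trip unmodified",
        expectedMicros, rows.get(0).getLong(0));
  }

  // Minimal ParquetWriter.Builder that plugs in Spark's own write support so the
  // file is laid out exactly the way Spark writes it.
  private static class NativeSparkWriterBuilder
      extends ParquetWriter.Builder<InternalRow, NativeSparkWriterBuilder> {
    private final Map<String, String> config = new HashMap<>();

    NativeSparkWriterBuilder(org.apache.parquet.io.OutputFile file) {
      super(file);
    }

    NativeSparkWriterBuilder set(String property, String value) {
      config.put(property, value);
      return this;
    }

    @Override
    protected NativeSparkWriterBuilder self() {
      return this;
    }

    @Override
    protected WriteSupport<InternalRow> getWriteSupport(Configuration conf) {
      config.forEach(conf::set);
      return new ParquetWriteSupport();
    }
  }
```

       Something along these lines avoids starting a SparkSession and scanning the output directory for the data file.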




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
