gustavoatt commented on a change in pull request #1184:
URL: https://github.com/apache/iceberg/pull/1184#discussion_r459728793
##########
File path: spark/src/test/java/org/apache/iceberg/spark/data/TestSparkParquetReader.java
##########
@@ -67,4 +78,49 @@ protected void writeAndValidate(Schema schema) throws IOException {
Assert.assertFalse("Should not have extra rows", rows.hasNext());
}
}
+
+  protected List<InternalRow> rowsFromFile(InputFile inputFile, Schema schema) throws IOException {
+    try (CloseableIterable<InternalRow> reader =
+        Parquet.read(inputFile)
+            .project(schema)
+            .createReaderFunc(type -> SparkParquetReaders.buildReader(schema, type))
+            .build()) {
+      return Lists.newArrayList(reader);
+    }
+  }
+
+  @Test
+  public void testInt96TimestampProducedBySparkIsReadCorrectly() throws IOException {
+    final SparkSession spark =
+        SparkSession.builder()
+            .master("local[2]")
+            .config("spark.sql.parquet.int96AsTimestamp", "false")
+            .getOrCreate();
Review comment:
Yes, looking at one of the tests, we do support writing Parquet files
using Spark's WriteSupport.
To be able to use a `FileAppender`, I had to add a `TimestampAsInt96` type
(which can only be written using Spark's built-in WriteSupport) so that the
schema conversion within Iceberg's `ParquetWriteSupport` knows that these
timestamps should be encoded as int96 in the Parquet schema.
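
For context, here is a minimal sketch (not part of the PR) of how a test can produce an int96-encoded Parquet file through Spark's own writer, assuming Spark 2.3+ where `spark.sql.parquet.outputTimestampType=INT96` selects the legacy physical encoding; the output path `/tmp/int96-parquet` and column name `ts` are made up for illustration:

```java
import java.sql.Timestamp;
import java.util.Collections;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class Int96WriteSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .master("local[2]")
        // Spark 2.3+: force timestamps to the legacy int96 physical type
        .config("spark.sql.parquet.outputTimestampType", "INT96")
        .getOrCreate();

    // Single-column timestamp schema; "ts" is a hypothetical column name
    StructType schema = new StructType(new StructField[] {
        new StructField("ts", DataTypes.TimestampType, false, Metadata.empty())
    });
    Dataset<Row> df = spark.createDataFrame(
        Collections.singletonList(RowFactory.create(Timestamp.valueOf("2020-07-23 12:00:00"))),
        schema);

    // Write with Spark's built-in Parquet WriteSupport; path is hypothetical
    df.write().mode("overwrite").parquet("/tmp/int96-parquet");
    spark.stop();
  }
}
```

A file written this way can then be read back with the `rowsFromFile` helper above, which is the shape of check the new test performs against Iceberg's reader.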