openinx commented on a change in pull request #3987:
URL: https://github.com/apache/iceberg/pull/3987#discussion_r810843539



##########
File path: 
flink/v1.14/flink/src/test/java/org/apache/iceberg/flink/data/TestFlinkParquetReader.java
##########
@@ -129,4 +147,85 @@ protected void writeAndValidate(Schema schema) throws IOException {
     writeAndValidate(RandomGenericData.generateDictionaryEncodableRecords(schema, NUM_RECORDS, 21124), schema);
     writeAndValidate(RandomGenericData.generateFallbackRecords(schema, NUM_RECORDS, 21124, NUM_RECORDS / 20), schema);
   }
+
+  protected List<RowData> rowDataFromFile(InputFile inputFile, Schema schema) throws IOException {
+    try (CloseableIterable<RowData> reader =
+        Parquet.read(inputFile)
+            .project(schema)
+            .createReaderFunc(type -> FlinkParquetReaders.buildReader(schema, type))
+            .build()) {
+      return Lists.newArrayList(reader);
+    }
+  }
+
+  @Test
+  public void testInt96TimestampProducedBySparkIsReadCorrectly() throws IOException {

Review comment:
       I would suggest writing a few rows using the Flink native writers, and then using the following readers to assert their results:
   * Flink native parquet reader;
   * Iceberg generic parquet reader;
   * Iceberg Flink parquet reader.
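
   The suggested round trip could be sketched roughly as below. This is a sketch only: the helper names `writeRows` and `recordsFromFile` are hypothetical, and the Flink native reader leg is elided; the builder calls mirror the ones already used in this test class.

   ```java
   // Sketch: write rows with the Flink parquet writer, then read the same file
   // back through the Iceberg generic reader so results can be cross-checked
   // against the existing rowDataFromFile helper (Iceberg Flink reader).
   protected void writeRows(OutputFile out, LogicalType flinkType, Schema schema,
                            Iterable<RowData> rows) throws IOException {
     try (FileAppender<RowData> writer =
         Parquet.write(out)
             .schema(schema)
             .createWriterFunc(msgType -> FlinkParquetWriters.buildWriter(flinkType, msgType))
             .build()) {
       writer.addAll(rows);
     }
   }

   protected List<Record> recordsFromFile(InputFile in, Schema schema) throws IOException {
     try (CloseableIterable<Record> reader =
         Parquet.read(in)
             .project(schema)
             .createReaderFunc(type -> GenericParquetReaders.buildReader(schema, type))
             .build()) {
       return Lists.newArrayList(reader);
     }
   }

   // In the test body: write with writeRows(...), then assert that
   // rowDataFromFile(in, schema) and recordsFromFile(in, schema) agree
   // (and, ideally, add a third leg using Flink's own native parquet reader).
   ```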




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


