kingeasternsun commented on a change in pull request #3987:
URL: https://github.com/apache/iceberg/pull/3987#discussion_r803484922



##########
File path: flink/v1.13/flink/src/main/java/org/apache/iceberg/flink/data/FlinkParquetReaders.java
##########
@@ -321,6 +327,29 @@ public DecimalData read(DecimalData ignored) {
     }
   }
 
+  private static class TimestampInt96Reader extends ParquetValueReaders.UnboxedReader<Long> {
+    private static final long UNIX_EPOCH_JULIAN = 2_440_588L;
+
+    TimestampInt96Reader(ColumnDescriptor desc) {
+      super(desc);
+    }
+
+    @Override
+    public Long read(Long ignored) {
+      return readLong();
+    }
+
+    @Override
+    public long readLong() {

Review comment:
       Finally, I added the support to both Flink v1.13 and v1.14. For the tests:
   - added test cases in the flink v1.13 and flink v1.14 modules
   - to generate parquet INT96 data as `TestSparkParquetReaders` does, imported `spark-sql` in the `testImplementation` dependencies of build.gradle
   - used `GenericInternalRow` and `RandomUtil.generatePrimitive` to generate the `List<InternalRow> rows` that are written to the parquet file as the INT96 timestamp type
   
   How about this? @rdblue @kbendick
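
   For context on what `readLong()` above needs to do: an INT96 timestamp is 12 little-endian bytes, the first 8 holding nanoseconds within the day and the last 4 holding the Julian day number, which is why the reader carries the `UNIX_EPOCH_JULIAN = 2_440_588` constant (the Julian day of 1970-01-01). A minimal standalone sketch of that conversion (the class name `Int96Demo` and helper `int96ToEpochMicros` are illustrative, not part of the PR):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.concurrent.TimeUnit;

public class Int96Demo {
  // Julian day number of the Unix epoch, 1970-01-01.
  private static final long UNIX_EPOCH_JULIAN = 2_440_588L;

  // Convert a 12-byte INT96 value (8 bytes nanos-of-day followed by
  // 4 bytes Julian day, both little-endian) to microseconds since epoch.
  static long int96ToEpochMicros(byte[] int96) {
    ByteBuffer buf = ByteBuffer.wrap(int96).order(ByteOrder.LITTLE_ENDIAN);
    long timeOfDayNanos = buf.getLong();
    int julianDay = buf.getInt();
    return TimeUnit.DAYS.toMicros(julianDay - UNIX_EPOCH_JULIAN)
        + TimeUnit.NANOSECONDS.toMicros(timeOfDayNanos);
  }

  public static void main(String[] args) {
    // Midnight on Julian day 2,440,588 is exactly the Unix epoch.
    ByteBuffer epoch = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN);
    epoch.putLong(0L);        // nanoseconds within the day
    epoch.putInt(2_440_588);  // Julian day
    System.out.println(int96ToEpochMicros(epoch.array())); // prints 0

    // One day and one microsecond later.
    ByteBuffer later = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN);
    later.putLong(1_000L);    // 1 microsecond, in nanoseconds
    later.putInt(2_440_589);
    System.out.println(int96ToEpochMicros(later.array())); // prints 86400000001
  }
}
```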




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


