ConeyLiu commented on code in PR #7836:
URL: https://github.com/apache/iceberg/pull/7836#discussion_r1240591088


##########
flink/v1.17/flink/src/test/java/org/apache/iceberg/flink/sink/TestDeltaTaskWriter.java:
##########
@@ -349,4 +397,42 @@ private TaskWriterFactory<RowData> createTaskWriterFactory(List<Integer> equalit
         equalityFieldIds,
         false);
   }
+
+  private TaskWriterFactory<RowData> createTaskWriterFactory(
+      RowType flinkType, List<Integer> equalityFieldIds) {
+    return new RowDataTaskWriterFactory(
+        SerializableTable.copyOf(table),
+        flinkType,
+        128 * 1024 * 1024,
+        format,
+        table.properties(),
+        equalityFieldIds,
+        true);
+  }
+
+  private void initTable(boolean partitioned) {
+    if (partitioned) {
+      this.table = create(SCHEMA, PartitionSpec.builderFor(SCHEMA).identity("data").build());
+    } else {
+      this.table = create(SCHEMA, PartitionSpec.unpartitioned());
+    }
+
+    initTable(table);
+  }
+
+  private void initTable(TestTables.TestTable testTable) {
+    this.table = testTable;
+
+    table
+        .updateProperties()
+        .set(TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES, String.valueOf(8 * 1024))
+        .defaultFormat(format)
+        .commit();
+  }
+
+  private RowData createBinaryRowData(

Review Comment:
   `BinaryRowData` is used here to simulate timestamp data written with precision 3, so that reading it back with precision 6 fails, for example:
   ```java
   java.lang.ArrayIndexOutOfBoundsException: 6
        at org.apache.flink.table.data.binary.BinarySegmentUtils.getLongSlowly(BinarySegmentUtils.java:744)
        at org.apache.flink.table.data.binary.BinarySegmentUtils.getLongMultiSegments(BinarySegmentUtils.java:738)
        at org.apache.flink.table.data.binary.BinarySegmentUtils.getLong(BinarySegmentUtils.java:726)
        at org.apache.flink.table.data.binary.BinarySegmentUtils.readTimestampData(BinarySegmentUtils.java:1022)
        at org.apache.flink.table.data.binary.BinaryRowData.getTimestamp(BinaryRowData.java:356)
        at org.apache.flink.table.data.RowData.lambda$createFieldGetter$39385f9c$1(RowData.java:260)
        at org.apache.flink.table.data.RowData.lambda$createFieldGetter$25774257$1(RowData.java:296)
        at org.apache.iceberg.flink.data.RowDataProjection.getValue(RowDataProjection.java:159)
        at org.apache.iceberg.flink.data.RowDataProjection.isNullAt(RowDataProjection.java:179)
        at org.apache.iceberg.flink.RowDataWrapper.get(RowDataWrapper.java:67)
        at org.apache.iceberg.types.JavaHashes$StructLikeHash.hash(JavaHashes.java:92)
        at org.apache.iceberg.types.JavaHashes$StructLikeHash.hash(JavaHashes.java:71)
        at org.apache.iceberg.util.StructLikeWrapper.hashCode(StructLikeWrapper.java:96)
        at java.util.HashMap.hash(HashMap.java:340)
   ```
   
   However, `GenericRowData` ignores the precision:
   ```java
       @Override
       public TimestampData getTimestamp(int pos, int precision) {
           return (TimestampData) this.fields[pos];
       }
   ```
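   
   For reference, here is a minimal standalone sketch of the difference (illustration only, not part of this PR; the class name is made up, only Flink `table-common` APIs are used):
   ```java
   import org.apache.flink.table.data.GenericRowData;
   import org.apache.flink.table.data.TimestampData;
   import org.apache.flink.table.data.binary.BinaryRowData;
   import org.apache.flink.table.data.writer.BinaryRowWriter;

   public class TimestampPrecisionSketch {
     public static void main(String[] args) {
       TimestampData ts = TimestampData.fromEpochMillis(1686000000000L);

       // BinaryRowData stores TIMESTAMP(<=3) compactly as a millisecond long and
       // TIMESTAMP(>3) as an offset/nanos pointer into the variable-length part,
       // so the precision used for reading must match the one used for writing.
       BinaryRowData binaryRow = new BinaryRowData(1);
       BinaryRowWriter writer = new BinaryRowWriter(binaryRow);
       writer.writeTimestamp(0, ts, 3); // written as TIMESTAMP(3)
       writer.complete();

       System.out.println(binaryRow.getTimestamp(0, 3)); // OK
       // binaryRow.getTimestamp(0, 6) would misread the stored millis as an
       // offset + nanoOfMillisecond pair and typically fails with the
       // ArrayIndexOutOfBoundsException shown above (or returns garbage).

       // GenericRowData ignores the precision argument and just returns the boxed
       // value, so a test built only on GenericRowData cannot catch the mismatch.
       GenericRowData genericRow = GenericRowData.of(ts);
       System.out.println(genericRow.getTimestamp(0, 6)); // same value, no error
     }
   }
   ```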


