JingsongLi commented on a change in pull request #1173:
URL: https://github.com/apache/iceberg/pull/1173#discussion_r455467964



##########
File path: flink/src/test/java/org/apache/iceberg/flink/TestFlinkSchemaUtil.java
##########
@@ -54,21 +54,22 @@ public void testConvertFlinkSchemaToIcebergSchema() {
         .field("decimal", DataTypes.DECIMAL(2, 2))
         .field("decimal2", DataTypes.DECIMAL(38, 2))
         .field("decimal3", DataTypes.DECIMAL(10, 1))
+        .field("multiset", DataTypes.MULTISET(DataTypes.STRING().notNull()))

Review comment:
       Null values are OK; the problem is null keys.
   For null-key support, the file formats themselves look OK; the only format-level constraint is that Avro only supports string keys for map types.
   But the question is whether we have any special optimizations for non-null fields. The answer is yes, see `ParquetValueWriters.option`. If a null key reaches the Parquet writer, I think it will throw a `NullPointerException`, which is not very elegant.
   
   Another option is what I suggested in 
https://github.com/apache/iceberg/pull/1096/files/8891cd5438306f0b4b226706058beff7c3cd4080#diff-12a375418217cdc6be26c73e02d56065R102
 
   We could throw an `UnsupportedOperationException` here to tell users, even though Flink's map keys are nullable by default.
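   The fail-fast alternative could be sketched like this (illustrative only; `MapKeyCheck` and its nested `MapType` are hypothetical stand-ins, not real Iceberg or Flink classes):

```java
// Hedged sketch of the reviewer's suggestion: fail fast with an
// UnsupportedOperationException when a map type declares a nullable key,
// rather than letting a null key reach the Parquet writer and surface
// later as a NullPointerException. These class names are illustrative.
public class MapKeyCheck {

  /** Minimal stand-in for a Flink MapType: key type name plus its nullability. */
  public static final class MapType {
    final String keyType;
    final boolean keyNullable;

    public MapType(String keyType, boolean keyNullable) {
      this.keyType = keyType;
      this.keyNullable = keyNullable;
    }
  }

  /** Rejects nullable map keys up front, during schema conversion. */
  public static void validateMapKey(MapType map) {
    if (map.keyNullable) {
      throw new UnsupportedOperationException(
          "Iceberg map keys must be non-null; nullable key type not supported: " + map.keyType);
    }
  }
}
```

   Checking at schema-conversion time gives users a clear, early error instead of a runtime failure deep inside the writer.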




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
