liuyongvs commented on code in PR #22324:
URL: https://github.com/apache/flink/pull/22324#discussion_r1163759521


##########
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java:
##########
@@ -435,6 +435,16 @@ public class ExecutionConfigOptions {
                             "Determines whether CAST will operate following the legacy behaviour "
                                     + "or the new one that introduces various fixes and improvements.");
 
+    @Documentation.TableOption(execMode = Documentation.ExecMode.BATCH_STREAMING)
+    public static final ConfigOption<MapKeyDedupPolicy> TABLE_EXEC_MAPKEY_DEDUP_POLICY =
+            key("table.exec.mapkey-dedup-policy")

Review Comment:
    I did a survey:
   1) MaxCompute supports an extra argument, so users can apply a different strategy to each call of map_from_entries within a job: 
https://www.alibabacloud.com/help/zh/maxcompute/latest/map-from-entries
   2) Spark, by contrast, supports this via a global config: 
https://github.com/apache/spark/blob/7e2c6c7ab23f75a6ba83baaa0545482a43845ce8/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/ArrayBasedMapBuilder.scala#L52
   3) Presto doesn't support a mapkey-dedup-policy at all; Spark only added support for it later, see https://issues.apache.org/jira/browse/SPARK-23934
   4) This function is not part of the SQL standard, and the other collection functions I implemented follow the Spark way, so I did the same here.
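
To illustrate the two dedup semantics under discussion, here is a minimal standalone sketch (the enum name and helper are hypothetical, not the PR's actual implementation): with EXCEPTION, a duplicate map key fails the build; with LAST_WIN (the Spark default), the later entry silently overwrites the earlier one.

```java
import java.util.AbstractMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MapKeyDedupSketch {
    // Hypothetical policy enum mirroring the proposed config option's intent.
    enum MapKeyDedupPolicy { EXCEPTION, LAST_WIN }

    // Builds a map from entries, resolving duplicate keys per the given policy.
    static <K, V> Map<K, V> fromEntries(List<Map.Entry<K, V>> entries, MapKeyDedupPolicy policy) {
        Map<K, V> result = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : entries) {
            if (result.containsKey(e.getKey()) && policy == MapKeyDedupPolicy.EXCEPTION) {
                throw new IllegalArgumentException("Duplicate map key: " + e.getKey());
            }
            result.put(e.getKey(), e.getValue()); // LAST_WIN: later entry overwrites
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> entries = List.of(
                new AbstractMap.SimpleEntry<>("a", 1),
                new AbstractMap.SimpleEntry<>("a", 2));
        // LAST_WIN keeps the later value for the duplicate key "a".
        System.out.println(fromEntries(entries, MapKeyDedupPolicy.LAST_WIN)); // {a=2}
    }
}
```

A global config (the Spark approach the PR follows) picks one such policy for the whole job, whereas MaxCompute's per-call argument would pass the policy into each invocation.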



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
