KurtYoung commented on a change in pull request #8179: [FLINK-12200]
[Table-planner] Support UNNEST for MAP types
URL: https://github.com/apache/flink/pull/8179#discussion_r278782753
##########
File path:
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/rules/logical/LogicalUnnestRule.scala
##########
@@ -94,6 +95,17 @@ class LogicalUnnestRule(
case arrayType: ArrayRelDataType =>
(arrayType.getComponentType,
ExplodeFunctionUtil.explodeTableFuncFromType(arrayType.typeInfo))
+
+ case map: MapRelDataType =>
+ val keyTypeInfo = FlinkTypeFactory.toTypeInfo(map.keyType)
+ val valueTypeInfo = FlinkTypeFactory.toTypeInfo(map.valueType)
+ val componentTypeInfo = createTuple2TypeInformation(keyTypeInfo,valueTypeInfo)
Review comment:
Sorry for giving a wrong example. What I want to say is that we should make
built-in table functions easier and more explicit for the framework. In this
particular case, you relied on implicit functionality that Flink currently
supports: first, that Flink can implicitly convert a Tuple to a Row; and
second, your table function does not provide any useful type-related
information itself — all the type information is supplied only in the logical
rule.
I'm not saying this is wrong, but I think there is another way to make all of
this more accurate and explicit. For example, you could explicitly tell the
framework that your table function returns the Row type, and give the
framework more type-related information. That would make this code more robust
and less likely to break in the future. Can you give this a try? If it would
involve a lot of changes, I'm also OK with the current version.
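The pattern the reviewer is asking for can be sketched in plain Scala without
any Flink dependencies. The `Row`, `RowTypeInfo`, and `MapExplodeFunction`
classes below are hypothetical stand-ins for illustration only; in real Flink
code this would correspond to extending `TableFunction[Row]` and overriding
`getResultType` to return a `RowTypeInfo` built from the map's key and value
types, rather than relying on implicit Tuple-to-Row conversion:

```scala
// Minimal stand-in for Flink's Row: a fixed-arity, untyped record.
final case class Row(fields: Any*)

// Minimal stand-in for Flink's RowTypeInfo: here just field-type names.
final case class RowTypeInfo(fieldTypes: Seq[String])

// Hypothetical map-explode "table function" that declares its own result
// type, instead of having the type injected later by the logical rule.
class MapExplodeFunction(keyType: String, valueType: String) {
  // Explicit result type, supplied by the function itself.
  def getResultType: RowTypeInfo = RowTypeInfo(Seq(keyType, valueType))

  // Emit one Row per map entry, already shaped as (key, value) —
  // no Tuple-to-Row conversion needed.
  def eval(map: Map[Any, Any]): Seq[Row] =
    map.toSeq.map { case (k, v) => Row(k, v) }
}

object MapExplodeDemo {
  def main(args: Array[String]): Unit = {
    val fn = new MapExplodeFunction("String", "Int")
    println(fn.getResultType)
    fn.eval(Map("a" -> 1, "b" -> 2)).foreach(println)
  }
}
```

With this shape, the framework can read the complete row type directly from
the function, so the logical rule no longer has to carry the type information
on its own.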
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services