dawidwys commented on a change in pull request #10059: [FLINK-14543][table]
Support partition for temporary table
URL: https://github.com/apache/flink/pull/10059#discussion_r347253491
##########
File path:
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/descriptors/ConnectTableDescriptor.java
##########
@@ -136,14 +137,24 @@ public void createTemporaryTable(String path) {
 				" use registerTableSource/registerTableSink/registerTableSourceAndSink.");
 		}
+		Map<String, String> properties = new HashMap<>(toProperties());
+
+		// handle schema
 		Map<String, String> schemaProperties = schemaDescriptor.toProperties();
 		TableSchema tableSchema = getTableSchema(schemaProperties);
-
-		Map<String, String> properties = new HashMap<>(toProperties());
 		schemaProperties.keySet().forEach(properties::remove);
+		// handle partition keys
+		DescriptorProperties descriptor = new DescriptorProperties();
Review comment:
Why do you need this code in the first place?
As far as I can tell, there is no way to pass partitionKeys to the
`ConnectTableDescriptor`, so they will never be in the properties. We need a
method for setting them. Once you add one, you don't need to put the keys into
the properties; you can just pass them explicitly to the `CatalogTable`.
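A minimal sketch of that suggestion, with simplified stand-in classes — the names `SimpleConnectDescriptor`, `SimpleCatalogTable`, and the `withPartitionKeys` method are hypothetical illustrations of the pattern, not Flink's actual API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified stand-in for CatalogTable: the partition keys are a
// constructor argument rather than an entry in the properties map.
class SimpleCatalogTable {
    private final List<String> partitionKeys;

    SimpleCatalogTable(List<String> partitionKeys) {
        this.partitionKeys =
                Collections.unmodifiableList(new ArrayList<>(partitionKeys));
    }

    List<String> getPartitionKeys() {
        return partitionKeys;
    }
}

// Simplified stand-in for ConnectTableDescriptor with the proposed
// explicit partition-key setter.
class SimpleConnectDescriptor {
    private final List<String> partitionKeys = new ArrayList<>();

    // Hypothetical method mirroring the review suggestion: accept the
    // keys directly instead of round-tripping them through properties.
    SimpleConnectDescriptor withPartitionKeys(List<String> keys) {
        partitionKeys.clear();
        partitionKeys.addAll(keys);
        return this;
    }

    SimpleCatalogTable createTemporaryTable() {
        // Pass the keys explicitly; no DescriptorProperties needed.
        return new SimpleCatalogTable(partitionKeys);
    }
}
```

With such a method, the `DescriptorProperties` handling in the hunk above becomes unnecessary, since the keys never have to be serialized into the string map at all.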
The reason why it's different for the `TableSchema` is backwards
compatibility: the schema was part of the properties before and was passed as
part of the `SchemaDescriptor`, which is why we have to extract and remove it
from the properties.
Moreover, I am not sure it makes sense to keep the `CatalogTableBuilder`.
It duplicates the functionality of `ConnectTableDescriptor` and was actually
never part of the public API. That is a separate issue, though.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services