lirui-apache commented on a change in pull request #8965: [FLINK-13068][hive] HiveTableSink should implement PartitionableTable…
URL: https://github.com/apache/flink/pull/8965#discussion_r300214660
##########
File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/batch/connectors/hive/HiveTableSink.java
##########
@@ -161,4 +164,37 @@ private String toStagingDir(String finalDir, Configuration conf) throws IOExcept
fs.deleteOnExit(path);
return res;
}
+
+ @Override
+ public List<String> getPartitionFieldNames() {
+ return catalogTable.getPartitionKeys();
+ }
+
+ @Override
+ public void setStaticPartition(Map<String, String> partitions) {
+ // make it a LinkedHashMap to maintain partition column order
+ staticPartitionSpec = new LinkedHashMap<>();
+ for (String partitionCol : getPartitionFieldNames()) {
+ if (partitions.containsKey(partitionCol)) {
+ staticPartitionSpec.put(partitionCol, partitions.get(partitionCol));
+ }
+ }
+ }
+
+ private void validatePartitionSpec() {
+ List<String> partitionCols = getPartitionFieldNames();
+ Preconditions.checkArgument(new HashSet<>(partitionCols).containsAll(
Review comment:
Yeah, I'll print the specific columns and reformat the code.
We don't need to check the order here. The partition column order is defined
by `getPartitionFieldNames()`. We can always reorder a partition spec (which is
a map) as long as it only contains valid partition columns.
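To illustrate the point above, here is a minimal, hypothetical sketch (not the actual Flink code): given the canonical column order from `getPartitionFieldNames()`, any spec map that contains only valid partition columns can be rebuilt into a `LinkedHashMap` in that canonical order; only unknown columns need to be rejected.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionSpecReorder {

    // Hypothetical helper: reorder a partition spec to match the canonical
    // partition column order. Mirrors the idea in setStaticPartition() above,
    // but is not the actual HiveTableSink implementation.
    static LinkedHashMap<String, String> reorder(List<String> partitionFieldNames,
                                                 Map<String, String> spec) {
        // The only real validation needed: every column in the spec must be
        // a known partition column (order does not matter in the input map).
        if (!new HashSet<>(partitionFieldNames).containsAll(spec.keySet())) {
            throw new IllegalArgumentException(
                    "Static partition spec contains unknown partition columns: "
                            + spec.keySet());
        }
        // LinkedHashMap preserves insertion order, so iterating the canonical
        // column list yields a spec in partition-column order.
        LinkedHashMap<String, String> ordered = new LinkedHashMap<>();
        for (String col : partitionFieldNames) {
            if (spec.containsKey(col)) {
                ordered.put(col, spec.get(col));
            }
        }
        return ordered;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("year", "month", "day");
        // Spec given out of order; "month" is a dynamic partition column here.
        Map<String, String> spec = new HashMap<>();
        spec.put("day", "01");
        spec.put("year", "2019");
        System.out.println(reorder(cols, spec));
    }
}
```

Note that the result follows the order of the column list, not the order of the input map, which is why checking order in `validatePartitionSpec()` would be redundant.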
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services