lirui-apache commented on a change in pull request #8449: [FLINK-12235][hive] Support partition related operations in HiveCatalog
URL: https://github.com/apache/flink/pull/8449#discussion_r284964094
 
 

 ##########
 File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalogBase.java
 ##########
 @@ -407,4 +443,201 @@ private Table getHiveTable(ObjectPath tablePath) throws TableNotExistException {
                                String.format("Failed to get table %s from Hive metastore", tablePath.getFullName()), e);
                }
        }
+
+       // ------ partitions ------
+
+       @Override
+       public boolean partitionExists(ObjectPath tablePath, CatalogPartitionSpec partitionSpec)
+               throws CatalogException {
+               try {
+                       Table hiveTable = getHiveTable(tablePath);
+                       return client.getPartition(tablePath.getDatabaseName(), tablePath.getObjectName(),
+                               getFullPartitionValues(tablePath, partitionSpec, getFieldNames(hiveTable.getPartitionKeys()))) != null;
+               } catch (NoSuchObjectException | TableNotExistException | PartitionSpecInvalidException e) {
+                       return false;
+               } catch (TException e) {
+                       throw new CatalogException(
+                               String.format("Failed to get partition %s of table %s", partitionSpec, tablePath), e);
+               }
+       }
+
+       @Override
+       public void createPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition partition, boolean ignoreIfExists)
+               throws TableNotExistException, TableNotPartitionedException, PartitionSpecInvalidException, PartitionAlreadyExistsException, CatalogException {
+
+               validateCatalogPartition(partition);
+
+               Table hiveTable = getHiveTable(tablePath);
+
+               ensurePartitionedTable(tablePath, hiveTable);
+
+               try {
+                       client.add_partition(createHivePartition(hiveTable, partitionSpec, partition));
 
 Review comment:
   Yes, but that means we would need to parse the exceptions returned by the HMS client to figure out the root cause. I think it's cleaner to do the check explicitly ourselves.
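
   To make the tradeoff concrete, here is a rough sketch of the two options. It assumes the code sits inside a HiveCatalogBase-like class where client, catalogName, getHiveTable, partitionExists and createHivePartition are available; the method names, the trimmed throws clauses and the exact PartitionAlreadyExistsException constructor are illustrative, not necessarily what this PR ends up with.

    // Assumed imports (existing classes): ObjectPath, CatalogPartition, CatalogPartitionSpec and the
    // catalog exceptions from org.apache.flink.table.catalog / .exceptions; Table and
    // AlreadyExistsException from org.apache.hadoop.hive.metastore.api; TException from org.apache.thrift.

    // Option A: let HMS detect the duplicate and map its exception back to a catalog exception.
    private void createPartitionViaHmsException(ObjectPath tablePath, CatalogPartitionSpec partitionSpec,
            CatalogPartition partition, boolean ignoreIfExists)
            throws TableNotExistException, PartitionSpecInvalidException, PartitionAlreadyExistsException, CatalogException {
        Table hiveTable = getHiveTable(tablePath);
        try {
            client.add_partition(createHivePartition(hiveTable, partitionSpec, partition));
        } catch (AlreadyExistsException e) {
            // The root cause has to be recognized from whatever exception type HMS throws back.
            if (!ignoreIfExists) {
                throw new PartitionAlreadyExistsException(catalogName, tablePath, partitionSpec);
            }
        } catch (TException e) {
            throw new CatalogException(
                String.format("Failed to create partition %s of table %s", partitionSpec, tablePath), e);
        }
    }

    // Option B: check for the partition explicitly before asking HMS to add it.
    private void createPartitionViaExplicitCheck(ObjectPath tablePath, CatalogPartitionSpec partitionSpec,
            CatalogPartition partition, boolean ignoreIfExists)
            throws TableNotExistException, PartitionSpecInvalidException, PartitionAlreadyExistsException, CatalogException {
        if (partitionExists(tablePath, partitionSpec)) {
            if (!ignoreIfExists) {
                throw new PartitionAlreadyExistsException(catalogName, tablePath, partitionSpec);
            }
            return;
        }
        Table hiveTable = getHiveTable(tablePath);
        try {
            client.add_partition(createHivePartition(hiveTable, partitionSpec, partition));
        } catch (TException e) {
            throw new CatalogException(
                String.format("Failed to create partition %s of table %s", partitionSpec, tablePath), e);
        }
    }

   Note that the explicit check in option B is not atomic with the add_partition call, so a concurrent writer could still create the partition in between; the point here is only that option B keeps the error mapping in our own code instead of depending on which exception type and message the HMS client happens to surface.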
