bowenli86 commented on a change in pull request #8449: [FLINK-12235][hive] Support partition related operations in HiveCatalog
URL: https://github.com/apache/flink/pull/8449#discussion_r285215791
 
 

 ##########
 File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
 ##########
 @@ -200,44 +200,56 @@ protected Table createHiveTable(ObjectPath tablePath, CatalogBaseTable table) {
        // ------ partitions ------
 
        @Override
-       public void createPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition partition, boolean ignoreIfExists)
-                       throws TableNotExistException, TableNotPartitionedException, PartitionSpecInvalidException, PartitionAlreadyExistsException, CatalogException {
-               throw new UnsupportedOperationException();
-       }
-
-       @Override
-       public void dropPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, boolean ignoreIfNotExists)
-                       throws PartitionNotExistException, CatalogException {
-               throw new UnsupportedOperationException();
-       }
-
-       @Override
-       public void alterPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition newPartition, boolean ignoreIfNotExists)
-                       throws PartitionNotExistException, CatalogException {
-               throw new UnsupportedOperationException();
-       }
-
-       @Override
-       public List<CatalogPartitionSpec> listPartitions(ObjectPath tablePath)
-                       throws TableNotExistException, TableNotPartitionedException, CatalogException {
-               throw new UnsupportedOperationException();
+       protected void validateCatalogPartition(CatalogPartition catalogPartition) throws CatalogException {
+               if (!(catalogPartition instanceof HiveCatalogPartition)) {
+                       throw new CatalogException(String.format("%s can only handle %s but got %s", getClass().getSimpleName(),
+                               HiveCatalogPartition.class.getSimpleName(), catalogPartition.getClass().getSimpleName()));
+               }
        }
 
        @Override
-       public List<CatalogPartitionSpec> listPartitions(ObjectPath tablePath, CatalogPartitionSpec partitionSpec)
-                       throws TableNotExistException, TableNotPartitionedException, CatalogException {
-               throw new UnsupportedOperationException();
-       }
+       protected Partition createHivePartition(Table hiveTable, CatalogPartitionSpec partitionSpec, CatalogPartition catalogPartition)
+               throws PartitionSpecInvalidException {
+               Partition partition = new Partition();
+               List<String> partCols = getFieldNames(hiveTable.getPartitionKeys());
+               List<String> partValues = getFullPartitionValues(new ObjectPath(hiveTable.getDbName(), hiveTable.getTableName()),
+                       partitionSpec, partCols);
+               // validate partition values
+               for (int i = 0; i < partCols.size(); i++) {
+                       if 
(StringUtils.isNullOrWhitespaceOnly(partValues.get(i))) {
+                               throw new 
PartitionSpecInvalidException(catalogName, partCols,
+                                       new ObjectPath(hiveTable.getDbName(), 
hiveTable.getTableName()), partitionSpec);
+                       }
+               }
+               HiveCatalogPartition hiveCatalogPartition = (HiveCatalogPartition) catalogPartition;
+               partition.setValues(partValues);
+               partition.setDbName(hiveTable.getDbName());
+               partition.setTableName(hiveTable.getTableName());
+               partition.setCreateTime((int) (System.currentTimeMillis() / 1000));
+               partition.setParameters(hiveCatalogPartition.getProperties());
+               partition.setSd(hiveTable.getSd().deepCopy());
+
+               String location = hiveCatalogPartition.getLocation();
+               if (null == location) {
 
 Review comment:
   I see. The root cause of the missing location seems to be that we assign the table's StorageDescriptor to the new partition rather than the old partition's, in line 230. I think we can address this by falling back to the old partition's location when the new partition doesn't have one. We already retrieve the old partition from HMS in `partitionExist()` (as a result, we may need to refactor how `alterPartition()` checks whether a partition exists). It's similar to how `alterTable()` works.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
