difin commented on code in PR #5248:
URL: https://github.com/apache/hive/pull/5248#discussion_r1619485404


##########
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/compact/AlterTableCompactOperation.java:
##########
@@ -141,11 +142,13 @@ private List<Partition> getPartitions(Table table, AlterTableCompactDesc desc, D
     } else {
       Map<String, String> partitionSpec = desc.getPartitionSpec();
       partitions = context.getDb().getPartitions(table, partitionSpec);
-      if (partitions.size() > 1) {
-        throw new HiveException(ErrorMsg.TOO_MANY_COMPACTION_PARTITIONS);
-      } else if (partitions.isEmpty()) {
+      if (partitions.isEmpty()) {
         throw new HiveException(ErrorMsg.INVALID_PARTITION_SPEC);
       }
+      partitions = partitions.stream().filter(part -> part.getSpec().size() == partitionSpec.size()).collect(Collectors.toList());

Review Comment:
   This validates that the partition spec given in the compaction command matches exactly one partition in the table, rather than a partial partition spec.
   
   Let's say a table has partitions with specs (a,b) and (a,b,c) because of partition evolution, and a compaction command is run with spec (a,b). On line 144 it will find both partitions; after filtering, only the (a,b) partition remains, so it will pass validation.
   
   In another case, assume the same table with partition specs (a,b) and (a,b,c), and a compaction command run with spec (a). On line 144 it will again find both partitions; after filtering, zero partitions remain, and validation will fail with a TOO_MANY_COMPACTION_PARTITIONS exception.
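   
   To make the two cases concrete, below is a minimal, self-contained sketch (not Hive code: the matched partitions are modeled as plain `Map`s, and the class and method names are hypothetical) of the size-based filter added in this diff:
   
```java
// Minimal sketch of the size-based partition-spec filter from the diff.
// Partition specs are modeled as plain Maps; names here are hypothetical.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SpecSizeFilterSketch {

  // Mirrors the filter in the diff: keep only partitions whose spec has the
  // same number of key/value pairs as the spec given in the compaction command.
  static List<Map<String, String>> filterBySpecSize(
      List<Map<String, String>> matchedPartitions, Map<String, String> requestedSpec) {
    return matchedPartitions.stream()
        .filter(spec -> spec.size() == requestedSpec.size())
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // Hypothetical table whose partitions have specs (a,b) and (a,b,c),
    // e.g. after partition evolution.
    List<Map<String, String>> matchedPartitions = List.of(
        Map.of("a", "1", "b", "2"),
        Map.of("a", "1", "b", "2", "c", "3"));

    // Case 1: compaction spec (a,b). Both partitions were found, but the size
    // filter keeps only the (a,b) one, so exactly one partition remains.
    System.out.println(filterBySpecSize(matchedPartitions, Map.of("a", "1", "b", "2")));

    // Case 2: compaction spec (a). Both partitions were found, but neither spec
    // has size 1, so the filter leaves an empty list and validation fails.
    System.out.println(filterBySpecSize(matchedPartitions, Map.of("a", "1")));
  }
}
```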




