kasakrisz commented on code in PR #4672:
URL: https://github.com/apache/hive/pull/4672#discussion_r1327263251


##########
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/misc/truncate/TruncateTableAnalyzer.java:
##########
@@ -338,4 +370,47 @@ private void addStatTask(ASTNode root, Table table, Path oldPartitionLocation, P
      moveTask.addDependentTask(statTask);
    }
  }
+
+  public StringBuilder constructDeleteQuery(Table table, Map<String, String> partitionSpec) throws SemanticException {
+    String qualifiedTableName = HiveTableName.ofNullable(HiveUtils.unparseIdentifier(table.getTableName(), this.conf),
+            HiveUtils.unparseIdentifier(table.getDbName(), this.conf), null).getNotEmptyDbTable();
+    StringBuilder sb = new StringBuilder().append("delete from ").append(qualifiedTableName)
+            .append(" where ");
+    List<String> keyList = new ArrayList<String>(partitionSpec.keySet());
+    Deserializer deserializer = table.getDeserializer();
+    Map<String, PrimitiveObjectInspector.PrimitiveCategory> stringTypeInfoMap = new HashMap<>();
+    try {
+      ObjectInspector objectInspector = deserializer.getObjectInspector();
+      if (objectInspector.getCategory() == ObjectInspector.Category.STRUCT) {
+        StructObjectInspector structObjectInspector = (StructObjectInspector) objectInspector;
+        List<? extends StructField> structFields = structObjectInspector.getAllStructFieldRefs();
+        for (StructField structField : structFields) {
+          if (structField.getFieldObjectInspector().getCategory() == ObjectInspector.Category.PRIMITIVE) {
+            PrimitiveObjectInspector primitiveObjectInspector = (PrimitiveObjectInspector) structField.getFieldObjectInspector();
+            stringTypeInfoMap.put(structField.getFieldName(),
+                primitiveObjectInspector.getTypeInfo().getPrimitiveCategory());
+          }
+        }
+      }
+    } catch (SerDeException e) {
+      throw new SemanticException(String.format("Unable to get object inspector due to: %s", e));
+    }
+    for (int index = 0; index < keyList.size(); index++) {
+      String key = keyList.get(index);
+      PrimitiveObjectInspector.PrimitiveCategory category = stringTypeInfoMap.get(key);
+      if (category == null) {
+        throw new SemanticException(String.format(
+                "The column %s is a complex type column which is not supported via TRUNCATE operation.", key));
+      }
+      String value = partitionSpec.get(key);
+      boolean shouldEncloseQuotes = TypeInfoUtils.shouldEncloseQuotes(category);
+      sb.append(index == 0 ? "" : " and ").append(key).append(" = ");
+      if (shouldEncloseQuotes) {
+        sb.append("\'").append(value).append("\'");

Review Comment:
   I think the original approach
   ```
   sb.append("'").append(value).append("'");
   ```
   was good.
   My question is about values that contain the `'` character. Those must be doubled, but I'm not sure whether we support partition values with the `'` character in them. Could you please check?
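
   A minimal sketch of the quote-doubling described above (a hypothetical helper for illustration only, not code from this PR):
   ```java
   // Hypothetical sketch: enclose a partition value in single quotes, doubling
   // any embedded ' so the resulting SQL string literal stays valid.
   public class PartitionValueQuoting {

       static String quoteValue(String value) {
           return "'" + value.replace("'", "''") + "'";
       }

       public static void main(String[] args) {
           System.out.println(quoteValue("2023-09-15")); // '2023-09-15'
           System.out.println(quoteValue("O'Brien"));    // 'O''Brien'
       }
   }
   ```
   Without the doubling, a value such as `O'Brien` would terminate the literal early and produce a malformed (or injectable) WHERE clause.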



##########
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/misc/truncate/TruncateTableAnalyzer.java:
##########
@@ -338,4 +370,47 @@ private void addStatTask(ASTNode root, Table table, Path oldPartitionLocation, P
      moveTask.addDependentTask(statTask);
    }
  }
+
+  public StringBuilder constructDeleteQuery(Table table, Map<String, String> partitionSpec) throws SemanticException {
+    String qualifiedTableName = HiveTableName.ofNullable(HiveUtils.unparseIdentifier(table.getTableName(), this.conf),
+            HiveUtils.unparseIdentifier(table.getDbName(), this.conf), null).getNotEmptyDbTable();
+    StringBuilder sb = new StringBuilder().append("delete from ").append(qualifiedTableName)
+            .append(" where ");
+    List<String> keyList = new ArrayList<String>(partitionSpec.keySet());
+    Deserializer deserializer = table.getDeserializer();
+    Map<String, PrimitiveObjectInspector.PrimitiveCategory> stringTypeInfoMap = new HashMap<>();
+    try {
+      ObjectInspector objectInspector = deserializer.getObjectInspector();
+      if (objectInspector.getCategory() == ObjectInspector.Category.STRUCT) {
+        StructObjectInspector structObjectInspector = (StructObjectInspector) objectInspector;
+        List<? extends StructField> structFields = structObjectInspector.getAllStructFieldRefs();
+        for (StructField structField : structFields) {
+          if (structField.getFieldObjectInspector().getCategory() == ObjectInspector.Category.PRIMITIVE) {
+            PrimitiveObjectInspector primitiveObjectInspector = (PrimitiveObjectInspector) structField.getFieldObjectInspector();
+            stringTypeInfoMap.put(structField.getFieldName(),
+                primitiveObjectInspector.getTypeInfo().getPrimitiveCategory());
+          }
+        }
+      }
+    } catch (SerDeException e) {
+      throw new SemanticException(String.format("Unable to get object inspector due to: %s", e));
+    }
+    for (int index = 0; index < keyList.size(); index++) {
+      String key = keyList.get(index);
+      PrimitiveObjectInspector.PrimitiveCategory category = stringTypeInfoMap.get(key);
+      if (category == null) {
+        throw new SemanticException(String.format(
+                "The column %s is a complex type column which is not supported via TRUNCATE operation.", key));

Review Comment:
   I think it is worth printing what is supported. How about
   ```
   "Only primitive partition column types are supported via TRUNCATE operation but column %s is %s", key, category
   ```
   ?
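
   For illustration, the suggested message could be produced like this (a sketch only; `key` and `category` are stand-in values, not the real loop variables — note that in the branch above `category` is null, so the actual column type would need to be looked up separately to print something meaningful):
   ```java
   // Sketch of the suggested error message, with hypothetical stand-in values.
   public class TruncateErrorMessage {

       static String message(String key, Object category) {
           return String.format(
               "Only primitive partition column types are supported via TRUNCATE operation but column %s is %s",
               key, category);
       }

       public static void main(String[] args) {
           System.out.println(message("address", "struct"));
       }
   }
   ```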



##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java:
##########
@@ -1644,4 +1647,74 @@ public void validatePartSpec(org.apache.hadoop.hive.ql.metadata.Table hmsTable,
      }
    }
  }
+
+  /**
+   * A function to decide whether a given truncate query can perform a metadata delete or not.
+   * If it is not possible to perform a metadata delete, then try to perform a positional delete.
+   * @param hmsTable A Hive table instance.
+   * @param partitionSpec Map containing the partition specification given by the user.
+   * @return true if we can perform a metadata delete, otherwise false.
+   * @throws SemanticException Exception raised when a partition transform is being used
+   * or when a partition column is not present in the table.
+   */

Review Comment:
   Thanks for the comment. The method name `shouldTruncate` tells me that this method makes a decision based on its actual parameters. However, the method body does not seem to be self-explanatory. Could you please extend this Javadoc with the steps of the decision making?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

