[
https://issues.apache.org/jira/browse/HIVE-26156?focusedWorklogId=759239&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-759239
]
ASF GitHub Bot logged work on HIVE-26156:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 20/Apr/22 13:39
Start Date: 20/Apr/22 13:39
Worklog Time Spent: 10m
Work Description: marton-bod commented on code in PR #3225:
URL: https://github.com/apache/hive/pull/3225#discussion_r854147948
##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java:
##########
@@ -374,9 +373,12 @@ public DynamicPartitionCtx createDPContext(HiveConf hiveConf, org.apache.hadoop.
       fieldOrderMap.put(fields.get(i).name(), i);
     }
+    // deletes already use the bucket values in the partition_struct for sorting, so no need to add the sort expression
Review Comment:
Yes, good catch, I think we can avoid sorting by the other partition columns
too.
Issue Time Tracking
-------------------
Worklog Id: (was: 759239)
Time Spent: 40m (was: 0.5h)
> Iceberg delete writer should handle deleting from old partition specs
> ---------------------------------------------------------------------
>
> Key: HIVE-26156
> URL: https://issues.apache.org/jira/browse/HIVE-26156
> Project: Hive
> Issue Type: Bug
> Reporter: Marton Bod
> Assignee: Marton Bod
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> While {{HiveIcebergRecordWriter}} always writes data out according to the
> latest spec, the {{HiveIcebergDeleteWriter}} might have to write delete files
> into partitions that correspond to a variety of specs, both old and new.
> Therefore we should pass the {{table.specs()}} map into the
> {{HiveIcebergWriter}} so that the delete writer can choose the appropriate
> spec on a per-record basis.
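The per-record spec selection described above can be sketched as follows. This is a minimal, self-contained illustration using stand-in types (`Spec`, `DeleteRecord` are hypothetical simplifications, not the real Iceberg `PartitionSpec` or delete-record classes): the writer holds the full spec-id-to-spec map, as {{table.specs()}} would provide, and resolves the spec for each delete record from the spec id of the data file it targets, rather than assuming the table's latest spec.

```java
import java.util.HashMap;
import java.util.Map;

public class PerSpecDeleteSketch {

  // Stand-in for org.apache.iceberg.PartitionSpec (hypothetical simplification).
  static final class Spec {
    final int specId;
    final String description;

    Spec(int specId, String description) {
      this.specId = specId;
      this.description = description;
    }
  }

  // Stand-in for a delete record that carries the partition spec id
  // of the data file it deletes from (hypothetical simplification).
  static final class DeleteRecord {
    final int specId;
    final String dataFilePath;

    DeleteRecord(int specId, String dataFilePath) {
      this.specId = specId;
      this.dataFilePath = dataFilePath;
    }
  }

  // All specs the table has ever had, keyed by spec id,
  // mirroring what table.specs() returns.
  private final Map<Integer, Spec> specs;

  PerSpecDeleteSketch(Map<Integer, Spec> specs) {
    this.specs = specs;
  }

  // Resolve the spec per record: old data files keep their original spec id,
  // so delete files for them must be written with that old spec.
  Spec specFor(DeleteRecord record) {
    Spec spec = specs.get(record.specId);
    if (spec == null) {
      throw new IllegalArgumentException("Unknown spec id: " + record.specId);
    }
    return spec;
  }

  public static void main(String[] args) {
    Map<Integer, Spec> specs = new HashMap<>();
    specs.put(0, new Spec(0, "unpartitioned"));
    specs.put(1, new Spec(1, "bucket(16, id)"));
    PerSpecDeleteSketch writer = new PerSpecDeleteSketch(specs);

    // Deletes against files written under different historical specs
    // each get routed to the matching spec.
    System.out.println(writer.specFor(new DeleteRecord(0, "old.parquet")).description);
    System.out.println(writer.specFor(new DeleteRecord(1, "new.parquet")).description);
  }
}
```

The key design point is that the map is keyed by spec id, so the lookup stays O(1) per record even when the table has gone through several partition evolutions.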
--
This message was sent by Atlassian Jira
(v8.20.7#820007)