[ https://issues.apache.org/jira/browse/HIVE-26102?focusedWorklogId=753264&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-753264 ]
ASF GitHub Bot logged work on HIVE-26102:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 06/Apr/22 09:05
Start Date: 06/Apr/22 09:05
Worklog Time Spent: 10m
Work Description: pvary commented on code in PR #3131:
URL: https://github.com/apache/hive/pull/3131#discussion_r843693178
##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java:
##########
@@ -118,18 +120,23 @@ public void commitTask(TaskAttemptContext originalContext) throws IOException {
         .run(output -> {
           Table table = HiveIcebergStorageHandler.table(context.getJobConf(), output);
           if (table != null) {
-            HiveIcebergRecordWriter writer = writers.get(output);
-            DataFile[] closedFiles;
+            HiveIcebergWriter writer = writers.get(output);
+            HiveIcebergWriter delWriter = delWriters.get(output);
+            String fileForCommitLocation = generateFileForCommitLocation(table.location(), jobConf,
+                attemptID.getJobID(), attemptID.getTaskID().getId());
+            if (delWriter != null) {
+              DeleteFile[] closedFiles = delWriter.deleteFiles().toArray(new DeleteFile[0]);
+              createFileForCommit(closedFiles, fileForCommitLocation, table.io());
Review Comment:
Maybe we can create a slightly more complex data structure to serialise. I think creating/reading back the S3 files is where we will spend some serious time.
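
A minimal sketch of the kind of combined structure the comment suggests (the class name FilesForCommit and its shape are hypothetical illustrations, not code from this PR). Bundling the task's data files and delete files into one serializable object would let commitTask() write a single commit file per task attempt, and the job committer read it back in a single S3 round trip:

// Hypothetical container, not from the PR: one serialized object per task
// instead of separate DataFile[] and DeleteFile[] commit files.
// Assumes the concrete DataFile/DeleteFile implementations are
// Java-serializable, as Iceberg's generic ones are.
import java.io.Serializable;
import java.util.Collection;

import org.apache.iceberg.DataFile;
import org.apache.iceberg.DeleteFile;

public class FilesForCommit implements Serializable {

  private final Collection<DataFile> dataFiles;
  private final Collection<DeleteFile> deleteFiles;

  public FilesForCommit(Collection<DataFile> dataFiles, Collection<DeleteFile> deleteFiles) {
    this.dataFiles = dataFiles;
    this.deleteFiles = deleteFiles;
  }

  public Collection<DataFile> dataFiles() {
    return dataFiles;
  }

  public Collection<DeleteFile> deleteFiles() {
    return deleteFiles;
  }
}

commitTask() could then serialize one FilesForCommit per task attempt to the fileForCommitLocation, halving the number of small files created and read back on S3 for write paths that produce both data and delete files.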
Issue Time Tracking
-------------------
Worklog Id: (was: 753264)
Time Spent: 6h (was: 5h 50m)
> Implement DELETE statements for Iceberg tables
> ----------------------------------------------
>
> Key: HIVE-26102
> URL: https://issues.apache.org/jira/browse/HIVE-26102
> Project: Hive
> Issue Type: New Feature
> Reporter: Marton Bod
> Assignee: Marton Bod
> Priority: Major
> Labels: pull-request-available
> Time Spent: 6h
> Remaining Estimate: 0h
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)