[
https://issues.apache.org/jira/browse/HIVE-26156?focusedWorklogId=759237&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-759237
]
ASF GitHub Bot logged work on HIVE-26156:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 20/Apr/22 13:37
Start Date: 20/Apr/22 13:37
Worklog Time Spent: 10m
Work Description: marton-bod commented on code in PR #3225:
URL: https://github.com/apache/hive/pull/3225#discussion_r854146249
##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergRecordWriter.java:
##########
@@ -37,17 +38,17 @@
 class HiveIcebergRecordWriter extends HiveIcebergWriter {

-  HiveIcebergRecordWriter(Schema schema, PartitionSpec spec, FileFormat format,
+  HiveIcebergRecordWriter(Schema schema, Map<Integer, PartitionSpec> specs, FileFormat format,
       FileWriterFactory<Record> fileWriterFactory, OutputFileFactory fileFactory,
       FileIO io, long targetFileSize, TaskAttemptID taskAttemptID, String tableName) {
-    super(schema, spec, io, taskAttemptID, tableName,
+    super(schema, specs, io, taskAttemptID, tableName,
         new ClusteredDataWriter<>(fileWriterFactory, fileFactory, io, format, targetFileSize));
   }

   @Override
   public void write(Writable row) throws IOException {
     Record record = ((Container<Record>) row).get();
-    writer.write(record, spec, partition(record));
+    writer.write(record, specs.get(specs.size() - 1), partition(record));
Review Comment:
Good catch, I did not know that Iceberg reused the old spec in step 3. Will
store the latest spec in the record writer then.
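The change the comment describes can be sketched as follows (a minimal illustration with stand-in types, not the actual Hive/Iceberg classes; it assumes the highest spec ID in the map identifies the most recent spec, which holds for ordinary spec evolution where IDs grow monotonically):

```java
import java.util.Collections;
import java.util.Map;

// Sketch: resolve the latest partition spec from the specs map once, at
// construction time, instead of on every write() call. Strings stand in
// for Iceberg's PartitionSpec here. Picking the max key is safer than
// specs.get(specs.size() - 1), which silently returns null if the spec
// IDs are not contiguous from zero.
class LatestSpecPicker {
  static int latestSpecId(Map<Integer, String> specsById) {
    return Collections.max(specsById.keySet());
  }
}
```

Caching the resolved spec in a field also keeps the per-record write path free of any map lookup.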
Issue Time Tracking
-------------------
Worklog Id: (was: 759237)
Time Spent: 0.5h (was: 20m)
> Iceberg delete writer should handle deleting from old partition specs
> ---------------------------------------------------------------------
>
> Key: HIVE-26156
> URL: https://issues.apache.org/jira/browse/HIVE-26156
> Project: Hive
> Issue Type: Bug
> Reporter: Marton Bod
> Assignee: Marton Bod
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> While {{HiveIcebergRecordWriter}} always writes data out according to the
> latest spec, the {{HiveIcebergDeleteWriter}} might have to write delete files
> into partitions that correspond to a variety of specs, both old and new.
> Therefore we should pass the {{table.specs()}} map into the
> {{HiveIcebergWriter}} so that the delete writer can choose the appropriate
> spec on a per-record basis.
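The per-record spec lookup the description calls for might look like the sketch below (stand-in types again; the `recordSpecId` parameter is a hypothetical stand-in for however the delete record identifies the spec of the data file it targets):

```java
import java.util.Map;

// Sketch: a delete writer resolving the partition spec per record. The
// specs map (keyed by spec ID, as returned by table.specs()) is passed in,
// and each delete is resolved against the spec of its target data file,
// which may be an old spec rather than the table's latest one.
class DeleteSpecResolver {
  private final Map<Integer, String> specsById; // stand-in for Map<Integer, PartitionSpec>

  DeleteSpecResolver(Map<Integer, String> specsById) {
    this.specsById = specsById;
  }

  // An unknown spec ID indicates a bug upstream, so fail loudly rather
  // than fall back to the latest spec.
  String specFor(int recordSpecId) {
    String spec = specsById.get(recordSpecId);
    if (spec == null) {
      throw new IllegalArgumentException("Unknown partition spec id: " + recordSpecId);
    }
    return spec;
  }
}
```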
--
This message was sent by Atlassian Jira
(v8.20.7#820007)