pvary commented on a change in pull request #2701:
URL: https://github.com/apache/hive/pull/2701#discussion_r724388854



##########
File path: 
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergRecordWriter.java
##########
@@ -83,7 +102,29 @@ protected PartitionKey partition(Record row) {
 
   @Override
   public void write(Writable row) throws IOException {
-    super.write(((Container<Record>) row).get());
+    if (!isDelete) {
+      super.write(((Container<Record>) row).get());
+    } else {
+      Record rec = ((Container<Record>) row).get();
+      Record actualRow = GenericRecord.create(schema);
+      for (int i = 2; i < rec.size(); ++i) {
+        actualRow.set(i - 2, rec.get(i));
+      }
+      if (!spec.isUnpartitioned()) {
+        currentKey.partition(actualRow);
+      }
+      // for now, we always create parquet delete writer
+      PositionDeleteWriter<Record> deleteWriter =

Review comment:
       Is `BaseFileWriterFactory` not available to us yet?
   
   We might want to create a `GenericFileWriterFactory` and use 
`BaseFileWriterFactory.newPositionDeleteWriter` to obtain the writer.
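
   A rough sketch of what that could look like, assuming the bundled Iceberg version already ships the `FileWriterFactory` API; `table`, `outputFile`, and `partitionKey` here are hypothetical stand-ins for state the record writer would already hold:

   ```java
   import org.apache.iceberg.FileFormat;
   import org.apache.iceberg.data.GenericFileWriterFactory;
   import org.apache.iceberg.data.Record;
   import org.apache.iceberg.deletes.PositionDeleteWriter;
   import org.apache.iceberg.io.FileWriterFactory;

   // Build the factory once per writer instead of hand-constructing a
   // Parquet-specific delete writer (assumes `table` is available here).
   FileWriterFactory<Record> writerFactory = GenericFileWriterFactory.builderFor(table)
       .deleteFileFormat(FileFormat.PARQUET)   // keeps the current Parquet-only behavior
       .positionDeleteRowSchema(schema)        // schema of the deleted rows, if we store them
       .build();

   // `outputFile` is an EncryptedOutputFile for the delete file;
   // `partitionKey` is the key computed from the row (null when unpartitioned).
   PositionDeleteWriter<Record> deleteWriter =
       writerFactory.newPositionDeleteWriter(outputFile, spec, partitionKey);
   ```

   That would also remove the hardcoded "always Parquet" assumption from this method.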




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


