openinx commented on a change in pull request #1802:
URL: https://github.com/apache/iceberg/pull/1802#discussion_r528287777



##########
File path: spark/src/main/java/org/apache/iceberg/spark/source/SparkAppenderFactory.java
##########
@@ -86,4 +98,81 @@
       throw new RuntimeIOException(e);
     }
   }
+
+  @Override
+  public DataWriter<InternalRow> newDataWriter(EncryptedOutputFile file, FileFormat fileFormat, StructLike partition) {
+    return new DataWriter<>(
+        newAppender(file.encryptingOutputFile(), fileFormat), fileFormat,
+        file.encryptingOutputFile().location(), spec, partition, file.keyMetadata());
+  }
+
+  @Override
+  public EqualityDeleteWriter<InternalRow> newEqDeleteWriter(EncryptedOutputFile outputFile, FileFormat format,
+                                                             StructLike partition) {
+    try {
+      switch (format) {
+        case PARQUET:
+          return Parquet.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(msgType -> SparkParquetWriters.buildWriter(dsSchema, msgType))
+              .overwrite()
+              .rowSchema(writeSchema)
+              .withSpec(spec)
+              .withPartition(partition)
+              .equalityFieldIds(equalityFieldIds)
+              .withKeyMetadata(outputFile.keyMetadata())
+              .buildEqualityWriter();
+
+        case AVRO:
+          return Avro.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(ignored -> new SparkAvroWriter(dsSchema))
+              .overwrite()
+              .rowSchema(writeSchema)
+              .withSpec(spec)
+              .withPartition(partition)
+              .equalityFieldIds(equalityFieldIds)
+              .withKeyMetadata(outputFile.keyMetadata())
+              .buildEqualityWriter();
+
+        default:
+          throw new UnsupportedOperationException("Cannot write unsupported format: " + format);
+      }
+
+    } catch (IOException e) {
+      throw new UncheckedIOException("Failed to create new equality delete writer", e);
+    }
+  }
+
+  @Override
+  public PositionDeleteWriter<InternalRow> newPosDeleteWriter(EncryptedOutputFile outputFile, FileFormat format,
+                                                              StructLike partition) {
+    try {
+      switch (format) {
+        case PARQUET:
+          return Parquet.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(msgType -> SparkParquetWriters.buildWriter(dsSchema, msgType))

Review comment:
       Here we will need to provide a schema that combines `writeSchema` with the file and pos columns when constructing the `ParquetValueWriter`, because `PositionDeleteStructWriter` treats the user-provided row together with the file and pos columns as a single record when writing to the target file.
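
       As a rough sketch (not the exact code for this PR; the `MetadataColumns` constants and the nullability of the `row` field here are assumptions), the combined position-delete schema could be built like this:

```java
import org.apache.iceberg.MetadataColumns;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

// Combine the reserved file_path/pos metadata columns with the row schema so
// that the ParquetValueWriter sees the full (file_path, pos, row) record that
// PositionDeleteStructWriter emits. Field nullability is illustrative.
static Schema posDeleteSchema(Schema writeSchema) {
  return new Schema(
      MetadataColumns.DELETE_FILE_PATH,              // required string: path of the data file
      MetadataColumns.DELETE_FILE_POS,               // required long: row position in that file
      Types.NestedField.required(
          MetadataColumns.DELETE_FILE_ROW_FIELD_ID,  // reserved field id for the deleted row
          "row",
          writeSchema.asStruct()));
}
```

       The writer function for the PARQUET case would then be built from this combined schema instead of `dsSchema`, e.g. converting it with `SparkSchemaUtil.convert(...)` before passing it to `SparkParquetWriters.buildWriter(...)`.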



