rdblue commented on a change in pull request #1802:
URL: https://github.com/apache/iceberg/pull/1802#discussion_r528934829



##########
File path: 
spark/src/main/java/org/apache/iceberg/spark/source/SparkAppenderFactory.java
##########
@@ -86,4 +98,81 @@
       throw new RuntimeIOException(e);
     }
   }
+
+  @Override
+  public DataWriter<InternalRow> newDataWriter(EncryptedOutputFile file, FileFormat fileFormat, StructLike partition) {
+    return new DataWriter<>(
+        newAppender(file.encryptingOutputFile(), fileFormat), fileFormat,
+        file.encryptingOutputFile().location(), spec, partition, file.keyMetadata());
+  }
+
+  @Override
+  public EqualityDeleteWriter<InternalRow> newEqDeleteWriter(EncryptedOutputFile outputFile, FileFormat format,
+                                                             StructLike partition) {
+    try {
+      switch (format) {
+        case PARQUET:
+          return Parquet.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(msgType -> SparkParquetWriters.buildWriter(dsSchema, msgType))
+              .overwrite()
+              .rowSchema(writeSchema)
+              .withSpec(spec)
+              .withPartition(partition)
+              .equalityFieldIds(equalityFieldIds)
+              .withKeyMetadata(outputFile.keyMetadata())
+              .buildEqualityWriter();
+
+        case AVRO:
+          return Avro.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(ignored -> new SparkAvroWriter(dsSchema))
+              .overwrite()
+              .rowSchema(writeSchema)
+              .withSpec(spec)
+              .withPartition(partition)
+              .equalityFieldIds(equalityFieldIds)
+              .withKeyMetadata(outputFile.keyMetadata())
+              .buildEqualityWriter();
+
+        default:
+          throw new UnsupportedOperationException("Cannot write unsupported format: " + format);
+      }
+
+    } catch (IOException e) {
+      throw new UncheckedIOException("Failed to create new equality delete writer", e);
+    }
+  }
+
+  @Override
+  public PositionDeleteWriter<InternalRow> newPosDeleteWriter(EncryptedOutputFile outputFile, FileFormat format,
+                                                              StructLike partition) {
+    try {
+      switch (format) {
+        case PARQUET:
+          return Parquet.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(msgType -> SparkParquetWriters.buildWriter(dsSchema, msgType))

Review comment:
       The builder will automatically wrap the row schema and the writer function passed here to add the extra position-delete schema layer. So we only need to configure this for rows, not for the combined schema. That's part of why the position writer's `delete` method accepts the file and position independently of the row: it keeps that concern encapsulated in the writer rather than leaking it into places like this.
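
To make that concrete, here is a minimal sketch (not the final PR code) of the suggested shape, assuming the builder wraps the row schema and writer function as described above; it relies on `buildPositionWriter()` on the delete builder and the `delete(path, pos)` / `delete(path, pos, row)` overloads on `PositionDeleteWriter`:

```java
// Sketch only: configure the builder for rows; the builder adds the
// file_path/pos schema layer itself.
case PARQUET:
  return Parquet.writeDeletes(outputFile.encryptingOutputFile())
      .createWriterFunc(msgType -> SparkParquetWriters.buildWriter(dsSchema, msgType))
      .overwrite()
      .rowSchema(writeSchema)  // row schema only, not the combined schema
      .withSpec(spec)
      .withPartition(partition)
      .withKeyMetadata(outputFile.keyMetadata())
      .buildPositionWriter();
```

Callers then pass the file and position separately from the row, so the combined record shape never leaks out of the writer:

```java
// Sketch only: appenderFactory, dataFilePath, position, and deletedRow are
// hypothetical caller-side values used for illustration.
PositionDeleteWriter<InternalRow> deleteWriter =
    appenderFactory.newPosDeleteWriter(outputFile, FileFormat.PARQUET, partition);
deleteWriter.delete(dataFilePath, position);              // position-only delete
deleteWriter.delete(dataFilePath, position, deletedRow);  // delete that also records the row
deleteWriter.close();
```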



