openinx commented on a change in pull request #1802:
URL: https://github.com/apache/iceberg/pull/1802#discussion_r529192386



##########
File path: spark/src/main/java/org/apache/iceberg/spark/source/SparkAppenderFactory.java
##########
@@ -86,4 +98,81 @@
       throw new RuntimeIOException(e);
     }
   }
+
+  @Override
+  public DataWriter<InternalRow> newDataWriter(EncryptedOutputFile file, FileFormat fileFormat, StructLike partition) {
+    return new DataWriter<>(
+        newAppender(file.encryptingOutputFile(), fileFormat), fileFormat,
+        file.encryptingOutputFile().location(), spec, partition, file.keyMetadata());
+  }
+
+  @Override
+  public EqualityDeleteWriter<InternalRow> newEqDeleteWriter(EncryptedOutputFile outputFile, FileFormat format,
+                                                             StructLike partition) {
+    try {
+      switch (format) {
+        case PARQUET:
+          return Parquet.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(msgType -> SparkParquetWriters.buildWriter(dsSchema, msgType))
+              .overwrite()
+              .rowSchema(writeSchema)
+              .withSpec(spec)
+              .withPartition(partition)
+              .equalityFieldIds(equalityFieldIds)
+              .withKeyMetadata(outputFile.keyMetadata())
+              .buildEqualityWriter();
+
+        case AVRO:
+          return Avro.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(ignored -> new SparkAvroWriter(dsSchema))
+              .overwrite()
+              .rowSchema(writeSchema)
+              .withSpec(spec)
+              .withPartition(partition)
+              .equalityFieldIds(equalityFieldIds)
+              .withKeyMetadata(outputFile.keyMetadata())
+              .buildEqualityWriter();
+
+        default:
+          throw new UnsupportedOperationException("Cannot write unsupported format: " + format);
+      }
+
+    } catch (IOException e) {
+      throw new UncheckedIOException("Failed to create new equality delete writer", e);
+    }
+  }
+
+  @Override
+  public PositionDeleteWriter<InternalRow> newPosDeleteWriter(EncryptedOutputFile outputFile, FileFormat format,
+                                                              StructLike partition) {
+    try {
+      switch (format) {
+        case PARQUET:
+          return Parquet.writeDeletes(outputFile.encryptingOutputFile())
+              .createWriterFunc(msgType -> SparkParquetWriters.buildWriter(dsSchema, msgType))

Review comment:
       In `createWriterFunc(msgType -> SparkParquetWriters.buildWriter(dsSchema, msgType))`, the `msgType` includes the `file` and `pos` columns, but the `dsSchema` does not. We need to pass a `dsSchema` that matches the `msgType` exactly. It's similar to this:
https://github.com/apache/iceberg/pull/1663/files#diff-7f498f01885f6e813bc3892c8dfb02b8893365540438b78b3a0221f9c8667c8fR211
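
For illustration, a rough sketch of the direction this suggests (the `MetadataColumns.DELETE_FILE_PATH` / `DELETE_FILE_POS` constants and `SparkSchemaUtil.convert` are assumptions here; the actual fix may build the schema differently):

```java
import org.apache.iceberg.MetadataColumns;
import org.apache.iceberg.Schema;
import org.apache.iceberg.spark.SparkSchemaUtil;
import org.apache.spark.sql.types.StructType;

// Inside newPosDeleteWriter(...): build a schema that matches the delete
// file's msgType (file_path + pos) instead of reusing the table's dsSchema.
// If the deleted rows are also written, the row schema must be appended too.
Schema pathPosSchema = new Schema(
    MetadataColumns.DELETE_FILE_PATH,
    MetadataColumns.DELETE_FILE_POS);
StructType deleteSparkType = SparkSchemaUtil.convert(pathPosSchema);

return Parquet.writeDeletes(outputFile.encryptingOutputFile())
    .createWriterFunc(msgType -> SparkParquetWriters.buildWriter(deleteSparkType, msgType))
    .overwrite()
    .withSpec(spec)
    .withPartition(partition)
    .withKeyMetadata(outputFile.keyMetadata())
    .buildPositionWriter();
```

The point is that the `StructType` handed to `SparkParquetWriters.buildWriter` is derived from the same schema the writer declares in `msgType`, so the two stay in sync.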
 



