rdblue commented on a change in pull request #3567:
URL: https://github.com/apache/iceberg/pull/3567#discussion_r754689302
##########
File path:
spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/actions/TestNewRewriteDataFilesAction.java
##########
@@ -1054,6 +1045,41 @@ private void writeDF(Dataset<Row> df) {
.save(tableLocation);
}
+  private List<DeleteFile> writePosDeletesToFile(Table table, DataFile dataFile,
+                                                 int outputDeleteFiles, int rowsPerDelete) {
+    return writePosDeletes(table, dataFile.partition(), dataFile.path().toString(),
+        outputDeleteFiles, rowsPerDelete);
+  }
+
+  private List<DeleteFile> writePosDeletes(Table table, StructLike partition, String path,
+                                           int outputDeleteFiles, int rowsPerDelete) {
+    List<DeleteFile> results = Lists.newArrayList();
+    int rowStart = 0;
+    for (int file = 0; file < outputDeleteFiles; file++) {
+      EncryptedOutputFile outputFile = EncryptedFiles.encryptedOutput(
+          table.io().newOutputFile(table.locationProvider().newDataLocation(UUID.randomUUID().toString())),
+          EncryptionKeyMetadata.EMPTY);
+      PositionDeleteWriter<Record> posDeleteWriter = new GenericAppenderFactory(table.schema(), table.spec(),
+          null, null, null)
Review comment:
Yep, looks great. Thanks! I know it's minor, but it really does help when I'm reading code!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]