szehon-ho commented on code in PR #4764:
URL: https://github.com/apache/iceberg/pull/4764#discussion_r873968348
##########
spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSourceTablesBase.java:
##########
@@ -1457,4 +1514,28 @@ private void asMetadataRecord(GenericData.Record file) {
file.put(0, FileContent.DATA.id());
file.put(3, 0); // specId
}
+
+  private PositionDeleteWriter<InternalRow> newPositionDeleteWriter(Table table, PartitionSpec spec,
+                                                                    StructLike partition) {
+    OutputFileFactory fileFactory = OutputFileFactory.builderFor(table, 0, 0).build();
+    EncryptedOutputFile outputFile = fileFactory.newOutputFile(spec, partition);
+
+    SparkFileWriterFactory fileWriterFactory = SparkFileWriterFactory.builderFor(table).build();
+    return fileWriterFactory.newPositionDeleteWriter(outputFile, spec, partition);
+  }
+
+  private DeleteFile writePositionDeletes(Table table, PartitionSpec spec, StructLike partition,
+                                          Iterable<PositionDelete<InternalRow>> deletes) {
+    PositionDeleteWriter<InternalRow> positionDeleteWriter = newPositionDeleteWriter(table, spec, partition);
+
+    try (PositionDeleteWriter<InternalRow> writer = positionDeleteWriter) {
+      for (PositionDelete<InternalRow> delete : deletes) {
+        writer.write(delete);
+      }
+    } catch (IOException e) {
Review Comment:
For test methods I usually prefer declaring "throws Exception" instead of writing catch blocks, to reduce lines of code, but that may just be personal preference.
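
A rough, untested sketch of what I mean (the toDeleteFile() call at the end is my guess at how the helper returns the DeleteFile, since the hunk above is truncated before the return):

  private DeleteFile writePositionDeletes(Table table, PartitionSpec spec, StructLike partition,
                                          Iterable<PositionDelete<InternalRow>> deletes) throws IOException {
    // Declaring the checked exception here (and "throws Exception" on the
    // calling test method) removes the need for a local catch block.
    PositionDeleteWriter<InternalRow> positionDeleteWriter = newPositionDeleteWriter(table, spec, partition);

    try (PositionDeleteWriter<InternalRow> writer = positionDeleteWriter) {
      for (PositionDelete<InternalRow> delete : deletes) {
        writer.write(delete);
      }
    }

    // toDeleteFile() is only valid once the writer has been closed
    return positionDeleteWriter.toDeleteFile();
  }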