openinx commented on a change in pull request #1704:
URL: https://github.com/apache/iceberg/pull/1704#discussion_r533277424
##########
File path: flink/src/test/java/org/apache/iceberg/flink/actions/TestRewriteDataFilesAction.java
##########
@@ -280,4 +290,79 @@ public void testRewriteLargeTableHasResiduals() throws IOException {
// Assert the table records as expected.
SimpleDataUtil.assertTableRecords(icebergTableUnPartitioned, expected);
}
+
+  /**
+   * A test case for avoiding repeated rewrites (compaction) of the same data file.
+   * <p>
+   * If a data file cannot be combined with other data files into a CombinedScanTask, that
+   * CombinedScanTask contains only the single file, so we remove such CombinedScanTasks to
+   * avoid rewriting the file repeatedly.
+   * <p>
+   * In this test case, we generate 3 data files and set targetSizeInBytes greater than the
+   * largest file size so that it cannot be combined into a CombinedScanTask with other data
+   * files. The data file with the largest file size will not be rewritten.
+   *
+   * @throws IOException IOException
+   */
+  @Test
+  public void testRewriteAvoidRepeateCompress() throws IOException {
+    if (!format.equals(FileFormat.ORC)) {
+      List<Record> expected = Lists.newArrayList();
+      Schema schema = icebergTableUnPartitioned.schema();
+      GenericAppenderFactory genericAppenderFactory = new GenericAppenderFactory(schema);
+      File file = temp.newFile();
+      FileAppender<Record> fileAppender = genericAppenderFactory.newAppender(Files.localOutput(file), format);
+      long filesize = 20000;
+      int count = 0;
+      for (; fileAppender.length() < filesize; count++) {
+        Record record = RECORD.copy();
+        record.setField("id", count);
+        record.setField("data", "iceberg");
+        fileAppender.add(record);
+        expected.add(record);
+      }
+      fileAppender.close();
+
+      DataFile dataFile = DataFiles.builder(icebergTableUnPartitioned.spec())
+          .withPath(file.getAbsolutePath())
+          .withFileSizeInBytes(file.length())
+          .withFormat(format)
+          .withRecordCount(count)
+          .build();
Review comment:
Use a try-with-resources block (`try (...) {}`) to close the fileAppender, like this:
```java
File file = temp.newFile();
int fileSize = 2000;
try (FileAppender<Record> fileAppender = genericAppenderFactory.newAppender(Files.localOutput(file), format)) {
  for (int idx = 0; fileAppender.length() < fileSize; idx++) {
    Record record = RECORD.copy();
    record.setField("id", idx);
    record.setField("data", "iceberg");
    fileAppender.add(record);
    expected.add(record);
  }
}
DataFile dataFile = DataFiles.builder(icebergTableUnPartitioned.spec())
    .withPath(file.getAbsolutePath())
    .withFileSizeInBytes(file.length())
    .withFormat(format)
    .withRecordCount(expected.size())
    .build();
```
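
For context, the size-1 filtering described in the test's javadoc might look roughly like the sketch below. This is illustrative only, not the actual `RewriteDataFilesAction` code; `planTasks()` is a hypothetical stand-in for however the CombinedScanTasks were produced:
```java
// Illustrative sketch (not the real implementation): a CombinedScanTask that
// holds only one file would be rewritten into an essentially identical file,
// so it is dropped from the rewrite plan to avoid repeated compaction.
List<CombinedScanTask> combinedScanTasks = planTasks(); // hypothetical helper
List<CombinedScanTask> tasksToRewrite = combinedScanTasks.stream()
    .filter(task -> task.files().size() > 1)
    .collect(Collectors.toList());
```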
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]