The GitHub Actions job "Spark CI" on iceberg.git/rewrite_data_files2 has 
succeeded.
Run started by GitHub user shangxinli (triggered by shangxinli).

Head commit for run:
3194f1e6cc2872ae07376311a58539607e21d55e / Xinli Shang <[email protected]>
Address pvary's review comments on ParquetFileMerger PR

1. Remove Spark-specific javadoc constraints from ParquetFileMerger
   - Removed "Files must not have associated delete files" constraint
   - Removed "Table must not have a sort order" constraint
   - These validations are only enforced in SparkParquetFileMergeRunner,
     not in the ParquetFileMerger class itself

2. Fix code style in TestParquetFileMerger
   - Replace 'var' with explicit types (Parquet.DataWriteBuilder, 
DataWriter<Record>)
   - Add newlines after for loop and try-catch blocks for better readability
   - Remove unused Parquet import
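The 'var'-to-explicit-type change above can be sketched as follows. This is a minimal, hedged illustration of the pattern only: it uses stand-in java.util types, not Iceberg's actual Parquet.DataWriteBuilder or DataWriter<Record> classes.

```java
import java.util.ArrayList;
import java.util.List;

public class ExplicitTypeSketch {
  public static void main(String[] args) {
    // Before: var writers = new ArrayList<String>();
    // With 'var', the declared type is inferred and hidden at the declaration site.

    // After: an explicit type makes the contract visible to readers, which is the
    // intent of replacing 'var' with Parquet.DataWriteBuilder / DataWriter<Record>.
    List<String> writers = new ArrayList<>();
    writers.add("record-writer");
    System.out.println(writers.get(0));
  }
}
```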

3. Optimize test execution in TestRewriteDataFilesAction
   - Add assumeThat so comparison tests run once instead of twice
   - Use String.valueOf(useParquetFileMerger) for regular tests to test both 
approaches
   - Remove redundant testParquetFileMergerExplicitlyEnabledAndDisabled test
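The String.valueOf(useParquetFileMerger) idiom mentioned above converts the parameterized boolean into the string form that table/action options expect, so one test covers both code paths. A small sketch of that pattern; the option key and helper here are hypothetical, not the actual names used in TestRewriteDataFilesAction:

```java
import java.util.HashMap;
import java.util.Map;

public class RewriteOptionsSketch {
  // Hypothetical option key, standing in for the real rewrite option name
  static final String USE_PARQUET_FILE_MERGER = "use-parquet-file-merger";

  // Options are string-valued, so the boolean test parameter is converted
  // with String.valueOf; running with both true and false exercises both approaches.
  static Map<String, String> optionsFor(boolean useParquetFileMerger) {
    Map<String, String> options = new HashMap<>();
    options.put(USE_PARQUET_FILE_MERGER, String.valueOf(useParquetFileMerger));
    return options;
  }

  public static void main(String[] args) {
    System.out.println(optionsFor(true).get(USE_PARQUET_FILE_MERGER));
    System.out.println(optionsFor(false).get(USE_PARQUET_FILE_MERGER));
  }
}
```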

4. Fix TestSparkParquetFileMergeRunner to actually call canMergeAndGetSchema
   - Changed canMergeAndGetSchema from private to package-private in 
SparkParquetFileMergeRunner
   - Updated all tests to create a runner instance and call canMergeAndGetSchema()
   - Removed 4 trivial tests (description, inheritance, validOptions, init)
   - All remaining tests now validate actual canMergeAndGetSchema behavior
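Widening a method from private to package-private (no access modifier) is a common way to let tests in the same package call it directly, as done for canMergeAndGetSchema above. A self-contained sketch of the idea, with a hypothetical runner class and method rather than the real SparkParquetFileMergeRunner API:

```java
// Hypothetical runner: the method below has no access modifier, making it
// package-private, so a test class in the same package can invoke it directly.
class FileMergeRunnerSketch {
  boolean canMerge(int fileCount) {
    // Placeholder check standing in for the real schema/compatibility validation
    return fileCount > 1;
  }
}

public class PackagePrivateSketch {
  public static void main(String[] args) {
    // A "test" in the same package creates the runner and calls the method,
    // validating actual behavior instead of only trivial properties.
    FileMergeRunnerSketch runner = new FileMergeRunnerSketch();
    System.out.println(runner.canMerge(2));
    System.out.println(runner.canMerge(1));
  }
}
```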

Report URL: https://github.com/apache/iceberg/actions/runs/20785764791

With regards,
GitHub Actions via GitBox
