The GitHub Actions job "Spark CI" on iceberg.git/rewrite_data_files2 has 
succeeded.
Run started by GitHub user shangxinli (triggered by shangxinli).

Head commit for run:
3afc2b961e8afadc1a07cc7f46d2c2ec922a0218 / Xinli Shang <[email protected]>
Have ParquetFileMerger.mergeFiles() return DataFile to avoid code duplication

Updated ParquetFileMerger.mergeFiles() to return a DataFile instead of void,
eliminating code duplication between Spark and Flink implementations.

Changes:
- Updated ParquetFileMerger.mergeFiles() to accept PartitionSpec and partition
  parameters and return DataFile with complete metadata
- Method now handles metrics extraction and DataFile construction internally
- Simplified SparkParquetFileMergeRunner by removing ~40 lines of metrics
  extraction and DataFile building code
- Deleted MergeResult class (no longer needed)
- Updated all test calls to pass new parameters

Benefits:
- Flink can reuse the same logic without duplicating code
- Cleaner API with single method handling merge and metadata extraction
- Consistent DataFile construction across all engines
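The refactor described above can be sketched as a before/after API shape. This is a hypothetical, self-contained illustration: the simplified DataFile record and the mergeFiles signatures below are stand-ins, not the real Iceberg classes or the actual patch.

```java
import java.util.List;

public class MergeSketch {
    // Hypothetical stand-in for Iceberg's DataFile metadata (not the real class).
    record DataFile(String path, long recordCount, long fileSizeInBytes, String partition) {}

    // Before: the merger returned void, so each engine (Spark, Flink) had to
    // extract metrics and build the DataFile itself, duplicating that logic.
    static void mergeFilesOld(List<String> inputs, String output) {
        // ... merge inputs into `output`, no metadata returned ...
    }

    // After: the merger accepts the partition context and returns a fully
    // built DataFile, so metrics extraction lives in one place.
    static DataFile mergeFiles(List<String> inputs, String output, String partition) {
        long records = 0;
        long bytes = 0;
        for (String in : inputs) {
            // ... copy row groups and accumulate real counts; simulated here ...
            records += 100;
            bytes += 1_024;
        }
        return new DataFile(output, records, bytes, partition);
    }

    public static void main(String[] args) {
        DataFile df = mergeFiles(List.of("a.parquet", "b.parquet"),
                "merged.parquet", "dt=2024-01-01");
        System.out.println(df.recordCount() + " records, " + df.fileSizeInBytes() + " bytes");
    }
}
```

With this shape, an engine-side runner shrinks to a single call: it passes the spec/partition in and appends the returned DataFile to the table, which is how the commit removes the duplicated metrics-building code from SparkParquetFileMergeRunner.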

Report URL: https://github.com/apache/iceberg/actions/runs/19899291503

With regards,
GitHub Actions via GitBox
