SusurHe opened a new issue, #5099:
URL: https://github.com/apache/iceberg/issues/5099

   Hi,
   Recently I was preparing to upgrade Spark and Iceberg, but I found that the `MERGE INTO` operation now produces a lot of small files. In the previous versions (Spark 3.1 + Iceberg 0.12.1), a Parquet file was 100+ MB, but now it is only 20-30 MB.
   
   I don't know which change caused the increase in small files. I tried some Spark and Iceberg configurations, but they didn't work as well as I wanted:
    - `spark.sql.adaptive.coalescePartitions.minPartitionSize`
    - `spark.sql.adaptive.advisoryPartitionSizeInBytes`
    - `write.parquet.row-group-size-bytes`
    - `write.target-file-size-bytes`
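
   For reference, this is roughly how I set them (a sketch; the table name `db.tbl` and the byte values are just placeholders for what I tried):

   ```sql
   -- Spark session configs (set via spark-sql / spark.conf.set)
   SET spark.sql.adaptive.advisoryPartitionSizeInBytes = 134217728;      -- 128 MB
   SET spark.sql.adaptive.coalescePartitions.minPartitionSize = 67108864; -- 64 MB

   -- Iceberg table write properties (db.tbl is a placeholder)
   ALTER TABLE db.tbl SET TBLPROPERTIES (
     'write.target-file-size-bytes' = '536870912',       -- target ~512 MB data files
     'write.parquet.row-group-size-bytes' = '134217728'  -- 128 MB row groups
   );
   ```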
   
   ***How can I reduce the small files produced by the `MERGE INTO` operation, or control their size?***
   
   
    - Spark 3.2 + Iceberg 0.13.1 data files:
   
![wecom-temp-4bc3f323fa91e64e5d31057a5adc2404](https://user-images.githubusercontent.com/51081799/174707878-cba0e974-fe16-4274-8657-12e0e340bc61.png)
   
    - Spark 3.1 + Iceberg 0.12.1 data files:
   
![wecom-temp-2a806d9bb599d76f380d1dad28a0a8f3](https://user-images.githubusercontent.com/51081799/174708069-5876858b-e90c-4bbd-bb6f-d34b52148f62.png)
    
   
   Thanks all.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

