JoshuaZhuCN opened a new issue, #7443:
URL: https://github.com/apache/hudi/issues/7443
At present, Hudi on Spark supports automatically increasing the bucket number.
However, for a partitioned table the data volume of each partition is not
necessarily balanced: the initially specified bucket number is generally
calculated from the largest partition, and since every partition gets the same
bucket count, smaller partitions end up with too many small files.
There is currently no way to reduce the bucket number for small partitions.
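To illustrate the problem, here is a minimal sketch (not Hudi code; the partition sizes, the 128 MB target file size, and the `buckets_for` helper are all hypothetical) showing how a single table-wide bucket number sized for the largest partition leaves smaller partitions with many undersized files:

```python
def buckets_for(size_mb, target_file_mb=128):
    """Ideal bucket count for one partition: ceil(size / target), at least 1."""
    return max(1, -(-size_mb // target_file_mb))  # ceiling division

# Hypothetical per-partition data volumes for a date-partitioned table.
partition_sizes_mb = {"2022-12-01": 4096, "2022-12-02": 512, "2022-12-03": 64}

# Today: one bucket number for the whole table, sized for the largest partition.
global_buckets = max(buckets_for(s) for s in partition_sizes_mb.values())

for part, size in partition_sizes_mb.items():
    avg_file_mb = size / global_buckets
    print(f"{part}: global={global_buckets} buckets "
          f"(~{avg_file_mb:.0f} MB/file), per-partition ideal={buckets_for(size)}")
```

With these numbers the global bucket count is 32, so the 64 MB partition is split into 32 files of roughly 2 MB each, even though a single bucket would suffice; a per-partition bucket number would avoid this.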
**Environment Description**
* Hudi version : 0.12.1
* Spark version : 3.1.3
* Hive version : 3.1.0
* Hadoop version : 3.1.1
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]