swuferhong opened a new issue, #1993:
URL: https://github.com/apache/fluss/issues/1993

   ### Search before asking
   
   - [x] I searched in the [issues](https://github.com/apache/fluss/issues) and 
found nothing similar.
   
   
   ### Motivation
   
   During the use of Fluss, I've observed that many users configure an excessively large number of `partitions or buckets`, even when their actual throughput is low. For log tables, a large number of `partitions/buckets` doesn't consume significant memory. However, for PrimaryKey tables, each bucket corresponds to a separate RocksDB instance, and even when no data is written, each RocksDB instance still consumes approximately 15 MB of memory, placing a heavy burden on the system.
   
   Although Fluss currently supports cluster-wide limits on the maximum number of partitions (`max.partition.num`) and buckets (`max.bucket.num`), this approach lacks flexibility and is insufficient for effectively controlling cluster load.
   
   Fluss does not support multi-tenancy, so we currently treat databases as the tenant boundary. We propose introducing per-database limits on the maximum number of partitions and buckets. This would allow us to set a small initial quota during cluster setup and later relax the limits based on individual customer needs, effectively implementing a tenant-level throttling strategy similar to those in systems like Kafka that support true multi-tenancy.
   
   The implementation can be achieved through dynamic configuration parameters (`alterClusterConfigs`), allowing cluster administrators to adjust the partition/bucket limits for individual databases by updating the dynamic configuration.
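   
   For illustration only, a minimal sketch of what such per-database limits could look like is given below. The key naming scheme (`databases.<db>.max.partition.num` / `databases.<db>.max.bucket.num`) and the quota-check helper are assumptions made for this example, not existing Fluss configuration or code.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   // Hypothetical sketch: illustrates per-database quotas pushed as dynamic
   // config entries; neither the key names nor this check exist in Fluss today.
   public class PerDatabaseQuotaSketch {
   
       // Entries an administrator might set via the dynamic configuration
       // mechanism (alterClusterConfigs). Key naming is assumed for illustration.
       static final Map<String, String> DYNAMIC_CONFIGS = new HashMap<>();
       static {
           DYNAMIC_CONFIGS.put("databases.tenant_a.max.partition.num", "200");
           DYNAMIC_CONFIGS.put("databases.tenant_a.max.bucket.num", "1024");
       }
   
       // Check a requested bucket count against the database's quota, treating
       // a missing entry as "no per-database limit configured".
       static boolean withinBucketQuota(String database, int currentBuckets, int requestedBuckets) {
           String limit = DYNAMIC_CONFIGS.get("databases." + database + ".max.bucket.num");
           if (limit == null) {
               return true;
           }
           return currentBuckets + requestedBuckets <= Integer.parseInt(limit);
       }
   
       public static void main(String[] args) {
           System.out.println(withinBucketQuota("tenant_a", 1000, 16)); // true
           System.out.println(withinBucketQuota("tenant_a", 1020, 16)); // false
       }
   }
   ```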
   
   ### Solution
   
   _No response_
   
   ### Anything else?
   
   _No response_
   
   ### Willingness to contribute
   
   - [ ] I'm willing to submit a PR!

