swuferhong commented on issue #1396:
URL: https://github.com/apache/fluss/issues/1396#issuecomment-3146953917

   Hi, @gyang94. Apologies, I'm currently finalizing the POC for this task 
(about 80% done). The POC implementation details are substantially similar to 
the FIP proposal: 
https://cwiki.apache.org/confluence/display/FLUSS/FIP-8%3A+Support+Cluster+Reblance.
 However, the initial implementation is intentionally minimal: it currently 
supports only `LeaderReplicaDistributionGoal`, `ReplicaDistributionGoal`, and 
`PreferredLeaderElectionGoal`, and will not introduce a metrics collection 
system. It will simply count the current bucket distribution of each 
tabletServer and will not take rack location into account. So, if you're 
interested, there will be many sub-tasks to work on in the future.
   
   Additionally, regarding the implementation, FIP-8's idea is not to expose 
the reassignment interface to users, as user-defined reassignment plans are 
impractical for large clusters, especially for Fluss with partitioned tables. 
Therefore, rebalance plan generation will be managed by Fluss itself. Users 
can use the `serverTag` interface to mark machines that need to be 
decommissioned or require special handling, and the `RebalanceManager` can 
collect these `serverTags` to generate the plan.
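   As an illustration of that flow, the sketch below drains buckets off servers 
tagged for decommission and spreads them round-robin over the untagged servers. 
Again, `ServerTagSketch` and `drainPlan` are hypothetical names for 
illustration only, not the real `RebalanceManager` API:

```java
import java.util.*;

// Hypothetical sketch: a rebalance manager consuming serverTags to plan
// the draining of decommissioned tablet servers.
public class ServerTagSketch {

    // serverBuckets maps tabletServer id -> hosted bucket ids; decommissioned
    // holds the ids of servers tagged for decommission. Returns a plan mapping
    // each drained bucket id to its destination tabletServer id.
    public static Map<Integer, Integer> drainPlan(
            Map<Integer, List<Integer>> serverBuckets, Set<Integer> decommissioned) {
        // Candidate destinations are all servers without a decommission tag,
        // sorted for a deterministic round-robin order.
        List<Integer> targets = new ArrayList<>();
        for (Integer ts : serverBuckets.keySet()) {
            if (!decommissioned.contains(ts)) {
                targets.add(ts);
            }
        }
        Collections.sort(targets);

        Map<Integer, Integer> plan = new LinkedHashMap<>(); // bucket -> destination
        int next = 0;
        for (Integer tagged : decommissioned) {
            for (Integer bucket : serverBuckets.getOrDefault(tagged, List.of())) {
                plan.put(bucket, targets.get(next++ % targets.size()));
            }
        }
        return plan;
    }
}
```

   The key design point matches the FIP: users only declare intent via tags, 
and the plan itself is computed centrally rather than hand-written.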
   
   For the specific goal implementation, we will refer to LinkedIn's 
open-source Cruise Control (https://github.com/linkedin/cruise-control), which 
has many excellent design choices.
   
   If you'd like to participate in this large rebalance effort, you could start 
by reviewing my API PR (https://github.com/apache/fluss/pull/1380) and the 
upcoming PR supporting rebalance. Later, you could expand on the foundational 
capabilities, such as collecting richer metrics or supporting more advanced 
goals.

