bseto commented on issue #214:
URL: https://github.com/apache/kvrocks-controller/issues/214#issuecomment-2985831545

   ## Background
   
   Hi, my previous PR #304 added support for migrating a single slot range, e.g. `["2-8"]`, or a single slot, e.g. `["1"]`. 
   
   However, I'd like to be able to queue up several slot migrations at once, as in your original example: `["1", "2-8", "11-22"]`, which my PR does not support. 
   
   ## Goal
   
   I'm trying to gauge how much effort it would take to implement this functionality. 
   
   ## Thoughts
   
   Please correct me if anything I'm saying below is wrong!
   
   ### Current Implementation
   When we do a slot migration today, the flow is:
   1. The handler receives a migrate-slot request.
   2. We call `cluster.MigrateSlot`, which does some checks but essentially issues 
the command to kvrocks to start migrating the slot (or range). 
   3. The request execution ends. 
   4. `ClusterChecker`, which runs on its own goroutine, checks each shard for an 
in-progress migration. If it succeeded, it updates the migration status of the 
shard, and also the store (etcd, consul, ...).
   
   ### What I think needs to happen
   
   1. To support multiple slot ranges, we'd need to use the store to persist which 
slot ranges still need to migrate. 
   2. Have the controller trigger the next migration when the previous one ends 
with "success".
   3. Support cancelling the migration (this would only stop what's queued up next, 
not what's currently migrating).
   4. Support reconnecting, reading the pending ranges back from the store, and 
continuing the migration.
   
   Please let me know if I got anything wrong or if anything needs to be added. 
I haven't started on this yet (or decided to take it on); I'm just trying to 
gauge the effort involved. 
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
