[ https://issues.apache.org/jira/browse/CASSANDRA-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831020#action_12831020 ]

Jonathan Ellis commented on CASSANDRA-762:
------------------------------------------

Looking at the 0.5 patch:

The patch does not build.  (It looks like the Gossiper diff made it in by mistake.)

The load wait should probably be for BROADCAST_INTERVAL + RING_DELAY.
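A minimal sketch of the timing concern, using hypothetical constants (the values below are illustrative, not Cassandra's actual settings): waiting only for load information to be rebroadcast is not enough, because the ring-state change itself also needs time to propagate, hence the sum of the two intervals.

```python
# Illustrative values only -- not Cassandra's real constants.
BROADCAST_INTERVAL_MS = 60_000  # how often nodes gossip their load
RING_DELAY_MS = 30_000          # time allowed for ring changes to settle

def load_wait_ms():
    # Waiting only BROADCAST_INTERVAL risks computing the target token from
    # load figures that were gossiped before the ring change propagated, so
    # the wait covers both intervals.
    return BROADCAST_INTERVAL_MS + RING_DELAY_MS
```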


> Load balancing does not account for the load of the moving node
> ---------------------------------------------------------------
>
>                 Key: CASSANDRA-762
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-762
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.5, 0.6
>            Reporter: Stu Hood
>            Assignee: Stu Hood
>            Priority: Minor
>             Fix For: 0.5, 0.6
>
>         Attachments: 
> 0001-Wait-BROADCAST_INTERVAL-for-load-information-and-cal.patch, 
> for-0.5-0001-Wait-BROADCAST_INTERVAL-for-load-information-and-cal.patch
>
>
> Given a node A (with load 10 GB) and a node B (with load 20 GB), running the 
> loadbalance command against node A will:
> 1. Remove node A from the ring
>   * Recalculates pending ranges so that node B is responsible for the entire 
> ring
> 2. Pick the most loaded node
>   * Node B is still reporting 20 GB of load, because that is all it has locally
> 3. Choose a token that divides the range of the most loaded node in half
> Since the token calculation doesn't take into account the load that node B is 
> 'inheriting' from node A, the token will divide node B's load in half and 
> swap the loads. Instead, the token calculation needs to pretend that B has 
> already inherited the 10 GB from node A, for a total of 30 GB. The token that 
> should be chosen falls at 15 GB of the total load, or 5 GB into node B's load.
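The arithmetic above can be sketched as follows (a hypothetical illustration of the corrected calculation, not Cassandra's actual code; the function name and parameters are assumptions):

```python
def split_point_gb(moving_load_gb, target_load_gb):
    """Return how far (in GB) into the target node's data the new token
    should fall, once the target is credited with the moving node's load."""
    # Pretend the target has already inherited the moving node's range:
    combined = moving_load_gb + target_load_gb  # 10 + 20 = 30 GB
    midpoint = combined / 2                     # token falls at 15 GB of total
    # The first `moving_load_gb` GB of the combined range came from the
    # moving node, so the midpoint lands this far into the target's own data:
    return midpoint - moving_load_gb            # 15 - 10 = 5 GB

print(split_point_gb(10, 20))  # -> 5.0
```

With the buggy calculation (splitting only the target's local 20 GB), the token would fall 10 GB into node B's data and the two nodes would simply swap loads instead of balancing.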

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
