[ https://issues.apache.org/jira/browse/IGNITE-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270569#comment-15270569 ]

Taras Ledkov commented on IGNITE-3018:
--------------------------------------

Test results for the bucket-based implementation:

Test 63 nodes. Old: 49 ms +/- 10.377 ms; New: 2 ms +/- 5.966 ms; <-- the threshold node count at which the bucket-based calculation switches on
Test 100 nodes. Old: 77 ms +/- 10.279 ms; New: 1 ms +/- 2.149 ms;
Test 200 nodes. Old: 154 ms +/- 13.921 ms; New: 1 ms +/- 2.165 ms;
Test 300 nodes. Old: 233 ms +/- 14.627 ms; New: 1 ms +/- 2.675 ms;
Test 400 nodes. Old: 314 ms +/- 16.676 ms; New: 2 ms +/- 2.520 ms;
Test 500 nodes. Old: 397 ms +/- 18.704 ms; New: 2 ms +/- 2.775 ms;
Test 600 nodes. Old: 477 ms +/- 18.628 ms; New: 2 ms +/- 2.841 ms;


> Cache affinity calculation is slow with large nodes number
> ----------------------------------------------------------
>
>                 Key: IGNITE-3018
>                 URL: https://issues.apache.org/jira/browse/IGNITE-3018
>             Project: Ignite
>          Issue Type: Bug
>          Components: cache
>            Reporter: Semen Boikov
>            Assignee: Taras Ledkov
>            Priority: Critical
>             Fix For: 1.6
>
>
> With a large number of cache server nodes (> 200), RendezvousAffinityFunction 
> and FairAffinityFunction work pretty slowly.
> RendezvousAffinityFunction.assignPartitions can take hundreds of 
> milliseconds; for FairAffinityFunction it can take seconds.
> For RendezvousAffinityFunction most of the time is spent in MD5 hash calculation 
> and node list sorting. As an optimization we can try to cache the {partition, node} 
> MD5 hash or try another hash function. Several minor optimizations are also 
> possible (avoiding unnecessary allocations, only one thread-local 'get', etc).
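The caching and alternative-hash ideas above can be sketched roughly as follows. This is a minimal, self-contained illustration, not Ignite's actual code: the class and method names are hypothetical, and a cheap 64-bit mix function stands in for "another hash function" instead of MD5. Node hashes are computed once and cached, so each per-partition assignment only mixes in the partition id.

```java
import java.util.*;

// Hedged sketch of rendezvous (highest-random-weight) partition assignment
// with a per-node hash cache, in the spirit of the optimizations suggested
// in the issue description. Names here are illustrative, not Ignite's API.
public class RendezvousSketch {
    // Cheap 64-bit finalizer-style mix, a stand-in for MD5.
    static long mix(long x) {
        x ^= x >>> 33; x *= 0xff51afd7ed558ccdL;
        x ^= x >>> 33; x *= 0xc4ceb9fe1a85ec53L;
        x ^= x >>> 33;
        return x;
    }

    // Cache of node-id hashes, so they are not recomputed for every partition.
    static final Map<String, Long> NODE_HASH = new HashMap<>();

    static long nodeHash(String nodeId) {
        return NODE_HASH.computeIfAbsent(nodeId, id -> mix(id.hashCode()));
    }

    // Pick the node with the highest combined hash for this partition.
    static String assignPartition(int part, List<String> nodes) {
        String best = null;
        long bestHash = Long.MIN_VALUE;
        for (String node : nodes) {
            long h = mix(nodeHash(node) ^ part);
            if (h > bestHash) {
                bestHash = h;
                best = node;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node-1", "node-2", "node-3");
        // For a fixed topology, the same partition always maps to the same node.
        System.out.println(
            assignPartition(0, nodes).equals(assignPartition(0, nodes)));
    }
}
```

Caching only the per-node hash (rather than the full {partition, node} pair) keeps the cache size proportional to the topology, while replacing MD5 with a fast mix removes the per-call digest cost entirely.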



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
