[ https://issues.apache.org/jira/browse/IGNITE-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260271#comment-15260271 ]

Taras Ledkov commented on IGNITE-3018:
--------------------------------------

Below are the results of the test with backups = 2, nodes arranged in pairs of 
neighbors, and the option excludeNeighbors=true.
(The new implementation also contains minor performance fixes.)

Test 100 nodes. Old: 78 ms +/- 10.406 ms; New: 15 ms +/- 6.390 ms;
Test 200 nodes. Old: 154 ms +/- 13.137 ms; New: 34 ms +/- 8.860 ms;
Test 300 nodes. Old: 233 ms +/- 13.691 ms; New: 56 ms +/- 11.526 ms;
Test 400 nodes. Old: 316 ms +/- 16.706 ms; New: 78 ms +/- 10.508 ms;
Test 500 nodes. Old: 397 ms +/- 19.009 ms; New: 105 ms +/- 10.445 ms;
Test 600 nodes. Old: 475 ms +/- 19.000 ms; New: 133 ms +/- 12.245 ms;

The results suggest that the effect of the minor performance fixes is within the 
measurement error.
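
For reference, here is a minimal, self-contained sketch of how timings like the 
ones above can be collected. It is not the actual test harness: assignOnce() is a 
simplified stand-in for the affinity calculation, and the partition and iteration 
counts are assumptions.

{code:java}
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.TimeUnit;

public class AffinityTimingSketch {
    /** Simplified stand-in for an affinity calculation: per partition, sort a per-node weight. */
    static void assignOnce(long[] nodeHashes, int parts) {
        Long[] weights = new Long[nodeHashes.length];

        for (int p = 0; p < parts; p++) {
            for (int n = 0; n < nodeHashes.length; n++)
                weights[n] = nodeHashes[n] * 31 + p;

            Arrays.sort(weights);
        }
    }

    public static void main(String[] args) {
        int parts = 1024;    // illustrative partition count (assumption)
        int iters = 20;      // measured runs per node count (assumption)
        Random rnd = new Random(42);

        for (int nodes = 100; nodes <= 600; nodes += 100) {
            long[] nodeHashes = new long[nodes];
            for (int i = 0; i < nodes; i++)
                nodeHashes[i] = rnd.nextLong();

            long[] samples = new long[iters];
            for (int it = 0; it < iters; it++) {
                long start = System.nanoTime();
                assignOnce(nodeHashes, parts);
                samples[it] = System.nanoTime() - start;
            }

            // Report mean +/- standard deviation, as in the numbers above.
            double mean = 0;
            for (long s : samples) mean += s;
            mean /= iters;

            double var = 0;
            for (long s : samples) var += (s - mean) * (s - mean);
            double stdDev = Math.sqrt(var / iters);

            double nsPerMs = TimeUnit.MILLISECONDS.toNanos(1);
            System.out.printf("Test %d nodes: %.3f ms +/- %.3f ms%n",
                nodes, mean / nsPerMs, stdDev / nsPerMs);
        }
    }
}
{code}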

> Cache affinity calculation is slow with a large number of nodes
> ---------------------------------------------------------------
>
>                 Key: IGNITE-3018
>                 URL: https://issues.apache.org/jira/browse/IGNITE-3018
>             Project: Ignite
>          Issue Type: Bug
>          Components: cache
>            Reporter: Semen Boikov
>            Assignee: Taras Ledkov
>            Priority: Critical
>             Fix For: 1.6
>
>
> With a large number of cache server nodes (> 200), RendezvousAffinityFunction 
> and FairAffinityFunction work pretty slowly.
> For RendezvousAffinityFunction, assignPartitions can take hundreds of 
> milliseconds; for FairAffinityFunction it can take seconds.
> For RendezvousAffinityFunction most of the time is spent in MD5 hash calculation 
> and sorting the node list. As an optimization we can try to cache the 
> {partition, node} MD5 hash or try another hash function. Several minor 
> optimizations are also possible (avoid unnecessary allocations, only one 
> thread-local 'get', etc.).
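
A minimal sketch of the optimization idea mentioned in the description: compute 
one hash per node once per topology and mix it with the partition id using a 
cheap non-cryptographic finalizer (MurmurHash3-style) instead of recomputing an 
MD5 digest for every {partition, node} pair. Class and method names here are 
hypothetical; this is not Ignite's actual implementation.

{code:java}
import java.util.Arrays;

public class CachedHashRendezvous {
    /** Cheap 64-bit mix (MurmurHash3 fmix64 constants). */
    private static long mix64(long z) {
        z ^= z >>> 33;
        z *= 0xff51afd7ed558ccdL;
        z ^= z >>> 33;
        z *= 0xc4ceb9fe1a85ec53L;
        z ^= z >>> 33;
        return z;
    }

    /**
     * Assigns each partition an ordered list of owner node indices: nodes are
     * ranked by the rendezvous weight mix64(nodeHash ^ partition).
     *
     * @param nodeHashes Precomputed hash per node (computed once per topology).
     * @param parts      Number of partitions.
     * @param replicas   Primary + backups per partition.
     */
    public static int[][] assign(long[] nodeHashes, int parts, int replicas) {
        int nodes = nodeHashes.length;
        int owners = Math.min(replicas, nodes);
        int[][] assignment = new int[parts][owners];

        Integer[] order = new Integer[nodes];
        long[] weights = new long[nodes];

        for (int p = 0; p < parts; p++) {
            for (int n = 0; n < nodes; n++) {
                order[n] = n;
                weights[n] = mix64(nodeHashes[n] ^ p);
            }

            // Highest weight wins: sort node indices by descending rendezvous weight.
            Arrays.sort(order, (a, b) -> Long.compare(weights[b], weights[a]));

            for (int r = 0; r < owners; r++)
                assignment[p][r] = order[r];
        }

        return assignment;
    }

    public static void main(String[] args) {
        // Tiny usage example: 4 nodes, 8 partitions, 1 primary + 2 backups.
        long[] nodeHashes = {11L, 22L, 33L, 44L};
        System.out.println(Arrays.deepToString(assign(nodeHashes, 8, 3)));
    }
}
{code}

Ranking nodes by a per-{partition, node} weight keeps the rendezvous property 
(only partitions whose top-ranked nodes change move on a topology change), while 
replacing the per-pair MD5 digest with a cheap mix removes the hashing cost the 
description points to.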


