Niels,

I believe the reason is the performance of the affinity function and the size of 
GridDhtPartitionsFullMessage.
The affinity function needs to assign partitions to nodes. In the case of a 
replicated cache, there are (number of partitions) x (number of nodes) pairs of 
(node, partition) that need to be calculated. This was especially critical in 
Ignite 1.x when FairAffinityFunction was used, since its computational 
performance was worse than that of Rendezvous.

I think it was decided to decrease the number of partitions for replicated 
caches to make the affinity function work faster and to decrease the size of the 
partition map.
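
By the way, the default is only applied when cfg.getAffinity() == null (as in the 
snippet you quoted), so if 512 partitions doesn't suit your case you can always set 
the affinity function explicitly and the default won't kick in. A minimal sketch 
(the cache name and the 1024 partition count are just examples):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class ReplicatedCacheExample {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                CacheConfiguration<Integer, String> ccfg =
                    new CacheConfiguration<>("myReplicatedCache");

                ccfg.setCacheMode(CacheMode.REPLICATED);

                // Setting the affinity explicitly means the 512-partition
                // default for REPLICATED caches is never applied.
                ccfg.setAffinity(new RendezvousAffinityFunction(false, 1024));

                ignite.getOrCreateCache(ccfg);
            }
        }
    }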

For PARTITIONED caches it’s important that every node gets roughly the same 
number of partitions. Rendezvous assigns partitions pseudo-randomly, so it needs 
a high number of partitions to give a fair distribution.
For REPLICATED caches it’s not that critical, since the number of partitions on 
every node is the same anyway.
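
If you want to see how even the distribution actually is, something like this 
(just a sketch, assuming a running node and an existing cache name) prints the 
number of primary partitions per node via the public Affinity API:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.affinity.Affinity;
    import org.apache.ignite.cluster.ClusterNode;

    class PartitionDistribution {
        /** Prints how many partitions each node owns as a primary. */
        static void printPrimaryDistribution(Ignite ignite, String cacheName) {
            Affinity<Object> aff = ignite.affinity(cacheName);

            Map<ClusterNode, Integer> primaries = new HashMap<>();

            for (int p = 0; p < aff.partitions(); p++)
                primaries.merge(aff.mapPartitionToNode(p), 1, Integer::sum);

            primaries.forEach((node, cnt) ->
                System.out.println(node.consistentId() + " -> " + cnt + " primary partitions"));
        }
    }

For a REPLICATED cache only the primary assignment varies anyway, since 
setBackups(Integer.MAX_VALUE) from the snippet you quoted means every node keeps 
a copy of every partition.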

Denis
On 26 Aug 2019, 14:34 +0300, Niels Ejrnæs <niels.ejrn...@enghouse.com>, wrote:
> Hi,
>
> Is there a particular reason why replicated caches have their partition count 
> set to 512 by default?
> I found this in
> org.apache.ignite.internal.processors.cache.GridCacheUtils#initializeConfigDefaults(IgniteLogger, CacheConfiguration, CacheObjectContext):V
>
>         if (cfg.getAffinity() == null) {
>               ...
>             else if (cfg.getCacheMode() == REPLICATED) {
>                 RendezvousAffinityFunction aff = new RendezvousAffinityFunction(false, 512);
>
>                 cfg.setAffinity(aff);
>
>                 cfg.setBackups(Integer.MAX_VALUE);
>             }
>
> The default partition count for the RendezvousAffinityFunction is 1024.
>
> Best regards
> Niels Elkjær Ejrnæs
>
