Hi,

I did some more digging and discovered that the issue seems to be this log message:

org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture: Completed partition exchange
Is there any way to disable or limit the partition exchange?

Best,
Marco

On Mon, 12 Aug 2019 at 16:59, Andrei Aleksandrov <[email protected]> wrote:
> Hi,
>
> Could you share the whole reproducer with all configurations and required
> methods?
>
> BR,
> Andrei
>
> 8/12/2019 4:48 PM, Marco Bernagozzi wrote:
>
> I have a set of nodes, and I want to be able to set a cache on specific
> nodes. It works, but whenever I start a new node the cache is
> automatically spread to that node, which then causes errors like
> "Failed over job to a new node" (I guess a computation was running on a
> node that shouldn't have received it and was shut down in the meantime).
>
> I don't know if I'm doing something wrong here or missing something.
> As I understand it, NodeFilter and Affinity are equivalent in my case
> (Affinity is a node filter which also defines rules on where the cache
> can spread from a given node?). With rebalance mode set to NONE,
> shouldn't the cache be spread only on the "nodesForOptimization" nodes,
> according to either the node filter or the affinityFunction?
>
> Here's my code:
>
> List<UUID> nodesForOptimization = fetchNodes();
>
> CacheConfiguration<String, Graph> graphCfg = new CacheConfiguration<>(graphCacheName);
> graphCfg = graphCfg.setCacheMode(CacheMode.REPLICATED)
>     .setBackups(nodesForOptimization.size() - 1)
>     .setAtomicityMode(CacheAtomicityMode.ATOMIC)
>     .setRebalanceMode(CacheRebalanceMode.NONE)
>     .setStoreKeepBinary(true)
>     .setCopyOnRead(false)
>     .setOnheapCacheEnabled(false)
>     .setNodeFilter(u -> nodesForOptimization.contains(u.id()))
>     .setAffinity(
>         new RendezvousAffinityFunction(
>             1024,
>             (c1, c2) -> nodesForOptimization.contains(c1.id()) &&
>                 nodesForOptimization.contains(c2.id())
>         )
>     )
>     .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
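
For reference, a minimal sketch of the alternative I'm considering: pinning the cache to nodes via a user attribute instead of a captured UUID list, so the filter stays valid when node IDs change. The attribute name "optimization.node", the class name and the helper methods below are made up for illustration, not from the thread:

import java.util.Collections;

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

public class OptimizationNodeFilter implements IgnitePredicate<ClusterNode> {
    // Hypothetical user attribute: only nodes started with it set to "true" host the cache.
    public static final String ATTR = "optimization.node";

    @Override public boolean apply(ClusterNode node) {
        return "true".equals(node.attribute(ATTR));
    }

    // Node-side configuration: mark this node as one that should host the cache.
    public static IgniteConfiguration nodeConfig() {
        return new IgniteConfiguration()
            .setUserAttributes(Collections.singletonMap(ATTR, "true"));
    }

    // Cache configuration using the attribute-based filter instead of a UUID list.
    public static CacheConfiguration<String, Object> cacheConfig() {
        return new CacheConfiguration<String, Object>("graphCache")
            .setCacheMode(CacheMode.REPLICATED)
            .setRebalanceMode(CacheRebalanceMode.NONE)
            .setNodeFilter(new OptimizationNodeFilter());
    }
}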
