Denis, it's not about the size - the sizes can differ. It's about having differently configured DataRegions and creating caches without a NodeFilter. If a newly added node has a new DataRegion and a cache is created for this region, it will lead to cluster failure.
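
To make it concrete, here is a minimal sketch of the node-filter approach (untested; the HAS_1GB_REGION attribute and the cache name are just illustrative): nodes that actually define the extra region advertise it via a user attribute, and the cache gets a node filter bound to that attribute, so nodes without the region never try to host its partitions.

import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

public class RegionNodeFilterSketch {
    /** Illustrative marker attribute for nodes that configure the extra region. */
    private static final String HAS_REGION_ATTR = "HAS_1GB_REGION";

    public static void main(String[] args) {
        // Node side: only nodes that really declare the region set the marker attribute.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDataRegionConfigurations(new DataRegionConfiguration()
                .setName("1GB_Region_Eviction")
                .setMaxSize(1024L * 1024L * 1024L));

        IgniteConfiguration nodeCfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg)
            .setUserAttributes(Collections.singletonMap(HAS_REGION_ATTR, true));

        Ignite ignite = Ignition.start(nodeCfg);

        // Cache side: bind the cache to the region and restrict it to the marked nodes.
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<Integer, String>("myCache")
            .setDataRegionName("1GB_Region_Eviction")
            .setNodeFilter(new IgnitePredicate<ClusterNode>() {
                @Override public boolean apply(ClusterNode node) {
                    return Boolean.TRUE.equals(node.attribute(HAS_REGION_ATTR));
                }
            });

        ignite.getOrCreateCache(cacheCfg);
    }
}

(Persistence and baseline activation are left out to keep the sketch short.)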
Evgenii

Wed, Jan 8, 2020 at 13:15, Denis Magda <[email protected]>:

> Andrey,
>
> Are you saying we require to have regions of the same size preconfigured
> across the nodes? Hope I misunderstood you.
>
> -
> Denis
>
>
> On Mon, Jan 6, 2020 at 7:18 AM Andrei Aleksandrov <[email protected]> wrote:
>
>> Hi,
>>
>> I guess that every data node should have the same data regions. I checked
>> that if you have, for example, 2 nodes with a persistence region in the BLT
>> and then start a new node (that isn't part of the BLT) with a new region
>> and some cache in this new region, it will produce the following exception:
>>
>> [17:53:30,446][SEVERE][exchange-worker-#48][GridDhtPartitionsExchangeFuture]
>> Failed to reinitialize local partitions (rebalancing will be stopped):
>> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=3,
>> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
>> [id=44c8ba83-4a4d-4b0e-b4b6-530a23b25d24, addrs=[0:0:0:0:0:0:0:1,
>> 10.0.1.1, 10.0.75.1, 127.0.0.1, 172.25.4.231, 192.168.244.113,
>> 192.168.56.1],
>> sockAddrs=[LAPTOP-I5CE4BEI.mshome.net/192.168.244.113:47502,
>> /192.168.56.1:47502, host.docker.internal/172.25.4.231:47502,
>> LAPTOP-I5CE4BEI/10.0.75.1:47502, /0:0:0:0:0:0:0:1:47502,
>> /10.0.1.1:47502, /127.0.0.1:47502], discPort=47502, order=3, intOrder=3,
>> lastExchangeTime=1578322410223, loc=false,
>> ver=2.7.2#20191202-sha1:2e9d1c89, isClient=false], topVer=3,
>> nodeId8=f581f039, msg=Node joined: TcpDiscoveryNode
>> [id=44c8ba83-4a4d-4b0e-b4b6-530a23b25d24, addrs=[0:0:0:0:0:0:0:1,
>> 10.0.1.1, 10.0.75.1, 127.0.0.1, 172.25.4.231, 192.168.244.113,
>> 192.168.56.1],
>> sockAddrs=[LAPTOP-I5CE4BEI.mshome.net/192.168.244.113:47502,
>> /192.168.56.1:47502, host.docker.internal/172.25.4.231:47502,
>> LAPTOP-I5CE4BEI/10.0.75.1:47502, /0:0:0:0:0:0:0:1:47502,
>> /10.0.1.1:47502, /127.0.0.1:47502], discPort=47502, order=3, intOrder=3,
>> lastExchangeTime=1578322410223, loc=false,
>> ver=2.7.2#20191202-sha1:2e9d1c89, isClient=false], type=NODE_JOINED,
>> tstamp=1578322410400], nodeId=44c8ba83, evt=NODE_JOINED]
>> class org.apache.ignite.IgniteCheckedException: Requested DataRegion is
>> not configured: 1GB_Region_Eviction
>>     at org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.dataRegion(IgniteCacheDatabaseSharedManager.java:729)
>>
>> BR,
>> Andrei
>>
>> On 1/6/2020 2:52 PM, djm132 wrote:
>> > You can also look at this topic, probably related to yours, with a code
>> > sample:
>> > http://apache-ignite-users.70518.x6.nabble.com/Embedded-ignite-and-baseline-upgrade-questions-td30822.html
>> >
>> > --
>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
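
For completeness, a minimal sketch (untested, names are illustrative) of the failing scenario Andrei describes above: the joining node declares an extra region and starts a cache in it without any node filter, so partition exchange on the pre-existing nodes, which never configured that region, fails with "Requested DataRegion is not configured".

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class JoiningNodeSketch {
    public static void main(String[] args) {
        // The joining node declares a region the rest of the cluster does not have.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName("new-node")
            .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDataRegionConfigurations(new DataRegionConfiguration()
                    .setName("1GB_Region_Eviction")
                    .setMaxSize(1024L * 1024L * 1024L)));

        Ignite ignite = Ignition.start(cfg);

        // No node filter: affinity assigns partitions of this cache to every server
        // node, including the ones that never configured "1GB_Region_Eviction".
        ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("cacheInNewRegion")
            .setDataRegionName("1GB_Region_Eviction"));
    }
}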
