I’m trying out a proposal for an Ignite-based topology that will have two
grids. I am using Ignite.NET 2.3.



One grid is responsible for processing inbound events; the second is
responsible for servicing read requests against an immutable,
query-efficient projection of the data held in the first grid. Both grids
will handle compute-intensive tasks.



So: ingested data is processed into Grid #1 (read-write, mutable, with a
‘Default-Mutable’ data region configuration) and is then projected into
Grid #2 (read-only, immutable, with a ‘Default-Immutable’ data region
configuration).



Both grids will use persistence, and I’m keen to have isolation between
the two so I can scale the read and write sides of the operation
independently, as well as start/stop the ingest grid independently of the
read grid.



I have currently set up a different base discovery port on localhost for
each grid (48500 for Grid #1 and 47500 for Grid #2), from which port
numbers for nodes are allocated. These are assigned consistently for the
server and client nodes of each grid. Server nodes in Grid #1 also act as
clients to Grid #2, i.e. each such process creates a server node on 48500
and a client node on 47500.



When I start the two grids, each with a single node, I see errors like the
one below (from the log of the mutable grid's server node) that suggest
Ignite is treating both nodes as if they belonged to the same grid and is
exchanging partition maps between them. The exception text in the error
cites the Default-Immutable data region, which is configured only on the
immutable grid's server node.



ERROR 2018-01-10 13:23:17,001 24025ms
GridCachePartitionExchangeManager        <LoggerLog>b__22   - Failed to
wait for completion of partition map exchange (preloading will not start):
GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=026fc6c4-ae5e-4e4b-ba7f-b74c7b685875,
addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:48500], discPort=48500, order=11,
intOrder=8, lastExchangeTime=1515543793995, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11,
nodeId8=026fc6c4, msg=null, type=NODE_JOINED, tstamp=1515543796137],
crd=TcpDiscoveryNode [id=d0b35790-4bad-4060-b86e-5382ac65e57a,
addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1515543795296, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
[topVer=11, minorTopVer=0], discoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=026fc6c4-ae5e-4e4b-ba7f-b74c7b685875,
addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:48500], discPort=48500, order=11,
intOrder=8, lastExchangeTime=1515543793995, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11,
nodeId8=026fc6c4, msg=null, type=NODE_JOINED, tstamp=1515543796137],
nodeId=026fc6c4, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=false, hash=7528364], init=false,
lastVer=null, partReleaseFut=null, exchActions=null, affChangeMsg=null,
initTs=1515543796171, centralizedAff=false, changeGlobalStateE=null,
done=true, state=SRV, evtLatch=0,
remaining=[d0b35790-4bad-4060-b86e-5382ac65e57a], super=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=class
o.a.i.IgniteCheckedException: Requested DataRegion is not configured:
Default-Immutable, hash=23921041]]

The error above cites both the 48500 and 47500 discovery ports, even
though this process was only ever configured with the 48500 discovery port
when creating the Ignite server node. Given that the discovery port ranges
(48500-48600 and 47500-47600) don’t intersect, I am confused as to how
this is happening.



Is there anything additional I need to add to the TcpDiscoverySpi, beyond
these settings, to achieve two functioning grids running locally:



            cfg.DiscoverySpi = new TcpDiscoverySpi
            {
                LocalAddress = "127.0.0.1",
                LocalPort = 48500 // 48500 for Grid #1, 47500 for Grid #2
            };
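For example, do I need an explicit IP finder? My understanding is that when no IpFinder is set, TcpDiscoverySpi falls back to multicast-based discovery, which might explain the cross-talk. I'm wondering whether something along these lines is required to isolate each grid (a sketch only; the endpoint string shows Grid #1's range):

```csharp
using Apache.Ignite.Core.Discovery.Tcp;
using Apache.Ignite.Core.Discovery.Tcp.Static;

cfg.DiscoverySpi = new TcpDiscoverySpi
{
    LocalAddress = "127.0.0.1",
    LocalPort = 48500,    // 47500 for Grid #2
    LocalPortRange = 100, // 48500-48600
    IpFinder = new TcpDiscoveryStaticIpFinder
    {
        // Restrict discovery to this grid's own port range only.
        Endpoints = new[] { "127.0.0.1:48500..48600" }
    }
};
```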



Thanks,

Raymond.
