Re: incorrect partition map exchange behaviour

2021-01-08 Thread tschauenberg
Here's my attempt to demonstrate and also provide logs

Stand up a 3-node cluster and load it with data


Using a thick client, 250k devices are loaded into the device cache. The
thick client then leaves. One other thick client stays connected the whole
time to serve requests; I think that's irrelevant to the test, but I want to
point it out in case someone notices there's still a client connected.

Show topology from logs of the client leaving:


[2021-01-08T23:08:05.012Z][INFO][disco-event-worker-#40][GridDiscoveryManager]
Node left topology: TcpDiscoveryNode
[id=611e30ee-b7c6-4ead-a746-f609b206cfb4,
consistentId=611e30ee-b7c6-4ead-a746-f609b206cfb4, addrs=ArrayList
[127.0.0.1, 172.17.0.3], sockAddrs=HashSet [/127.0.0.1:0, /172.17.0.3:0],
discPort=0, order=7, intOrder=6, lastExchangeTime=1610146373751, loc=false,
ver=2.8.1#20200521-sha1:86422096, isClient=true]
[2021-01-08T23:08:05.013Z][INFO][disco-event-worker-#40][GridDiscoveryManager]
Topology snapshot [ver=8, locNode=75e4ddea, servers=3, clients=1,
state=ACTIVE, CPUs=7, offheap=3.0GB, heap=3.1GB]

Start visor on one of the nodes


Show topology from logs


[2021-01-08T23:30:33.461Z][INFO][tcp-disco-msg-worker-[4ea8efe1
10.12.3.76:47500]-#2][TcpDiscoverySpi] New next node
[newNext=TcpDiscoveryNode [id=1cca94e3-f15f-4a8b-9f65-d9b9055a5fa7,
consistentId=10.12.2.110:47501, addrs=ArrayList [10.12.2.110],
sockAddrs=HashSet [/10.12.2.110:47501], discPort=47501, order=0, intOrder=7,
lastExchangeTime=1610148633458, loc=false, ver=2.8.1#20200521-sha1:86422096,
isClient=false]]
[2021-01-08T23:30:34.045Z][INFO][sys-#1011][GridDhtPartitionsExchangeFuture]
Completed partition exchange
[localNode=75e4ddea-1927-4e93-82e9-fdfbb7b58d1c,
exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=9, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode
[id=1cca94e3-f15f-4a8b-9f65-d9b9055a5fa7, consistentId=10.12.2.110:47501,
addrs=ArrayList [10.12.2.110], sockAddrs=HashSet [/10.12.2.110:47501],
discPort=47501, order=9, intOrder=7, lastExchangeTime=1610148633458,
loc=false, ver=2.8.1#20200521-sha1:86422096, isClient=false], done=true,
newCrdFut=null], topVer=AffinityTopologyVersion [topVer=9, minorTopVer=0]]

Show data balanced in visor


+--------------+-------------+-------+-------------+--------------------------+-----------+-----------+-----------+------------+
| Devices(@c2) | PARTITIONED | 3     | 25 (0 / 25) | min: 80315 (0 / 80315)   | min: 0    | min: 0    | min: 0    | min: 25    |
|              |             |       |             | avg: 8.33 (0.00 / 8.33)  | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 25.00 |
|              |             |       |             | max: 86968 (0 / 86968)   | max: 0    | max: 0    | max: 0    | max: 25    |
+--------------+-------------+-------+-------------+--------------------------+-----------+-----------+-----------+------------+

At this point the data is all relatively balanced, and the topology version
increased when visor connected.
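For reference, here's the arithmetic behind "relatively balanced". This is only a sketch: it assumes the 250k devices from the load step are the only entries in the cache, and it uses the per-node min/max from the visor output above (the avg column there looks truncated in the archive).

```java
// Sanity check: with 250,000 entries on 3 server nodes, an even spread
// would put ~83,333 entries on each node. The visor output reports
// min=80315 and max=86968 per node, both within ~5% of that ideal.
public class BalanceCheck {
    public static void main(String[] args) {
        int entries = 250_000;                            // from the load step
        int servers = 3;
        double ideal = (double) entries / servers;        // ~83333.3
        int min = 80_315, max = 86_968;                   // from visor
        double minDev = Math.abs(min - ideal) / ideal;    // ~3.6%
        double maxDev = Math.abs(max - ideal) / ideal;    // ~4.4%
        System.out.printf("ideal=%.1f minDev=%.1f%% maxDev=%.1f%%%n",
            ideal, minDev * 100, maxDev * 100);
    }
}
```

So "balanced" here means every node is within roughly 5% of the ideal per-node count.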

Stop Ignite on one node


Show topology and PME from the logs (taken from a different Ignite node,
since the Ignite process on the stopped node was down)


[2021-01-08T23:35:39.333Z][INFO][disco-event-worker-#40][GridDiscoveryManager]
Node left topology: TcpDiscoveryNode
[id=75e4ddea-1927-4e93-82e9-fdfbb7b58d1c,
consistentId=3a4a497f-5a89-4f2c-8531-b2b05f2ede22, addrs=ArrayList
[10.12.2.110], sockAddrs=HashSet [/10.12.2.110:47500], discPort=47500,
order=3, intOrder=3, lastExchangeTime=1610139164908, loc=false,
ver=2.8.1#20200521-sha1:86422096, isClient=false]
[2021-01-08T23:35:39.333Z][INFO][disco-event-worker-#40][GridDiscoveryManager]
Topology snapshot [ver=10, locNode=4ea8efe1, servers=2, clients=1,
state=ACTIVE, CPUs=5, offheap=2.0GB, heap=2.1GB]
[2021-01-08T23:35:39.333Z][INFO][disco-event-worker-#40][GridDiscoveryManager]  
^-- Baseline [id=0, size=3, online=2, offline=1]
[2021-01-08T23:35:39.335Z][INFO][exchange-worker-#41][time] Started exchange
init [topVer=AffinityTopologyVersion [topVer=10, minorTopVer=0], crd=true,
evt=NODE_LEFT, evtNode=75e4ddea-1927-4e93-82e9-fdfbb7b58d1c, customEvt=null,
allowMerge=false, exchangeFreeSwitch=true]
[2021-01-08T23:35:39.338Z][INFO][sys-#1031][GridAffinityAssignmentCache]
Local node affinity assignment distribution is not ideal [cache=Households,
expectedPrimary=512.00, actualPrimary=548, expectedBackups=1024.00,
actualBackups=476, warningThreshold=50.00%]
[2021-01-08T23:35:39.340Z][INFO][sys-#1032][GridAffinityAssignmentCache]
Local node affinity assignment distribution is not ideal [cache=Devices,
expectedPrimary=512.00, actualPrimary=548, expectedBackups=1024.00,
actualBackups=476, warningThreshold=50.00%]
[2021-01-08T23:35:39.354Z][INFO][exchange-worker-#41][GridDhtPartitionsExchangeFuture]
Finished waiting for partition release future
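For what it's worth, the "distribution is not ideal" numbers line up if the caches are configured with 1024 partitions and 2 backups. That's an assumption inferred from the logged values (the logs don't state the cache configuration), so treat this as a sketch, not a statement of how the grid was actually configured:

```java
// With 2 of 3 baseline nodes online and (assumed) 1024 partitions, 2 backups,
// the expected/actual values in the GridAffinityAssignmentCache message fall out:
public class AffinityCheck {
    public static void main(String[] args) {
        int parts = 1024, backups = 2, online = 2;                   // parts/backups are assumptions
        double expectedPrimary = (double) parts / online;            // 512.0, matches the log
        double expectedBackups = (double) parts * backups / online;  // 1024.0, matches the log
        int actualPrimary = 548;                                     // from the log
        // With only 2 nodes left, at most one backup copy per partition fits,
        // and the backups held on this node are the other node's primaries:
        int actualBackups = parts - actualPrimary;                   // 476, matches the log
        double deficit = (expectedBackups - actualBackups) / expectedBackups;
        System.out.printf("backup deficit = %.1f%%%n", deficit * 100);
    }
}
```

The backup deficit works out to about 53.5%, which would exceed the logged warningThreshold=50.00% and is presumably why the message is printed.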

Re: Control Center core limit info

2021-01-08 Thread Carsten

Hey Denis,

thanks for letting me know.
I also tried the Web Console and it seems amazing to me. Started right
up, showed the whole cluster, no limitations. Great!



Again, thank you very much !!

Carsten


Am 08.01.21 um 11:20 schrieb Denis Magda:

Hello Carsten,

To my knowledge, you can have up to 2 nodes monitored by Control 
Center for free. Then you need to request a license file.


As a temporary workaround, you can use the hosted version of the tool,
which doesn't impose any limitations:

https://control.gridgain.com 

-
Denis


On Fri, Jan 8, 2021 at 7:09 AM Carsten wrote:


Hello all,

after an install marathon last night, I was ready to test Ignite
2.9.1 + GridGain Control Center 2020.12.

But when starting everything (4 nodes + control center) I got the
message "Core limit has been exceeded for the current license".

I was just wondering what the core limit is and where to find
information on it (I did search but came up empty).

Any advice is greatly appreciated


All the best,

Carsten





Re: Control Center core limit info

2021-01-08 Thread Denis Magda
Hello Carsten,

To my knowledge, you can have up to 2 nodes monitored by Control Center for
free. Then you need to request a license file.

As a temporary workaround, you can use the hosted version of the tool,
which doesn't impose any limitations:
https://control.gridgain.com

-
Denis


On Fri, Jan 8, 2021 at 7:09 AM Carsten wrote:

> Hello all,
>
> after an install marathon last night, I was ready to test Ignite
> 2.9.1 + GridGain Control Center 2020.12.
>
> But when starting everything (4 nodes + control center) I got the
> message "Core limit has been exceeded for the current license".
>
> I was just wondering what the core limit is and where to find information
> on it (I did search but came up empty)
>
> Any advice is greatly appreciated
>
>
> All the best,
>
> Carsten
>
>


Control Center core limit info

2021-01-08 Thread Carsten

Hello all,

after an install marathon last night, I was ready to test Ignite
2.9.1 + GridGain Control Center 2020.12.


But when starting everything (4 nodes + control center) I got the 
message "Core limit has been exceeded for the current license".


I was just wondering what the core limit is and where to find information
on it (I did search but came up empty)


Any advice is greatly appreciated


All the best,

Carsten