Re: incorrect partition map exchange behaviour

2021-01-14 Thread Ilya Kasnacheev
Hello! I have just looked at this issue today, and the relevant fix seems to be https://issues.apache.org/jira/browse/IGNITE-11147 Regards, -- Ilya Kasnacheev Wed, Jan 13, 2021 at 20:26, tschauenberg : > Sorry about mixing the terminology. My post was meant to be about the PME > and the

RE: incorrect partition map exchange behaviour

2021-01-13 Thread tschauenberg
Sorry about mixing the terminology. My post was meant to be about the PME and the primary keys. To summarize, what my post was trying to show is that the PME was only happening on cluster node leaves (server or visor) but not on cluster node joins (at least with previously joined nodes -

Re: incorrect partition map exchange behaviour

2021-01-13 Thread tschauenberg
Haven't tested on 2.9.1 as we don't have that database provisioned and sadly won't for a while. When we do, though, I will update. -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

RE: incorrect partition map exchange behaviour

2021-01-13 Thread Alexandr Shapkin
Hi, As you correctly noted from the PME implementation details webpage, this is a process of exchanging information about partition holders, and it happens on every topology change, cluster deactivation, etc. The process itself is not about data rebalancing; it's about what node should store a
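To make the "which node holds which partition" idea concrete, here is a small, self-contained rendezvous-hashing sketch. This is illustrative only: Ignite's actual RendezvousAffinityFunction is more elaborate (backups, backup filters, a larger default partition count), but the key property is the same — when a node leaves, only the partitions it owned get a new owner, and the partition map exchange is how nodes agree on that new map.

```java
import java.util.*;

// Illustrative sketch of partition-to-node assignment via rendezvous
// hashing. Class and node names are hypothetical, not Ignite APIs.
public class PartitionMapSketch {
    // Assign each partition to the node with the highest score for it.
    static Map<Integer, String> assign(int partitions, List<String> nodes) {
        Map<Integer, String> owners = new HashMap<>();
        for (int p = 0; p < partitions; p++) {
            String best = null;
            int bestScore = Integer.MIN_VALUE;
            for (String node : nodes) {
                // Deterministic score per (partition, node) pair.
                int score = Objects.hash(p, node);
                if (score > bestScore) {
                    bestScore = score;
                    best = node;
                }
            }
            owners.put(p, best);
        }
        return owners;
    }

    public static void main(String[] args) {
        Map<Integer, String> before =
            assign(16, Arrays.asList("node-1", "node-2", "node-3"));
        Map<Integer, String> after =
            assign(16, Arrays.asList("node-1", "node-2"));

        // Only partitions previously owned by the departed node move.
        for (int p = 0; p < 16; p++) {
            if (!"node-3".equals(before.get(p)))
                assert before.get(p).equals(after.get(p));
        }
        System.out.println("before: " + before);
        System.out.println("after node-3 left: " + after);
    }
}
```

The stability property checked in `main` is why a topology change requires only an exchange of maps rather than a full reshuffle of data.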

Re: incorrect partition map exchange behaviour

2021-01-13 Thread Ilya Kasnacheev
Hello! Does it happen to work on 2.9.1, or does it fail too? I recommend checking it, since I vaguely remember some discussions about a late affinity assignment fix. Regards, -- Ilya Kasnacheev Sat, Jan 9, 2021 at 03:11, tschauenberg : > Here's my attempt to demonstrate and also provide logs > >

Re: incorrect partition map exchange behaviour

2021-01-08 Thread tschauenberg
Here's my attempt to demonstrate and also provide logs. Stand up a 3-node cluster and load it with data. Using a thick client, 250k devices are loaded into the device cache. The thick client then leaves. There's one other thick client connected the whole time for serving requests, but I think that's
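For reference, a thick client like the one described above is simply an Ignite node started in client mode: it joins the cluster topology (so its joins and leaves are topology events) but does not own cache partitions. A minimal Spring XML sketch of such a configuration, with discovery and cache settings omitted (`clientMode` is the standard IgniteConfiguration property; everything else here is boilerplate):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- Join the topology as a thick client; no partitions are hosted here. -->
        <property name="clientMode" value="true"/>
    </bean>
</beans>
```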