RE: Inconsistent data.
Hi,

No, the behavior you're describing is not expected. Moreover, there are a number of tests in Ignite that verify reads stay consistent during rebalancing, so this should not be possible. Can you create a small reproducer project and share it?

Thanks,
Stan

From: Shrikant Haridas Sonone
Sent: September 19, 2018, 12:28
To: user@ignite.apache.org
Cc: Mahesh Sanjay Jadhav; Chinmoy Mohanty; Abhay Dutt Paroha
Subject: Inconsistent data.

Hi,

I am using Ignite 2.6.0 on top of Google Kubernetes Engine v1.10.6-gke.2, with native persistence and the example configuration as follows. We have around 2 GB of data, and we recently migrated from 2.4.0 to 2.6.0.

Issues:
1. When running 3 Ignite nodes (one per pod), with all 3 as baseline nodes: if any node is restarted, it takes approximately 5-6 seconds to rebalance the data. Requests served during that window may be routed to the restarted node before it has completed rebalancing, so the response is inconsistent (sometimes empty) compared to the state before the node rejoined.
2. The same behavior as in point 1 occurs when a new/fresh node is added to the baseline topology.

We tried setting the cache rebalance mode to SYNC (per the doc https://apacheignite.readme.io/docs/rebalancing), but the issue still persists.

Is this the expected behavior? Is there any way or setting that will keep requests away from a newly added/restarted node until the rebalance operation has completed successfully? Could this be a side effect of the migration from Apache Ignite 2.4.0 to 2.6.0?

Thanks and Regards,
Shrikant Sonone
Schlumberger
Re: Inconsistent data.
I have a Scala Akka app that uses Ignite to store data. It connects to a 3-node Ignite cluster through the ignite-spring 2.6.0 SDK. The app has an endpoint that reads data from the Ignite cluster. In normal operation, requests to the app return the correct data from Ignite. However, if any single Ignite node is restarted, there is a 5-6 second window while the data is rebalanced. If a request that reads from Ignite is sent to the app during that window, the response is inconsistent, and usually blank. Once the rebalance completes, everything returns to normal. I am not making any affinity calls.

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Inconsistent data.
Hi Shrikant,

What kind of requests do you mean? If you make affinity calls, Ignite always sends the task to a node that has the data; if a new node doesn't have the data yet, tasks simply won't go to that node. Could you please explain in more detail what you mean by requests?

Thanks,
Mike.

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
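[To illustrate the routing Mike describes, here is a small, self-contained Java sketch of affinity-style routing. This is not Ignite's actual affinity function, just a conceptual model in which each key hashes deterministically to a partition and each partition is owned by one node, so a computation for a key is always dispatched to the node that owns the data; all class and method names here are made up for illustration.]

```java
import java.util.List;

// Conceptual sketch of affinity routing: a key hashes to a partition,
// and each partition has an owning node. A task for a key is always
// sent to the owner, so it never lands on a node without the data.
// Illustration only -- not Ignite's real RendezvousAffinityFunction.
public class AffinityRoutingSketch {
    static final int PARTITIONS = 1024;

    // Deterministically map a key to one of a fixed set of partitions.
    static int partition(Object key) {
        return Math.abs(key.hashCode() % PARTITIONS);
    }

    // Map a partition to the node that currently owns it.
    static String ownerOf(int partition, List<String> nodes) {
        return nodes.get(partition % nodes.size());
    }

    // "Dispatch" a task for a key: it always goes to the data owner.
    static String routeTask(Object key, List<String> nodes) {
        return ownerOf(partition(key), nodes);
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node-1", "node-2", "node-3");
        // The same key always routes to the same node.
        String first = routeTask("customer-42", nodes);
        String second = routeTask("customer-42", nodes);
        System.out.println(first.equals(second)); // true: routing is deterministic
    }
}
```

[The point of the sketch is the determinism: because routing depends only on the key and the partition ownership map, an affinity call cannot land on a node that does not yet own the data, which is why Mike asks whether the app's requests are plain client reads rather than affinity calls.]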
Inconsistent data.
Hi,

I am using Ignite 2.6.0 on top of Google Kubernetes Engine v1.10.6-gke.2, with native persistence and the example configuration as follows. We have around 2 GB of data, and we recently migrated from 2.4.0 to 2.6.0.

Issues:
1. When running 3 Ignite nodes (one per pod), with all 3 as baseline nodes: if any node is restarted, it takes approximately 5-6 seconds to rebalance the data. Requests served during that window may be routed to the restarted node before it has completed rebalancing, so the response is inconsistent (sometimes empty) compared to the state before the node rejoined.
2. The same behavior as in point 1 occurs when a new/fresh node is added to the baseline topology.

We tried setting the cache rebalance mode to SYNC (per the doc https://apacheignite.readme.io/docs/rebalancing), but the issue still persists.

Is this the expected behavior? Is there any way or setting that will keep requests away from a newly added/restarted node until the rebalance operation has completed successfully? Could this be a side effect of the migration from Apache Ignite 2.4.0 to 2.6.0?

Thanks and Regards,
Shrikant Sonone
Schlumberger
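[For reference, the SYNC rebalance mode mentioned above is configured per cache. A minimal Spring XML fragment for it might look like the following; the cache name and backup count are placeholders, not values taken from the poster's actual configuration.]

```xml
<!-- Illustrative cache configuration fragment; name/backups are placeholders. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <property name="backups" value="1"/>
    <!-- SYNC delays cache availability on a joining node until rebalancing completes. -->
    <property name="rebalanceMode" value="SYNC"/>
</bean>
```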