It seems this is very similar to
After a node restart there can be some delay in removal. When some other
entry is removed, an entry that survived the restart should be removed as
well. I guess the entry may come from the WAL after restart and is not removed because of
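For reference, the setup being discussed can be sketched as below. This is a minimal sketch of a cache with a 1-minute CreatedExpiryPolicy and native persistence enabled; the cache name and key/value types are illustrative, not taken from the thread.

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ExpiryRepro {
    public static void main(String[] args) throws Exception {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Enable Ignite native persistence on the default data region.
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storage);

        // Cache whose entries expire 1 minute after creation.
        CacheConfiguration<String, String> cacheCfg =
            new CacheConfiguration<>("testCache");
        cacheCfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 1)));
        cfg.setCacheConfiguration(cacheCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Clusters with persistence start inactive and must be activated.
            ignite.cluster().active(true);

            IgniteCache<String, String> cache = ignite.cache("testCache");
            cache.put("k", "v");
            System.out.println(cache.get("k")); // value is returned

            Thread.sleep(61_000);
            System.out.println(cache.get("k")); // null after expiry
            // Restart the JVM here and call cache.get("k") again:
            // the thread reports the expired value can reappear
            // from the persistence layer.
        }
    }
}
```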
Thu, Mar 8, 2018, 17:15 Subash Chaturanga <subash...@gmail.com>:
> Ideally 3 nodes.
> So is the expiry inconsistency a known issue?
> But I can reproduce the expiry inconsistency even on a single node with
> the steps mentioned earlier.
> We are evaluating Ignite vs Redis for our use case these days, and wanted
> to make sure everything works as described in the docs. Can you please
> also confirm the cache eviction behavior? For example, can we put a limit
> on the persistence layer too?
> On Thu, Mar 8, 2018 at 6:51 AM Dmitry Pavlov <dpavlov....@gmail.com> wrote:
>> There was a well-known issue with native persistence and rebalancing
>> data between nodes; I can try to find its Id later.
>> How many nodes do you use?
>> Dmitry Pavlov
>> Wed, Mar 7, 2018, 22:25 Subash Chaturanga <subash...@gmail.com>:
>>> Hi team,
>>> With a cache having a CreatedExpiryPolicy with a 1-minute duration and
>>> with Ignite persistence enabled as per the docs, I did this:
>>> - cache put and then cache get, gives me the value
>>> - wait for 1min
>>> - cache get, returns null. Perfectly fine up to now.
>>> - then recycled the JVM.
>>> - now only cache.get, no puts; my expectation is that get returns null.
>>> But that wasn't the case. It returned the value.
>>> This means cache expiry doesn't remove the entry from the native
>>> persistence layer. But if you do a cache.remove(), it does remove the
>>> entry from the native persistence layer too, meaning a JVM recycle will
>>> return null.
>>> So compared to cache.remove() working as expected, the cache expiry
>>> behavior is very inconsistent.
>>> Can someone please clarify ?