Re: Node is unable to join cluster because it has destroyed caches

2020-06-08 Thread Ilya Kasnacheev
Hello! It was disabled not just because of potential data loss, but because the cache was resurrected on such a start and could break the cluster. Creating a cache per operation and destroying it afterwards is an anti-pattern; it can cause all sorts of issues and is better avoided. Regards, -- Ilya

Re: Node is unable to join cluster because it has destroyed caches

2020-06-03 Thread xero
Hi, I tried your suggestion of using a NodeFilter, but it is not solving this issue. Using a NodeFilter keyed on consistent ID in order to create the cache on only one node still creates persistence information on every node. On the node for which the filter is true (directory size 75 MB):
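For reference, a node filter keyed on consistent ID like the one described above might look roughly like the sketch below. The target ID value and class name are placeholders, not taken from the original post.

    import org.apache.ignite.cluster.ClusterNode;
    import org.apache.ignite.lang.IgnitePredicate;

    // Matches a single node by its consistent ID so the cache is assigned to that node only.
    public class ConsistentIdNodeFilter implements IgnitePredicate<ClusterNode> {
        // Placeholder: the consistent ID of the node that should host the cache.
        private static final String TARGET_CONSISTENT_ID = "node-1";

        @Override
        public boolean apply(ClusterNode node) {
            return TARGET_CONSISTENT_ID.equals(String.valueOf(node.consistentId()));
        }
    }

    // Usage: cacheCfg.setNodeFilter(new ConsistentIdNodeFilter());

Note that with native persistence enabled for the cache's data region, other nodes can still keep cache metadata on disk even when the filter excludes them from holding data, which may explain the directories observed on every node.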

Re: Node is unable to join cluster because it has destroyed caches

2020-06-02 Thread xero
Hi, thanks for the prompt response. We can have several of these caches, one for each query being executed (it is an exceptional case, but under load there can be several simultaneously), so we would like to preserve persistence to take advantage of swapping in case the amount of memory
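One way to keep that behaviour, assuming the goal is to let cold pages spill to disk when memory runs out, would be a dedicated persisted data region for the temporary caches. This is only a sketch; the region name and sizes are placeholders.

    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;

    public class PersistedTempRegion {
        public static DataStorageConfiguration storage() {
            // Persisted region kept deliberately small so that, when the temporary
            // caches exceed the in-memory limit, cold pages are written to disk
            // instead of exhausting memory.
            DataRegionConfiguration tempRegion = new DataRegionConfiguration()
                .setName("tempPersistedRegion")
                .setInitialSize(256L * 1024 * 1024)   // 256 MB
                .setMaxSize(512L * 1024 * 1024)       // 512 MB
                .setPersistenceEnabled(true);

            return new DataStorageConfiguration()
                .setDataRegionConfigurations(tempRegion);
        }
    }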

Re: Node is unable to join cluster because it has destroyed caches

2020-06-02 Thread Aleksandr Shapkin
Hi, Have you tried to put the temp cache into a different, non-persisted memory region? You can also try to use a node filter to control which nodes should store the cache. On Tue, Jun 2, 2020, 19:24 xero wrote: > Hi Ignite team, We have a use case where a small portion of the dataset >
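A minimal sketch of combining the two suggestions above, assuming hypothetical region, cache, and attribute names (the attribute would be set via IgniteConfiguration.setUserAttributes on the hosting nodes):

    import org.apache.ignite.cluster.ClusterNode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.lang.IgnitePredicate;

    public class TempRegionConfig {
        public static IgniteConfiguration configure() {
            // Separate in-memory region for the temporary caches; persistence stays off here,
            // so destroying such a cache leaves nothing on disk.
            DataRegionConfiguration tempRegion = new DataRegionConfiguration()
                .setName("tempRegion")
                .setPersistenceEnabled(false);

            DataStorageConfiguration storageCfg = new DataStorageConfiguration()
                .setDataRegionConfigurations(tempRegion);

            // Node filter: only nodes carrying a marker user attribute host the temp cache.
            IgnitePredicate<ClusterNode> filter =
                node -> Boolean.TRUE.equals(node.attribute("temp.cache.host"));

            CacheConfiguration<Long, String> tempCacheCfg = new CacheConfiguration<Long, String>("tempQueryCache")
                .setDataRegionName("tempRegion")
                .setNodeFilter(filter);

            return new IgniteConfiguration()
                .setDataStorageConfiguration(storageCfg)
                .setCacheConfiguration(tempCacheCfg);
        }
    }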

Node is unable to join cluster because it has destroyed caches

2020-06-02 Thread xero
Hi Ignite team, We have a use case where a small portion of the dataset must serve successive queries that can be relatively expensive. For this, we create a temporary cache with that small subset of the dataset and operate on that new cache. At the end of the process, that cache is destroyed.
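For context, a minimal sketch of the create-query-destroy workflow described above, using the Java API; the cache name and sample data are placeholders, not from the original post.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class TempCacheLifecycle {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                // Create a short-lived cache holding the small subset of the dataset.
                CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("tempQueryCache");
                IgniteCache<Long, String> tempCache = ignite.getOrCreateCache(cfg);

                // Load the subset and run the expensive queries against it here.
                tempCache.put(1L, "subset-row-1");

                // At the end of the process the cache is destroyed.
                ignite.destroyCache("tempQueryCache");
            }
        }
    }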