Hi,
I tried your suggestion of using a NodeFilter, but it does not solve the
issue. Using a NodeFilter on the consistent ID, so that the cache is created
on only one node, still creates persistence information on every node:
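For reference, the node filter I am using looks roughly like this (a sketch; the cache name "tempBCK0" is from the directories below, while the target consistent ID value is a placeholder):

```java
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;

public class NodeFilterSketch {

    // Hypothetical consistent ID of the single node that should hold the cache.
    static final String TARGET_CONSISTENT_ID = "node01";

    public static CacheConfiguration<Integer, String> cacheCfg() {
        CacheConfiguration<Integer, String> cfg =
            new CacheConfiguration<>("tempBCK0");

        // Restrict the cache to the node whose consistent ID matches.
        // IgnitePredicate is Serializable, so a lambda works here.
        cfg.setNodeFilter(node ->
            TARGET_CONSISTENT_ID.equals(String.valueOf(node.consistentId())));

        return cfg;
    }
}
```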

On the node for which the filter evaluates to true (directory size 75 MB):
//work/db/node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a//cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c/

On the node for which the filter evaluates to false (directory size 8 KB):
//work/db/node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a//cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c/

If the cache is destroyed while *any* of these nodes is down, that node
cannot rejoin the cluster afterwards and fails with this exception:

Caused by: class org.apache.ignite.spi.IgniteSpiException: Joining node has
caches with data which are not presented on cluster, it could mean that they
were already destroyed, to add the node to cluster - remove directories with
the caches[tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c]
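The workaround the exception message itself suggests is to delete the stale cache directory on the failing node before restarting it. A minimal sketch, assuming the work-directory layout shown above (the node must be stopped first):

```shell
#!/bin/sh
# Remove the leftover persistence directory of the destroyed cache so the
# node can rejoin the cluster. Paths are taken from the directories above.
WORK_DIR=/work/db
NODE_DIR=node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a
CACHE_DIR=cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c

# rm -rf succeeds even if the directory is already gone.
rm -rf "${WORK_DIR}/${NODE_DIR}/${CACHE_DIR}"
```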



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
