Hi,

1. If the cache mode is local, why does IgniteCluster even come into play?
All that we want to do is to read/write from the work directory
corresponding to our db id. Is there a way to get persistence without
activating the IgniteCluster?
*The cluster needs to make sure that all nodes know about this cache, to
avoid situations where another node later tries to create a distributed
cache with the same name, for example.*

2. How do we achieve the cache isolation we are looking for? Currently, if
I have db A and B, this would result in IgniteA and IgniteB in separate JVM
instances, each with their own directory. If I start IgniteA, and then
IgniteB, IgniteB will complain about not being part of the BaselineTopology
and will not use persistence. It seems like there are two options:
*You can use a NodeFilter for the cache:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setNodeFilter-org.apache.ignite.lang.IgnitePredicate-*
*It defines the subset of nodes that will store data for this cache, so
you can restrict it to a single node.*
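A minimal sketch of that node-filter approach in Java. The "db.id" user attribute name is an assumption for illustration; it would be set via IgniteConfiguration.setUserAttributes on the node that should own the cache:

```java
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

public class NodeFilterExample {
    // Matches only nodes that declare a "db.id" user attribute equal to the
    // given id ("db.id" is a hypothetical attribute name for this sketch).
    public static class DbIdNodeFilter implements IgnitePredicate<ClusterNode> {
        private final String dbId;

        public DbIdNodeFilter(String dbId) {
            this.dbId = dbId;
        }

        @Override
        public boolean apply(ClusterNode node) {
            return dbId.equals(node.attribute("db.id"));
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<String, byte[]> ccfg = new CacheConfiguration<>("cacheA");
        // Only the node whose db.id attribute is "A" will hold data for this cache.
        ccfg.setNodeFilter(new DbIdNodeFilter("A"));
    }
}
```

Since the filter is evaluated on every node when the cache starts, it must be a serializable, deployable class like the one above rather than a lambda capturing local state.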
-Have every node join the baseline topology when it comes up (I think
there is one per machine by default). This seems like it could work, but I
am worried about how rebalancing would work. If all of the server nodes are
using LOCAL cache mode, then would rebalancing ever occur? If it doesn't,
then this seems like it would be an easy enough solution. Also, where is
the cluster state stored (Members of baseline topology, etc.)?
*No, rebalancing won't happen; it will be skipped. However, before that,
nodes must exchange information about the new topology and partition
distribution.*

*The baseline topology should contain all nodes that will store data.*
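For the separate-clusters option raised in the quoted mail, a hedged sketch of a per-db configuration: each instance gets its own work directory and non-overlapping discovery/communication ports, so each JVM forms an isolated one-node cluster. The port numbers and directory layout here are illustrative assumptions, not Ignite defaults:

```java
import java.util.Collections;

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IsolatedIgniteConfig {

    // Builds a configuration for one db id. Discovery only knows about its
    // own port, so the node can never join another db's instance running on
    // the same machine.
    public static IgniteConfiguration configFor(String dbId, int discoPort, int commPort) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:" + discoPort));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);
        discoSpi.setLocalPort(discoPort);
        discoSpi.setLocalPortRange(0); // do not probe neighbouring ports

        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setLocalPort(commPort);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        return new IgniteConfiguration()
            .setIgniteInstanceName("ignite-" + dbId)
            .setWorkDirectory("/var/ignite/" + dbId) // illustrative per-db path
            .setDiscoverySpi(discoSpi)
            .setCommunicationSpi(commSpi)
            .setDataStorageConfiguration(storageCfg);
    }
}
```

Starting an instance would then be Ignition.start(configFor("A", 48500, 48100)) followed by ignite.cluster().active(true); the baseline topology of that one-node cluster contains just the node itself, so the BaselineTopology complaint from the original question does not arise.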

Evgenii


пн, 30 дек. 2019 г. в 10:59, Mitchell Rathbun (BLOOMBERG/ 731 LEX) <
[email protected]>:

> Any thoughts on this?
>
> From: [email protected] At: 12/18/19 19:55:47
> To: [email protected]
> Cc: Anant Narayan (BLOOMBERG/ 731 LEX ) <[email protected]>, Ranjith
> Lingamaneni (BLOOMBERG/ 731 LEX ) <[email protected]>
> Subject: Isolating IgniteCache instances across JVMs on same machine by id
>
> We have multiple different database instances for which we are looking
> into using Ignite as a local caching layer. Each db has a unique id that we
> are using as part of the IgniteInstanceName and for the WorkDirectory path.
> We are running IgniteCache in Local mode with persistence enabled, as we
> effectively want to completely separate the Ignite caches for each db id.
> However, this is not working due to the IgniteCluster being shared and the
> BaselineTopology concept. So a couple of questions:
>
> 1. If the cache mode is local, why does IgniteCluster even come into play?
> All that we want to do is to read/write from the work directory
> corresponding to our db id. Is there a way to get persistence without
> activating the IgniteCluster?
>
> 2. How do we achieve the cache isolation we are looking for? Currently, if
> I have db A and B, this would result in IgniteA and IgniteB in separate JVM
> instances, each with their own directory. If I start IgniteA, and then
> IgniteB, IgniteB will complain about not being part of the BaselineTopology
> and will not use persistence. It seems like there are two options:
>
> -Have every node join the baseline topology when it comes up (I think
> there is one per machine by default). This seems like it could work, but I
> am worried about how rebalancing would work. If all of the server nodes are
> using LOCAL cache mode, then would rebalancing ever occur? If it doesn't,
> then this seems like it would be an easy enough solution. Also, where is
> the cluster state stored (Members of baseline topology, etc.)?
> -If rebalancing does come into play even if all caches are local, then it
> seems like separate clusters would have to be specified when starting up.
> https://apacheignite.readme.io/docs/tcpip-discovery seems to provide a
> way to do that by making sure that TcpDiscoverySpi and TcpCommunicationSpi
> have different port ranges for every instance of the cluster. Again, this
> seems kind of like overkill if we know in advance that all of the caches
> will be local and will not need any of the cluster functionality.
>
>
> Are there any other simpler options? We are only interested in the
> IgniteCache functionality at a local per process level, at least for now.
> We might have 10-20 different caches on the same machine, and need these
> caches to be completely separated with no risk of any of the data being
> rebalanced/exposed to a different client instance. Ideally we would get
> persistence without having to activate IgniteCluster, but it does not seem
> like that is an option.
>
>
>
