Vyacheslav,

The CacheStore design assumes that the underlying storage is shared by all
the nodes in the topology. Even if you delay rebalancing on node stop (which
is possible via CacheConfiguration#rebalanceDelay), I doubt it will solve
all your consistency issues.
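
For completeness, that would look like this (a sketch with an illustrative
cache name; per the CacheConfiguration javadoc, a delay of -1 postpones
rebalancing until IgniteCache#rebalance() is called explicitly):

        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");

        // Delay rebalancing for 10 seconds after a topology change;
        // a value of -1 means rebalance only on an explicit rebalance() call.
        cacheCfg.setRebalanceDelay(10_000);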

Why don't you use Ignite persistence [1]?
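
A minimal sketch of enabling it (assuming Ignite 2.3+, where
DataStorageConfiguration replaced the older PersistentStoreConfiguration):

        IgniteConfiguration cfg = new IgniteConfiguration();

        // Enable native persistence for the default data region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);

        // A persistent cluster starts deactivated; activate it once all nodes are up.
        ignite.active(true);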

[1] https://apacheignite.readme.io/docs/distributed-persistent-store

-Val

On Fri, Nov 17, 2017 at 4:24 AM, Vyacheslav Daradur <daradu...@gmail.com>
wrote:

> Hi Andrey! Thank you for answering.
>
> >> Key to partition mapping shouldn't depend on topology, and shouldn't
> >> change on an unstable topology.
> Key to partition mapping doesn't depend on topology in my test
> affinity function. It only depends on the number of partitions.
> But partition to node mapping does depend on topology, and at cluster
> stop, when one node leaves the topology, some partitions may be moved
> to other nodes.
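>
> For illustration, the key-to-partition part is roughly this shape (a
> simplified sketch, not the real TestAffinityFunction code):
>
>         // AffinityFunction#partition: depends only on the key and the
>         // configured partition count, never on which nodes are alive.
>         @Override public int partition(Object key) {
>             return Math.abs(key.hashCode() % partitionsNumber);
>         }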
>
> >> Do all nodes share the same RocksDB database, or does each node have
> >> its own copy?
> Each Ignite node has its own RocksDB instance.
>
> >> Would you please share your configuration?
> It's pretty simple:
>
>         IgniteConfiguration cfg = new IgniteConfiguration();
>         cfg.setIgniteInstanceName(instanceName);
>
>         CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>();
>         cacheCfg.setName(TEST_CACHE_NAME);
>         cacheCfg.setCacheMode(CacheMode.PARTITIONED);
>         cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
>         cacheCfg.setBackups(1);
>         cacheCfg.setAffinity(new TestAffinityFunction(partitionsNumber, backupsNumber));
>         cacheCfg.setWriteThrough(true);
>         cacheCfg.setReadThrough(true);
>         cacheCfg.setRebalanceMode(CacheRebalanceMode.SYNC);
>         cacheCfg.setCacheStoreFactory(new RocksDBCacheStoreFactory<>("/test/path/to/persistence", TEST_CACHE_NAME, cfg));
>
>         cfg.setCacheConfiguration(cacheCfg);
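>
> In case it helps, here is a simplified skeleton of the factory (a sketch
> only; the real one also takes the cache name and IgniteConfiguration and
> does more than this):
>
>         import javax.cache.Cache;
>         import javax.cache.configuration.Factory;
>         import javax.cache.integration.CacheLoaderException;
>         import javax.cache.integration.CacheWriterException;
>         import org.apache.ignite.cache.store.CacheStore;
>         import org.apache.ignite.cache.store.CacheStoreAdapter;
>         import org.rocksdb.RocksDB;
>         import org.rocksdb.RocksDBException;
>
>         public class RocksDBCacheStoreFactory implements Factory<CacheStore<Integer, String>> {
>             private final String path;
>
>             public RocksDBCacheStoreFactory(String path) {
>                 this.path = path;
>             }
>
>             @Override public CacheStore<Integer, String> create() {
>                 try {
>                     RocksDB.loadLibrary();
>
>                     // Each node opens its own local database at this path;
>                     // nothing here is shared across the cluster.
>                     final RocksDB db = RocksDB.open(path);
>
>                     return new CacheStoreAdapter<Integer, String>() {
>                         @Override public String load(Integer key) {
>                             try {
>                                 byte[] val = db.get(key.toString().getBytes());
>                                 return val == null ? null : new String(val);
>                             }
>                             catch (RocksDBException e) {
>                                 throw new CacheLoaderException(e);
>                             }
>                         }
>
>                         @Override public void write(Cache.Entry<? extends Integer, ? extends String> e) {
>                             try {
>                                 db.put(e.getKey().toString().getBytes(), e.getValue().getBytes());
>                             }
>                             catch (RocksDBException ex) {
>                                 throw new CacheWriterException(ex);
>                             }
>                         }
>
>                         @Override public void delete(Object key) {
>                             try {
>                                 db.delete(key.toString().getBytes());
>                             }
>                             catch (RocksDBException ex) {
>                                 throw new CacheWriterException(ex);
>                             }
>                         }
>                     };
>                 }
>                 catch (RocksDBException e) {
>                     throw new IllegalStateException("Failed to open RocksDB at " + path, e);
>                 }
>             }
>         }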
>
> Could you give me advice on the places I need to pay attention to?
>
>
> On Wed, Nov 15, 2017 at 3:02 PM, Andrey Mashenkov
> <andrey.mashen...@gmail.com> wrote:
> > Hi Vyacheslav,
> >
> > Key to partition mapping shouldn't depend on topology, and shouldn't
> > change on an unstable topology.
> > Looks like you've missed something.
> >
> > Would you please share your configuration?
> > Do all nodes share the same RocksDB database, or does each node have its
> > own copy?
> >
> >
> >
> > On Wed, Nov 15, 2017 at 12:22 AM, Vyacheslav Daradur <
> daradu...@gmail.com>
> > wrote:
> >
> >> Hi, Igniters!
> >>
> >> I'm using a partitioned Ignite cache with RocksDB as a 3rd-party
> >> persistence store.
> >> I've got an issue: if cache rebalancing is switched on, it's possible
> >> to lose some data.
> >>
> >> Basic scenario:
> >> 1) Start the Ignite cluster and fill a cache backed by the RocksDB store;
> >> 2) Stop all nodes;
> >> 3) Start the Ignite cluster again and validate the data.
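> >>
> >> In code, the scenario is roughly the following (a condensed sketch;
> >> config() stands in for my node configuration):
> >>
> >>         // 1) Start a small cluster and fill the cache.
> >>         Ignite node1 = Ignition.start(config("node-1"));
> >>         Ignite node2 = Ignition.start(config("node-2"));
> >>
> >>         IgniteCache<Integer, String> cache = node1.getOrCreateCache(TEST_CACHE_NAME);
> >>
> >>         for (int i = 0; i < 10_000; i++)
> >>             cache.put(i, "value-" + i);
> >>
> >>         // 2) Stop all nodes.
> >>         Ignition.stopAll(true);
> >>
> >>         // 3) Restart and validate: every key should still be readable.
> >>         node1 = Ignition.start(config("node-1"));
> >>         node2 = Ignition.start(config("node-2"));
> >>
> >>         cache = node1.cache(TEST_CACHE_NAME);
> >>
> >>         for (int i = 0; i < 10_000; i++)
> >>             assert ("value-" + i).equals(cache.get(i)) : "Lost key: " + i;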
> >>
> >> This works fine while rebalancing is switched off.
> >>
> >> If rebalancing is switched on: when I call Ignition#stopAll, the nodes
> >> go down sequentially, and once one node has gone down the remaining
> >> nodes start rebalancing. When the nodes are started again, the affinity
> >> function works with the full set of nodes and may resolve a key to the
> >> wrong partition, because the previous state was changed during
> >> rebalancing.
> >>
> >> Maybe I'm doing something wrong. How can I avoid rebalancing while
> >> stopping all nodes in the cluster?
> >>
> >> Could you give me any advice, please?
> >>
> >> --
> >> Best Regards, Vyacheslav D.
> >>
> >
> >
> >
> > --
> > Best regards,
> > Andrey V. Mashenkov
>
>
>
> --
> Best Regards, Vyacheslav D.
>
