Hello.
As far as I know, for a TRANSACTIONAL cache the data is always passed to the cache store on the same node where it is written, to make the cache store transaction-aware, unless write-behind mode is enabled (which effectively keeps the cache store out of the actual transactions that wrote the data); in that case the data is always passed to the cache store on the primary nodes. ATOMIC caches also write data to the cache store on primary nodes.

In the case of a transactional cache, writing all data from inside affinity calls may solve the problem of writing data to the cache store only on primary or backup nodes; see the sketch below.
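For illustration, a minimal sketch of such an affinity-colocated write, assuming write-through is enabled on the cache (the cache name, key, and value are hypothetical, not from this thread):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class AffinityWriteExample {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                // Hypothetical cache name; its configuration is assumed to have write-through enabled.
                String cacheName = "TEST_CACHE_NAME";
                int key = 42;

                // Run the closure on the node that is primary for the key, so the
                // write-through call into the cache store happens on that node as well.
                ignite.compute().affinityRun(cacheName, key, () ->
                    Ignition.localIgnite().<Integer, String>cache(cacheName).put(key, "value-" + key));
            }
        }
    }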

On 11/21/2017 4:37 PM, Vyacheslav Daradur wrote:
Valentin,

Why don't you use Ignite persistence [1]?
I have a use case in one of my projects that needs only RAM-to-disk
replication; the other PDS features aren't needed.
In a first assessment, persisting to RocksDB works faster.

CacheStore design assumes that the underlying storage is shared by all the 
nodes in topology.
That is a very important note.
I'm a bit confused because I thought that each node in the cluster
persists the partitions for which it is either primary or backup,
just like in PDS.

My RocksDB implementation supports working with a single DB instance
shared by all the nodes in the topology, but that would defeat the
purpose of using a fast embedded storage.
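For context, a minimal sketch of what a per-node RocksDB-backed store might look like (the class name, path handling, and key serialization are hypothetical simplifications; the actual RocksDBCacheStoreFactory from this thread is not shown):

    import java.nio.charset.StandardCharsets;
    import javax.cache.Cache;
    import javax.cache.integration.CacheLoaderException;
    import javax.cache.integration.CacheWriterException;
    import org.apache.ignite.cache.store.CacheStoreAdapter;
    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    // Hypothetical per-node store: each Ignite node opens its own local RocksDB instance.
    public class RocksDbCacheStore extends CacheStoreAdapter<Integer, String> {
        private final RocksDB db;

        public RocksDbCacheStore(String path) {
            try {
                RocksDB.loadLibrary();
                db = RocksDB.open(new Options().setCreateIfMissing(true), path);
            }
            catch (RocksDBException e) {
                throw new IllegalStateException("Failed to open RocksDB at " + path, e);
            }
        }

        @Override public String load(Integer key) {
            try {
                byte[] val = db.get(keyBytes(key));
                return val == null ? null : new String(val, StandardCharsets.UTF_8);
            }
            catch (RocksDBException e) {
                throw new CacheLoaderException(e);
            }
        }

        @Override public void write(Cache.Entry<? extends Integer, ? extends String> entry) {
            try {
                db.put(keyBytes(entry.getKey()), entry.getValue().getBytes(StandardCharsets.UTF_8));
            }
            catch (RocksDBException e) {
                throw new CacheWriterException(e);
            }
        }

        @Override public void delete(Object key) {
            try {
                db.delete(keyBytes((Integer)key));
            }
            catch (RocksDBException e) {
                throw new CacheWriterException(e);
            }
        }

        // Naive key serialization for the sketch; a real store would use a proper marshaller.
        private static byte[] keyBytes(Integer key) {
            return String.valueOf(key).getBytes(StandardCharsets.UTF_8);
        }
    }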

Is there a link to a detailed description of the CacheStore design, or
any other advice?
Thanks in advance.



On Fri, Nov 17, 2017 at 9:07 PM, Valentin Kulichenko
<valentin.kuliche...@gmail.com> wrote:
Vyacheslav,

CacheStore design assumes that the underlying storage is shared by all the
nodes in topology. Even if you delay rebalancing on node stop (which is
possible via CacheConfiguration#rebalanceDelay), I doubt it will solve all
your consistency issues.
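For illustration, a small sketch of the setting mentioned above (the delay value is arbitrary):

    import org.apache.ignite.configuration.CacheConfiguration;

    public class RebalanceDelayExample {
        public static void main(String[] args) {
            CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("TEST_CACHE_NAME");

            // Delay rebalancing for 60 seconds after a topology change, so that a quick
            // restart of a node does not immediately trigger partition movement.
            cacheCfg.setRebalanceDelay(60_000L);
        }
    }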

Why don't you use Ignite persistence [1]?

[1] https://apacheignite.readme.io/docs/distributed-persistent-store

-Val

On Fri, Nov 17, 2017 at 4:24 AM, Vyacheslav Daradur <daradu...@gmail.com>
wrote:

Hi Andrey! Thank you for answering.

Key to partition mapping shouldn't depend on topology and shouldn't
change on an unstable topology.
The key-to-partition mapping doesn't depend on topology in my test
affinity function; it only depends on the number of partitions.
But the partition-to-node mapping does depend on topology, and during a
cluster stop, when one node leaves the topology, some partitions may be
moved to other nodes.
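For illustration, a minimal sketch of a key-to-partition mapping that depends only on the partition count (TestAffinityFunction itself is not shown in this thread, so the names below are hypothetical):

    // Hypothetical mapping with the same contract as AffinityFunction#partition(Object):
    // the result is stable for a given key, regardless of which nodes are in the topology.
    public class KeyToPartitionSketch {
        private final int parts;

        public KeyToPartitionSketch(int parts) {
            this.parts = parts;
        }

        public int partition(Object key) {
            // Mask off the sign bit to avoid a negative remainder.
            return (key.hashCode() & Integer.MAX_VALUE) % parts;
        }
    }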

Do all nodes share the same RocksDB database, or does each node have its own copy?
Each Ignite node has its own RocksDB instance.

Would you please share the configuration?
It's pretty simple:
         IgniteConfiguration cfg = new IgniteConfiguration();
         cfg.setIgniteInstanceName(instanceName);

         CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>();
         cacheCfg.setName(TEST_CACHE_NAME);
         cacheCfg.setCacheMode(CacheMode.PARTITIONED);
         cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
         cacheCfg.setBackups(1);
         cacheCfg.setAffinity(new TestAffinityFunction(partitionsNumber, backupsNumber));
         cacheCfg.setWriteThrough(true);
         cacheCfg.setReadThrough(true);
         cacheCfg.setRebalanceMode(CacheRebalanceMode.SYNC);
         cacheCfg.setCacheStoreFactory(new RocksDBCacheStoreFactory<>("/test/path/to/persistence", TEST_CACHE_NAME, cfg));

         cfg.setCacheConfiguration(cacheCfg);
Could you give me advice on the places I need to pay attention to?


On Wed, Nov 15, 2017 at 3:02 PM, Andrey Mashenkov
<andrey.mashen...@gmail.com> wrote:
Hi Vyacheslav,

Key to partition mapping shouldn't depend on topology and shouldn't
change on an unstable topology.
Looks like you've missed something.

Would you please share the configuration?
Do all nodes share the same RocksDB database, or does each node have its own copy?



On Wed, Nov 15, 2017 at 12:22 AM, Vyacheslav Daradur <
daradu...@gmail.com>
wrote:

Hi, Igniters!

I'm using a partitioned Ignite cache with RocksDB as a 3rd party persistence
store.
I've got an issue: if cache rebalancing is switched on, then it's
possible to lose some data.

Basic scenario:
1) Start the Ignite cluster and fill a cache with RocksDB persistence;
2) Stop all nodes;
3) Start the Ignite cluster and validate the data.

This works fine while rebalancing is switched off.

If rebalancing is switched on: when I call Ignition#stopAll, the nodes
go down sequentially, and while one node is going down another starts
rebalancing. When the nodes are started again, the affinity function
works with the full set of nodes and may resolve a key to a wrong
partition, because the previous state was changed by the rebalancing.

Maybe I'm doing something wrong. How can I avoid rebalancing while
stopping all nodes in the cluster?

Could you give me any advice, please?

--
Best Regards, Vyacheslav D.



--
Best regards,
Andrey V. Mashenkov


--
Best Regards, Vyacheslav D.



