Re: Losing data during restarting cluster with persistence enabled

2017-12-27 Thread Vyacheslav Daradur
Hi, it looks like there is not much benefit when PDS throttling is enabled and tuned according to the article [1]. I’ve benchmarked the solutions with the ‘put’ operation for 3 hours via Ignite Yardstick. I see quite similar results with the write-heavy pattern. Most of the time PDS works ~10% faster. Only one
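For illustration only, a standalone write-heavy put-loop sketch (not the actual Yardstick driver used for the numbers above; the cache name, payload size and key range are assumptions):

import java.util.concurrent.ThreadLocalRandom;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PutBenchmarkSketch {
    public static void main(String[] args) {
        // Starts a node with the default configuration; real runs would point
        // at the benchmarked setup (PDS or RocksDB-backed store).
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, byte[]> cache = ignite.getOrCreateCache("bench");

            byte[] payload = new byte[1024];
            long end = System.currentTimeMillis() + 3 * 60 * 60 * 1000L; // ~3 hours
            long ops = 0;

            while (System.currentTimeMillis() < end) {
                int key = ThreadLocalRandom.current().nextInt(1_000_000);
                cache.put(key, payload);
                ops++;
            }

            System.out.println("Total puts: " + ops);
        }
    }
}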

Re: Losing data during restarting cluster with persistence enabled

2017-12-06 Thread Valentin Kulichenko
Vyacheslav, In this case the community should definitely take a look and investigate. Please share your results when you have a chance. -Val On Wed, Dec 6, 2017 at 1:45 AM, Vyacheslav Daradur wrote: > Evgeniy, as far as I understand PDS and rebalancing are based on >

Re: Losing data during restarting cluster with persistence enabled

2017-12-06 Thread Vyacheslav Daradur
Evgeniy, as far as I understand, PDS and rebalancing are based on a page-memory approach rather than the entry-based 3rd Party Persistence, so I'm not sure how to extend the rebalancing behavior properly. Dmitry, performance is the only reason why I'm trying to solve the rebalancing issue. I've benchmarked

Re: Losing data during restarting cluster with persistence enabled

2017-11-24 Thread Dmitry Pavlov
Please see the discussion on the user list. It seems that the same thing happened there: http://apache-ignite-users.70518.x6.nabble.com/Reassign-partitions-td7461.html#a7468 it contains examples of how the data can diverge. Fri, Nov 24, 2017 at 16:42, Dmitry Pavlov : > If we

Re: Losing data during restarting cluster with persistence enabled

2017-11-24 Thread Dmitry Pavlov
If we compare native and 3rd party persistence (cache store): - Updating and reading data from a DBMS is slower in most scenarios. - A non-clustered DBMS is a single point of failure and is hard to scale. - Ignite SQL does not extend to an External (3rd party persistence) Cache Store (and queries

Re: Losing data during restarting cluster with persistence enabled

2017-11-24 Thread Evgeniy Ignatiev
Sorry, I linked the wrong page; the latter URL is not the example. On 11/24/2017 1:12 PM, Evgeniy Ignatiev wrote: By the way, I remembered that there is an annotation, CacheLocalStore, for marking exactly those CacheStores that are not distributed -

Re: Losing data during restarting cluster with persistence enabled

2017-11-24 Thread Evgeniy Ignatiev
By the way, I remembered that there is an annotation, CacheLocalStore, for marking exactly those CacheStores that are not distributed - http://apache-ignite-developers.2346864.n4.nabble.com/CacheLocalStore-td734.html - here is a short explanation, and this -
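A minimal sketch of what such a store might look like. Note the hedges: @CacheLocalStore is an internal Ignite annotation, so its package is an assumption and may differ between versions, and the in-memory map below is only a placeholder for a real per-node store (e.g. RocksDB):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.internal.processors.cache.CacheLocalStore; // internal package - assumption

@CacheLocalStore
public class LocalOnlyStore extends CacheStoreAdapter<Integer, String> {
    // Placeholder backing storage; a real local store would persist to disk.
    private final ConcurrentMap<Integer, String> data = new ConcurrentHashMap<>();

    @Override public String load(Integer key) {
        return data.get(key);
    }

    @Override public void write(Cache.Entry<? extends Integer, ? extends String> entry) {
        data.put(entry.getKey(), entry.getValue());
    }

    @Override public void delete(Object key) {
        data.remove(key);
    }
}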

Re: Losing data during restarting cluster with persistence enabled

2017-11-23 Thread Dmitry Pavlov
Hi Evgeniy, Technically it is, of course, possible, but still - it is not simple at all - IgniteCacheOffheapManager & IgniteWriteAheadLogManager are internal APIs, and the community can change any of these APIs at any time. Vyacheslav, why is Ignite Native Persistence not suitable for this case?

Re: Losing data during restarting cluster with persistence enabled

2017-11-23 Thread Evgeniy Ignatiev
As far as I remember from the last webinar I heard on Ignite Native Persistence, it actually exposes some interfaces like IgniteWriteAheadLogManager, PageStore, PageStoreManager, etc., with the file-based implementation provided by Ignite being only one possible approach, and users can create their

Re: Losing data during restarting cluster with persistence enabled

2017-11-22 Thread Valentin Kulichenko
Vyacheslav, There is no way to do this and I'm not sure why you want to do this. Ignite persistence was developed to solve exactly the problems you're describing. Just use it :) -Val On Wed, Nov 22, 2017 at 12:36 AM, Vyacheslav Daradur wrote: > Valentin, Evgeniy thanks
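A minimal sketch of enabling Ignite Native Persistence, assuming the Ignite 2.3+ API (earlier 2.x versions used PersistentStoreConfiguration instead of DataStorageConfiguration); the cache name is illustrative:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NativePersistenceExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Enable persistence for the default data region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);

        // With persistence enabled the cluster starts inactive and must be activated.
        ignite.cluster().active(true);

        ignite.getOrCreateCache("myCache").put(1, "value");
    }
}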

Re: Losing data during restarting cluster with persistence enabled

2017-11-22 Thread Vyacheslav Daradur
Valentin, Evgeniy, thanks for your help! Valentin, unfortunately, you are right. I've tested that behavior in the following scenario: 1. Started N nodes and filled them with data. 2. Shut down one node. 3. Called rebalance directly and waited for it to finish. 4. Stopped all other (N-1) nodes. 5. Started N-1
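For reference, a sketch of steps 2-3 of the scenario above using the public API (the cache name is an assumption); IgniteCache#rebalance() returns a future that completes when rebalancing of that cache has finished, which is how the "waited to finish" step can be done:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class RebalanceStep {
    public static void awaitRebalance(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.cache("testCache");

        // Trigger rebalancing explicitly on a remaining node and block until it completes.
        cache.rebalance().get();
    }
}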

Re: Losing data during restarting cluster with persistence enabled

2017-11-21 Thread Valentin Kulichenko
Vyacheslav, If you want the persistence storage to be *distributed*, then using Ignite persistence would be the easiest thing to do anyway, even if you don't need all of its features. A CacheStore can indeed be updated from different nodes, but the problem is in coordination. If

Re: Losing data during restarting cluster with persistence enabled

2017-11-21 Thread Evgeniy Ignatiev
Hello. As far as I know, in the case of a TRANSACTIONAL cache, data is always passed to the cache store on the same node where it is written, to make the cache store transaction-aware, unless write-behind mode is enabled (which effectively makes the cache store not participate in the actual txs that wrote
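A sketch of the two modes mentioned above: a TRANSACTIONAL cache with write-through (the store is updated inside the transaction on the writing node) versus write-behind (asynchronous, batched flushes). The no-op store, cache name and flush frequency are placeholders for illustration:

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class StoreModes {
    /** Trivial no-op store, standing in for a real (e.g. RocksDB-backed) implementation. */
    public static class NoopStore extends CacheStoreAdapter<Integer, String> {
        @Override public String load(Integer key) { return null; }
        @Override public void write(Cache.Entry<? extends Integer, ? extends String> e) { /* no-op */ }
        @Override public void delete(Object key) { /* no-op */ }
    }

    public static CacheConfiguration<Integer, String> writeThroughCfg() {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("txCache");
        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(NoopStore.class));
        cfg.setReadThrough(true);
        cfg.setWriteThrough(true); // store updated synchronously, inside the tx, on the writing node
        return cfg;
    }

    public static CacheConfiguration<Integer, String> writeBehindCfg() {
        CacheConfiguration<Integer, String> cfg = writeThroughCfg();
        // Write-behind batches and defers store updates, so the store no longer
        // participates in the original transactions.
        cfg.setWriteBehindEnabled(true);
        cfg.setWriteBehindFlushFrequency(5_000);
        return cfg;
    }
}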

Re: Losing data during restarting cluster with persistence enabled

2017-11-21 Thread Vyacheslav Daradur
Valentin, >> Why don't you use Ignite persistence [1]? I have a use case in one of my projects that needs RAM-to-disk replication only; the other PDS features aren't needed. In a first assessment, persisting to RocksDB works faster. >> CacheStore design assumes that the underlying storage is
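Not the implementation discussed in the thread, but a minimal sketch of what a RocksDB-backed CacheStore could look like, assuming the rocksdbjni dependency and String keys/values to keep serialization out of the picture:

import java.nio.charset.StandardCharsets;

import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RocksDbStore extends CacheStoreAdapter<String, String> {
    private final RocksDB db;

    public RocksDbStore(String path) throws RocksDBException {
        RocksDB.loadLibrary();
        db = RocksDB.open(new Options().setCreateIfMissing(true), path);
    }

    @Override public String load(String key) {
        try {
            byte[] val = db.get(key.getBytes(StandardCharsets.UTF_8));
            return val == null ? null : new String(val, StandardCharsets.UTF_8);
        }
        catch (RocksDBException e) {
            throw new RuntimeException(e);
        }
    }

    @Override public void write(Cache.Entry<? extends String, ? extends String> entry) {
        try {
            db.put(entry.getKey().getBytes(StandardCharsets.UTF_8),
                   entry.getValue().getBytes(StandardCharsets.UTF_8));
        }
        catch (RocksDBException e) {
            throw new RuntimeException(e);
        }
    }

    @Override public void delete(Object key) {
        try {
            db.delete(((String)key).getBytes(StandardCharsets.UTF_8));
        }
        catch (RocksDBException e) {
            throw new RuntimeException(e);
        }
    }
}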

Re: Losing data during restarting cluster with persistence enabled

2017-11-17 Thread Valentin Kulichenko
Vyacheslav, CacheStore design assumes that the underlying storage is shared by all the nodes in topology. Even if you delay rebalancing on node stop (which is possible via CacheConfiguration#rebalanceDelay), I doubt it will solve all your consistency issues. Why don't you use Ignite persistence
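A sketch of the rebalanceDelay knob mentioned above; with a delay configured, rebalancing does not start immediately when a node leaves, giving it a chance to rejoin before partitions are moved (cache name and delay value are illustrative):

import org.apache.ignite.configuration.CacheConfiguration;

public class RebalanceDelayConfig {
    public static CacheConfiguration<Integer, String> cacheCfg() {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");

        // Wait 60 seconds after a topology change before rebalancing.
        // A negative value would disable automatic rebalancing entirely,
        // requiring an explicit IgniteCache#rebalance() call.
        cfg.setRebalanceDelay(60_000);

        return cfg;
    }
}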

Re: Losing data during restarting cluster with persistence enabled

2017-11-17 Thread Vyacheslav Daradur
Hi Andrey! Thank you for answering. >> Key to partition mapping shouldn't depend on topology, and shouldn't >> change on an unstable topology. The key-to-partition mapping doesn't depend on topology in my test affinity function; it only depends on the number of partitions. But the partition-to-node mapping depends
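Not the test affinity function from the thread, but a minimal sketch of the distinction being made: partition(key) depends only on the key and the partition count, while assignPartitions() necessarily depends on the current topology (round-robin assignment here is purely illustrative):

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import org.apache.ignite.cache.affinity.AffinityFunction;
import org.apache.ignite.cache.affinity.AffinityFunctionContext;
import org.apache.ignite.cluster.ClusterNode;

public class SimpleAffinity implements AffinityFunction, Serializable {
    private final int parts;

    public SimpleAffinity(int parts) {
        this.parts = parts;
    }

    @Override public void reset() {
        // No-op: nothing is cached.
    }

    @Override public int partitions() {
        return parts;
    }

    @Override public int partition(Object key) {
        // Depends only on the key and the partition count - never on topology.
        return Math.abs(key.hashCode() % parts);
    }

    @Override public List<List<ClusterNode>> assignPartitions(AffinityFunctionContext ctx) {
        // Partition-to-node mapping: topology inevitably comes into play here.
        List<ClusterNode> nodes = ctx.currentTopologySnapshot();
        List<List<ClusterNode>> assignment = new ArrayList<>(parts);

        for (int p = 0; p < parts; p++) {
            List<ClusterNode> owners = new ArrayList<>();

            for (int b = 0; b <= ctx.backups() && b < nodes.size(); b++)
                owners.add(nodes.get((p + b) % nodes.size()));

            assignment.add(owners);
        }

        return assignment;
    }

    @Override public void removeNode(UUID nodeId) {
        // No-op: no per-node state.
    }
}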

Re: Losing data during restarting cluster with persistence enabled

2017-11-15 Thread Andrey Mashenkov
Hi Vyacheslav, Key-to-partition mapping shouldn't depend on topology, and shouldn't change on an unstable topology. Looks like you've missed something. Would you please share your configuration? Do all nodes share the same RocksDB database, or does each node have its own copy? On Wed, Nov 15, 2017 at 12:22 AM,