[ https://issues.apache.org/jira/browse/IGNITE-1605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15102081#comment-15102081 ]
ASF GitHub Bot commented on IGNITE-1605:
----------------------------------------
GitHub user VladimirErshov opened a pull request:
https://github.com/apache/ignite/pull/407
IGNITE-1605 implementation of data loss
IGNITE-1605 implementation of data loss
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/VladimirErshov/ignite ignite-1605_3
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/ignite/pull/407.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #407
----
commit 16fdb3e07782900e4e0925a6f1e7c7430c423d6f
Author: vershov <[email protected]>
Date: 2016-01-15T17:01:09Z
IGNITE-1605 implementation of data loss
----
> Provide stronger data loss check
> --------------------------------
>
> Key: IGNITE-1605
> URL: https://issues.apache.org/jira/browse/IGNITE-1605
> Project: Ignite
> Issue Type: Task
> Reporter: Yakov Zhdanov
> Assignee: Vladimir Ershov
>
> Need to provide a stronger data loss check.
> Currently a node can fire the event EVT_CACHE_REBALANCE_PART_DATA_LOST.
> However, this is not enough: if there is a strong requirement on
> application behavior on data loss, e.g. further cache updates should throw
> an exception, this requirement currently cannot be met even with a cache
> interceptor.
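> For reference, a minimal sketch of reacting to this event with the existing
> public API (it assumes EVT_CACHE_REBALANCE_PART_DATA_LOST has been enabled via
> IgniteConfiguration#setIncludeEventTypes; the handling logic itself is only
> illustrative):
>
>     import org.apache.ignite.Ignite;
>     import org.apache.ignite.Ignition;
>     import org.apache.ignite.events.CacheRebalancingEvent;
>     import org.apache.ignite.events.EventType;
>     import org.apache.ignite.lang.IgnitePredicate;
>
>     public class DataLossListenerExample {
>         public static void main(String[] args) {
>             Ignite ignite = Ignition.start();
>
>             // React locally whenever rebalancing reports a partition whose data is lost.
>             ignite.events().localListen((IgnitePredicate<CacheRebalancingEvent>) evt -> {
>                 System.out.println("Lost partition " + evt.partition()
>                     + " of cache " + evt.cacheName());
>                 return true; // keep the listener registered
>             }, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
>         }
>     }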
> Suggestions:
> * Introduce a CacheDataLossPolicy enum (FAIL_OPS, NOOP) and add it to the
> cache configuration (see the sketch after this list).
> * If a node fires PART_LOST_EVT, then any update to the lost partition will
> throw (or will not throw) an exception according to the DataLossPolicy.
> * ForceKeysRequest should be completed with an exception (if plc == FAIL_OPS)
> if all nodes to request from are gone, so all gets/puts/txs should fail.
> * Add a public API method to allow recovery from the failed state.
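> A rough sketch of what the proposed pieces could look like (none of the names
> below exist in Ignite yet; CacheDataLossPolicy and setDataLossPolicy are purely
> hypothetical and only illustrate the suggestion above):
>
>     // Hypothetical enum proposed by this ticket.
>     public enum CacheDataLossPolicy {
>         /** Any operation touching a lost partition fails with an exception. */
>         FAIL_OPS,
>
>         /** Data loss is ignored; operations proceed on the remaining data. */
>         NOOP
>     }
>
>     // Hypothetical usage in cache configuration:
>     //   CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
>     //   ccfg.setDataLossPolicy(CacheDataLossPolicy.FAIL_OPS); // setter does not exist yet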
> Another solution is to detect partition loss at the time the partition exchange
> completes. Since we hold the topology lock during the exchange, we can easily
> check that there are no owners for a partition and act as a topology
> validator in case the FAIL_OPS policy is configured. One thing needs to be
> carefully analyzed: the demand worker should not mark a partition as owning in
> case the last owner leaves the grid before the corresponding exchange completes.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)