[ 
https://issues.apache.org/jira/browse/IGNITE-1605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yakov Zhdanov updated IGNITE-1605:
----------------------------------
    Description: 
Need to provide a stronger data loss check.

Currently a node can fire the EVT_CACHE_REBALANCE_PART_DATA_LOST event.

However, this is not enough: if there is a strong requirement on application 
behavior on data loss (e.g. further cache updates should throw an exception), 
this requirement cannot currently be met even with a cache interceptor.

Suggestions:
* Introduce a CacheDataLossPolicy enum with values FAIL_OPS and NOOP, and add 
it to the cache configuration
* If a node fires PART_LOST_EVT, then any update to the lost partition will 
throw (or not throw) an exception according to the configured 
CacheDataLossPolicy
* ForceKeysRequest should be completed with an exception (if plc == FAIL_OPS) 
when all nodes to request from are gone, so all gets/puts/txs should fail.
* Add a public API method to allow recovery from the failed state.
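
The suggestions above could be wired together roughly as follows. This is a 
minimal sketch only; every name here (the enum values, the exception, the 
check and reset methods) is an assumption for illustration, not the final 
Ignite API:

```java
// Hypothetical sketch of the suggested CacheDataLossPolicy wiring.
// All names are illustrative assumptions, not the final Ignite API.
import java.util.HashSet;
import java.util.Set;

public class DataLossPolicySketch {
    /** Proposed policy enum: fail operations on lost partitions, or do nothing. */
    public enum CacheDataLossPolicy { FAIL_OPS, NOOP }

    /** Thrown on operations against a lost partition when FAIL_OPS is configured. */
    public static class PartitionLostException extends RuntimeException {
        PartitionLostException(int part) {
            super("Partition " + part + " is lost; operations fail until reset.");
        }
    }

    private final CacheDataLossPolicy plc;
    private final Set<Integer> lostParts = new HashSet<>();

    public DataLossPolicySketch(CacheDataLossPolicy plc) { this.plc = plc; }

    /** Would be driven by EVT_CACHE_REBALANCE_PART_DATA_LOST. */
    public void onPartitionLost(int part) { lostParts.add(part); }

    /** Guard invoked on every get/put/tx that touches the partition. */
    public void checkOperation(int part) {
        if (plc == CacheDataLossPolicy.FAIL_OPS && lostParts.contains(part))
            throw new PartitionLostException(part);
    }

    /** The suggested public recovery method: clears the failed state. */
    public void resetLostPartitions() { lostParts.clear(); }
}
```

Under NOOP the guard is a no-op, so existing applications keep their current 
behavior; only FAIL_OPS opts a cache into strict failure semantics.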

Another solution is to detect partition loss at the time the partition 
exchange completes. Since we hold the topology lock during the exchange, we 
can easily check that a partition has no owners and act as a topology 
validator if the FAIL_OPS policy is configured. One thing needs careful 
analysis: the demand worker should not mark a partition as owning if the last 
owner leaves the grid before the corresponding exchange completes.
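
The exchange-time check described above amounts to scanning the 
partition-to-owners view while the topology lock is held. A sketch under 
assumed data structures (Ignite's real exchange code is considerably more 
involved):

```java
// Sketch of detecting lost partitions at partition map exchange completion.
// The Map<partition, owners> view is an assumed simplification of the real
// partition full map; node ids are plain strings for illustration.
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class ExchangeLossCheck {
    /**
     * Given the partition-to-owners map observed when the exchange completes
     * (topology lock held, so the view is consistent), return the partitions
     * that no surviving node owns. These are the lost partitions.
     */
    public static Set<Integer> detectLostPartitions(Map<Integer, Set<String>> owners) {
        Set<Integer> lost = new TreeSet<>();

        for (Map.Entry<Integer, Set<String>> e : owners.entrySet())
            if (e.getValue().isEmpty()) // last owner left before this exchange completed
                lost.add(e.getKey());

        return lost;
    }
}
```

If FAIL_OPS is configured and this set is non-empty, the topology-validator 
step would reject operations on those partitions until recovery.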

  was:
Need to provide a stronger data loss check.

Currently a node can fire the EVT_CACHE_REBALANCE_PART_DATA_LOST event.

However, this is not enough: if there is a strong requirement on application 
behavior on data loss (e.g. further cache updates should throw an exception), 
this requirement cannot currently be met even with a cache interceptor.

Suggestions:
* Introduce a CacheDataLossPolicy enum with values FAIL_OPS and NOOP, and add 
it to the cache configuration
* If a node fires PART_LOST_EVT, then any update to the lost partition will 
throw (or not throw) an exception according to the configured 
CacheDataLossPolicy
* ForceKeysRequest should be completed with an exception (if plc == FAIL_OPS) 
when all nodes to request from are gone, so all gets/puts/txs should fail.

We may also want to add a public API method to allow recovery from the failed 
state.


> Provide stronger data loss check
> --------------------------------
>
>                 Key: IGNITE-1605
>                 URL: https://issues.apache.org/jira/browse/IGNITE-1605
>             Project: Ignite
>          Issue Type: Task
>            Reporter: Yakov Zhdanov
>            Assignee: Alexey Goncharuk
>
> Need to provide a stronger data loss check.
> Currently a node can fire the EVT_CACHE_REBALANCE_PART_DATA_LOST event.
> However, this is not enough: if there is a strong requirement on 
> application behavior on data loss (e.g. further cache updates should throw 
> an exception), this requirement cannot currently be met even with a cache 
> interceptor.
> Suggestions:
> * Introduce a CacheDataLossPolicy enum with values FAIL_OPS and NOOP, and 
> add it to the cache configuration
> * If a node fires PART_LOST_EVT, then any update to the lost partition will 
> throw (or not throw) an exception according to the configured 
> CacheDataLossPolicy
> * ForceKeysRequest should be completed with an exception (if plc == 
> FAIL_OPS) when all nodes to request from are gone, so all gets/puts/txs 
> should fail.
> * Add a public API method to allow recovery from the failed state.
> Another solution is to detect partition loss at the time the partition 
> exchange completes. Since we hold the topology lock during the exchange, we 
> can easily check that a partition has no owners and act as a topology 
> validator if the FAIL_OPS policy is configured. One thing needs careful 
> analysis: the demand worker should not mark a partition as owning if the 
> last owner leaves the grid before the corresponding exchange completes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
