Re: Partition Loss Policy options

2018-10-09 Thread Roman Novichenok
Stan,
thanks for looking into it.  I agree with you that this is the observed
behaviour, but it is not what I would expect.

My expectation would be to get an exception when I attempt to query
unavailable partitions.  Ideally this behavior would depend on the
PartitionLossPolicy.  When the READ_ONLY_SAFE or READ_WRITE_SAFE policy is
selected, and the query condition does not explicitly specify which
partitions it is interested in, the query should fail.  A query could
implicitly specify partitions by including an indexed value in the WHERE
clause.  A simpler implementation could just raise an exception on queries
when the policy is ..._SAFE and some partitions are unavailable.
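
Pending such a policy-aware check in Ignite itself, the behavior described
above can be approximated from user code.  A minimal, untested sketch (the
helper class name and the exception message are mine, not Ignite API; the
`lostPartitions()`, `getPartitionLossPolicy()` and query calls are standard
Ignite 2.x methods):

```java
import java.util.List;
import javax.cache.CacheException;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class SafeQueryHelper {
    /** Runs the query only if the cache has no lost partitions under a *_SAFE policy. */
    public static List<List<?>> querySafely(IgniteCache<?, ?> cache, SqlFieldsQuery qry) {
        CacheConfiguration<?, ?> ccfg = cache.getConfiguration(CacheConfiguration.class);

        PartitionLossPolicy plc = ccfg.getPartitionLossPolicy();

        boolean safe = plc == PartitionLossPolicy.READ_ONLY_SAFE
            || plc == PartitionLossPolicy.READ_WRITE_SAFE;

        // Fail up front instead of silently returning partial results.
        if (safe && !cache.lostPartitions().isEmpty())
            throw new CacheException("Query rejected, lost partitions: " + cache.lostPartitions());

        return cache.query(qry).getAll();
    }
}
```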

Thanks again,
Roman

On Tue, Oct 9, 2018 at 2:54 PM Stanislav Lukyanov 
wrote:

> Hi,
>
>
>
> I’ve tried your test and it works as expected, with some partitions lost
> and the final size being ~850 (~150 less than at the start).
>
> Am I missing something?
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *Roman Novichenok 
> *Sent: *October 2, 2018 22:21
> *To: *user@ignite.apache.org
> *Subject: *Re: Partition Loss Policy options
>
>
>
> Anton,
>
> thanks for the quick response.  Not sure if my expectations are wrong.
> I just tried a SQL query and it exhibited the same behavior.  I created a
> pull request with a test: https://github.com/novicr/ignite/pull/3.
>
>
>
> The test goes through the following steps:
>
> 1. creates a 4-node cluster with persistence enabled.
>
> 2. creates 2 caches with setBackups(1)
>
> 3. populates the caches with 1000 elements
>
> 4. runs a SQL query and prints the result size: 1000
>
> 5. stops 2 nodes
>
> 6. runs the SQL query from step 4 and prints the result size (something
> less than 1000).
>
>
>
> Thanks,
>
> Roman
>
>
>
>
>
> On Tue, Oct 2, 2018 at 2:07 PM Roman Novichenok <
> roman.noviche...@gmail.com> wrote:
>
> I was looking at scan queries. cache.query(new ScanQuery()) returns
> partial results.
>
>
>
> On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:
>
> Hello Roman,
>
> Correct me if I'm mistaken: you are talking about SQL queries. That was
> fixed under https://issues.apache.org/jira/browse/IGNITE-8927 (the primary
> ticket is https://issues.apache.org/jira/browse/IGNITE-8834) and will be
> delivered in the 2.7 release.
>
> Regards,
> Anton
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Re: lost partition recovery with native persistence

2018-10-04 Thread Roman Novichenok
Thanks.  I understand relying on the user to determine whether data is up
to date when Ignite is used as a cache.  With native persistence, though,
Ignite is the source of the data.  If some partitions become unavailable,
there's no way for the data to become outdated.  It feels like there should
be a configuration setting that allows Ignite to auto-recover once all
partitions are back online for a cache with native persistence.
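
Until such a setting exists, the auto-recovery could be approximated from
user code.  An untested sketch (the class name and cache name are
illustrative; note that EVT_NODE_JOINED must be enabled via
IgniteConfiguration.setIncludeEventTypes for the listener to fire):

```java
import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.events.EventType;

public class AutoResetLostPartitions {
    /** Resets lost partitions for the given cache whenever a node joins. */
    public static void install(Ignite ignite, String cacheName) {
        ignite.events().localListen(evt -> {
            if (!ignite.cache(cacheName).lostPartitions().isEmpty()) {
                // All owners may be back online; reset so normal operations resume.
                ignite.resetLostPartitions(Collections.singleton(cacheName));
            }
            return true; // keep listening for further joins
        }, EventType.EVT_NODE_JOINED);
    }
}
```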

On Thu, Oct 4, 2018 at 10:54 AM Maxim.Pudov  wrote:

> I'm not an architect of this feature, but the explanation could be quite
> simple: data stored in lost partitions could have become outdated while
> the nodes containing it were out of the cluster, so it's up to the user to
> decide whether the restored data is OK or not.
>
>
>
>


lost partition recovery with native persistence

2018-10-03 Thread Roman Novichenok
I was going over failure recovery scenarios, trying to understand the logic
behind the lost-partitions functionality.  In the case of native persistence,
Ignite fully manages data persistence and availability.  If enough nodes in
the cluster become unavailable that partitions are marked lost, Ignite
keeps track of those partitions.  When the nodes rejoin the cluster, the
partitions are automatically discovered and loaded from disk.  This can be
shown by the fact that the data actually becomes available and can be
retrieved using the normal get/query APIs.  However, the lostPartitions()
lists still contain some partitions that were previously lost, and Ignite
expects the user to manually mark them available by calling the
Ignite.resetLostPartitions() API.
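
The manual flow described above looks roughly like this (an untested
sketch; the class name and cache name are illustrative, while
lostPartitions() and resetLostPartitions() are the standard Ignite 2.x
calls):

```java
import java.util.Collection;
import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class ManualLostPartitionReset {
    /** Clears the lost-partition state for a cache once its data is back on disk. */
    public static void resetIfLost(Ignite ignite, String cacheName) {
        IgniteCache<?, ?> cache = ignite.cache(cacheName);

        Collection<Integer> lost = cache.lostPartitions();

        if (!lost.isEmpty()) {
            System.out.println("Still marked lost: " + lost);

            // The data is already readable again, but Ignite waits
            // for this explicit reset before clearing the lost state.
            ignite.resetLostPartitions(Collections.singleton(cacheName));
        }
    }
}
```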

I found some discussion of issues with topology version handling in
resetLostPartitions() in this ticket:
https://issues.apache.org/jira/browse/IGNITE-7832, but it does not address
the question of why user involvement is required at all.

Thanks,
Roman


Re: Partition Loss Policy options

2018-10-02 Thread Roman Novichenok
Anton,
thanks for the quick response.  Not sure if my expectations are wrong.  I
just tried a SQL query and it exhibited the same behavior.  I created a pull
request with a test: https://github.com/novicr/ignite/pull/3.

The test goes through the following steps:
1. creates a 4-node cluster with persistence enabled.
2. creates 2 caches with setBackups(1)
3. populates the caches with 1000 elements
4. runs a SQL query and prints the result size: 1000
5. stops 2 nodes
6. runs the SQL query from step 4 and prints the result size (something less
than 1000).
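
The steps above can be sketched in code roughly as follows (untested and
simplified: single node shown, cache name and SQL are illustrative, and the
activation/startup details of the actual test are omitted):

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PartitionLossTestSketch {
    public static void main(String[] args) {
        // 1. Four nodes with persistence enabled (one shown; repeat per node).
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(new DataStorageConfiguration());
        cfg.getDataStorageConfiguration()
           .getDefaultDataRegionConfiguration()
           .setPersistenceEnabled(true);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // persistence requires explicit activation

        // 2. Cache with one backup, indexed so it is queryable via SQL.
        CacheConfiguration<Integer, Integer> ccfg =
            new CacheConfiguration<Integer, Integer>("test").setBackups(1);
        ccfg.setIndexedTypes(Integer.class, Integer.class);

        IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

        // 3. Populate 1000 entries.
        for (int i = 0; i < 1000; i++)
            cache.put(i, i);

        // 4. SQL query over all rows; prints 1000.
        List<List<?>> rows =
            cache.query(new SqlFieldsQuery("select _key from Integer")).getAll();
        System.out.println(rows.size());

        // 5-6. After stopping 2 of the 4 nodes, the same query returns
        // fewer rows instead of failing -- the behavior under discussion.
    }
}
```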

Thanks,
Roman


On Tue, Oct 2, 2018 at 2:07 PM Roman Novichenok 
wrote:

> I was looking at scan queries. cache.query(new ScanQuery()) returns
> partial results.
>
> On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:
>
>> Hello Roman,
>>
>> Correct me if I'm mistaken: you are talking about SQL queries. That was
>> fixed under https://issues.apache.org/jira/browse/IGNITE-8927 (the primary
>> ticket is https://issues.apache.org/jira/browse/IGNITE-8834) and will be
>> delivered in the 2.7 release.
>>
>> Regards,
>> Anton
>>
>>
>>
>>
>


Re: Partition Loss Policy options

2018-10-02 Thread Roman Novichenok
I was looking at scan queries. cache.query(new ScanQuery()) returns partial
results.

On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:

> Hello Roman,
>
> Correct me if I'm mistaken: you are talking about SQL queries. That was
> fixed under https://issues.apache.org/jira/browse/IGNITE-8927 (the primary
> ticket is https://issues.apache.org/jira/browse/IGNITE-8834) and will be
> delivered in the 2.7 release.
>
> Regards,
> Anton
>
>
>
>


Partition Loss Policy options

2018-10-02 Thread Roman Novichenok
PartitionLossPolicy controls the access level to a cache when some of its
partitions are not available on the cluster.  As far as I can see, this
policy is only consulted for cache put/get operations.  Is there a way to
prevent queries from returning results (i.e., force an exception) when the
cache data is partially unavailable?
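
For reference, this is how the policy is set (an untested sketch assuming
Ignite 2.x; the cache name and values are illustrative).  With
READ_ONLY_SAFE, put/get on keys in lost partitions throw CacheException,
but queries may still return partial results, which is the behavior this
thread is about:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class LossPolicyConfigExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<>("myCache"); // cache name is illustrative
            ccfg.setBackups(1);
            ccfg.setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_SAFE);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

            // Throws CacheException if key 1 maps to a lost partition;
            // queries, by contrast, return whatever partitions are left.
            cache.put(1, "value");
        }
    }
}
```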

thanks,
Roman