Re: Right MXBean for new metrics

2017-11-24 Thread Dmitriy Setrakyan
Got it, but I do not like the name of the metric; I think it is confusing.

I would provide the following metrics:
- minNumberOfCopies()
- maxNumberOfCopies()
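
For illustration, a minimal sketch of how these could surface on a cache group
MXBean, with the names adapted to the JMX getter convention (a suggestion, not
an existing API):

```
public interface CacheGroupMetricsMXBean {
    /** Minimum number of OWNING copies (primary + backups) over all partitions of the group. */
    int getMinNumberOfCopies();

    /** Maximum number of OWNING copies over all partitions of the group. */
    int getMaxNumberOfCopies();
}
```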

Will this work for you?

D.

On Thu, Nov 23, 2017 at 10:38 PM, Alex Plehanov 
wrote:

> We have a target redundancy level of 4. If, for some reason, the minimal
> redundancy level reaches 1, then each next node leaving the
> cluster may cause data loss or service unavailability.
>
> 2017-11-24 1:31 GMT+03:00 Dmitriy Setrakyan :
>
> > Alex,
> >
> > I am really confused. What do you need to know the "minimal partition
> > redundancy" for? What will it give you?
> >
> > D.
> >
> > On Thu, Nov 23, 2017 at 2:25 PM, Alex Plehanov 
> > wrote:
> >
> > > An example was in my previous letters: if we have in our cluster, for a
> > > cache group, one partition with 2 copies (1 primary and 1 backup) and
> > > other partitions with 4 copies (1 primary and 3 backups), then the
> > > minimal partition redundancy level for this cache group will be 2.
> > >
> > > Maybe code will be clearer than my description; I think it will be
> > > something like this:
> > >
> > > for (int part = 0; part < partitions; part++) {
> > >     int partRedundancyLevel = 0;
> > >
> > >     // Count the nodes that currently own a copy of this partition.
> > >     for (Map.Entry<UUID, GridDhtPartitionMap> entry : partFullMap.entrySet()) {
> > >         if (entry.getValue().get(part) == GridDhtPartitionState.OWNING)
> > >             partRedundancyLevel++;
> > >     }
> > >
> > >     if (partRedundancyLevel < minRedundancyLevel)
> > >         minRedundancyLevel = partRedundancyLevel;
> > > }
> > >
> > >
> > > 2017-11-23 4:04 GMT+03:00 Dmitriy Setrakyan :
> > >
> > > > I think you are talking about the case when the cluster temporarily
> > > > gets into an unbalanced state and needs to rebalance. However, I am
> > > > still not sure what this metric would show. Can you provide an example?
> > > >
> > > > D.
> > > >
> > > > On Wed, Nov 22, 2017 at 2:10 PM, Alex Plehanov <
> > plehanov.a...@gmail.com>
> > > > wrote:
> > > >
> > > > > It's not about caches.
> > > > > Each partition has a certain number of copies. The number of copies
> > > > > may differ for different partitions of one cache group.
> > > > >
> > > > > This configuration is possible:
> > > > > 1) With a custom affinity function
> > > > > 2) When nodes leave the cluster, until rebalancing is finished
> > > > >
> > > > >
> > > > >
> > > > > 2017-11-23 0:18 GMT+03:00 Dmitriy Setrakyan  >:
> > > > >
> > > > > > On Wed, Nov 22, 2017 at 12:39 PM, Alex Plehanov <
> > > > plehanov.a...@gmail.com
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hello Dmitriy,
> > > > > > >
> > > > > > > I agree.
> > > > > > >
> > > > > > > By "minimal partition redundancy level for cache group" I mean
> > > > minimal
> > > > > > > number of partition copies among all partitions of this cache
> > > group.
> > > > > > > For example, if we have in our cluster for cache group one
> > > partition
> > > > > > with 2
> > > > > > > copies (1 primary and 1 backup) and other partitions with 4
> > copies
> > > (1
> > > > > > > primary and 3 backups), then minimal partition redundancy level
> > for
> > > > > this
> > > > > > > cache group will be 2.
> > > > > > >
> > > > > >
> > > > > > Such configuration within the same group would be impossible. All
> > > > > > caches within the same group have an identical total number of
> > > > > > partitions and an identical number of backups. If that is not the
> > > > > > case, then they fall into different groups.
> > > > > >
> > > > > > D.
> > > > > >
> > > > >
> > > >
> > >
> >
>


[GitHub] ignite pull request #3093: Ignite 2.4.1 mm page memory npe fix

2017-11-24 Thread sergey-chugunov-1985
GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/3093

Ignite 2.4.1 mm page memory npe fix



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite 
ignite-2.4.1-mm-PageMemory-NPE-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3093.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3093


commit e7ca9b65a68de7752195c8f4d2b5180f3c77d19f
Author: Dmitriy Govorukhin 
Date:   2017-11-13T18:52:47Z

ignite-blt-merge -> ignite-2.4.1

commit cc8168fc184bb7f5e3cc3bbb0743397097f78bfb
Author: Dmitriy Govorukhin 
Date:   2017-11-13T19:13:01Z

merge ignite-pitr-rc1 -> ignite-2.4.1

commit 87e6d74cf6a251c7984f9e68c391f790feccc281
Author: Dmitriy Govorukhin 
Date:   2017-11-14T12:49:33Z

ignite-gg-12877 Compact consistent ID in WAL

commit 9f5a22711baea05bd37ab07c8f928a4837dd83a4
Author: Ilya Lantukh 
Date:   2017-11-14T14:12:28Z

Fixed javadoc.

commit d5af2d78dd8eef8eca8ac5391d31d8c779649bb0
Author: Alexey Kuznetsov 
Date:   2017-11-15T08:09:00Z

IGNITE-6913 Baseline: Added new options to controls.sh for baseline 
manipulations.

commit 713924ce865752b6e99b03bd624136541cea5f9f
Author: Sergey Chugunov 
Date:   2017-11-15T09:03:12Z

IGNITE-5850 failover tests for cache operations during BaselineTopology 
changes

commit b65fd134e748d496f732ec2aa0953a0531f544b8
Author: Ilya Lantukh 
Date:   2017-11-15T12:54:35Z

TX read logging if PITR is enabled.

commit 9b2a567c0e04dc33116b51f88bee75f76e9107d1
Author: Ilya Lantukh 
Date:   2017-11-15T13:45:16Z

TX read logging if PITR is enabled.

commit 993058ccf0b2b8d9e80750c3e45a9ffa31d85dfa
Author: Dmitriy Govorukhin 
Date:   2017-11-15T13:51:54Z

ignite-2.4.1 optimization for store full set node more compacted

commit 1eba521f608d39967aec376b397b7fc800234e54
Author: Dmitriy Govorukhin 
Date:   2017-11-15T13:52:22Z

Merge remote-tracking branch 'professional/ignite-2.4.1' into ignite-2.4.1

commit 564b3fd51f8a7d1d81cb6874df66d0270623049c
Author: Sergey Chugunov 
Date:   2017-11-15T14:00:51Z

IGNITE-5850 fixed issue with initialization of data regions on node 
activation, fixed issue with auto-activation when random node joins inactive 
cluster with existing BLT

commit c6d1fa4da7adfadc80abdc7eaf6452b86a4f6aa4
Author: Sergey Chugunov 
Date:   2017-11-15T16:23:08Z

IGNITE-5850 transitionResult is set earlier when request for changing 
BaselineTopology is sent

commit d65674363163e38a4c5fdd73d1c8d8e1c7610797
Author: Sergey Chugunov 
Date:   2017-11-16T11:59:07Z

IGNITE-5850 new failover tests for changing BaselineTopology up (new node 
added to topology)

commit 20552f3851fe8825191b144179be032965e0b5c6
Author: Sergey Chugunov 
Date:   2017-11-16T12:53:43Z

IGNITE-5850 improved error message when online node is removed from baseline

commit 108bbcae4505ac904a6db774643ad600bfb42c21
Author: Sergey Chugunov 
Date:   2017-11-16T13:45:52Z

IGNITE-5850 BaselineTopology should not change on cluster deactivation

commit deb641ad3bdbf260fa60ad6bf607629652e324bd
Author: Dmitriy Govorukhin 
Date:   2017-11-17T09:45:44Z

ignite-2.4.1 truncate wal and checkpoint history on move/delete snapshot

commit 3c8b06f3659af30d1fd148ccc0f40e216a56c998
Author: Alexey Goncharuk 
Date:   2017-11-17T12:48:12Z

IGNITE-6947 Abandon remap after single map if future is done (fixes NPE)

commit ba2047e5ae7d271a677e0c418375d82d78c4023e
Author: devozerov 
Date:   2017-11-14T12:26:31Z

IGNITE-6901: Fixed assertion during 
IgniteH2Indexing.rebuildIndexesFromHash. This closes #3027.

commit abfc0466d6d61d87255d0fe38cbdf11ad46d4f89
Author: Sergey Chugunov 
Date:   2017-11-17T13:40:57Z

IGNITE-5850 tests for queries in presence of BaselineTopology

commit f4eabaf2a905abacc4c60c01d3ca04f6ca9ec188
Author: Sergey Chugunov 
Date:   2017-11-17T17:23:02Z

IGNITE-5850 implementation for setBaselineTopology(long topVer) migrated 
from wc-251

commit 4edeccd3e0b671aa277f58995df9ff9935baa95a
Author: EdShangGG 
Date:   2017-11-17T18:21:17Z

GG-13074 Multiple snapshot test failures after baseline topology is 
introduced
-adding baseline test to suite
-fixing issues with baseline

commit 

[GitHub] ignite pull request #2813: IGNITE-6272: .NET: Propagate multiple services de...

2017-11-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2813


---


[jira] [Created] (IGNITE-7017) Reconsider WAL archive strategy

2017-11-24 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-7017:


 Summary: Reconsider WAL archive strategy
 Key: IGNITE-7017
 URL: https://issues.apache.org/jira/browse/IGNITE-7017
 Project: Ignite
  Issue Type: Task
Affects Versions: 2.3
Reporter: Alexey Goncharuk
 Fix For: 2.4


Currently, we write WAL files in the work directory and then copy them to the 
archive after the segment is closed.
This was done to overcome an XFS bug which leads to a double fsync cost 
immediately after file creation. This approach leads to excessive disk usage 
and can be optimized, especially for LOG_ONLY mode.
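
For reference, the two locations involved are configured via 
{{DataStorageConfiguration}} - a minimal sketch, with illustrative paths:

{code}
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration ds = new DataStorageConfiguration();

ds.setWalMode(WALMode.LOG_ONLY);                  // the mode called out above
ds.setWalPath("/data/ignite/wal");                // work directory: segments are written here first
ds.setWalArchivePath("/data/ignite/wal/archive"); // closed segments are copied here

cfg.setDataStorageConfiguration(ds);
{code}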



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7016) Avoid fsync on WAL rollover in non-default mode

2017-11-24 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-7016:


 Summary: Avoid fsync on WAL rollover in non-default mode
 Key: IGNITE-7016
 URL: https://issues.apache.org/jira/browse/IGNITE-7016
 Project: Ignite
  Issue Type: Task
Affects Versions: 2.3
Reporter: Alexey Goncharuk
 Fix For: 2.4






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #3092: ignite-gg-13017

2017-11-24 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/3092

ignite-gg-13017

fixed test

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-13017

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3092.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3092


commit 6e36a7950db84913ddfd0d98f5a0b50923d2a29c
Author: tledkov-gridgain 
Date:   2016-11-15T09:42:29Z

IGNITE-3191: Fields are now sorted for binary objects which don't implement 
Binarylizable interface. This closes #1197.

commit e39888a08da313bec4d30f96488eccb36b4abacc
Author: Vasiliy Sisko 
Date:   2016-11-17T04:41:05Z

IGNITE-4163 Fixed load range queries.

commit 3eacc0b59c27be6b4b3aaa09f84b867ba42b449f
Author: Alexey Kuznetsov 
Date:   2016-11-21T10:28:56Z

Merged ignite-1.7.3 into ignite-1.7.4.

commit 0234f67390c88dceefd6e62de98adb922b4ba9ac
Author: Alexey Kuznetsov 
Date:   2016-11-21T10:40:50Z

IGNITE-3443 Implemented metrics for queries monitoring.

commit a24a394bb66ba0237a9e9ef940707d422b2980f0
Author: Konstantin Dudkov 
Date:   2016-11-21T10:53:58Z

IGNITE-2523 "single put" NEAR update request

commit 88f38ac6305578946f2881b12d2d557bd561f67d
Author: Konstantin Dudkov 
Date:   2016-11-21T12:11:09Z

IGNITE-3074 Optimize DHT atomic update future

commit 51ca24f2db32dff9c0034603ea3abfe5ef5cd846
Author: Konstantin Dudkov 
Date:   2016-11-21T13:48:44Z

IGNITE-3075 Implement single key-value pair DHT request/response for ATOMIC 
cache.

commit 6e4a279e34584881469a7d841432e6c38db2f06f
Author: tledkov-gridgain 
Date:   2016-11-21T14:15:17Z

IGNITE-2355: fix test - clear client connections before and after a test.

commit 551f90dbeebcad35a0e3aac07229fb67578f2ab7
Author: tledkov-gridgain 
Date:   2016-11-21T14:16:49Z

Merge remote-tracking branch 'community/ignite-1.7.4' into ignite-1.7.4

commit f2dc1d71705b86428a04a69c4f2d4ee3a82ed1bd
Author: sboikov 
Date:   2016-11-21T15:12:27Z

Merged ignite-1.6.11 into ignite-1.7.4.

commit d32fa21b673814b060d2362f06ff44838e9c2cdc
Author: sboikov 
Date:   2016-11-22T08:33:55Z

IGNITE-3075 Fixed condition for 'single' request creation

commit d15eba4becf7515b512c1032b193ce75e1589177
Author: Anton Vinogradov 
Date:   2016-11-22T08:56:20Z

IGNITE-4225 DataStreamer can hang on changing topology

commit f80bfbd19e7870554bf3abd13bde89b0f39aaee1
Author: Anton Vinogradov 
Date:   2016-11-22T09:02:57Z

IGNITE-3748 Data rebalancing of large cache can hang out.

commit bc695f8e3306c6d74d4fe53d9a98adedd43ad8f0
Author: Igor Sapego 
Date:   2016-11-22T09:05:15Z

IGNITE-4227: ODBC: Implemented SQLError. This closes #1237.

commit fc9ee6a74fe0bf413ab0643d2776a1a43e6dd5d2
Author: devozerov 
Date:   2016-11-22T09:05:32Z

Merge remote-tracking branch 'upstream/ignite-1.7.4' into ignite-1.7.4

commit 861fab9d0598ca2f06c4a6f293bf2866af31967c
Author: tledkov-gridgain 
Date:   2016-11-22T09:52:03Z

IGNITE-4239: add GridInternal annotation for tasks instead of jobs. This 
closes #1250.

commit ba99df1554fbd1de2b2367b6ce011a024cd199bd
Author: tledkov-gridgain 
Date:   2016-11-22T10:07:20Z

IGNITE-4239: test cleanup

commit c34d27423a0c45c61341c1fcb3f56727fb91498f
Author: Igor Sapego 
Date:   2016-11-22T11:13:28Z

IGNITE-4100: Fix for DEVNOTES paths.

commit 9d82f2ca06fa6069c1976cc75814874256b24f8c
Author: devozerov 
Date:   2016-11-22T12:05:29Z

IGNITE-4259: Fixed a problem with geospatial indexes and BinaryMarshaller.

commit b038730ee56a662f73e02bbec83eb1712180fa82
Author: isapego 
Date:   2016-11-23T09:05:54Z

IGNITE-4249: ODBC: Fixed performance issue caused by inefficient IO 
handling on CPP side. This closes #1254.

commit 7a47a0185d308cd3a58c7bfcb4d1cd548bff5b87
Author: devozerov 
Date:   2016-11-24T08:14:08Z

IGNITE-4270: Allow GridUnsafe.UNALIGNED flag override.

commit bf330251734018467fa3291fccf0414c9da7dd1b
Author: Andrey Novikov 
Date:   2016-11-24T10:08:08Z

Web console beta-6.

commit 7d88c5bfe7d6f130974fab1ed4266fff859afd3d
Author: Andrey Novikov 
Date:   2016-11-24T10:59:33Z

Web console beta-6. Minor fix.

commit 9c6824b4f33fbdead64299d9e0c34365d5d4a570
Author: nikolay_tikhonov 
Date:   2016-11-24T13:27:05Z

IGNITE-3958 Fixed "Client node 

Re: Losing data during restarting cluster with persistence enabled

2017-11-24 Thread Dmitry Pavlov
Please see the discussion on the user list. It seems that the same happened
there:

http://apache-ignite-users.70518.x6.nabble.com/Reassign-partitions-td7461.html#a7468

It contains examples of when the data can diverge.

Fri, Nov 24, 2017 at 16:42, Dmitry Pavlov :

> If we compare native and 3rd party persistence (cache store):
>  - Updating and reading data from DBMS is slower in most scenarios.
>  - Non-clustered DBMS is a single point of failure, it is hard to scale.
>  - Ignite SQL does not extend to External (3rd party persistence) Cache
> Store (and queries ignore DBMS changes).
>
>
> Which is why I am wondering if Native persistence is applicable in this
> case described by Vyacheslav.
>
> Fri, Nov 24, 2017 at 12:23, Evgeniy Ignatiev <
> yevgeniy.ignat...@gmail.com>:
>
>> Sorry linked the wrong page, the latter url is not the example.
>>
>>
>> On 11/24/2017 1:12 PM, Evgeniy Ignatiev wrote:
>> > By the way I remembered that there is an annotation CacheLocalStore
>> > for marking exactly the CacheStore that is not distributed -
>> >
>> http://apache-ignite-developers.2346864.n4.nabble.com/CacheLocalStore-td734.html
>> > - here is short explanation and this -
>> >
>> https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/localstore/LocalRecoverableStoreExample.java
>> > - is example implementation.
>> >
>> >
>> > On 11/23/2017 4:42 PM, Dmitry Pavlov wrote:
>> >> Hi Evgeniy,
>> >>
>> >> Technically it is, of course, possible, but still
>> >> - it is not simple at all
>> >> - IgniteCacheOffheapManager & IgniteWriteAheadLogManager are internal
>> >> APIs,
>> >> and community can change any APIs here in any time.
>> >>
>> >> Vyacheslav,
>> >>
>> >> Why Ignite Native Persistence is not suitable for this case?
>> >>
>> >> Sincerely,
>> >> Dmitriy Pavlov
>> >>
>> >> Thu, Nov 23, 2017 at 11:01, Evgeniy Ignatiev
>> >> > >>> :
>> >>> As far as I remember, last webinar I heard on Ignite Native
>> Persistence
>> >>> - it actually exposes some interfaces like IgniteWriteAheadLogManager,
>> >>> PageStore, PageStoreManager, etc., with the file-based implementation
>> >>> provided by Ignite being only one possible approach, and users can
>> >>> create their own Native Persistence variations. At least that what has
>> >>> been said by Denis Magda at that time.
>> >>>
>> >>> May be creating own implementation of Ignite Native Persistence rather
>> >>> than CacheStore based persistence is an option here?
>> >>>
>> >>> On 11/23/2017 2:23 AM, Valentin Kulichenko wrote:
>>  Vyacheslav,
>> 
>>  There is no way to do this and I'm not sure why you want to do this.
>> >>> Ignite
>>  persistence was developed to solve exactly the problems you're
>> >>> describing.
>>  Just use it :)
>> 
>>  -Val
>> 
>>  On Wed, Nov 22, 2017 at 12:36 AM, Vyacheslav Daradur <
>> >>> daradu...@gmail.com>
>>  wrote:
>> 
>> > Valentin, Evgeniy thanks for your help!
>> >
>> > Valentin, unfortunately, you are right.
>> >
>> > I've tested that behavior in the following scenario:
>> > 1. Started N nodes and filled it with data
>> > 2. Shutdown one node
>> > 3. Called rebalance directly and waited to finish
>> > 4. Stopped all other (N-1) nodes
>> > 5. Started N-1 nodes and validated data
>> >
>> > Validation didn't pass - data consistency was broken. As you say it
>> > works only on stable topology.
>> > As far as I understand Ignite doesn't manage to rebalance in
>> > underlying storage, it became clear from tests and your description
>> > that CacheStore design assumes that the underlying storage is shared
>> > by all the
>> > nodes in the topology.
>> >
>> > I understand that PDS is the best option in case of distributing
>> > persistence.
>> > However, could you point me the best way to override default
>> > rebalance
>> > behavior?
>> > Maybe it's possible to extend it by a custom plugin?
>> >
>> > On Wed, Nov 22, 2017 at 1:35 AM, Valentin Kulichenko
>> >  wrote:
>> >> Vyacheslav,
>> >>
>> >> If you want the persistence storage to be *distributed*, then using
>> > Ignite
>> >> persistence would be the easiest thing to do anyway, even if you
>> >> don't
>> > need
>> >> all its features.
>> >>
>> >> CacheStore indeed can be updated from different nodes with
>> different
>> > nodes,
>> >> but the problem is in coordination. If instances of the store are
>> >> not
>> > aware
>> >> of each other, it's really hard to handle all rebalancing cases.
>> >> Such
>> >> solution will work only on stable topology.
>> >>
>> >> Having said that, if you can have one instance of RocksDB (or any
>> >> other
>> > DB
>> >> for that matter) that is accessed via network by all nodes, then
>> >> 

[jira] [Created] (IGNITE-7015) SQL: index should be updated only when relevant values changed

2017-11-24 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7015:
---

 Summary: SQL: index should be updated only when relevant values 
changed
 Key: IGNITE-7015
 URL: https://issues.apache.org/jira/browse/IGNITE-7015
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Vladimir Ozerov
 Fix For: 2.4


See {{GridH2Table.update}} method. Whenever a value is updated, we propagate it 
to all indexes. Consider the following case:
1) Old row is not null, so this is "update", not "create".
2) Link hasn't changed.
3) Indexed fields haven't changed.

If all conditions are met, we can skip the index update completely, as the 
state before and after will be the same. This is especially important when 
persistence is enabled, because currently we generate unnecessary dirty pages, 
which increases IO pressure.

Suggested fix:
1) Iterate over index columns, skipping key and affinity columns (as they are 
guaranteed to be the same);
2) Compare relevant index columns of both old and new rows
3) If all columns are equal, do nothing.

Fields should be read through {{GridH2KeyValueRowOnheap#getValue}}, because in 
this case we will re-use value cache transparently.
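
A hedged sketch of the suggested check ({{oldRow}}, {{newRow}} and {{idxCols}} 
are illustrative names, not actual {{GridH2Table}} members):

{code}
// Returns true when the index must be touched; the caller is expected to
// pass only non-key, non-affinity indexed columns (see step 1 above).
private boolean indexUpdateRequired(GridH2Row oldRow, GridH2Row newRow, int[] idxCols) {
    if (oldRow == null)
        return true; // "create", not "update" -> index entry must be added

    if (oldRow.link() != newRow.link())
        return true; // link changed -> stored reference must be rewritten

    for (int col : idxCols) {
        // Reads should go through GridH2KeyValueRowOnheap#getValue
        // so the value cache is re-used transparently.
        if (!oldRow.getValue(col).equals(newRow.getValue(col)))
            return true;
    }

    return false; // identical state before and after -> skip the index update
}
{code}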



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7014) SQL: in-place update should be allowed when indexing is enabled

2017-11-24 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7014:
---

 Summary: SQL: in-place update should be allowed when indexing is 
enabled
 Key: IGNITE-7014
 URL: https://issues.apache.org/jira/browse/IGNITE-7014
 Project: Ignite
  Issue Type: Task
  Components: cache, sql
Reporter: Vladimir Ozerov
 Fix For: 2.4


See {{IgniteCacheOffheapManagerImpl.canUpdateOldRow}} - if a cache has at least 
one query entity, in-place updates are not possible. This drastically reduces 
the performance of DML and regular update (cache API) operations. 

We need to understand whether any explanation of this restriction exists. In 
any case, in-place updates should be allowed for SQL-enabled caches.
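
For reference, a simplified sketch of the shape of that check ({{rowSize}} 
stands in for the internal row size computation; the real method also looks at 
expiration data):

{code}
// The first condition is the restriction this ticket wants to revisit:
// any cache with a query entity is currently denied in-place updates.
private boolean canUpdateOldRow(CacheDataRow oldRow, CacheDataRow dataRow) {
    if (oldRow == null || indexingEnabled) // <- veto to relax for SQL-enabled caches
        return false;

    // In-place update is only safe when the new payload occupies exactly
    // the same space as the old one.
    return rowSize(oldRow) == rowSize(dataRow);
}
{code}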



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Losing data during restarting cluster with persistence enabled

2017-11-24 Thread Dmitry Pavlov
If we compare native and 3rd party persistence (cache store):
 - Updating and reading data from DBMS is slower in most scenarios.
 - Non-clustered DBMS is a single point of failure, it is hard to scale.
 - Ignite SQL does not extend to External (3rd party persistence) Cache
Store (and queries ignore DBMS changes).


Which is why I am wondering if Native persistence is applicable in this
case described by Vyacheslav.

Fri, Nov 24, 2017 at 12:23, Evgeniy Ignatiev :

> Sorry linked the wrong page, the latter url is not the example.
>
>
> On 11/24/2017 1:12 PM, Evgeniy Ignatiev wrote:
> > By the way I remembered that there is an annotation CacheLocalStore
> > for marking exactly the CacheStore that is not distributed -
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/CacheLocalStore-td734.html
> > - here is short explanation and this -
> >
> https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/localstore/LocalRecoverableStoreExample.java
> > - is example implementation.
> >
> >
> > On 11/23/2017 4:42 PM, Dmitry Pavlov wrote:
> >> Hi Evgeniy,
> >>
> >> Technically it is, of course, possible, but still
> >> - it is not simple at all
> >> - IgniteCacheOffheapManager & IgniteWriteAheadLogManager are internal
> >> APIs,
> >> and community can change any APIs here in any time.
> >>
> >> Vyacheslav,
> >>
> >> Why Ignite Native Persistence is not suitable for this case?
> >>
> >> Sincerely,
> >> Dmitriy Pavlov
> >>
> >> Thu, Nov 23, 2017 at 11:01, Evgeniy Ignatiev
> >>  >>> :
> >>> As far as I remember, last webinar I heard on Ignite Native Persistence
> >>> - it actually exposes some interfaces like IgniteWriteAheadLogManager,
> >>> PageStore, PageStoreManager, etc., with the file-based implementation
> >>> provided by Ignite being only one possible approach, and users can
> >>> create their own Native Persistence variations. At least that what has
> >>> been said by Denis Magda at that time.
> >>>
> >>> May be creating own implementation of Ignite Native Persistence rather
> >>> than CacheStore based persistence is an option here?
> >>>
> >>> On 11/23/2017 2:23 AM, Valentin Kulichenko wrote:
>  Vyacheslav,
> 
>  There is no way to do this and I'm not sure why you want to do this.
> >>> Ignite
>  persistence was developed to solve exactly the problems you're
> >>> describing.
>  Just use it :)
> 
>  -Val
> 
>  On Wed, Nov 22, 2017 at 12:36 AM, Vyacheslav Daradur <
> >>> daradu...@gmail.com>
>  wrote:
> 
> > Valentin, Evgeniy thanks for your help!
> >
> > Valentin, unfortunately, you are right.
> >
> > I've tested that behavior in the following scenario:
> > 1. Started N nodes and filled it with data
> > 2. Shutdown one node
> > 3. Called rebalance directly and waited to finish
> > 4. Stopped all other (N-1) nodes
> > 5. Started N-1 nodes and validated data
> >
> > Validation didn't pass - data consistency was broken. As you say it
> > works only on stable topology.
> > As far as I understand Ignite doesn't manage to rebalance in
> > underlying storage, it became clear from tests and your description
> > that CacheStore design assumes that the underlying storage is shared
> > by all the
> > nodes in the topology.
> >
> > I understand that PDS is the best option in case of distributing
> > persistence.
> > However, could you point me the best way to override default
> > rebalance
> > behavior?
> > Maybe it's possible to extend it by a custom plugin?
> >
> > On Wed, Nov 22, 2017 at 1:35 AM, Valentin Kulichenko
> >  wrote:
> >> Vyacheslav,
> >>
> >> If you want the persistence storage to be *distributed*, then using
> > Ignite
> >> persistence would be the easiest thing to do anyway, even if you
> >> don't
> > need
> >> all its features.
> >>
> >> CacheStore indeed can be updated from different nodes with different
> > nodes,
> >> but the problem is in coordination. If instances of the store are
> >> not
> > aware
> >> of each other, it's really hard to handle all rebalancing cases.
> >> Such
> >> solution will work only on stable topology.
> >>
> >> Having said that, if you can have one instance of RocksDB (or any
> >> other
> > DB
> >> for that matter) that is accessed via network by all nodes, then
> >> it's
> > also
> >> an option. But in this case storage is not distributed.
> >>
> >> -Val
> >>
> >> On Tue, Nov 21, 2017 at 4:37 AM, Vyacheslav Daradur <
> >>> daradu...@gmail.com
> >> wrote:
> >>
> >>> Valentin,
> >>>
> > Why don't you use Ignite persistence [1]?
> >>> I have a use case for one of the projects that need the RAM on disk
> >>> replication only. All PDS 

Re: Integration of Spark and Ignite. Prototype.

2017-11-24 Thread Николай Ижиков

Hello, Val, Denis.

> Personally, I think that we should release the integration only after the
> strategy is fully supported.

I see two major reasons to propose merging the DataFrame API implementation 
without a custom strategy:

1. My PR is relatively huge already. From my experience of interaction with 
the Ignite community, the bigger a PR becomes, the more committer time is 
required to review it.
So, I propose to move in smaller, but complete, steps here.

2. It is not clear to me what exactly "custom strategy and optimization" 
includes.
It seems that additional discussion is required.
I think I can put my thoughts on paper and start a discussion right after the 
basic implementation is done.

> Custom strategy implementation is actually very important for this
> integration.

Understood and fully agreed.
I'm ready to continue working in that area.

On 23.11.2017 02:15, Denis Magda wrote:

Val, Nikolay,

Personally, I think that we should release the integration only after the 
strategy is fully supported. Without the strategy we don't really leverage 
Ignite's SQL engine, and we introduce redundant data movement between Ignite 
and Spark nodes.

How big is the effort to support the strategy in terms of the amount of work 
left? 40%, 60%, 80%?

—
Denis


On Nov 22, 2017, at 2:57 PM, Valentin Kulichenko 
 wrote:

Nikolay,

Custom strategy implementation is actually very important for this
integration. Basically, it will allow creating a SQL query for Ignite and
executing it directly on the cluster. Your current implementation only adds a
new DataSource, which means that Spark will fetch data into its own memory
first, and then do most of the work (joins, for example). Does it make
sense to you? Can you please take a look at this and provide your thoughts
on how much development is implied there?

The current code looks good to me, though, and I'm OK if the strategy is
implemented as a next step in the scope of a separate ticket. I will do a final
review early next week and will merge it if everything is OK.

-Val

On Thu, Oct 19, 2017 at 7:29 AM, Николай Ижиков 
wrote:


Hello.


3. IgniteCatalog vs. IgniteExternalCatalog. Why do we have two Catalog
implementations and what is the difference?

IgniteCatalog removed.


5. I don't like that IgniteStrategy and IgniteOptimization have to be
set manually on SQLContext each time it's created. Is there any way to
automate this and improve usability?

IgniteStrategy and IgniteOptimization are removed, as they are empty now.


Actually, I think it makes sense to create a builder similar to
SparkSession.builder()...

IgniteBuilder added.
Syntax looks like:

```
val igniteSession = IgniteSparkSession.builder()
.appName("Spark Ignite catalog example")
.master("local")
.config("spark.executor.instances", "2")
.igniteConfig(CONFIG)
.getOrCreate()

igniteSession.catalog.listTables().show()
```

Please, see updated PR - https://github.com/apache/ignite/pull/2742

2017-10-18 20:02 GMT+03:00 Николай Ижиков :


Hello, Valentin.

My answers is below.
Dmitry, do we need to move discussion to Jira?


1. Why do we have org.apache.spark.sql.ignite package in our codebase?


As I mentioned earlier, to implement and override the Spark Catalog one has
to use the internal (private) Spark API.
So I have to use the package `org.apache.spark.sql.***` to have access to
private classes and variables.

For example, the SharedState class that stores the link to ExternalCatalog is
declared as `private[sql] class SharedState` - i.e. package private.


Can these classes reside under org.apache.ignite.spark instead?


No, as long as we want to have our own implementation of ExternalCatalog.


2. IgniteRelationProvider contains multiple constants which I guess are
some kind of config options. Can you describe the purpose of each of them?

I extended the comments for these options.
Please see my commit [1] or the PR HEAD:


3. IgniteCatalog vs. IgniteExternalCatalog. Why do we have two Catalog
implementations and what is the difference?

Good catch, thank you!
After additional research I found that only IgniteExternalCatalog is
required.
I will update the PR with the IgniteCatalog removal in a few days.


4. IgniteStrategy and IgniteOptimization are currently no-op. What are
our plans on implementing them? Also, what exactly is planned in
IgniteOptimization and what is its purpose?

Actually, this is very good question :)
And I need advice from experienced community members here:

`IgniteOptimization`'s purpose is to modify the query plan created by Spark.
Currently, we have one optimization described in IGNITE-3084 [2] by you,
Valentin :) :

“If there are non-Ignite relations in the plan, we should fall back to
native Spark strategies“

I think we can go a little further and reduce a join of two Ignite-backed
Data Frames into a single Ignite SQL query. Currently, this feature is
unimplemented.

*Do we need it now? Or we can postpone it and concentrates on basic Data
Frame and Catalog 

[GitHub] ignite pull request #3091: IGNITE-7013 .NET: Fix startup on macOS (dlopen ca...

2017-11-24 Thread ptupitsyn
GitHub user ptupitsyn opened a pull request:

https://github.com/apache/ignite/pull/3091

IGNITE-7013 .NET: Fix startup on macOS (dlopen call)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ptupitsyn/ignite ignite-7013

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3091.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3091


commit 071010551e3cde09e206c168408f13a3c15e615e
Author: Pavel Tupitsyn 
Date:   2017-11-24T13:06:10Z

IGNITE-7013 .NET: Ignite does not start on macOS




---


[GitHub] ignite pull request #3090: IGNITE-6929

2017-11-24 Thread alexpaschenko
GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/3090

IGNITE-6929



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6929

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3090.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3090


commit f6e982540e65ab17d439dba990794f35616a30dd
Author: sboikov 
Date:   2017-08-30T09:45:40Z

ignite-3478

commit 275a85db5cd6923b36126166ae99b15e876192be
Author: sboikov 
Date:   2017-08-31T07:44:07Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-3478

commit b7b9089f0102b8cab9942a9c887d93e9f26cc7d2
Author: sboikov 
Date:   2017-08-31T09:00:36Z

disco cache cleanup

commit 855c2d45794c300d41e386b4e6fa40736cc3e40d
Author: sboikov 
Date:   2017-08-31T09:09:58Z

Merge branch 'ignite-3478-1' into ignite-3478

commit 08be7310a93d3ce455215b97cf8ab1a2c3f0ab31
Author: sboikov 
Date:   2017-08-31T09:52:23Z

ignite-3478

commit fce2e31f0fd2f4f6a9944422e40408a0c65cfe90
Author: sboikov 
Date:   2017-09-04T08:13:50Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-3478

commit d3c049952384750c5543a9f88b383c033ef74096
Author: sboikov 
Date:   2017-09-04T08:52:11Z

ignite-3478

commit e71ce1937a18dd32448e92b1038dc48d4cb6f8ab
Author: sboikov 
Date:   2017-09-04T10:16:03Z

ignite-3478

commit 5fac5b0965e97f8951e16e10ca9229a2e78ddb0c
Author: sboikov 
Date:   2017-09-05T10:16:44Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-3478

# Conflicts:
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareFuture.java

commit 2e0c9c08e046e8d6af1b5358d9053eae999b1fb4
Author: sboikov 
Date:   2017-09-05T11:30:55Z

ignite-3478

commit e1e07ffdf2d711ba3e72f316f5a3970eff27372e
Author: sboikov 
Date:   2017-09-05T11:31:14Z

ignite-3478

commit cbada3934a386668da0b11d4de7d0f58a4d04dfe
Author: sboikov 
Date:   2017-09-05T11:49:49Z

ignite-3484

commit 5a82c68dcd1927bb6fded8b7def38c91ff6e145b
Author: sboikov 
Date:   2017-09-05T11:59:49Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-3478

commit bc9134c94b7a738dc1664e96ca6eabb059f1c268
Author: sboikov 
Date:   2017-09-05T12:03:39Z

Merge branch 'ignite-3478' into ignite-3484

# Conflicts:
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/cache/tree/AbstractDataInnerIO.java

commit b4bfcde78825c6517232e49d389bdb5de19f05a9
Author: sboikov 
Date:   2017-09-05T12:27:51Z

ignite-3484

commit 43834aaab9e2c3cd5fdd55289fdc4a9ff8ab6599
Author: sboikov 
Date:   2017-09-05T13:13:00Z

ignite-3478

commit d1b828095713fcadfa260cf94fef01b42a1b12fd
Author: sboikov 
Date:   2017-09-05T13:13:33Z

Merge branch 'ignite-3478' into ignite-3484

commit 6be6779b6336c36cd71eef0a25199a6a877ce6b5
Author: sboikov 
Date:   2017-09-05T13:47:11Z

ignite-3484

commit e3bba83256c1eb53c4b40fbd9ddba47fcf9d58d5
Author: sboikov 
Date:   2017-09-06T07:10:26Z

ignite-3484

commit dd0afb28466094b801506da8afa3601bfaebd853
Author: sboikov 
Date:   2017-09-06T07:30:04Z

ignite-3484

commit 27b87b413348b03986a463551db24b7726321732
Author: sboikov 
Date:   2017-09-06T08:19:18Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-3478

# Conflicts:
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareFuture.java

commit dcaf8801accd6ee089849a82b2ccd558aec81895
Author: sboikov 
Date:   2017-09-06T08:19:30Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-3478

# Conflicts:
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/GridDhtTxPrepareFuture.java

commit c966451d0bf7059575de92bcfae43d72096ebce4
Author: sboikov 
Date:   2017-09-06T08:27:04Z

Merge branch 'ignite-3478' into ignite-3484

commit 91b9911731a387a3199ddbbc22704bc14af09995
Author: sboikov 
Date:   2017-09-06T09:22:22Z

ignite-3484

commit e40b4d9dcd6fe6c1cd2640bdd7116ca5a08ed781
Author: sboikov 
Date:   2017-09-07T09:12:32Z

ignite-3484

commit 41a1c571e6ba1765941e2f1679dc4ac1582275c4

[jira] [Created] (IGNITE-7013) .NET: Ignite does not start on macOS

2017-11-24 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-7013:
--

 Summary: .NET: Ignite does not start on macOS
 Key: IGNITE-7013
 URL: https://issues.apache.org/jira/browse/IGNITE-7013
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 2.4
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.4


Looks like the dlopen code is incorrect for macOS:

{code}
Unhandled Exception: System.DllNotFoundException: Unable to load DLL 
'libcoreclr.so': The specified module or one of its dependencies could not be 
found.
 (Exception from HRESULT: 0x8007007E)
   at 
Apache.Ignite.Core.Impl.Unmanaged.DllLoader.NativeMethodsCore.dlopen(String 
filename, Int32 flags)
   at Apache.Ignite.Core.Impl.Unmanaged.DllLoader.Load(String dllPath)
   at Apache.Ignite.Core.Impl.IgniteUtils.LoadDll(String filePath, String 
simpleName)
   at Apache.Ignite.Core.Impl.IgniteUtils.LoadJvmDll(String configJvmDllPath, 
ILogger log)
   at Apache.Ignite.Core.Impl.IgniteUtils.LoadDlls(String configJvmDllPath, 
ILogger log)
   at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
   at ignite_nuget_test.Program.Main(String[] args) in 
/Users/vveider/Development/VCS/Git/ignite-dotnetcore-demo/Program.cs:line 17
{code}

Steps to reproduce:
{code}
git clone https://github.com/ptupitsyn/ignite-dotnetcore-demo.git
cd ignite-dotnetcore-demo
dotnet run
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


TC issues. IGNITE-3084. Spark Data Frame API

2017-11-24 Thread Николай Ижиков

Hello, guys.

I have some issues on TC with my PR [1] for IGNITE-3084 (Spark Data Frame API).
Can you please help me:


1. `Ignite RDD spark 2_10` -

Currently this build runs with the following profiles: 
`-Plgpl,examples,scala-2.10,-clean-libs,-release` [2]
That means the `scala` profile is activated too for `Ignite RDD spark 2_10`,
because `scala` activation is done like this [3]:

```
<activation>
    <property>
        <name>!scala-2.10</name>
    </property>
</activation>
```

I think it is a misconfiguration, because scala (2.11) shouldn't be activated 
for the 2.10 build.
Am I missing something?

Can someone edit the build property?
* Add `-scala` to the profiles list
* Or add `-Dscala-2.10` to the JVM properties to turn off the `scala` profile 
in this build.


2. `Ignite RDD` -

Currently this build runs on JVM 7 [4].
As I wrote in my previous mail [5], the current version of Spark (2.2) runs 
only on JVM 8.

Can someone edit the build property to run it on JVM 8?


3. For now `Ignite RDD` and `Ignite RDD spark 2_10` only run the Java tests [6] 
existing in the `spark` module.
There are several existing tests written in Scala (i.e. scala-test) ignored on 
TC; IgniteRDDSpec [7], for example.
Are they turned off on purpose, or am I missing something?
Should we run scala-test for the spark and spark_2.10 modules?


[1] https://github.com/apache/ignite/pull/2742
[2] 
https://ci.ignite.apache.org/viewLog.html?buildId=960220=Ignite20Tests_IgniteRddSpark210=buildLog&_focus=379#_state=371
[3] https://github.com/apache/ignite/blob/master/pom.xml#L533
[4] 
https://ci.ignite.apache.org/viewLog.html?buildId=960221=Ignite20Tests_IgniteRdd=buildParameters
[5] 
http://apache-ignite-developers.2346864.n4.nabble.com/Integration-of-Spark-and-Ignite-Prototype-tp22649p23099.html
[6] 
https://ci.ignite.apache.org/viewLog.html?buildId=960220=Ignite20Tests_IgniteRddSpark210=testsInfo
[7] 
https://github.com/apache/ignite/blob/master/modules/spark/src/test/scala/org/apache/ignite/spark/IgniteRDDSpec.scala


[GitHub] ignite pull request #3089: Ignite-2.4.1-merge-master

2017-11-24 Thread DmitriyGovorukhin
GitHub user DmitriyGovorukhin opened a pull request:

https://github.com/apache/ignite/pull/3089

Ignite-2.4.1-merge-master



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite 
ignite-2.4.1-merge-master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3089.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3089


commit e7ca9b65a68de7752195c8f4d2b5180f3c77d19f
Author: Dmitriy Govorukhin 
Date:   2017-11-13T18:52:47Z

ignite-blt-merge -> ignite-2.4.1

commit cc8168fc184bb7f5e3cc3bbb0743397097f78bfb
Author: Dmitriy Govorukhin 
Date:   2017-11-13T19:13:01Z

merge ignite-pitr-rc1 -> ignite-2.4.1

commit 87e6d74cf6a251c7984f9e68c391f790feccc281
Author: Dmitriy Govorukhin 
Date:   2017-11-14T12:49:33Z

ignite-gg-12877 Compact consistent ID in WAL

commit 9f5a22711baea05bd37ab07c8f928a4837dd83a4
Author: Ilya Lantukh 
Date:   2017-11-14T14:12:28Z

Fixed javadoc.

commit d5af2d78dd8eef8eca8ac5391d31d8c779649bb0
Author: Alexey Kuznetsov 
Date:   2017-11-15T08:09:00Z

IGNITE-6913 Baseline: Added new options to controls.sh for baseline 
manipulations.

commit 713924ce865752b6e99b03bd624136541cea5f9f
Author: Sergey Chugunov 
Date:   2017-11-15T09:03:12Z

IGNITE-5850 failover tests for cache operations during BaselineTopology 
changes

commit b65fd134e748d496f732ec2aa0953a0531f544b8
Author: Ilya Lantukh 
Date:   2017-11-15T12:54:35Z

TX read logging if PITR is enabled.

commit 9b2a567c0e04dc33116b51f88bee75f76e9107d1
Author: Ilya Lantukh 
Date:   2017-11-15T13:45:16Z

TX read logging if PITR is enabled.

commit 993058ccf0b2b8d9e80750c3e45a9ffa31d85dfa
Author: Dmitriy Govorukhin 
Date:   2017-11-15T13:51:54Z

ignite-2.4.1 optimization for store full set node more compacted

commit 1eba521f608d39967aec376b397b7fc800234e54
Author: Dmitriy Govorukhin 
Date:   2017-11-15T13:52:22Z

Merge remote-tracking branch 'professional/ignite-2.4.1' into ignite-2.4.1

commit 564b3fd51f8a7d1d81cb6874df66d0270623049c
Author: Sergey Chugunov 
Date:   2017-11-15T14:00:51Z

IGNITE-5850 fixed issue with initialization of data regions on node 
activation, fixed issue with auto-activation when random node joins inactive 
cluster with existing BLT

commit c6d1fa4da7adfadc80abdc7eaf6452b86a4f6aa4
Author: Sergey Chugunov 
Date:   2017-11-15T16:23:08Z

IGNITE-5850 transitionResult is set earlier when request for changing 
BaselineTopology is sent

commit d65674363163e38a4c5fdd73d1c8d8e1c7610797
Author: Sergey Chugunov 
Date:   2017-11-16T11:59:07Z

IGNITE-5850 new failover tests for changing BaselineTopology up (new node 
added to topology)

commit 20552f3851fe8825191b144179be032965e0b5c6
Author: Sergey Chugunov 
Date:   2017-11-16T12:53:43Z

IGNITE-5850 improved error message when online node is removed from baseline

commit 108bbcae4505ac904a6db774643ad600bfb42c21
Author: Sergey Chugunov 
Date:   2017-11-16T13:45:52Z

IGNITE-5850 BaselineTopology should not change on cluster deactivation

commit deb641ad3bdbf260fa60ad6bf607629652e324bd
Author: Dmitriy Govorukhin 
Date:   2017-11-17T09:45:44Z

ignite-2.4.1 truncate wal and checkpoint history on move/delete snapshot

commit 3c8b06f3659af30d1fd148ccc0f40e216a56c998
Author: Alexey Goncharuk 
Date:   2017-11-17T12:48:12Z

IGNITE-6947 Abandon remap after single map if future is done (fixes NPE)

commit ba2047e5ae7d271a677e0c418375d82d78c4023e
Author: devozerov 
Date:   2017-11-14T12:26:31Z

IGNITE-6901: Fixed assertion during 
IgniteH2Indexing.rebuildIndexesFromHash. This closes #3027.

commit abfc0466d6d61d87255d0fe38cbdf11ad46d4f89
Author: Sergey Chugunov 
Date:   2017-11-17T13:40:57Z

IGNITE-5850 tests for queries in presence of BaselineTopology

commit f4eabaf2a905abacc4c60c01d3ca04f6ca9ec188
Author: Sergey Chugunov 
Date:   2017-11-17T17:23:02Z

IGNITE-5850 implementation for setBaselineTopology(long topVer) migrated 
from wc-251

commit 4edeccd3e0b671aa277f58995df9ff9935baa95a
Author: EdShangGG 
Date:   2017-11-17T18:21:17Z

GG-13074 Multiple snapshot test failures after baseline topology is 
introduced
-adding baseline test to suite
-fixing issues with baseline

commit 

[jira] [Created] (IGNITE-7012) Web console: investigate E2E tests

2017-11-24 Thread Ilya Borisov (JIRA)
Ilya Borisov created IGNITE-7012:


 Summary: Web console: investigate E2E tests
 Key: IGNITE-7012
 URL: https://issues.apache.org/jira/browse/IGNITE-7012
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Ilya Borisov
Assignee: Alexander Kalinin
Priority: Minor


Investigate what tools we can use to implement E2E tests and try some of them 
out. I think that testcafe will be a good start:
https://github.com/DevExpress/testcafe
https://github.com/DevExpress/testcafe-angular-selectors



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7011) Web console: add more tests

2017-11-24 Thread Ilya Borisov (JIRA)
Ilya Borisov created IGNITE-7011:


 Summary: Web console: add more tests
 Key: IGNITE-7011
 URL: https://issues.apache.org/jira/browse/IGNITE-7011
 Project: Ignite
  Issue Type: Improvement
  Components: wizards
Reporter: Ilya Borisov
Assignee: Ilya Borisov
Priority: Minor


Web console does not have enough test coverage, let's fix that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Losing data during restarting cluster with persistence enabled

2017-11-24 Thread Evgeniy Ignatiev

Sorry, I linked the wrong page; the latter URL is not the example.


On 11/24/2017 1:12 PM, Evgeniy Ignatiev wrote:
By the way I remembered that there is an annotation CacheLocalStore 
for marking exactly the CacheStore that is not distributed - 
http://apache-ignite-developers.2346864.n4.nabble.com/CacheLocalStore-td734.html 
- here is short explanation and this - 
https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/localstore/LocalRecoverableStoreExample.java 
- is example implementation.



On 11/23/2017 4:42 PM, Dmitry Pavlov wrote:

Hi Evgeniy,

Technically it is, of course, possible, but still
- it is not simple at all
- IgniteCacheOffheapManager & IgniteWriteAheadLogManager are internal 
APIs,

and community can change any APIs here in any time.

Vyacheslav,

Why Ignite Native Persistence is not suitable for this case?

Sincerely,
Dmitriy Pavlov

Thu, Nov 23, 2017 at 11:01, Evgeniy Ignatiev 
wrote:


Valentin, Evgeniy thanks for your help!

Valentin, unfortunately, you are right.

I've tested that behavior in the following scenario:
1. Started N nodes and filled it with data
2. Shutdown one node
3. Called rebalance directly and waited to finish
4. Stopped all other (N-1) nodes
5. Started N-1 nodes and validated data

Validation didn't pass - data consistency was broken. As you say it
works only on stable topology.
As far as I understand Ignite doesn't manage to rebalance in
underlying storage, it became clear from tests and your description
that CacheStore design assumes that the underlying storage is shared
by all the
nodes in the topology.

I understand that PDS is the best option in case of distributing
persistence.
However, could you point me the best way to override default 
rebalance

behavior?
Maybe it's possible to extend it by a custom plugin?

On Wed, Nov 22, 2017 at 1:35 AM, Valentin Kulichenko
 wrote:

Vyacheslav,

If you want the persistence storage to be *distributed*, then using

Ignite
persistence would be the easiest thing to do anyway, even if you 
don't

need

all its features.

CacheStore indeed can be updated from different nodes with different

nodes,
but the problem is in coordination. If instances of the store are 
not

aware
of each other, it's really hard to handle all rebalancing cases. 
Such

solution will work only on stable topology.

Having said that, if you can have one instance of RocksDB (or any 
other

DB
for that matter) that is accessed via network by all nodes, then 
it's

also

an option. But in this case storage is not distributed.

-Val

On Tue, Nov 21, 2017 at 4:37 AM, Vyacheslav Daradur <

daradu...@gmail.com

wrote:


Valentin,


Why don't you use Ignite persistence [1]?

I have a use case for one of the projects that need the RAM on disk
replication only. All PDS features aren't needed.
During the first assessment, persist to RocksDB works faster.

CacheStore design assumes that the underlying storage is 
shared by

all

the nodes in topology.
This is the very important note.
I'm a bit confused because I've thought that each node in cluster
persists partitions for which the node is either primary or backup
like in PDS.

My RocksDB implementation supports working with one DB instance 
which
shared by all the nodes in the topology, but it would make no 
sense of

using embedded fast storage.

Is there any link to a detailed description of CacheStorage 
design or

any other advice?
Thanks in advance.



On Fri, Nov 17, 2017 at 9:07 PM, Valentin Kulichenko
 wrote:

Vyacheslav,

CacheStore design assumes that the underlying storage is shared by

all

the
nodes in topology. Even if you delay rebalancing on node stop 
(which

is

possible via CacheConfiguration#rebalanceDelay), I doubt it will

solve

all

your consistency issues.

Why don't you use Ignite persistence [1]?

[1] 
https://apacheignite.readme.io/docs/distributed-persistent-store


-Val

On Fri, Nov 17, 2017 at 4:24 AM, Vyacheslav Daradur <

daradu...@gmail.com

wrote:


Hi Andrey! Thank you for answering.


Key to partition mapping 

Re: Losing data during restarting cluster with persistence enabled

2017-11-24 Thread Evgeniy Ignatiev
By the way, I remembered that there is an annotation, CacheLocalStore, for 
marking exactly the CacheStore that is not distributed - 
http://apache-ignite-developers.2346864.n4.nabble.com/CacheLocalStore-td734.html 
- here is a short explanation, and this - 
https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/localstore/LocalRecoverableStoreExample.java 
- is an example implementation.
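
A minimal sketch of the idea, with caveats: the annotation is internal and 
undocumented, its package below is an assumption, and a real node-local store 
must also keep version information to survive rebalancing (see the linked 
example). The in-memory map stands in for a local store such as RocksDB:

```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheLocalStore; // package assumed
import org.apache.ignite.cache.store.CacheStoreAdapter;

// @CacheLocalStore marks the store as node-local, i.e. the underlying
// storage is NOT shared by all nodes in the topology.
@CacheLocalStore
public class LocalStoreSketch extends CacheStoreAdapter<Long, String> {
    private final Map<Long, String> db = new ConcurrentHashMap<>();

    @Override public String load(Long key) {
        return db.get(key);
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
        db.put(e.getKey(), e.getValue());
    }

    @Override public void delete(Object key) {
        db.remove(key);
    }
}
```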



On 11/23/2017 4:42 PM, Dmitry Pavlov wrote:

Hi Evgeniy,

Technically it is, of course, possible, but still
- it is not simple at all
- IgniteCacheOffheapManager & IgniteWriteAheadLogManager are internal APIs,
and community can change any APIs here in any time.

Vyacheslav,

Why Ignite Native Persistence is not suitable for this case?

Sincerely,
Dmitriy Pavlov

чт, 23 нояб. 2017 г. в 11:01, Evgeniy Ignatiev 

wrote:


Valentin, Evgeniy thanks for your help!

Valentin, unfortunately, you are right.

I've tested that behavior in the following scenario:
1. Started N nodes and filled it with data
2. Shutdown one node
3. Called rebalance directly and waited to finish
4. Stopped all other (N-1) nodes
5. Started N-1 nodes and validated data

Validation didn't pass - data consistency was broken. As you say it
works only on stable topology.
As far as I understand Ignite doesn't manage to rebalance in
underlying storage, it became clear from tests and your description
that CacheStore design assumes that the underlying storage is shared
by all the
nodes in the topology.

I understand that PDS is the best option in case of distributing
persistence.
However, could you point me the best way to override default rebalance
behavior?
Maybe it's possible to extend it by a custom plugin?

On Wed, Nov 22, 2017 at 1:35 AM, Valentin Kulichenko
 wrote:

Vyacheslav,

If you want the persistence storage to be *distributed*, then using

Ignite

persistence would be the easiest thing to do anyway, even if you don't

need

all its features.

CacheStore indeed can be updated from different nodes with different

nodes,

but the problem is in coordination. If instances of the store are not

aware

of each other, it's really hard to handle all rebalancing cases. Such
solution will work only on stable topology.

Having said that, if you can have one instance of RocksDB (or any other

DB

for that matter) that is accessed via network by all nodes, then it's

also

an option. But in this case storage is not distributed.

-Val

On Tue, Nov 21, 2017 at 4:37 AM, Vyacheslav Daradur <

daradu...@gmail.com

wrote:


Valentin,


Why don't you use Ignite persistence [1]?

I have a use case for one of the projects that need the RAM on disk
replication only. All PDS features aren't needed.
During the first assessment, persist to RocksDB works faster.


CacheStore design assumes that the underlying storage is shared by

all

the nodes in topology.
This is the very important note.
I'm a bit confused because I've thought that each node in cluster
persists partitions for which the node is either primary or backup
like in PDS.

My RocksDB implementation supports working with one DB instance which
shared by all the nodes in the topology, but it would make no sense of
using embedded fast storage.

Is there any link to a detailed description of CacheStorage design or
any other advice?
Thanks in advance.



On Fri, Nov 17, 2017 at 9:07 PM, Valentin Kulichenko
 wrote:

Vyacheslav,

CacheStore design assumes that the underlying storage is shared by

all

the

nodes in topology. Even if you delay rebalancing on node stop (which

is

possible via CacheConfiguration#rebalanceDelay), I doubt it will

solve

all

your consistency issues.

Why don't you use Ignite persistence [1]?

[1] https://apacheignite.readme.io/docs/distributed-persistent-store

-Val

On Fri, Nov 17, 2017 at 4:24 AM, Vyacheslav Daradur <

daradu...@gmail.com

wrote:


Hi Andrey! Thank you for answering.


Key to partition mapping shouldn't depends on topology, and

shouldn't

changed unstable topology.
Key to partition mapping doesn't depend on topology in my 

[GitHub] ignite pull request #3088: IGNITE-6853: Cassandra cache store does not clean...

2017-11-24 Thread jasonman107
GitHub user jasonman107 opened a pull request:

https://github.com/apache/ignite/pull/3088

IGNITE-6853: Cassandra cache store does not clean prepared statements…

IGNITE-6853: Cassandra cache store does not clean prepared statements cache 
when remove old cassandra session

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jasonman107/ignite ignite-6853

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3088.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3088


commit a025adba87d7f4df8f8c9b13a9ceaf9eb80942b9
Author: Jason Man 
Date:   2017-11-24T08:52:57Z

IGNITE-6853: Cassandra cache store does not clean prepared statements cache 
when remove old cassandra session




---


Re: Data Rebalancing status API

2017-11-24 Thread Alexey Popov
Hm, actually, I've missed the existing MXBean metric: RebalancingPartitionsCount.

Probably it should be enough.
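
For example (a hedged sketch: the cache name is illustrative, and cache 
statistics must be enabled for the metric to be populated):

```
IgniteCache<?, ?> cache = ignite.cache("myCache");

// 0 means no partitions remain to be rebalanced for this cache on this node.
int remaining = cache.metrics().getRebalancingPartitionsCount();

if (remaining == 0)
    System.out.println("Rebalancing finished for " + cache.getName());
```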

Thanks,
Alexey



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[jira] [Created] (IGNITE-7010) Ignite hangs on cache destroy called from IgniteBiPredicate - Ignite Events

2017-11-24 Thread Krzysztof Chmielewski (JIRA)
Krzysztof Chmielewski created IGNITE-7010:
-

 Summary: Ignite hangs on cache destroy called from 
IgniteBiPredicate - Ignite Events
 Key: IGNITE-7010
 URL: https://issues.apache.org/jira/browse/IGNITE-7010
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Krzysztof Chmielewski


Ignite hangs trying to delete a cache in response to the Ignite event 
EventType.EVT_NODE_LEFT.
Code to reproduce this issue is available at 
https://github.com/kristoffSC/IgniteHang

Reproduce instructions:
1. Put breakpoints in ServerStarter.java at lines 40 and 42.
2. Start ServerStarter as a Java Application in debug mode.
3. Start ClientStarter as a Java Application. After ClientStarter ends, 
ServerStarter detects EventType.EVT_NODE_LEFT and will stop at line 40 (BP). 
Step to the next line (ignite.destroyCache("testCache");) and then step one 
more line. Ignite will hang at line 41.

This "hang" prevents other nodes from connecting to this running/hanging Ignite 
Server.

The thread that is stuck is "disco-event-worker". It waits on {code}return 
stopFut.get(){code} in IgniteKernal.class at line 3146.
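
In essence, the reproducer boils down to the following pattern (a sketch, not 
the exact code from the linked repository):

{code}
// destroyCache() is a distributed operation, but here it is invoked
// synchronously from a listener running on the disco-event-worker thread,
// which is itself needed to complete the destroy - so stopFut never completes.
ignite.events().localListen(evt -> {
    ignite.destroyCache("testCache");
    return true;
}, EventType.EVT_NODE_LEFT);
{code}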




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Confusing slf4j error messages

2017-11-24 Thread Alexey Popov
I've opened a ticket https://issues.apache.org/jira/browse/IGNITE-6828 some
time ago to fix this message.

If nobody objects, I will prepare a patch.

Thanks,
Alexey



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Data Rebalancing status API

2017-11-24 Thread Alexey Popov
Hi Igniters,

I saw several cases when Ignite users/admins need to ensure that data
rebalancing is NOT in progress, to prevent possible data loss.
For instance, when nodes are moved out of the cluster one by one for
maintenance and returned by some script.

Currently, this information can be obtained only by listening to
EVT_CACHE_REBALANCE_STARTED and EVT_CACHE_REBALANCE_STOPPED events.
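
For example (a sketch; the rebalance event types must be enabled via 
IgniteConfiguration#setIncludeEventTypes for the listener to fire):

```
ignite.events().localListen(evt -> {
    // evt.name() tells STARTED from STOPPED.
    System.out.println("Rebalance event: " + evt.name());
    return true; // keep the listener subscribed
}, EventType.EVT_CACHE_REBALANCE_STARTED, EventType.EVT_CACHE_REBALANCE_STOPPED);
```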

Why doesn't Ignite provide any public API to get a cache (or node)
rebalancing state?
Can we add some API for this purpose?

Thanks,
Alexey



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[GitHub] ignite pull request #3075: no BaselineTopology for in-memory-only grid

2017-11-24 Thread sergey-chugunov-1985
Github user sergey-chugunov-1985 closed the pull request at:

https://github.com/apache/ignite/pull/3075


---