[GitHub] ignite pull request #5564: IGNITE-10461: Create Mvcc PDS test suite.

2018-12-03 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/5564

IGNITE-10461: Create Mvcc PDS test suite.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10461

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5564.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5564


commit b806f3b4d73f16c0c3e30f255700b4cde415c3b4
Author: Andrey V. Mashenkov 
Date:   2018-12-04T07:39:36Z

IGNITE-10461: Create Mvcc PDS test suite.




---


[GitHub] ignite pull request #5563: IGNITE-10511 removeExplicitNodeLocks() removed.

2018-12-03 Thread voropava
GitHub user voropava opened a pull request:

https://github.com/apache/ignite/pull/5563

IGNITE-10511 removeExplicitNodeLocks() removed.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10511

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5563.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5563


commit b657b8231677b3e9b80ef7b72e1bd7f87e59a69d
Author: Pavel Voronkin 
Date:   2018-12-04T07:11:42Z

IGNITE-10511 removeExplicitNodeLocks() removed.




---


Re: [RESULT] [VOTE] Apache Ignite 2.7.0 Release (RC2)

2018-12-03 Thread Nikolay Izhikov
Sorry, Alex.

I missed your +1.
Thank you very much for checking the RC artifacts.

Tue, Dec 4, 2018, 7:10 Alexey Kuznetsov akuznet...@apache.org:

> Nikolay,
>
> Actually 4 "+1"  binding.
>
> You did not count my "+1".
>
> :)
>
>
> On Tue, Dec 4, 2018 at 4:28 AM Nikolay Izhikov 
> wrote:
>
> > Igniters,
> >
> > Apache Ignite 2.7.0 release (RC2) has been accepted.
> >
> > 3 "+1" binding votes received:
> >
> > - Pavel Tupitsyn
> > - Dmitriy Pavlov
> > - Nikolay Izhikov
> >
> > Vote thread:
> >
> >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Apache-Ignite-2-7-0-RC2-td38788.html
> >
>
>
> --
> Alexey Kuznetsov
>


Re: [RESULT] [VOTE] Apache Ignite 2.7.0 Release (RC2)

2018-12-03 Thread Alexey Kuznetsov
Nikolay,

Actually 4 "+1"  binding.

You did not count my "+1".

:)


On Tue, Dec 4, 2018 at 4:28 AM Nikolay Izhikov  wrote:

> Igniters,
>
> Apache Ignite 2.7.0 release (RC2) has been accepted.
>
> 3 "+1" binding votes received:
>
> - Pavel Tupitsyn
> - Dmitriy Pavlov
> - Nikolay Izhikov
>
> Vote thread:
>
>
> http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Apache-Ignite-2-7-0-RC2-td38788.html
>


-- 
Alexey Kuznetsov


[GitHub] ignite pull request #5562: IGNITE-10175 migrate core module tests from Junit...

2018-12-03 Thread oignatenko
GitHub user oignatenko opened a pull request:

https://github.com/apache/ignite/pull/5562

IGNITE-10175 migrate core module tests from Junit 3 to 4



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10175

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5562.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5562


commit ff4699bebf2571cbd648cbf8bccf1e957c7547e5
Author: Oleg Ignatenko 
Date:   2018-12-02T23:35:51Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - added @Test
-- verified with diffs overview

commit a72e24ea010fd7333b7425c349e3de1373aed0fc
Author: Oleg Ignatenko 
Date:   2018-12-03T00:24:34Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit eeeba1cc2f9d9dc02065c19c7c1079008bf73e00
Author: Oleg Ignatenko 
Date:   2018-12-03T00:26:13Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 9304b5f31ef712708be7d04b68a963d8bf4b050d
Author: Oleg Ignatenko 
Date:   2018-12-03T00:29:51Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 7eaf7a0252fb347f19d192859c410c3c29f7c4b6
Author: Oleg Ignatenko 
Date:   2018-12-03T00:39:54Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit eaee42ebf611c1617993177d6466e81399c3ebdc
Author: Oleg Ignatenko 
Date:   2018-12-03T00:48:56Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 62738a3820d4057a9c34ad8b2c6e85959ebfc67e
Author: Oleg Ignatenko 
Date:   2018-12-03T00:57:00Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit afb495f9e97e6e15b0c4c29f6d57dd2bbd569f1c
Author: Oleg Ignatenko 
Date:   2018-12-03T08:39:38Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 3b56b742b52a516a4244d2a07635a99d9abeba18
Author: Oleg Ignatenko 
Date:   2018-12-03T09:06:45Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 3835772a72a99882f9d6cd2557001c9db94ed416
Author: Oleg Ignatenko 
Date:   2018-12-03T09:31:45Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit ac6be2c4e07748f34da5d83ce3e9f20786ead2fc
Author: Oleg Ignatenko 
Date:   2018-12-03T10:46:59Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 9f663a9c857f76fcb0e56c194cc2d6ec10f6888b
Author: Oleg Ignatenko 
Date:   2018-12-03T11:13:26Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit e2eae3201a88c3d9800b9964ce9ca4096ed3d85f
Author: Oleg Ignatenko 
Date:   2018-12-03T11:31:35Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit f9ccc80de28fb2fc64c96d695f5f906a8cd9a946
Author: Oleg Ignatenko 
Date:   2018-12-03T12:40:21Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 3eadd0abf6f58a0a3d8fe8464bb08879d412bca1
Author: Oleg Ignatenko 
Date:   2018-12-03T13:33:00Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 28d9f854993cb99867803cfe0126cf723e071cc5
Author: Oleg Ignatenko 
Date:   2018-12-03T13:54:52Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit ed54719377f694d0111740c0bbd92dcf5294e6c6
Author: Oleg Ignatenko 
Date:   2018-12-03T14:10:26Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit fb97a8c23881d1d1c50423dcea6ac519a36c195a
Author: Oleg Ignatenko 
Date:   2018-12-03T14:26:19Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 063f7ae0164924a775b4235ebd0f704017bc2c73
Author: Oleg Ignatenko 
Date:   2018-12-03T14:45:00Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- verified with diffs overview

commit 17e86cb29c3f3accc65d464a0e0de4f4eac6a19e
Author: Oleg Ignatenko 
Date:   2018-12-03T15:05:01Z

IGNITE-10175 migrate core module tests from Junit 3 to 4
- wip - migrating
-- 

[RESULT] [VOTE] Apache Ignite 2.7.0 Release (RC2)

2018-12-03 Thread Nikolay Izhikov
Igniters, 

Apache Ignite 2.7.0 release (RC2) has been accepted. 

3 "+1" binding votes received: 

- Pavel Tupitsyn
- Dmitriy Pavlov
- Nikolay Izhikov

Vote thread: 

http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Apache-Ignite-2-7-0-RC2-td38788.html




[jira] [Created] (IGNITE-10516) Storage is corrupted after CREATE INDEX IF NOT EXISTS on different tables

2018-12-03 Thread Stanislav Lukyanov (JIRA)
Stanislav Lukyanov created IGNITE-10516:
---

 Summary: Storage is corrupted after CREATE INDEX IF NOT EXISTS on 
different tables
 Key: IGNITE-10516
 URL: https://issues.apache.org/jira/browse/IGNITE-10516
 Project: Ignite
  Issue Type: Bug
Reporter: Stanislav Lukyanov


Given two tables in the same schema, we can't create an index with the same 
name for both tables. In other words, the following code leads to an error - 
which is good
{code}
CREATE INDEX IDX on T1 (COL);
CREATE INDEX IDX on T2 (COL);
{code}

If used with `IF NOT EXISTS`, the queries pass. It might be OK or not - one 
needs to look into the SQL spec to check whether the second operation should be 
a no-op (because IDX exists) or fail (because IDX exists for a different table, 
so the caller is probably doing something wrong)
{code}
CREATE INDEX IDX on T1 (COL);
CREATE INDEX IF NOT EXISTS IDX on T2 (COL);
{code}

However, if persistence is enabled, the node will fail to restart complaining 
about duplicate index names.
{code}
class org.apache.ignite.IgniteCheckedException: Duplicate index name 
[cache=SQL_PUBLIC_T2, schemaName=PUBLIC, idxName=IDX, existingTable=T, table=T2]
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1183)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2040)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1732)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:959)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:900)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:888)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:854)
at 
org.apache.ignite.IndexWithSameNameTest.test(IndexWithSameNameTest.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$001(GridAbstractTest.java:150)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$6.evaluate(GridAbstractTest.java:2104)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$7.run(GridAbstractTest.java:2119)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Duplicate index name 
[cache=SQL_PUBLIC_T2, schemaName=PUBLIC, idxName=IDX, existingTable=T, table=T2]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.registerCache0(GridQueryProcessor.java:1650)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:803)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:866)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheInRecoveryMode(GridCacheProcessor.java:2595)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.access$1400(GridCacheProcessor.java:204)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor$CacheRecoveryLifecycle.afterBinaryMemoryRestore(GridCacheProcessor.java:5481)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreBinaryMemory(GridCacheDatabaseSharedManager.java:947)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.startMemoryRestore(GridCacheDatabaseSharedManager.java:1922)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1050)
... 18 more
{code}
It looks like the second index (on T2) is partially created after all.

We need to either block index creation via `CREATE INDEX IF NOT EXISTS` 
completely, or fail that query when the table names don't match (if the SQL 
spec allows it).
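A simplified model of the kind of duplicate-index check involved here may help picture the two options. This is not the actual Ignite implementation (the real check lives around GridQueryProcessor.registerCache0); the class and method names below are purely illustrative:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of a per-schema index registry; not the actual Ignite code. */
public class IndexRegistry {
    /** Maps index name -> table name within one schema. */
    private final Map<String, String> idxToTable = new HashMap<>();

    /**
     * Registers an index. Returns true if a new index was created,
     * false if ifNotExists made the call a no-op.
     */
    public boolean register(String idxName, String table, boolean ifNotExists) {
        String existing = idxToTable.get(idxName);

        if (existing == null) {
            idxToTable.put(idxName, table);
            return true;
        }

        if (ifNotExists) {
            // Stricter variant proposed in the ticket: fail fast when the
            // name clashes across tables instead of half-creating the index.
            if (!existing.equals(table))
                throw new IllegalStateException("Duplicate index name [idxName=" + idxName
                    + ", existingTable=" + existing + ", table=" + table + ']');

            return false; // Same table: a genuine no-op.
        }

        throw new IllegalStateException("Index already exists: " + idxName);
    }
}
```

With a rule like this, `CREATE INDEX IF NOT EXISTS IDX on T2` would fail at creation time rather than leaving a partially created index that breaks restart.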



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5561: IGNITE-10406 .NET: Reset writer structures on rec...

2018-12-03 Thread ptupitsyn
GitHub user ptupitsyn opened a pull request:

https://github.com/apache/ignite/pull/5561

IGNITE-10406 .NET: Reset writer structures on reconnect



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ptupitsyn/ignite ignite-10406

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5561.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5561


commit 083fef611d35f78ca0a7bdfa3f7cf8ab1f25d314
Author: Pavel Tupitsyn 
Date:   2018-11-27T14:37:24Z

IGNITE-10406 .NET Failed to run ScanQuery with custom filter after server 
node restart - add test

commit c3d48ebf25e46b1b18c5b3c69f50250898c12958
Author: Pavel Tupitsyn 
Date:   2018-12-01T14:30:11Z

TODOs

commit ac52623248183b294e4b1d3e0b35304680a8286a
Author: Pavel Tupitsyn 
Date:   2018-12-01T14:35:13Z

TODOs

commit ca9c0c4a985ccf24a3fc93d68bb9cc861f829708
Author: Pavel Tupitsyn 
Date:   2018-12-01T14:45:16Z

Add test for non-empty filter

commit 53137ad302cea7fb01677f582b914daf5c4acb53
Author: Pavel Tupitsyn 
Date:   2018-12-01T14:46:52Z

Add test for non-empty filter

commit 194a3316e23d502a1bb8fa3942a788450c44a579
Author: Pavel Tupitsyn 
Date:   2018-12-01T14:51:00Z

Add test for non-empty filter

commit 431aa95a3f88a0163888cbfcbad9edbf323fd614
Author: Pavel Tupitsyn 
Date:   2018-12-03T17:57:26Z

update TODOs

commit 5dd95b5d8ed94335ae0f63047c73622dbfae691d
Author: Pavel Tupitsyn 
Date:   2018-12-03T18:27:15Z

update TODOs

commit 439cf575070c78aa2bac052197c14545c37408b7
Author: Pavel Tupitsyn 
Date:   2018-12-03T18:37:22Z

Add TODO

commit 5bca7238233a0e269d836c66081c7a9b5a5f9fa8
Author: Pavel Tupitsyn 
Date:   2018-12-03T18:40:10Z

Cleanup test code

commit 7f5c5d50e2d25543295b8a168ba855a9da4b5715
Author: Pavel Tupitsyn 
Date:   2018-12-03T18:40:51Z

Revert unrelated changes

commit 6488156700c5fd44c1cf74d4671394f40c51074b
Author: Pavel Tupitsyn 
Date:   2018-12-03T18:48:41Z

Cleanup

commit 0c5a43b12918e8089ad37ad2fcd897b7cab8fbd6
Author: Pavel Tupitsyn 
Date:   2018-12-03T19:00:14Z

Implement writer structure reset

commit 0892d9e97bccdf41a38fe01a42354e78f903aae9
Author: Pavel Tupitsyn 
Date:   2018-12-03T19:05:32Z

Implement writer structure reset

commit ad80b6a9ac803eabd28b098fea61f86720158058
Author: Pavel Tupitsyn 
Date:   2018-12-03T19:07:25Z

Implement writer structure reset

commit 3b77263c72692e6930e74eb11be59181d6f0c3d6
Author: Pavel Tupitsyn 
Date:   2018-12-03T19:09:56Z

Fix tests

commit a5b20c139ad8216dac735cb846a5e4c3562808df
Author: Pavel Tupitsyn 
Date:   2018-12-03T19:10:37Z

Fix tests

commit cf912013547f128e84c17dbafb0f327549c6d4a7
Author: Pavel Tupitsyn 
Date:   2018-12-03T19:41:49Z

TODOs

commit d5faccd7d8b6055581649a9717e867cd76974d21
Author: Pavel Tupitsyn 
Date:   2018-12-03T19:49:29Z

TODOs and cleanup

commit 4606c7f55233f4005a8c2f75552e2ac1df292f4c
Author: Pavel Tupitsyn 
Date:   2018-12-03T19:56:33Z

Adding concurrency test

commit eea92638bf64dc2aa1013e041e5635c938ec2198
Author: Pavel Tupitsyn 
Date:   2018-12-03T20:00:56Z

Adding concurrency test

commit eda2a0a256b44e8a6dc0ed28498e677fd3159d48
Author: Pavel Tupitsyn 
Date:   2018-12-03T20:01:14Z

Adding concurrency test

commit 1c73a3fe9c70cb28fb1a7b36e56711f20fb57925
Author: Pavel Tupitsyn 
Date:   2018-12-03T20:03:01Z

Adding concurrency test

commit 4ce52f7f6a46eba00bc8eeb79a3256b67f307644
Author: Pavel Tupitsyn 
Date:   2018-12-03T20:03:48Z

Adding concurrency test

commit f46cc5cd01b96ead5136c8e3380766abcc5c303d
Author: Pavel Tupitsyn 
Date:   2018-12-03T20:15:00Z

Adding concurrency test

commit 244229cfd4d1be52670ccb32097515e06fc8da38
Author: Pavel Tupitsyn 
Date:   2018-12-03T20:25:46Z

Add concurrency assumptions

commit f104a6e419445adea972b9f300730bf9094109be
Author: Pavel Tupitsyn 
Date:   2018-12-03T20:27:30Z

Merge remote-tracking branch 'origin/master' into ignite-10406




---


[GitHub] ignite pull request #5560: IGNITE-10516: Storage is corrupted after CREATE I...

2018-12-03 Thread slukyano
GitHub user slukyano opened a pull request:

https://github.com/apache/ignite/pull/5560

IGNITE-10516: Storage is corrupted after CREATE INDEX IF NOT EXISTS on 
different tables



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10516

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5560.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5560


commit dfcaaa9374c289deedb0ef1921dd645fd077531c
Author: Stanislav Lukyanov 
Date:   2018-12-03T20:06:27Z

IGNITE-10516: Add test.




---


Re: welcome letter

2018-12-03 Thread Dmitriy Pavlov
Hi Vladimir,

Welcome to the Apache Software Foundation and to the Apache Ignite
Community.

I've added your account to the list of contributors. Now you should be able
to assign an issue to yourself.

Should you have any questions please do not hesitate to ask here. Looking
forward to your contributions.

Sincerely,
Dmitriy Pavlov

P.S. Additional references that should boost your onboarding.

Please subscribe to both the dev@ and user@ lists; optionally you may also
subscribe to notifications@:
https://ignite.apache.org/community/resources.html#mail-lists

Get familiar with Apache Ignite development process described here:
https://cwiki.apache.org/confluence/display/IGNITE/Development+Process

Instructions on how to contribute can be found here:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute

Project setup in Intellij IDEA:
https://cwiki.apache.org/confluence/display/IGNITE/Project+Setup

Mon, Dec 3, 2018 at 22:57, Владимир Плигин:

> Hello Ignite Community!
>
>
> My name is Vladimir. I want to contribute to Apache Ignite, my JIRA
> username is "Vladimir Pligin".
>
> Thanks!
>


welcome letter

2018-12-03 Thread Владимир Плигин
Hello Ignite Community!


My name is Vladimir. I want to contribute to Apache Ignite, my JIRA username is 
"Vladimir Pligin".

Thanks!


[GitHub] ignite pull request #5559: IGNITE-10380: Drop Multi-label Classification for...

2018-12-03 Thread zaleslaw
GitHub user zaleslaw opened a pull request:

https://github.com/apache/ignite/pull/5559

IGNITE-10380: Drop Multi-label Classification for LogReg and SVM



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10380

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5559.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5559


commit 668caf5d25a1ba74828cdeb7085022472429aaac
Author: Zinoviev Alexey 
Date:   2018-11-29T13:10:19Z

IGNITE-10380: Drop Multi-label Classification for Logistic Regression and 
SVM




---


Re: Code inspection

2018-12-03 Thread Andrey Mashenkov
Hi,

Has anyone tried to investigate the issue of the Inspections TC task's
execution time varying from 0.5 up to 1.5 hours?
Can we enable GC logs for this task, or maybe even collect CPU, disk, and
network metrics?
Can someone check whether there are unnecessary IDEA plugins being started
that could safely be disabled?
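On the GC-logs question: assuming the JVM options of the TeamCity build step can be edited, a Java 8 flag set along these lines would produce a GC log for the Inspections job (the variable name and log path are illustrative, not the actual TC configuration):

```shell
# Hypothetical JVM options for the Inspections build (Java 8 GC-logging flags).
JAVA_OPTS="-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:inspections-gc.log"
export JAVA_OPTS
```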


On Tue, Nov 27, 2018 at 5:52 PM Dmitriy Pavlov  wrote:

> I'm totally with you in this decision, let's move the file.
>
> Tue, Nov 27, 2018 at 16:24, Maxim Muzafarov:
>
> > Igniters,
> >
> > I propose to make the inspection configuration the default on the project
> > level. I've created a new issue [1] for it. It can be done easily and is
> > recommended by the IntelliJ documentation [2].
> > Thoughts?
> >
> >
> > Vyacheslav,
> >
> > Can you share an example of your warnings?
> > Currently, we have different inspection configurations:
> > - ignite_inspections.xml - to import inspections as default and use it
> > daily.
> > - ignite_inspections_teamcity.xml - config to run it on TC. Only fixed
> > rules in the project code are enabled. Each of these rules is marked
> > with ERROR level.
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-10422
> > [2] https://www.jetbrains.com/help/idea/code-inspection.html
> > On Tue, 20 Nov 2018 at 13:58, Nikolay Izhikov 
> wrote:
> > >
> > > Hello, Vyacheslav.
> > >
> > > Yes, we have.
> > >
> > > Maxim Muzafarov, can you fix it, please?
> > >
> > > Tue, Nov 20, 2018, 13:10 Vyacheslav Daradur daradu...@gmail.com:
> > >
> > > > Guys, why do we have 2 different inspection files in the repo?
> > > > idea\ignite_inspections.xml
> > > > idea\ignite_inspections_teamcity.xml
> > > >
> > > > AFAIK TeamCity is able to use the same inspection file with IDE.
> > > >
> > > > I've imported 'idea\ignite_inspections.xml' in the IDE, but now see
> > > > inspection warnings for my PR on TC because of different rules.
> > > >
> > > >
> > > > On Sun, Nov 11, 2018 at 6:06 PM Maxim Muzafarov 
> > > > wrote:
> > > > >
> > > > > Yakov, Dmitry,
> > > > >
> > > > > Which example of an unsuccessful suite execution do we need?
> > > > > Is the current failure [1] in the master branch enough to configure
> > > > > notifications by the TC Bot?
> > > > >
> > > > > > Please consider adding more checks
> > > > > > - line endings. I think we should only have \n
> > > > > > - ensure blank line at the end of file
> > > > >
> > > > > It seems to me that `line endings` is easy to add, but for the `blank
> > > > > line at the end` we need a special regexp. Can we focus on built-in
> > > > > IntelliJ inspections first and address the special cases later?
> > > > >
> > > > > [1]
> > > >
> >
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_InspectionsCore_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> > > > > On Sun, 11 Nov 2018 at 17:55, Maxim Muzafarov 
> > > > wrote:
> > > > > >
> > > > > > Igniters,
> > > > > >
> > > > > > Since the inspection rules are included in RunAll, a few members of the
> > > > > > community have mentioned widely varying execution times on TC agents:
> > > > > >  - 1h:27m:38s publicagent17_9094
> > > > > >  - 38m:04s publicagent17_9094
> > > > > >  - 33m:29s publicagent17_9094
> > > > > >  - 17m:13s publicagent17_9094
> > > > > > It seems that we should configure the resources distribution
> > across TC
> > > > > > containers. Can anyone take a look at it?
> > > > > >
> > > > > >
> > > > > > I've also prepared the short list of rules to work on:
> > > > > > + Inconsistent line separators (6 matches)
> > > > > > + Problematic whitespace (4 matches)
> > > > > > + 'expression.equals("literal")' rather than
> > > > > > '"literal".equals(expression)' (53 matches)
> > > > > > + Unnecessary 'null' check before 'instanceof' expression or call
> > (42
> > > > matches)
> > > > > > + Redundant 'if' statement (69 matches)
> > > > > > + Redundant interface declaration (28 matches)
> > > > > > + Double negation (0 matches)
> > > > > > + Unnecessary code block (472 matches)
> > > > > > + Line is longer than allowed by code style (2614 matches) (Is it
> > > > > > possible to implement?)
> > > > > >
> > > > > > WDYT?
> > > > > >
> > > > > > On Fri, 26 Oct 2018 at 23:43, Dmitriy Pavlov <
> > dpavlov@gmail.com>
> > > > wrote:
> > > > > > >
> > > > > > > Hi Maxim,
> > > > > > >
> > > > > > >  thank you for your efforts to make this happen. Keep the pace!
> > > > > > >
> > > > > > > Could you please provide an example of how Inspections can
> fail,
> > so
> > > > I or
> > > > > > > another contributor could implement support of these failures
> > > > validation in
> > > > > > > the Tc Bot.
> > > > > > >
> > > > > > > Sincerely,
> > > > > > > Dmitriy Pavlov
> > > > > > >
> > > > > > > пт, 26 окт. 2018 г. в 18:27, Yakov Zhdanov <
> yzhda...@apache.org
> > >:
> > > > > > >
> > > > > > > > Maxim,
> > > > > > > >
> > > > > > > > Thanks for response, let's do it the way you suggested.
> > > > > > > >
> > > > > > > > Please consider adding more checks
> > > > > > > > - line 
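One of the inspection rules in Maxim's list above - preferring '"literal".equals(expression)' - exists to avoid NullPointerExceptions. A minimal illustration (the method names are ours, not from the Ignite codebase):

```java
/** Demonstrates why the inspection prefers "literal".equals(expression). */
public class LiteralEqualsDemo {
    /** Throws NullPointerException when mode is null. */
    static boolean riskyCheck(String mode) {
        return mode.equals("strict");
    }

    /** Null-safe: the literal is never null, so no NPE is possible. */
    static boolean safeCheck(String mode) {
        return "strict".equals(mode);
    }
}
```

Both forms behave identically for non-null input; the difference only shows up when the expression can be null.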

[jira] [Created] (IGNITE-10515) In JDBC thick driver streaming mode, INSERTs ignored silently when trying to insert into wrong table

2018-12-03 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-10515:


 Summary: In JDBC thick driver streaming mode, INSERTs ignored 
silently when trying to insert into wrong table
 Key: IGNITE-10515
 URL: https://issues.apache.org/jira/browse/IGNITE-10515
 Project: Ignite
  Issue Type: Bug
  Components: jdbc
Reporter: Ilya Kasnacheev
Assignee: Ilya Kasnacheev
 Attachments: JdbcStreamingSelfTest.java

It is not uncommon to run all your SQL through a cache=default JDBC URL and
forget about that.
But if you use streaming=true with such a URL, your INSERTs are silently
ignored when the destination cache is different.

There should be some kind of error message when you try to stream to a
different cache than the one you are connected to.

Please see reproducer in maillist or the attached test.
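A toy model of the reported behavior may make the complaint concrete. This is NOT the Ignite streamer code - all names below are invented for illustration - but it captures the difference between today's silent drop and the proposed error:

```java
import java.util.ArrayList;
import java.util.List;

/** Toy stand-in for a JDBC streaming connection bound to a single cache. */
public class ToyStreamer {
    private final String boundCache;
    private final boolean failOnMismatch;
    private final List<String> accepted = new ArrayList<>();

    public ToyStreamer(String boundCache, boolean failOnMismatch) {
        this.boundCache = boundCache;
        this.failOnMismatch = failOnMismatch;
    }

    /**
     * Current behavior (failOnMismatch=false): rows addressed to another
     * cache are silently dropped, which is the bug in IGNITE-10515.
     * Proposed behavior (failOnMismatch=true): raise an error instead.
     */
    public void insert(String targetCache, String row) {
        if (!boundCache.equals(targetCache)) {
            if (failOnMismatch)
                throw new IllegalArgumentException(
                    "Streaming to a different cache: " + targetCache);

            return; // Silent drop.
        }

        accepted.add(row);
    }

    public int acceptedCount() {
        return accepted.size();
    }
}
```

With the strict variant, the misdirected INSERT from the ticket would fail immediately rather than vanish.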





[GitHub] ignite pull request #5558: IGNITE-10514: Cache validation on the primary nod...

2018-12-03 Thread sk0x50
GitHub user sk0x50 opened a pull request:

https://github.com/apache/ignite/pull/5558

IGNITE-10514: Cache validation on the primary node may result in 
AssertionError



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10514

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5558.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5558


commit c15dac80bdad2e7001b74897ba87093203eb5047
Author: Slava Koptilin 
Date:   2018-12-03T17:50:42Z

IGNITE-10514 Cache validation has to use top ver from the update request in 
case of topology version was locked on near node

commit 0a2a63962907e38ff13b38bb383a1aaa0e92e591
Author: Slava Koptilin 
Date:   2018-12-03T17:58:03Z

IGNITE-10514 fixed race between GridDhtTopologyFuture.exchangeDone() and 
GridDhtTopologyFuture.validateCache()




---


Re: Apache Ignite 2.7. Last Mile

2018-12-03 Thread Ivan Fedotov
Vyacheslav, thank you for the remark. I've tried to run the test on the 2.7
version and it passes.

I changed the priority of the ticket from "Blocker" to "Major" and the fix
version to 2.8.



Mon, Dec 3, 2018 at 13:53, Vladimir Ozerov:

> Confirming. Test never failed in AI 2.7 even though it contains mentioned
> MVCC commit.
>
> On Mon, Dec 3, 2018 at 1:36 PM Vyacheslav Daradur 
> wrote:
>
> > Guys, I checked that `testAtomicOnheapTwoBackupAsyncFullSync` failed
> > in the master (as described by Ivan), but it passes in branch ignite-2.7
> > (tag 2.7.0-rc2), so this shouldn't block the release.
> >
> > Ivan, were you able to reproduce this issue in ignite-2.7 branch?
> >
> >
> > On Mon, Dec 3, 2018 at 1:03 PM Ivan Fedotov  wrote:
> > >
> > > Nikolay,
> > >
> > > I think that an end user may face the problem when calling IgniteCache#invoke
> > > on a cache with a registered continuous query if the cache's configuration is
> > > as in the failed test: [PARTITIONED, ATOMIC, FULL_SYNC, 2 backups].
> > >
> > > I've found that the failure was introduced by the MVCC commit [1]. As I
> > > understand it, the issue relates to the process of updating metadata: the
> > > binary metadata registration future hangs for an unclear reason.
> > >
> > > I don't know if the issue is a blocker, but it seems to be a regression
> > > because the test passed on Ignite 2.6.
> > >
> > > What do you think?
> > >
> > > [1]
> > >
> >
> https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8
> > >
> > > Mon, Dec 3, 2018 at 11:14, Nikolay Izhikov:
> > >
> > > > Ivan, please clarify.
> > > >
> > > > How is your investigation related to the 2.7 release?
> > > > Do you think it's a release blocker?
> > > > If yes, please describe the impact on users and how users can reproduce
> > > > this issue.
> > > >
> > > > Mon, Dec 3, 2018, 9:30 Ivan Fedotov ivanan...@gmail.com:
> > > >
> > > > > I've created the PR 
> > which
> > > > > includes changes <
> > https://github.com/1vanan/ignite/commits/before-MVCC>
> > > > > just before integration MVCC with Continuous Query and from the
> > TeamCity
> > > > > <
> > > > >
> > > >
> >
> https://ci.ignite.apache.org/viewLog.html?buildId=2434057=buildResultsDiv=IgniteTests24Java8_ContinuousQuery1
> > > > > >
> > > > > it is clear that before these changes the
> > > > > test testAtomicOnheapTwoBackupAsyncFullSync is green.
> > > > >
> > > > > Also, Roman Kondakov gave his view on this problem in the comments.
> > > > > Now the problem is more understandable, but the root cause is still
> > > > > unclear.
> > > > >
> > > > > Maybe some of you have suggestions as to why threads hang on the
> > > > > binary metadata registration future?
> > > > >
> > > > > Fri, Nov 30, 2018 at 13:48, Ivan Fedotov:
> > > > >
> > > > > > Igor, thank you for the explanation.
> > > > > >
> > > > > > Now it seems that while one thread tries to invoke
> > > > > > GridCacheMapEntry#touch, another one executes
> > > > > > GridCacheProcessor#stopCache. If I am wrong, please feel free to
> > > > > > correct me.
> > > > > >
> > > > > > But it is still not clear to me why this failure appears after the commit
> > > > > > <https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8>
> > > > > > which is about MVCC. Moreover, the NPE appears only together with
> > > > > > BinaryObjectException, and when the test is green, I cannot find the
> > > > > > NPE in the log.
> > > > > >
> > > > > > Now I tried to run the test locally 1000 times on the version before
> > > > > > MVCC and could not reproduce the error in this concrete case (but
> > > > > > there is another one
> > > > > > <https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java#L426>
> > > > > > which is about an assertion on received events).
> > > > > >
> > > > > > Fri, Nov 30, 2018 at 13:37, Roman Kondakov:
> > > > > >
> > > > > >> Nikolay,
> > > > > >>
> > > > > >> I couldn't quickly find the root cause of this problem because I'm not
> > > > > >> an expert in the binary metadata flow. I think the community should
> > > > > >> decide whether this is a release blocker or not.
> > > > > >>
> > > > > >>
> > > > > >> --
> > > > > >> Kind Regards
> > > > > >> Roman Kondakov
> > > > > >>
> > > > > >> On 30.11.2018 13:23, Nikolay Izhikov wrote:
> > > > > >> > Hello, Roman.
> > > > > >> >
> > > > > >> > Is this issue blocks the 2.7 release?
> > > > > >> >
> > > > > >> > Fri, Nov 30, 2018, 13:19 Roman Kondakov kondako...@mail.ru.invalid:
> > > > > >> >
> > > > > >> >> Hi all!
> > > > > >> >>
> > > > > >> >> I've reproduced this problem locally and attached the 

[jira] [Created] (IGNITE-10514) Cache validation on the primary node may result in AssertionError

2018-12-03 Thread Vyacheslav Koptilin (JIRA)
Vyacheslav Koptilin created IGNITE-10514:


 Summary: Cache validation on the primary node may result in 
AssertionError
 Key: IGNITE-10514
 URL: https://issues.apache.org/jira/browse/IGNITE-10514
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.8
Reporter: Vyacheslav Koptilin
Assignee: Vyacheslav Koptilin
 Fix For: 2.8


Cache validation on the primary node, that was introduced by IGNITE-10413, may 
lead to the following AssertionError.
{code:java}
java.lang.AssertionError: GridDhtPartitionsExchangeFuture 
[firstDiscoEvt=DiscoveryCustomEvent [customMsg=CacheAffinityChangeMessage [...]]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1788)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1671)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3184)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:138)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:273)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:268)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1059)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:584)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:383)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:309)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:100)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:299)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1568)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1196)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1092)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
{code}
Let's consider the following scenario:
 * Start one node and upload data.
 * Start a new node (note that this step triggers rebalancing).
 * Start an explicit transaction and try to update an atomic cache (it is assumed 
that atomic operations are allowed for use inside transactions, see the Ignite 
system property DFLT_ALLOW_ATOMIC_OPS_IN_TX)

{code:java}
IgniteTransactions txs = ignite.transactions();

try (Transaction tx = txs.txStart()) {
atomicCache.put();

tx.commit();
}
{code}

Let's assume that the transaction was mapped on the topology version that is 
related to the {{NODE_JOIN}} event;
on the other hand, the corresponding request 
{{GridNearAtomicAbstractUpdateRequest}} can be validated on the primary node 
using the next topology version, triggered by {{CacheAffinityMessage}}.
That is the root cause of the {{AssertionError}} mentioned above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10513) Java client stucks when connects to server with slow disk

2018-12-03 Thread Dmitry Lazurkin (JIRA)
Dmitry Lazurkin created IGNITE-10513:


 Summary: Java client stucks when connects to server with slow disk
 Key: IGNITE-10513
 URL: https://issues.apache.org/jira/browse/IGNITE-10513
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.6
Reporter: Dmitry Lazurkin
 Attachments: ignite-client.log

To emulate a slow disk, add a _sleep_ to the partitions loop in 
_GridCacheDatabaseSharedManager#restorePartitionStates_:
{noformat}
//...
    for (int i = 0; i < grp.affinity().partitions(); i++) {
        try {
            log.error("Wait");
            Thread.sleep(1);
        }
        catch (InterruptedException e) {
            e.printStackTrace();
        }
//...
{noformat}
My server has 1024 partitions.

Steps to reproduce:
 * Start the server
 * Start the client
 * On the client, wait for the message "Join cluster while cluster state 
transition is in progress, waiting when transition finish."
 * Kill the server
 * On the client, wait for the repeated java.net.ConnectException: Connection 
refused (Connection refused)
 * Start the server (I have a 100% chance of reproducing the issue on my 
computer)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5557: IGNITE-10513 Fix NullPointerException on reconnec...

2018-12-03 Thread laz2
GitHub user laz2 opened a pull request:

https://github.com/apache/ignite/pull/5557

IGNITE-10513 Fix NullPointerException on reconnect



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/laz2/ignite ignite-10513

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5557.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5557


commit 6d59979f40fc5307b229708d740768e39f996b1b
Author: Dmitry Lazurkin 
Date:   2018-12-03T16:46:10Z

IGNITE-10513 Fix NullPointerException on reconnect




---


[GitHub] ignite pull request #5540: IGNITE-10422: move default inspections config to ...

2018-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5540


---


[GitHub] asfgit closed pull request #86: IGNITE-10071 Queued and running builds hang in the TC bot

2018-12-03 Thread GitBox
asfgit closed pull request #86: IGNITE-10071 Queued and running builds hang in 
the TC bot
URL: https://github.com/apache/ignite-teamcity-bot/pull/86
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] asfgit closed pull request #85: IGNITE-10436 Add ticket and PR links on report TC Bot page

2018-12-03 Thread GitBox
asfgit closed pull request #85: IGNITE-10436 Add ticket and PR links on report 
TC Bot page
URL: https://github.com/apache/ignite-teamcity-bot/pull/85
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/conf/apache.auth.properties b/conf/apache.auth.properties
index da84ef6c..a022bf70 100644
--- a/conf/apache.auth.properties
+++ b/conf/apache.auth.properties
@@ -5,6 +5,7 @@ logs=apache_logs
 git.api_url=https://api.github.com/repos/apache/ignite/
 jira.api_url=https://issues.apache.org/jira/rest/api/2/
 
+jira.url=https://issues.apache.org/jira/
 
 #specify JIRA Auth token (if needed)
 jira.auth_token=
diff --git 
a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/HelperConfig.java 
b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/HelperConfig.java
index 0b6b0a84..a39d779f 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/HelperConfig.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/HelperConfig.java
@@ -60,6 +60,9 @@
 /** JIRA authorization token property name. */
 public static final String JIRA_API_URL = "jira.api_url";
 
+/** */
+public static final String JIRA_URL = "jira.url";
+
 /** Slack authorization token property name. */
 public static final String SLACK_AUTH_TOKEN = "slack.auth_token";
 public static final String SLACK_CHANNEL = "slack.channel";
diff --git 
a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/TcHelper.java 
b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/TcHelper.java
index 9832d2a5..eb66f302 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/TcHelper.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/TcHelper.java
@@ -53,7 +53,7 @@
  * TC Bot implementation. To be migrated to smaller injected classes
  */
 @Deprecated
-public class TcHelper implements ITcHelper, IJiraIntegration {
+public class TcHelper implements ITcHelper {
 /** Logger. */
 private static final Logger logger = 
LoggerFactory.getLogger(TcHelper.class);
 
@@ -199,7 +199,7 @@ private BranchesTracked getTrackedBranches() {
 return new Visa("JIRA wasn't commented - " + errMsg);
 }
 
-return new Visa(JIRA_COMMENTED, res, blockers);
+return new Visa(IJiraIntegration.JIRA_COMMENTED, res, blockers);
 }
 
 
diff --git 
a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/di/IgniteTcBotModule.java
 
b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/di/IgniteTcBotModule.java
index e61e2bd0..d714b998 100644
--- 
a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/di/IgniteTcBotModule.java
+++ 
b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/di/IgniteTcBotModule.java
@@ -24,7 +24,6 @@
 import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
-import javax.inject.Inject;
 import javax.inject.Provider;
 import org.apache.ignite.Ignite;
 import org.apache.ignite.ci.ITcHelper;
@@ -34,16 +33,14 @@
 import org.apache.ignite.ci.di.scheduler.SchedulerModule;
 import org.apache.ignite.ci.github.ignited.GitHubIgnitedModule;
 import org.apache.ignite.ci.issue.IssueDetector;
-import org.apache.ignite.ci.jira.IJiraIntegration;
+import org.apache.ignite.ci.jira.JiraIntegrationModule;
 import org.apache.ignite.ci.observer.BuildObserver;
 import org.apache.ignite.ci.observer.ObserverTask;
 import org.apache.ignite.ci.tcbot.trends.MasterTrendsService;
 import org.apache.ignite.ci.teamcity.ignited.TeamcityIgnitedModule;
-import org.apache.ignite.ci.user.ICredentialsProv;
 import org.apache.ignite.ci.util.ExceptionUtil;
 import org.apache.ignite.ci.web.BackgroundUpdater;
 import org.apache.ignite.ci.web.TcUpdatePool;
-import org.apache.ignite.ci.web.model.Visa;
 import org.apache.ignite.ci.web.model.hist.VisasHistoryStorage;
 import org.apache.ignite.ci.web.rest.exception.ServiceStartingException;
 
@@ -80,27 +77,15 @@
 bind(BuildObserver.class).in(new SingletonScope());
 bind(VisasHistoryStorage.class).in(new SingletonScope());
 bind(ITcHelper.class).to(TcHelper.class).in(new SingletonScope());
-
-bind(IJiraIntegration.class).to(Jira.class).in(new SingletonScope());
-
 bind(BackgroundUpdater.class).in(new SingletonScope());
 bind(MasterTrendsService.class).in(new SingletonScope());
 
 install(new TeamcityIgnitedModule());
+install(new JiraIntegrationModule());
 install(new GitHubIgnitedModule());
 install(new SchedulerModule());
 }
 
-//todo now it is just fallback to TC big class, extract JIRA integation 
module
-private static class Jira implements IJiraIntegration {
-@Inject ITcHelper helper;
-
-@Override public Visa 

[GitHub] ignite pull request #5531: IGNITE-5759 unmuted testPartitionRent test

2018-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5531


---


[jira] [Created] (IGNITE-10512) Fix javadoc for public Query classes.

2018-12-03 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10512:
-

 Summary: Fix javadoc for public Query classes.
 Key: IGNITE-10512
 URL: https://issues.apache.org/jira/browse/IGNITE-10512
 Project: Ignite
  Issue Type: Improvement
  Components: documentation, sql
Affects Versions: 1.9
Reporter: Andrew Mashenkov
Assignee: Andrew Mashenkov
 Fix For: 2.8


The documentation for Full TEXT Queries is thin at best:
* What syntax does it use?
* ...is it the full [Lucene Classic Query Parser 
Syntax|https://lucene.apache.org/core/6_3_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html]?
* ...if so how does the syntax map to the {{@QueryTextField}} annotation?
* How is Lucene analyser customisation performed?
* What version is supported? (looks like 3.5.0 which is pretty old, latest is 
6.4.1)
* The 
[{{@QueryTextField}}|https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/annotations/QueryTextField.html]
 JavaDoc refers to 
[{{CacheQuery}}|https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/internal/processors/cache/query/CacheQuery.html]
 but strangely this doesn't even appear in the official JavaDoc. Is that because 
it's an 'internal' class?

Full text search is mentioned multiple times as a feature, but it doesn't look 
like much of Lucene can actually be utilised, so clarifications would help 
greatly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5525: IGNITE-10291

2018-12-03 Thread devozerov
Github user devozerov closed the pull request at:

https://github.com/apache/ignite/pull/5525


---


[jira] [Created] (IGNITE-10511) disco-event-worker can be deadlocked by BinaryContext.metadata running is sys striped pool waiting for cache entry lock

2018-12-03 Thread Pavel Voronkin (JIRA)
Pavel Voronkin created IGNITE-10511:
---

 Summary: disco-event-worker can be deadlocked by 
BinaryContext.metadata running is sys striped pool waiting for cache entry lock
 Key: IGNITE-10511
 URL: https://issues.apache.org/jira/browse/IGNITE-10511
 Project: Ignite
  Issue Type: Bug
Reporter: Pavel Voronkin
 Attachments: race.txt





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10510) [ML] Use OneVsRest for SVMLinearMultiClassClassificationTrainer

2018-12-03 Thread Yury Babak (JIRA)
Yury Babak created IGNITE-10510:
---

 Summary: [ML] Use OneVsRest for 
SVMLinearMultiClassClassificationTrainer
 Key: IGNITE-10510
 URL: https://issues.apache.org/jira/browse/IGNITE-10510
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Yury Babak
Assignee: Alexey Platonov
 Fix For: 2.8






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5556: IGNITE-10368: Add mvcc cache test suite 9.

2018-12-03 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/5556

IGNITE-10368: Add mvcc cache test suite 9.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10368

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5556.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5556


commit 6ff64debbf943384641f93cf6c60605abf3278b0
Author: Andrey V. Mashenkov 
Date:   2018-12-03T13:40:31Z

IGNITE-10368: Add mvcc cache test suite 9.




---


[GitHub] ignite pull request #5555: IGNITE-10509 reordered setting of timeout flag an...

2018-12-03 Thread akalash
GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/

IGNITE-10509 reordered setting of timeout flag and transaction state



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10509

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #


commit ada8fc9cb664e70f975312007f9d5003ca6a82b0
Author: Anton Kalashnikov 
Date:   2018-12-03T13:49:24Z

IGNITE-10509 reordered setting of timeout flag and transaction state




---


[jira] [Created] (IGNITE-10509) Rollback exception instead of timeout exception

2018-12-03 Thread Anton Kalashnikov (JIRA)
Anton Kalashnikov created IGNITE-10509:
--

 Summary: Rollback exception instead of timeout exception
 Key: IGNITE-10509
 URL: https://issues.apache.org/jira/browse/IGNITE-10509
 Project: Ignite
  Issue Type: Bug
Reporter: Anton Kalashnikov
Assignee: Anton Kalashnikov


Looks like we have a race on changing the transaction state between setting the 
timedOut flag and setting the state.
Reproducer - TxRollbackOnTimeoutNearCacheTest.testEnlistManyWrite



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10508) Need to support the new checkpoint feature not wait for the previous operation to complete

2018-12-03 Thread Dmitriy Govorukhin (JIRA)
Dmitriy Govorukhin created IGNITE-10508:
---

 Summary: Need to support the new checkpoint feature not wait for 
the previous operation to complete
 Key: IGNITE-10508
 URL: https://issues.apache.org/jira/browse/IGNITE-10508
 Project: Ignite
  Issue Type: Improvement
Reporter: Dmitriy Govorukhin


There are cases when we should trigger a checkpoint so that some operations can 
be sure that all previous operations finished before that checkpoint. It is 
necessary to support the possibility of running a checkpoint without waiting for 
the completion of the previous checkpoint.

Solution:

Merge checkpoint pages, appending the newly dirtied pages to the current checkpoint.

Restrictions:

Triggering a new checkpoint should not wait for the previous checkpoint operation 
to complete.

- It should not break crash recovery mechanisms

- Only one merge is allowed in the first implementation (potential OOM, if we 
try to merge many checkpoint operations)
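The "at most one merge" restriction can be sketched as a toy Java class (the class name, page representation, and API below are invented for illustration only - this is not the Ignite checkpointer):

```java
import java.util.HashSet;
import java.util.Set;

/** Toy checkpoint that can absorb the dirty pages of at most one later checkpoint. */
class MergeableCheckpoint {
    /** Union of dirty page ids covered by this checkpoint. */
    final Set<Long> dirtyPages;

    /** Only one merge is allowed, bounding memory use (the OOM concern above). */
    private boolean merged;

    MergeableCheckpoint(Set<Long> pages) {
        dirtyPages = new HashSet<>(pages);
    }

    /** Append a newer checkpoint's pages; rejects a second merge. */
    boolean tryMerge(Set<Long> newerDirtyPages) {
        if (merged)
            return false; // caller must wait for this checkpoint to finish

        dirtyPages.addAll(newerDirtyPages);
        merged = true;

        return true;
    }
}
```

A caller that gets `false` back would fall back to today's behavior - waiting for the running checkpoint before starting a new one.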



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10507) VisorIdleVerifyTask should return exceptions from all nodes if they are occured

2018-12-03 Thread Sergey Antonov (JIRA)
Sergey Antonov created IGNITE-10507:
---

 Summary: VisorIdleVerifyTask should return exceptions from all 
nodes if they are occured
 Key: IGNITE-10507
 URL: https://issues.apache.org/jira/browse/IGNITE-10507
 Project: Ignite
  Issue Type: Improvement
  Components: visor
Affects Versions: 2.6
Reporter: Sergey Antonov
Assignee: Sergey Antonov
 Fix For: 2.8


We should return exceptions from all nodes, if any occurred.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


RE: Query regarding Ignite unit tests

2018-12-03 Thread Stanislav Lukyanov
Hi,

This is better asked on the dev list – added that to the To, and Bcc’ed the 
user list.

I actually don’t think you can run tests for a specific module – you can run 
either a single test, a single test suite, or all of them.
I would usually either run a single test from IDEA or run all tests via 
TeamCity https://ci.ignite.apache.org.

Igniters, please help Namrata here with the best practices of working with 
tests.

Stan
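For reference, with a standard Maven Surefire setup a single test class can usually be run against one module only (the module path and test class below are placeholders, not verified against Ignite's build):

```shell
# Run a single test class from one module only (-pl limits the reactor)
mvn test -pl modules/core -Dtest=SomeSelfTest

# Run a single test method of that class
mvn test -pl modules/core -Dtest=SomeSelfTest#testSomething
```

Whether this works depends on how the project's Surefire configuration wires tests into suites, so running from the IDE or TeamCity may still be the more reliable route.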

From: Namrata Bhave
Sent: 3 December 2018 14:54
To: u...@ignite.apache.org
Subject: Query regarding Ignite unit tests

Hi,

I have recently started working with Apache Ignite. The build on x86 Ubuntu 16.04 
is complete. However, while running tests using the `mvn test` command, the 
execution gets stuck while running the `ignite-core` module.
Hence I started running tests on individual modules, where similar behavior was 
seen in the ignite-indexing, ignite-clients and ignite-ml modules as well.
I have tried adjusting the Java heap settings and running on a system with 32GB RAM. 
Is there a way to avoid this and get complete test results? Also, is there any 
CI or similar environment where I can get the results of the unit tests?

Would appreciate any help provided.

Thanks and Regards,
Namrata



[GitHub] dspavlov commented on a change in pull request #86: IGNITE-10071 Queued and running builds hang in the TC bot

2018-12-03 Thread GitBox
dspavlov commented on a change in pull request #86: IGNITE-10071 Queued and 
running builds hang in the TC bot
URL: https://github.com/apache/ignite-teamcity-bot/pull/86#discussion_r238243619
 
 

 ##
 File path: 
ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/ignited/fatbuild/ProactiveFatBuildSync.java
 ##
 @@ -124,28 +130,34 @@ public synchronized SyncTask getSyncTask(ITeamcityConn 
conn) {
 protected String findMissingBuildsFromBuildRef(String srvId, ITeamcityConn 
conn) {
 int srvIdMaskHigh = ITeamcityIgnited.serverIdToInt(srvId);
 
-final int[] buildRefKeys = buildRefDao.getAllIds(srvIdMaskHigh);
+Stream buildRefs = 
buildRefDao.compactedBuildsForServer(srvIdMaskHigh);
 
 List buildsIdsToLoad = new ArrayList<>();
-int totalAskedToLoad = 0;
+AtomicInteger totalAskedToLoad = new AtomicInteger();
 
 Review comment:
  We don't need volatile semantics as the stream is not parallel, but we used 
AtomicInteger as a counter. I know that it is a kind of antipattern according to 
JCIP by Brian Goetz, but I find it more readable than creating an int[1] array 
and incrementing the value with v[0] += value. So I suggest keeping it as is 
for now and probably creating some common non-volatile counter later.
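The two styles being compared can be illustrated with a generic sketch (this is not the TC bot code, just the pattern):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

class CounterStyles {
    /** Counter mutated from a lambda via AtomicInteger: readable, but the
     *  volatile semantics are unnecessary for a sequential stream. */
    static int countWithAtomic() {
        AtomicInteger total = new AtomicInteger();
        IntStream.rangeClosed(1, 5).forEach(total::addAndGet);
        return total.get();
    }

    /** Same effect with a one-element array: cheaper, arguably less readable. */
    static int countWithArray() {
        int[] total = new int[1];
        IntStream.rangeClosed(1, 5).forEach(v -> total[0] += v);
        return total[0];
    }
}
```

Both are valid only because the stream is sequential; for a parallel stream the array variant would race, which is the trade-off being discussed.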


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (IGNITE-10506) Emphasize the need to close the query cursors in docs

2018-12-03 Thread Stanislav Lukyanov (JIRA)
Stanislav Lukyanov created IGNITE-10506:
---

 Summary: Emphasize the need to close the query cursors in docs
 Key: IGNITE-10506
 URL: https://issues.apache.org/jira/browse/IGNITE-10506
 Project: Ignite
  Issue Type: Bug
  Components: documentation
Reporter: Stanislav Lukyanov


Currently the need to close query cursors is mentioned only in two places:
- Queries docs on readme.io: 
https://apacheignite.readme.io/docs/cache-queries#section-querycursor
- QueryCursor::close javadoc 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/QueryCursor.html#close--

A failure to close a cursor may lead to severe resource leaks. It seems 
reasonable to better emphasize and explain how to approach closing cursors. 
Most importantly, we need to have a mention of that in the Java API section of 
the SQL readme.io docs (https://apacheignite-sql.readme.io/docs/java-sql-api).
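The pattern the docs should recommend is try-with-resources. Below is a self-contained illustration using a stand-in cursor class (Ignite's real QueryCursor is likewise AutoCloseable, but the class and names here are invented for the example):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

/** Stand-in for an AutoCloseable query cursor holding server-side resources. */
class FakeCursor implements AutoCloseable, Iterable<Integer> {
    private final List<Integer> rows = Arrays.asList(1, 2, 3);

    boolean closed;

    @Override public Iterator<Integer> iterator() { return rows.iterator(); }

    @Override public void close() { closed = true; } // releases resources
}

class CursorUsage {
    /** Sums the rows; the cursor is closed even if iteration throws. */
    static int sumAll(FakeCursor cur) {
        int sum = 0;

        try (FakeCursor c = cur) { // try-with-resources guarantees close()
            for (int row : c)
                sum += row;
        }

        return sum;
    }
}
```

The same shape applies to a real query: wrap the `cache.query(...)` result in a try-with-resources block so the cursor is closed whether or not iteration completes normally.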



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Historical rebalance

2018-12-03 Thread Vladimir Ozerov
Roman,

What is the advantage of your algorithm compared to the previous one? The
previous algorithm does almost the same, but without updating two separate
counters, and looks simpler to me. Only one update is sufficient - at
transaction commit. When a transaction starts we just read the currently active
update counter (LWM), which is enough for us to know where to start from.
Moreover, we do not need to track any kind of WAL pointers or write
additional WAL records.

Please note that we are trying to solve a more difficult problem - how to
rebalance as little WAL as possible in the case of long-running transactions.

On Mon, Dec 3, 2018 at 2:29 PM Roman Kondakov 
wrote:

> Vladimir,
>
> the difference between per-transaction basis and update-counters basis
> is the fact that at the first update we don't know the actual update
> counter of this update - we just count deltas on enlist phase. Actual
> update counter of this update will be assigned on transaction commit.
> But for per-transaction based the actual HWM is known for each
> transaction from the very beginning and this value is the same for
> primary and backups. Having this number it is very easy to find where
> transaction begins on any node.
>
>
> --
> Kind Regards
> Roman Kondakov
>
> On 03.12.2018 13:46, Vladimir Ozerov wrote:
> > Roman,
> >
> > We already track updates on per-transaction basis. The only difference is
> > that instead of doing a single "increment(1)" for transaction we do
> > "increment(X)" where X is number of updates in the given transaction.
> >
> > On Mon, Dec 3, 2018 at 1:16 PM Roman Kondakov  >
> > wrote:
> >
> >> Igor, Vladimir, Ivan,
> >>
> >> perhaps, we are focused too much on update counters. This feature was
> >> designed for the continuous queries and it may not be suited well for
> >> the historical rebalance. What if we would track updates on
> >> per-transaction basis instead of per-update basis? Let's consider two
> >> counters: low-water mark (LWM) and high-water mark (HWM) which should be
> >> added to each partition. They have the following properties:
> >>
> >> * HWM - is a plain atomic counter. When Tx makes its first write on
> >> primary node it does incrementAndGet for this counter and remembers
> >> obtained value within its context. This counter can be considered as tx
> >> id within current partition - transactions should maintain per-partition
> >> map of their HWM ids. WAL pointer to the first record should be remembered
> >> in this map. Also this id should be recorded to WAL data records.
> >>
> >> When Tx sends updates to backups it sends Tx HWM too. When backup
> >> receives this message from the primary node it takes HWM and does
> >> setIfGreater on the local HWM counter.
> >>
> >> * LWM - is a plain atomic counter. When Tx terminates (either with
> >> commit or rollback) it updates its local LWM in the same manner as
> >> update counters do it using holes tracking. For example, if partition's
> >> LWM = 10 now, and tx with id (HWM id) = 12 commits, we do not update
> >> partition LWM until tx with id = 11 is committed. When id = 11 is
> >> committed, LWM is set to 12. If we have LWM == N, this means that all
> >> transactions with id <= N have been terminated for the current partition
> >> and all data is already recorded in the local partition.
> >>
> >> Brief summary for both counters: HWM - means that partition has already
> >> seen at least one update of transactions with id <= HWM. LWM means that
> >> partition has all updates made by transactions with id <= LWM.
> >>
> >> LWM is always <= HWM.
> >>
> >> On checkpoint we should store only these two counters in checkpoint
> >> record. As an optimization we can also store a list of pending LWMs - ids
> >> which haven't been merged to LWM because of the holes in sequence.
> >>
> >> Historical rebalance:
> >>
> >> 1. Demander knows its LWM - all updates before it have been applied.
> >> Demander sends LWM to supplier.
> >>
> >> 2. Supplier finds the earliest checkpoint where HWM(supplier) <= LWM
> >> (demander)
> >>
> >> 3. Supplier starts moving forward on WAL until it finds first data
> >> record with HWM id = LWM (demander). From this point WAL can be
> >> rebalanced to demander.
> >>
> >> In this approach updates and checkpoints on primary and backup can be
> >> reordered in any way, but we can always find a proper point to read WAL
> >> from.
> >>
> >> Let's consider a couple of examples. In this examples transaction
> >> updates marked as w1(a) - transaction 1 updates key=a, c1 - transaction
> >> 1 is committed, cp(1, 0) - checkpoint with HWM=1 and  LWM=0. (HWM,LWM) -
> >> current counters after operation. (HWM,LWM[hole1, hole2]) - counters
> >> with holes in LWM.
> >>
> >>
> >> 1. Simple case with no reordering:
> >>
> >> PRIMARY
> >>
> -w1(a)---cp(1,0)---w2(b)w1(c)--c1c2-cp(2,2)
> >> (HWM,LWM)(1,0) (2,0)(2,0) (2,1)
>  (2,2)
> >> |  || |

Re: Historical rebalance

2018-12-03 Thread Roman Kondakov

Vladimir,

the difference between the per-transaction basis and the update-counters basis 
is that at the first update we don't know the actual update 
counter of this update - we just count deltas on the enlist phase. The actual 
update counter of this update will be assigned on transaction commit. 
But for the per-transaction approach the actual HWM is known for each 
transaction from the very beginning and this value is the same for 
primary and backups. Having this number it is very easy to find where a 
transaction begins on any node.



--
Kind Regards
Roman Kondakov

On 03.12.2018 13:46, Vladimir Ozerov wrote:

Roman,

We already track updates on per-transaction basis. The only difference is
that instead of doing a single "increment(1)" for transaction we do
"increment(X)" where X is number of updates in the given transaction.

On Mon, Dec 3, 2018 at 1:16 PM Roman Kondakov 
wrote:


Igor, Vladimir, Ivan,

perhaps, we are focused too much on update counters. This feature was
designed for the continuous queries and it may not be suited well for
the historical rebalance. What if we would track updates on
per-transaction basis instead of per-update basis? Let's consider two
counters: low-water mark (LWM) and high-water mark (HWM) which should be
added to each partition. They have the following properties:

* HWM - is a plain atomic counter. When Tx makes its first write on
primary node it does incrementAndGet for this counter and remembers
obtained value within its context. This counter can be considered as tx
id within current partition - transactions should maintain per-partition
map of their HWM ids. WAL pointer to the first record should be remembered
in this map. Also this id should be recorded to WAL data records.

When Tx sends updates to backups it sends Tx HWM too. When backup
receives this message from the primary node it takes HWM and does
setIfGreater on the local HWM counter.

* LWM - is a plain atomic counter. When Tx terminates (either with
commit or rollback) it updates its local LWM in the same manner as
update counters do it using holes tracking. For example, if partition's
LWM = 10 now, and tx with id (HWM id) = 12 commits, we do not update
partition LWM until tx with id = 11 is committed. When id = 11 is
committed, LWM is set to 12. If we have LWM == N, this means that all
transactions with id <= N have been terminated for the current partition
and all data is already recorded in the local partition.

Brief summary for both counters: HWM - means that partition has already
seen at least one update of transactions with id <= HWM. LWM means that
partition has all updates made by transactions with id <= LWM.

LWM is always <= HWM.

On checkpoint we should store only these two counters in checkpoint
record. As an optimization we can also store a list of pending LWMs - ids
which haven't been merged to LWM because of the holes in sequence.

Historical rebalance:

1. Demander knows its LWM - all updates before it have been applied.
Demander sends LWM to supplier.

2. Supplier finds the earliest checkpoint where HWM(supplier) <= LWM
(demander)

3. Supplier starts moving forward on WAL until it finds first data
record with HWM id = LWM (demander). From this point WAL can be
rebalanced to demander.

In this approach updates and checkpoints on primary and backup can be
reordered in any way, but we can always find a proper point to read WAL
from.
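The counter behavior described above can be sketched as a small self-contained Java class (class name, holes representation and API are my own illustration, not Ignite code):

```java
import java.util.TreeSet;

/**
 * Toy per-partition HWM/LWM tracker: HWM is bumped on a transaction's first
 * write, LWM advances only when every smaller tx id has terminated -
 * out-of-order terminations are kept as "holes" until the gap closes.
 */
class TxWatermarks {
    private long hwm;   // highest tx id seen by this partition
    private long lwm;   // all tx ids <= lwm have terminated

    /** Ids terminated out of order, waiting to be merged into LWM. */
    private final TreeSet<Long> pending = new TreeSet<>();

    /** Primary: assign a tx id on the transaction's first write. */
    synchronized long nextTxId() {
        return ++hwm;
    }

    /** Backup: setIfGreater on receiving the primary's HWM. */
    synchronized void onHwmReceived(long txId) {
        if (txId > hwm)
            hwm = txId;
    }

    /** Commit or rollback: merge the id into LWM, closing holes if possible. */
    synchronized void onTxTerminated(long txId) {
        pending.add(txId);

        while (pending.remove(lwm + 1))
            lwm++;
    }

    synchronized long hwm() { return hwm; }
    synchronized long lwm() { return lwm; }
}
```

With this sketch, terminating tx 2 before tx 1 leaves LWM unchanged until tx 1 terminates, matching the "LWM = 10, tx 12 commits" example above.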

Let's consider a couple of examples. In this examples transaction
updates marked as w1(a) - transaction 1 updates key=a, c1 - transaction
1 is committed, cp(1, 0) - checkpoint with HWM=1 and  LWM=0. (HWM,LWM) -
current counters after operation. (HWM,LWM[hole1, hole2]) - counters
with holes in LWM.


1. Simple case with no reordering:

PRIMARY
-w1(a)---cp(1,0)---w2(b)w1(c)--c1c2-cp(2,2)
(HWM,LWM)(1,0) (2,0)(2,0) (2,1) (2,2)
|  || ||
BACKUP
--w1(a)-w2(b)w1(c)---cp(2,0)c1c2-cp(2,2)
(HWM,LWM)(1,0) (2,0)(2,0) (2,1) (2,2)


In this case if backup failed before c1 it will receive all updates from
the beginning (HWM=0).
If it fails between c1 and c2, it will receive WAL from primary's cp(1,0),
because tx with id=1 is fully processed on backup: HWM(supplier cp(1,0))=1
== LWM(demander)=1
if backup fails after c2, it will receive nothing because it has all
updates HWM(supplier)=2 == LWM(demander)=2



2. Case with reordering

PRIMARY
-w1(a)---cp(1,0)---w2(b)--cp(2,0)--w1(c)--c1-c2---cp(2,2)
(HWM,LWM)(1,0) (2,0)   (2,0)
  (2,1)  (2,2)
  \_   |   |
\   |
\___   |   |
  \__|___
\__|__ |
   |   \
   |  \|
  

Re: [VOTE] Apache Ignite 2.7.0 RC2

2018-12-03 Thread Vyacheslav Daradur
+1

I've downloaded and built the sources, run several examples.
Also, I checked several times the test
`testAtomicOnheapTwoBackupAsyncFullSync` related to the issue
discussed in a separated thread - OK.
On Sun, Dec 2, 2018 at 11:26 PM Dmitriy Pavlov  wrote:
>
> +1 binding
>
> I've checked new RC using Apache Ignite TeamCity Bot. The bot now uses
> Apache Ignite V2.7.0-RC2, tested locally & deployed to the server.
>
> PS Nikolay, thanks for sharing the link.
>
> Sun, Dec 2, 2018 at 21:58, Nikolay Izhikov :
>
> > Hello, Dmitriy
> >
> > RC2 artifacts are here -
> > https://repository.apache.org/content/repositories/orgapacheignite-1435/
> >
> > On Sun, 02/12/2018 at 01:08 +0300, Dmitriy Pavlov wrote:
> > > Nikolay, Igniters,
> > >
> > > Could you please advice where can I find a staging for RC-2?
> > >
> > > I can't find it in https://repository.apache.org/content/repositories/
> > >
> > > Or should I reuse the old one?
> > > https://repository.apache.org/content/repositories/orgapacheignite-1431/
> > >
> > > Thank you in advance.
> > >
> > > Sincerely,
> > > Dmitriy Pavlov
> > >
> > > Sat, Dec 1, 2018 at 09:48, Pavel Tupitsyn :
> > >
> > > > +1
> > > >
> > > > Downloaded sources, build Java and .NET parts, ran examples.
> > > > There is a minor issue with .NET Core examples, compiler warning is
> > > > displayed (certainly not a blocker) [1]
> > > >
> > > > [1] https://issues.apache.org/jira/browse/IGNITE-10500
> > > >
> > > > On Sat, Dec 1, 2018 at 12:47 AM Nikolay Izhikov 
> > > > wrote:
> > > >
> > > > > Igniters,
> > > > >
> > > > > We've uploaded a 2.7.0 release candidate to
> > > > >
> > > > > https://dist.apache.org/repos/dist/dev/ignite/2.7.0-rc2/
> > > > >
> > > > > Git tag name is 2.7.0-rc2
> > > > >
> > > > > This release includes the following changes:
> > > > >
> > > > > Apache Ignite In-Memory Database and Caching Platform 2.7
> > > > > -
> > > > >
> > > > > Ignite:
> > > > > * Added experimental support for multi-version concurrency control
> > with
> > > > > snapshot isolation
> > > > >   - available for both cache API and SQL
> > > > >   - use CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT to enable it
> > > > >   - not production ready, data consistency is not guaranteed in case
> > of
> > > > > node failures
> > > > > * Implemented Transparent Data Encryption based on JKS certificates
> > > > > * Implemented Node.JS Thin Client
> > > > > * Implemented Python Thin Client
> > > > > * Implemented PHP Thin Client
> > > > > * Ignite start scripts now support Java 9 and higher
> > > > > * Added ability to set WAL history size in bytes
> > > > > * Added SslContextFactory.protocols and
> > SslContextFactory.cipherSuites
> > > > > properties to control which SSL encryption algorithms can be used
> > > > > * Added JCache 1.1 compliance
> > > > > * Added IgniteCompute.withNoResultCache method with semantics
> > similar to
> > > > > ComputeTaskNoResultCache annotation
> > > > > * Spring Data 2.0 is now supported in the separate module
> > > > > 'ignite-spring-data_2.0'
> > > > > * Added monitoring of critical system workers
> > > > > * Added ability to provide custom implementations of
> > ExceptionListener
> > > >
> > > > for
> > > > > JmsStreamer
> > > > > * Ignite KafkaStreamer was upgraded to use new KafkaConsumer
> > configuration
> > > > > * S3 IP Finder now supports subfolder usage instead of bucket root
> > > > > * Improved dynamic cache start speed
> > > > > * Improved checkpoint performance by decreasing mark duration.
> > > > > * Added ability to manage compression level for compressed WAL
> > archives.
> > > > > * Added metrics for Entry Processor invocations.
> > > > > * Added JMX metrics: ClusterMetricsMXBean.getTotalBaselineNodes and
> > > > > ClusterMetricsMXBean.getActiveBaselineNodes
> > > > > * Node uptime metric now includes days count
> > > > > * Exposed info about thin client connections through JMX
> > > > > * Introduced new system property IGNITE_REUSE_MEMORY_ON_DEACTIVATE to
> > > > > enable reuse of allocated memory on node deactivation (disabled by
> > > >
> > > > default)
> > > > > * Optimistic transaction now will be properly rolled back if waiting
> > too
> > > > > long for a new topology on remap
> > > > > * ScanQuery with setLocal flag now checks if the partition is
> > actually
> > > > > present on local node
> > > > > * Improved cluster behaviour when a left node does not cause
> > partition
> > > > > affinity assignment changes
> > > > > * Interrupting user thread during partition initialization will no
> > longer
> > > > > cause node to stop
> > > > > * Fixed problem when partition lost event was not triggered if
> > multiple
> > > > > nodes left cluster
> > > > > * Fixed massive node drop from the cluster on temporary network
> > issues
> > > > > * Fixed service redeployment on cluster reactivation
> > > > > * Fixed client node stability under ZooKeeper discovery
> > > > > * Massive performance and stability 

Re: Apache Ignite 2.7. Last Mile

2018-12-03 Thread Vladimir Ozerov
Confirming. The test never failed in AI 2.7 even though it contains the
mentioned MVCC commit.

On Mon, Dec 3, 2018 at 1:36 PM Vyacheslav Daradur 
wrote:

> Guys, I checked that `testAtomicOnheapTwoBackupAsyncFullSync` failed
> in the master (as described by Ivan), but it passes in branch ignite-2.7
> (tag 2.7.0-rc2), so this shouldn't block the release.
>
> Ivan, were you able to reproduce this issue in ignite-2.7 branch?
>
>
> On Mon, Dec 3, 2018 at 1:03 PM Ivan Fedotov  wrote:
> >
> > Nikolay,
> >
> > I think that end-user may face the problem during call IgniteCache#invoke
> > on a cache with a registered continuous query if cache's configuration is
> as
> > in the failed test: [PARTITIONED, ATOMIC, FULL_SYNCH, 2 backups].
> >
> > I've found that failure has been introduced by MVCC commit [1]. As I
> > understand the issue relates to the process of updating metadata, when
> the
> > future of binary metadata registration hangs because of an unclear
> reason.
> >
> > I don't know if the issue is a blocker, but it seems to be a regression
> > because the test passed on Ignite 2.6
> >
> > What do you think?
> >
> > [1]
> >
> https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8
> >
> > > Mon, Dec 3, 2018 at 11:14, Nikolay Izhikov :
> >
> > > Ivan, please, clarify.
> > >
> > > How your investigation are related to 2.7 release?
> > > Do you think it's a release blocker?
> > > If yes, please, describe impact to users and how users can reproduce
> this
> > > issue.
> > >
> > > > Mon, Dec 3, 2018, 9:30 Ivan Fedotov ivanan...@gmail.com:
> > >
> > > > I've created the PR 
> which
> > > > includes changes <
> https://github.com/1vanan/ignite/commits/before-MVCC>
> > > > just before integration MVCC with Continuous Query and from the
> TeamCity
> > > > <
> > > >
> > >
> https://ci.ignite.apache.org/viewLog.html?buildId=2434057=buildResultsDiv=IgniteTests24Java8_ContinuousQuery1
> > > > >
> > > > it is clear that before these changes the
> > > > test testAtomicOnheapTwoBackupAsyncFullSync is green.
> > > >
> > > > Also Roman Kondakov gave his view on this problem in the comments
> > > > . Now the
> problem
> > > > becomes more understandable, but the root reason is still unclear.
> > > >
> > > > Maybe some of you have suggestions as to why the hang of threads on the
> > > > binary metadata registration future appears?
> > > >
> > > > > Fri, Nov 30, 2018 at 13:48, Ivan Fedotov :
> > > >
> > > > > Igor, thank you for explanation.
> > > > >
> > > > > Now it seems that when the one thread tries to invoke
> > > > > GridCacheMapEntry#touch, the another one makes
> > > > > GridCacheProcessor#stopCache. If I am wrong, please feel free to
> > > correct
> > > > me.
> > > > >
> > > > > But it still does not clear for me why this fail appears after
> commit
> > > > > <
> > > >
> > >
> https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8
> > > >
> > > > which
> > > > > is about MVCC. Moreover, NPE appears only with
> BinaryObjectException,
> > > and
> > > > > when the test is green, I can not find NPE in the log.
> > > > >
> > > > > Now I tried to run test locally 1000 times on the version before
> MVCC
> > > and
> > > > > could not find error on this concretely case (but it exists the
> another
> > > > > one
> > > > > <
> > > >
> > >
> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java#L426
> > > >
> > > > which
> > > > > is about assertion on received events).
> > > > >
> > > > > Fri, Nov 30, 2018 at 13:37, Roman Kondakov
> > >  > > > >:
> > > > >
> > > > >> Nikolay,
> > > > >>
> > > > >> I couldn't quickly find the root cause of this problem because
> I'm not
> > > > >> an expert in the binary metadata flow. I think community should
> decide
> > > > >> whether this is a release blocker or not.
> > > > >>
> > > > >>
> > > > >> --
> > > > >> Kind Regards
> > > > >> Roman Kondakov
> > > > >>
> > > > >> On 30.11.2018 13:23, Nikolay Izhikov wrote:
> > > > >> > Hello, Roman.
> > > > >> >
> > > > >> > Is this issue blocks the 2.7 release?
> > > > >> >
> > > > >> > Fri, Nov 30, 2018, 13:19 Roman Kondakov
> > > kondako...@mail.ru.invalid
> > > > :
> > > > >> >
> > > > >> >> Hi all!
> > > > >> >>
> > > > >> >> I've reproduced this problem locally and attached the log to
> the
> > > > ticket
> > > > >> >> in my comment [1].
> > > > >> >>
> > > > >> >> As Igor noted, NPE there is caused by node stop in the end of
> the
> > > > test.
> > > > >> >> The real problem here seems to be in the binary metadata
> > > registration
> > > > >> flow.
> > > > >> >>
> > > > >> >>
> > > > >> >> [1]
> > > > >> >>
> > > > >> >>
> > > > >>
> > > >
> > >
> 

Re: Historical rebalance

2018-12-03 Thread Vladimir Ozerov
Roman,

We already track updates on a per-transaction basis. The only difference is
that instead of doing a single "increment(1)" per transaction we do
"increment(X)", where X is the number of updates in the given transaction.

On Mon, Dec 3, 2018 at 1:16 PM Roman Kondakov 
wrote:

> Igor, Vladimir, Ivan,
>
> perhaps, we are focused too much on update counters. This feature was
> designed for the continuous queries and it may not be suited well for
> the historical rebalance. What if we would track updates on
> per-transaction basis instead of per-update basis? Let's consider two
> counters: low-water mark (LWM) and high-water mark (HWM) which should be
> added to each partition. They have the following properties:
>
> * HWM is a plain atomic counter. When Tx makes its first write on
> primary node it does incrementAndGet for this counter and remembers
> obtained value within its context. This counter can be considered as tx
> id within current partition - transactions should maintain per-partition
> map of their HWM ids. WAL pointer to the first record should remembered
> in this map. Also this id should be recorded to WAL data records.
>
> When Tx sends updates to backups it sends Tx HWM too. When backup
> receives this message from the primary node it takes HWM and do
> setIfGreater on the local HWM counter.
>
> * LWM is a plain atomic counter. When Tx terminates (either with
> commit or rollback) it updates its local LWM in the same manner as
> update counters do it using holes tracking. For example, if partition's
> LWM = 10 now, and tx with id (HWM id) = 12 commits, we do not update
> partition LWM until tx with id = 11 is committed. When id = 11 is
> committed, LWM is set to 12. If we have LWM == N, this means that all
> transactions with id <= N have been terminated for the current partition
> and all data is already recorded in the local partition.
>
> Brief summary for both counters: HWM - means that partition has already
> seen at least one update of transactions with id <= HWM. LWM means that
> partition has all updates made by transactions with id <= LWM.
>
> LWM is always <= HWM.
>
> On checkpoint we should store only these two counters in checkpoint
> record. As optimization we can also store list of pending LWMs - ids
> which haven't been merged to LWM because of the holes in sequence.
>
> Historical rebalance:
>
> 1. Demander knows its LWM - all updates before it have been applied.
> Demander sends LWM to supplier.
>
> 2. Supplier finds the earliest checkpoint where HWM(supplier) <= LWM
> (demander)
>
> 3. Supplier starts moving forward on WAL until it finds first data
> record with HWM id = LWM (demander). From this point WAL can be
> rebalanced to demander.
>
> In this approach updates and checkpoints on primary and backup can be
> reordered in any way, but we can always find a proper point to read WAL
> from.
>
> Let's consider a couple of examples. In these examples transaction
> updates marked as w1(a) - transaction 1 updates key=a, c1 - transaction
> 1 is committed, cp(1, 0) - checkpoint with HWM=1 and  LWM=0. (HWM,LWM) -
> current counters after operation. (HWM,LWM[hole1, hole2]) - counters
> with holes in LWM.
>
>
> 1. Simple case with no reordering:
>
> PRIMARY
> -w1(a)---cp(1,0)---w2(b)w1(c)--c1c2-cp(2,2)
> (HWM,LWM)(1,0) (2,0)(2,0) (2,1) (2,2)
>|  || ||
> BACKUP
> --w1(a)-w2(b)w1(c)---cp(2,0)c1c2-cp(2,2)
> (HWM,LWM)(1,0) (2,0)(2,0) (2,1) (2,2)
>
>
> In this case if backup failed before c1 it will receive all updates from
> the beginning (HWM=0).
> If it fails between c1 and c2, it will receive WAL from primary's cp(1,0),
> because tx with id=1 is fully processed on backup: HWM(supplier cp(1,0))=1
> == LWM(demander)=1
> if backup fails after c2, it will receive nothing because it has all
> updates HWM(supplier)=2 == LWM(demander)=2
>
>
>
> 2. Case with reordering
>
> PRIMARY
> -w1(a)---cp(1,0)---w2(b)--cp(2,0)--w1(c)--c1-c2---cp(2,2)
> (HWM,LWM)(1,0)     (2,0)           (2,0)  (2,1)  (2,2)
>
> BACKUP
> -w2(b)---w1(a)cp(2,0)---w1(c)c2---c1-cp(2,2)
> (HWM,LWM)   (2,0)   (2,0)  (2,0)  (2,0[2])  (2,2)
>
>
> Note here we have a hole on backup when tx2 has committed earlier than tx1
> and LWM wasn't changed at this moment.
>
> In last case if backup is failed before c1, the entire WAL will be
> supplied because LWM=0 until this moment.
> If 

Re: Apache Ignite 2.7. Last Mile

2018-12-03 Thread Vyacheslav Daradur
Guys, I checked that `testAtomicOnheapTwoBackupAsyncFullSync` failed
in the master (as described by Ivan), but it passes in branch ignite-2.7
(tag 2.7.0-rc2), so this shouldn't block the release.

Ivan, were you able to reproduce this issue in ignite-2.7 branch?


On Mon, Dec 3, 2018 at 1:03 PM Ivan Fedotov  wrote:
>
> Nikolay,
>
> I think that end-user may face the problem during call IgniteCache#invoke
> on a cache with a registered continuous query if cache's configuration is as
> in the failed test: [PARTITIONED, ATOMIC, FULL_SYNCH, 2 backups].
>
> I've found that failure has been introduced by MVCC commit [1]. As I
> understand the issue relates to the process of updating metadata, when the
> future of binary metadata registration hangs because of an unclear reason.
>
> I don't know if the issue is a blocker, but it seems to be a regression because
> the test passed on Ignite 2.6
>
> What do you think?
>
> [1]
> https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8
>
> Mon, Dec 3, 2018 at 11:14, Nikolay Izhikov :
>
> > Ivan, please, clarify.
> >
> > How your investigation are related to 2.7 release?
> > Do you think it's a release blocker?
> > If yes, please, describe impact to users and how users can reproduce this
> > issue.
> >
> > Mon, Dec 3, 2018, 9:30 Ivan Fedotov ivanan...@gmail.com:
> >
> > > I've created the PR  which
> > > includes changes 
> > > just before integration MVCC with Continuous Query and from the TeamCity
> > > <
> > >
> > https://ci.ignite.apache.org/viewLog.html?buildId=2434057=buildResultsDiv=IgniteTests24Java8_ContinuousQuery1
> > > >
> > > it is clear that before these changes the
> > > test testAtomicOnheapTwoBackupAsyncFullSync is green.
> > >
> > > Also Roman Kondakov gave his view on this problem in the comments
> > > . Now the problem
> > > becomes more understandable, but the root reason is still unclear.
> > >
> > > Maybe some of you have suggestions as to why the hang of threads on the
> > > binary metadata registration future appears?
> > >
> > > Fri, Nov 30, 2018 at 13:48, Ivan Fedotov :
> > >
> > > > Igor, thank you for explanation.
> > > >
> > > > Now it seems that when the one thread tries to invoke
> > > > GridCacheMapEntry#touch, the another one makes
> > > > GridCacheProcessor#stopCache. If I am wrong, please feel free to
> > correct
> > > me.
> > > >
> > > > But it still does not clear for me why this fail appears after commit
> > > > <
> > >
> > https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8
> > >
> > > which
> > > > is about MVCC. Moreover, NPE appears only with BinaryObjectException,
> > and
> > > > when the test is green, I can not find NPE in the log.
> > > >
> > > > Now I tried to run test locally 1000 times on the version before MVCC
> > and
> > > > could not find error on this concretely case (but it exists the another
> > > > one
> > > > <
> > >
> > https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java#L426
> > >
> > > which
> > > > is about assertion on received events).
> > > >
> > > > Fri, Nov 30, 2018 at 13:37, Roman Kondakov
> >  > > >:
> > > >
> > > >> Nikolay,
> > > >>
> > > >> I couldn't quickly find the root cause of this problem because I'm not
> > > >> an expert in the binary metadata flow. I think community should decide
> > > >> whether this is a release blocker or not.
> > > >>
> > > >>
> > > >> --
> > > >> Kind Regards
> > > >> Roman Kondakov
> > > >>
> > > >> On 30.11.2018 13:23, Nikolay Izhikov wrote:
> > > >> > Hello, Roman.
> > > >> >
> > > >> > Is this issue blocks the 2.7 release?
> > > >> >
> > > >> > Fri, Nov 30, 2018, 13:19 Roman Kondakov
> > kondako...@mail.ru.invalid
> > > :
> > > >> >
> > > >> >> Hi all!
> > > >> >>
> > > >> >> I've reproduced this problem locally and attached the log to the
> > > ticket
> > > >> >> in my comment [1].
> > > >> >>
> > > >> >> As Igor noted, NPE there is caused by node stop in the end of the
> > > test.
> > > >> >> The real problem here seems to be in the binary metadata
> > registration
> > > >> flow.
> > > >> >>
> > > >> >>
> > > >> >> [1]
> > > >> >>
> > > >> >>
> > > >>
> > >
> > https://issues.apache.org/jira/browse/IGNITE-10376?focusedCommentId=16704510=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16704510
> > > >> >>
> > > >> >> --
> > > >> >> Kind Regards
> > > >> >> Roman Kondakov
> > > >> >>
> > > >> >> On 30.11.2018 11:56, Seliverstov Igor wrote:
> > > >> >>> Null pointer there due to cache stop. Look at
> > > GridCacheContext#cleanup
> > > >> >>> (GridCacheContext.java:2050)
> > > >> >>> which is called by GridCacheProcessor#stopCache
> > > >> >>> (GridCacheProcessor.java:1372)
> > > >> >>>
> > > >> >>> That's why 

Re: Is it time to move forward to JUnit4 (5)?

2018-12-03 Thread Павлухин Иван
Hi Oleg,

I noticed that GridAbstractTest is now capable of running JUnit 4 tests.
What are the current recommendations for writing new tests? Can we use
JUnit 4 annotations for new tests?
Mon, Nov 12, 2018 at 19:58, oignatenko :
>
> Hi Ivan,
>
> I am currently testing approach you used in pull/5354 in the "pilot"
> sub-task with examples tests (IGNITE-10174).
>
> So far it looks more and more like the way to go. The most promising thing I
> observed is that after I changed classes in our test framework the way you
> did, execution of (unchanged) examples tests went exactly the same as it was
> before changes.
>
> This indicates that existing tests won't be affected, making it indeed low
> risk.
>
> After that I converted examples tests to Junit 4 by adding @RunWith and
> @Test annotations and tried a few, and these looked okay.
>
> Currently I am running full examples test suite and after it is over I will
> compare results to the reference list I made by running it prior to
> migration.
>
> regards, Oleg
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Historical rebalance

2018-12-03 Thread Roman Kondakov

Igor, Vladimir, Ivan,

perhaps we are focusing too much on update counters. This feature was
designed for continuous queries and may not be well suited for
historical rebalance. What if we tracked updates on a
per-transaction basis instead of a per-update basis? Let's consider two
counters: low-water mark (LWM) and high-water mark (HWM), which should be
added to each partition. They have the following properties:


* HWM is a plain atomic counter. When a Tx makes its first write on
the primary node it does incrementAndGet on this counter and remembers
the obtained value within its context. This counter can be considered a tx
id within the current partition - transactions should maintain a per-partition
map of their HWM ids. The WAL pointer to the first record should be remembered
in this map. This id should also be recorded in WAL data records.


When a Tx sends updates to backups it sends the Tx HWM too. When a backup
receives this message from the primary node it takes the HWM and does
setIfGreater on its local HWM counter.


* LWM is a plain atomic counter. When a Tx terminates (either with
commit or rollback) it updates the local LWM in the same manner as
update counters do, using hole tracking. For example, if the partition's
LWM = 10 now, and a tx with id (HWM id) = 12 commits, we do not update
the partition LWM until the tx with id = 11 is committed. When id = 11 is
committed, LWM is set to 12. If we have LWM == N, this means that all
transactions with id <= N have terminated for the current partition
and all their data is already recorded in the local partition.


Brief summary for both counters: HWM means that the partition has already
seen at least one update of transactions with id <= HWM. LWM means that the
partition has all updates made by transactions with id <= LWM.


LWM is always <= HWM.

On checkpoint we should store only these two counters in the checkpoint
record. As an optimization we can also store the list of pending LWMs - ids
which haven't been merged into LWM because of holes in the sequence.
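The two counters described above can be sketched in code. This is a minimal, hypothetical illustration (class and method names such as `PartitionCounters` are made up, not Ignite's actual implementation):

```java
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical per-partition HWM/LWM pair, as described in this thread. */
class PartitionCounters {
    /** Highest tx id this partition has seen at least one update of. */
    private final AtomicLong hwm = new AtomicLong();

    /** All transactions with id <= lwm have terminated on this partition. */
    private long lwm;

    /** Terminated tx ids not yet merged into lwm ("holes" ahead of it). */
    private final ConcurrentSkipListSet<Long> pending = new ConcurrentSkipListSet<>();

    /** Primary node: called on the first write of a transaction. */
    long assignTxId() {
        return hwm.incrementAndGet();
    }

    /** Backup node: called when an update carrying the primary's HWM arrives. */
    void onBackupUpdate(long primaryHwm) {
        hwm.accumulateAndGet(primaryHwm, Math::max); // setIfGreater
    }

    /** Called when the tx with the given id commits or rolls back. */
    synchronized void onTxTerminated(long txId) {
        pending.add(txId);
        // Merge consecutive ids into lwm; gaps stay in 'pending'.
        while (pending.remove(lwm + 1))
            lwm++;
    }

    synchronized long lwm() {
        return lwm;
    }
}
```

With this sketch, transactions that terminate out of order (e.g. tx2 and tx3 before tx1) leave LWM unchanged until the gap closes, which matches the (HWM,LWM[hole]) notation used in the examples.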


Historical rebalance:

1. The demander knows its LWM - all updates before it have been applied.
The demander sends its LWM to the supplier.


2. The supplier finds the earliest checkpoint where HWM(supplier) <=
LWM(demander).


3. The supplier starts moving forward through the WAL until it finds the first
data record with HWM id = LWM(demander). From this point the WAL can be
rebalanced to the demander.
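Step 2 might be sketched as follows. This is hypothetical code (names like `Checkpoint` and `walStart` are made up); it relies on the observation that HWM only grows over time, so any checkpoint whose stored HWM is <= the demander's LWM is a safe starting point, and it keeps the latest such checkpoint to minimize the WAL that has to be scanned:

```java
import java.util.List;

/** Hypothetical checkpoint descriptor: the HWM stored in the checkpoint
 *  record and a pointer to the checkpoint's position in the WAL. */
record Checkpoint(long hwm, long walPtr) {}

class HistoricalRebalance {
    /**
     * Returns the WAL pointer to start historical rebalance from, or -1 if
     * no checkpoint qualifies and a full rebalance is needed. A checkpoint
     * with hwm <= demanderLwm is safe: every transaction it has seen is
     * already fully applied on the demander.
     */
    static long walStart(List<Checkpoint> checkpoints /* oldest first */, long demanderLwm) {
        long ptr = -1;
        for (Checkpoint cp : checkpoints) {
            if (cp.hwm() <= demanderLwm)
                ptr = cp.walPtr(); // keep the latest safe checkpoint
            else
                break; // HWM only grows, so later checkpoints cannot qualify
        }
        return ptr;
    }
}
```

In example 1 below, a backup failing between c1 and c2 (LWM=1) would get the pointer of cp(1,0), since HWM(cp(1,0))=1 <= 1.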


In this approach updates and checkpoints on primary and backup can be 
reordered in any way, but we can always find a proper point to read WAL 
from.


Let's consider a couple of examples. In these examples transaction
updates are marked as w1(a) - transaction 1 updates key=a, c1 - transaction
1 is committed, cp(1,0) - checkpoint with HWM=1 and LWM=0. (HWM,LWM) -
current counters after an operation. (HWM,LWM[hole1, hole2]) - counters
with holes in LWM.



1. Simple case with no reordering:

PRIMARY 
-w1(a)---cp(1,0)---w2(b)w1(c)--c1c2-cp(2,2)
(HWM,LWM)(1,0) (2,0)(2,0) (2,1) (2,2)
  |  || ||
BACKUP 
--w1(a)-w2(b)w1(c)---cp(2,0)c1c2-cp(2,2)
(HWM,LWM)(1,0) (2,0)(2,0) (2,1) (2,2)

  
In this case, if the backup fails before c1 it will receive all updates from
the beginning (HWM=0).

If it fails between c1 and c2, it will receive WAL from the primary's cp(1,0),
because the tx with id=1 is fully processed on the backup: HWM(supplier
cp(1,0))=1 == LWM(demander)=1.
If the backup fails after c2, it will receive nothing because it already has
all updates: HWM(supplier)=2 == LWM(demander)=2.



2. Case with reordering

PRIMARY
-w1(a)---cp(1,0)---w2(b)--cp(2,0)--w1(c)--c1-c2---cp(2,2)
(HWM,LWM)(1,0)     (2,0)           (2,0)  (2,1)  (2,2)

BACKUP
-w2(b)---w1(a)cp(2,0)---w1(c)c2---c1-cp(2,2)
(HWM,LWM)   (2,0)   (2,0)  (2,0)  (2,0[2])  (2,2)


Note that here we have a hole on the backup when tx2 commits earlier than tx1,
so LWM is not changed at that moment.

In this last case, if the backup fails before c1, the entire WAL will be
supplied because LWM=0 until that moment.
If the backup fails after c1 there is nothing to rebalance, because
HWM(supplier)=2 == LWM(demander)=2.


What do you think?


--
Kind Regards
Roman Kondakov

On 30.11.2018 2:01, Seliverstov Igor wrote:

Vladimir,

Look at my example:

One active transaction (Tx1, which does opX ops) is still open while another
tx (Tx2, which does opX' ops) finishes with uc4:


[GitHub] SomeFire commented on a change in pull request #86: IGNITE-10071 Queued and running builds hang in the TC bot

2018-12-03 Thread GitBox
SomeFire commented on a change in pull request #86: IGNITE-10071 Queued and 
running builds hang in the TC bot
URL: https://github.com/apache/ignite-teamcity-bot/pull/86#discussion_r238206522
 
 

 ##
 File path: 
ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/ignited/fatbuild/ProactiveFatBuildSync.java
 ##
 @@ -124,28 +130,34 @@ public synchronized SyncTask getSyncTask(ITeamcityConn 
conn) {
 protected String findMissingBuildsFromBuildRef(String srvId, ITeamcityConn 
conn) {
 int srvIdMaskHigh = ITeamcityIgnited.serverIdToInt(srvId);
 
-final int[] buildRefKeys = buildRefDao.getAllIds(srvIdMaskHigh);
+Stream buildRefs = 
buildRefDao.compactedBuildsForServer(srvIdMaskHigh);
 
 List buildsIdsToLoad = new ArrayList<>();
-int totalAskedToLoad = 0;
+AtomicInteger totalAskedToLoad = new AtomicInteger();
 
 Review comment:
  Why do we need `AtomicInteger` here? Maybe the List should be concurrent too?
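Background for the question: a Java lambda can only capture effectively final local variables, so a plain `int` counter cannot be incremented from inside a stream/`forEach` callback; wrapping it in `AtomicInteger` (or an `int[]`) is the standard workaround even in single-threaded code. A small standalone illustration (not the bot's actual code; the names are made up):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

class CaptureDemo {
    /** Counts items starting with "build" using a lambda-captured counter. */
    static int countBuilds(List<String> items) {
        AtomicInteger total = new AtomicInteger();
        // "int total = 0; items.forEach(i -> total++);" would not compile:
        // locals captured by a lambda must be effectively final.
        items.forEach(i -> {
            if (i.startsWith("build"))
                total.incrementAndGet();
        });
        return total.get();
    }
}
```

Whether the surrounding List also needs to be concurrent depends on whether the stream is processed in parallel, which is the reviewer's follow-up point.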


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] SomeFire commented on a change in pull request #86: IGNITE-10071 Queued and running builds hang in the TC bot

2018-12-03 Thread GitBox
SomeFire commented on a change in pull request #86: IGNITE-10071 Queued and 
running builds hang in the TC bot
URL: https://github.com/apache/ignite-teamcity-bot/pull/86#discussion_r238204941
 
 

 ##
 File path: build.gradle
 ##
 @@ -41,6 +41,7 @@ allprojects {
 ext {
 jettyVer = '9.4.12.v20180830'
 
+//ignVer = '2.6.0'
 
 Review comment:
   Do we need it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] SomeFire commented on a change in pull request #86: IGNITE-10071 Queued and running builds hang in the TC bot

2018-12-03 Thread GitBox
SomeFire commented on a change in pull request #86: IGNITE-10071 Queued and 
running builds hang in the TC bot
URL: https://github.com/apache/ignite-teamcity-bot/pull/86#discussion_r238206934
 
 

 ##
 File path: 
ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/teamcity/ignited/fatbuild/ProactiveFatBuildSync.java
 ##
 @@ -239,16 +256,32 @@ public void invokeLaterFindMissingByBuildRef(String 
srvName, ITeamcityConn conn)
  */
 @Nullable
 public FatBuildCompacted loadBuild(ITeamcityConn conn, int buildId,
-   @Nullable FatBuildCompacted 
existingBuild,
-   SyncMode mode) {
+@Nullable FatBuildCompacted existingBuild,
+SyncMode mode) {
 if (existingBuild != null && !existingBuild.isOutdatedEntityVersion()) 
{
-boolean finished = !existingBuild.isRunning(compactor) && 
!existingBuild.isQueued(compactor);
+boolean finished =
+existingBuild.state(compactor) != null // don't count old fake 
builds as finished
+&& !existingBuild.isRunning(compactor)
+&& !existingBuild.isQueued(compactor);
 
 if (finished || mode != SyncMode.RELOAD_QUEUED)
 return null;
 }
 
-return reloadBuild(conn, buildId, existingBuild);
+FatBuildCompacted savedVer = reloadBuild(conn, buildId, existingBuild);
+
+BuildRefCompacted refCompacted = new BuildRefCompacted(savedVer);
+if (savedVer.isFakeStub())
+refCompacted.setId(buildId); //to provide possiblity to save the 
build
+
+final String srvNme = conn.serverId();
 
 Review comment:
   `srvNme` -> `srvName`


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ignite pull request #4663: Ignite-6173

2018-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4663


---


Re: Apache Ignite 2.7. Last Mile

2018-12-03 Thread Ivan Fedotov
Nikolay,

I think that an end-user may face the problem when calling IgniteCache#invoke
on a cache with a registered continuous query if the cache's configuration is
as in the failed test: [PARTITIONED, ATOMIC, FULL_SYNC, 2 backups].
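For reference, that configuration corresponds roughly to the following Ignite cache setup (a sketch using the public API; the actual test wiring may differ):

```java
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("test-cache");
ccfg.setCacheMode(CacheMode.PARTITIONED);                                  // PARTITIONED
ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);                          // ATOMIC
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); // full sync
ccfg.setBackups(2);                                                        // 2 backups
```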

I've found that the failure was introduced by MVCC commit [1]. As I
understand it, the issue relates to the process of updating metadata, where
the future of binary metadata registration hangs for an unclear reason.

I don't know if the issue is a blocker, but it seems to be a regression
because the test passed on Ignite 2.6.

What do you think?

[1]
https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8

Mon, Dec 3, 2018 at 11:14, Nikolay Izhikov :

> Ivan, please, clarify.
>
> How your investigation are related to 2.7 release?
> Do you think it's a release blocker?
> If yes, please, describe impact to users and how users can reproduce this
> issue.
>
> Mon, Dec 3, 2018, 9:30 Ivan Fedotov ivanan...@gmail.com:
>
> > I've created the PR  which
> > includes changes 
> > just before integration MVCC with Continuous Query and from the TeamCity
> > <
> >
> https://ci.ignite.apache.org/viewLog.html?buildId=2434057=buildResultsDiv=IgniteTests24Java8_ContinuousQuery1
> > >
> > it is clear that before these changes the
> > test testAtomicOnheapTwoBackupAsyncFullSync is green.
> >
> > Also Roman Kondakov gave his view on this problem in the comments
> > . Now the problem
> > becomes more understandable, but the root reason is still unclear.
> >
> > Maybe some of you have suggestions as to why the hang of threads on the
> > binary metadata registration future appears?
> >
> > Fri, Nov 30, 2018 at 13:48, Ivan Fedotov :
> >
> > > Igor, thank you for explanation.
> > >
> > > Now it seems that when the one thread tries to invoke
> > > GridCacheMapEntry#touch, the another one makes
> > > GridCacheProcessor#stopCache. If I am wrong, please feel free to
> correct
> > me.
> > >
> > > But it is still not clear to me why this failure appears after commit
> > > <
> >
> https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8
> >
> > which
> > > is about MVCC. Moreover, NPE appears only with BinaryObjectException,
> and
> > > when the test is green, I can not find NPE in the log.
> > >
> > > Now I tried to run the test locally 1000 times on the version before MVCC
> > > and could not find an error in this concrete case (but there is another
> > > one
> > > <
> >
> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java#L426
> >
> > which
> > > is about assertion on received events).
> > >
> > > Fri, Nov 30, 2018 at 13:37, Roman Kondakov
>  > >:
> > >
> > >> Nikolay,
> > >>
> > >> I couldn't quickly find the root cause of this problem because I'm not
> > >> an expert in the binary metadata flow. I think community should decide
> > >> whether this is a release blocker or not.
> > >>
> > >>
> > >> --
> > >> Kind Regards
> > >> Roman Kondakov
> > >>
> > >> On 30.11.2018 13:23, Nikolay Izhikov wrote:
> > >> > Hello, Roman.
> > >> >
> > >> > Is this issue blocks the 2.7 release?
> > >> >
> > >> > пт, 30 нояб. 2018 г., 13:19 Roman Kondakov
> kondako...@mail.ru.invalid
> > :
> > >> >
> > >> >> Hi all!
> > >> >>
> > >> >> I've reproduced this problem locally and attached the log to the
> > ticket
> > >> >> in my comment [1].
> > >> >>
> > >> >> As Igor noted, NPE there is caused by node stop in the end of the
> > test.
> > >> >> The real problem here seems to be in the binary metadata
> registration
> > >> flow.
> > >> >>
> > >> >>
> > >> >> [1]
> > >> >>
> > >> >>
> > >>
> >
> https://issues.apache.org/jira/browse/IGNITE-10376?focusedCommentId=16704510=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16704510
> > >> >>
> > >> >> --
> > >> >> Kind Regards
> > >> >> Roman Kondakov
> > >> >>
> > >> >> On 30.11.2018 11:56, Seliverstov Igor wrote:
> > >> >>> Null pointer there due to cache stop. Look at
> > GridCacheContext#cleanup
> > >> >>> (GridCacheContext.java:2050)
> > >> >>> which is called by GridCacheProcessor#stopCache
> > >> >>> (GridCacheProcessor.java:1372)
> > >> >>>
> > >> >>> That's why at the time GridCacheMapEntry#touch
> > >> >> (GridCacheMapEntry.java:5063)
> > >> >>>invoked there is no eviction manager.
> > >> >>>
> > >> >>> This is a result of "normal" flow because message processing
> doesn't
> > >> >> enter
> > >> >>> cache gate like user API does.
> > >> >>>
> > >> >>> пт, 30 нояб. 2018 г. в 10:26, Nikolay Izhikov <
> nizhi...@apache.org
> > >:
> > >> >>>
> > >>  Ivan. Please, provide a link for a ticket with NPE stack trace
> > >> attached.
> > >> 
> > >>  I've looked at IGNITE-10376 and can't see any attachments.
> > >> 
> 

[GitHub] ignite pull request #5554: IGNITE-10483: Fix marshaller failures handling fo...

2018-12-03 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/5554

IGNITE-10483: Fix marshaller failures handling for mvcc Enlist requests.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10483

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5554.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5554


commit 24f2eff38d648fbef75ebafa2977ff530b4562f1
Author: Andrey V. Mashenkov 
Date:   2018-11-30T14:41:42Z

IGNITE-10483: Fix marshaller failures handling for mvcc Enlist requests.




---


Re: JDBC thin driver: support connection timeout

2018-12-03 Thread Alexander Lapin
Hi Ivan, Vladimir,

@Ivan

> 1. According to the jdbc spec [1] setNetworkTimeout method is
> optional. What user problem are we going to solve by implementing
> that method?
>
We are going to give the user the ability to set a custom connection timeout.

> Also I checked another quite popular JDBC driver, provided by
> MariaDB [2]. They ignore the executor argument as well and set a socket
> timeout instead. So, I think that we are on the safe side if we ignore
> the executor.
>
Got it. Thank you!

So, I'll implement the connection timeout via a socket timeout,
ignoring the executor.

Thanks,
Alexander
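
For illustration, a minimal sketch of the approach agreed on above: backing setNetworkTimeout() with Socket#setSoTimeout and ignoring the Executor argument. The ThinConnection class below is hypothetical, not Ignite's actual driver code; it only shows the shape of the delegation.

```java
import java.net.Socket;
import java.net.SocketException;
import java.sql.SQLException;
import java.util.concurrent.Executor;

// Hypothetical sketch: a JDBC connection that implements
// setNetworkTimeout() by delegating to Socket#setSoTimeout(),
// ignoring the Executor argument (as the MariaDB driver does).
class ThinConnection {
    private final Socket sock;
    private int networkTimeout; // milliseconds; 0 means no timeout

    ThinConnection(Socket sock) {
        this.sock = sock;
    }

    public void setNetworkTimeout(Executor ignoredExecutor, int millis) throws SQLException {
        if (millis < 0)
            throw new SQLException("Network timeout cannot be negative: " + millis);

        try {
            // Any read blocked longer than 'millis' fails with
            // SocketTimeoutException, which the driver can translate
            // into closing/aborting the connection.
            sock.setSoTimeout(millis);

            networkTimeout = millis;
        }
        catch (SocketException e) {
            throw new SQLException("Failed to set socket timeout.", e);
        }
    }

    public int getNetworkTimeout() {
        return networkTimeout;
    }

    public static void main(String[] args) throws Exception {
        ThinConnection conn = new ThinConnection(new Socket());

        conn.setNetworkTimeout(null, 500); // the executor argument is ignored

        System.out.println("network timeout = " + conn.getNetworkTimeout() + " ms");
    }
}
```

With this shape the optional Executor is accepted for API compatibility but never used, which matches the behavior observed in the MariaDB driver.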

Mon, Dec 3, 2018 at 00:41, Vladimir Ozerov :

> +1
>
> Sun, Dec 2, 2018 at 18:39, Ivan Pavlukhin :
>
> > Missing ref:
> > [2]
> >
> https://mvnrepository.com/artifact/org.mariadb.jdbc/mariadb-java-client/2.3.0
> >
> > 2018-12-02 18:31 GMT+03:00, Ivan Pavlukhin :
> > > Hi Alexander,
> > >
> > > I have 2 points.
> > >
> > > 1. According to the jdbc spec [1] setNetworkTimeout method is
> > > optional. What user problem are we going to solve by implementing
> > > that method?
> > > 2. Also I checked another quite popular JDBC driver, provided by
> > > MariaDB [2]. They ignore the executor argument as well and set a socket
> > > timeout instead. So, I think that we are on the safe side if we ignore
> > > the executor.
> > >
> > > [1]
> > https://download.oracle.com/otndocs/jcp/jdbc-4_2-mrel2-spec/index.html
> > > Fri, Nov 30, 2018 at 16:28, Alexander Lapin :
> > >>
> > >> Hi Igniters,
> > >>
> > >> In the context of the connection timeout ticket [
> > >> https://issues.apache.org/jira/browse/IGNITE-5234] it's not obvious
> > >> whether it's required to use setNetworkTimeout's executor or not.
> > >>
> > >> According to the javadoc of
> > >> java.sql.Connection#setNetworkTimeout(Executor
> > >> executor, int milliseconds), executor is "The Executor
> > >> implementation which will be used by setNetworkTimeout."
> > >> It seems that the executor is supposed to take care of
> > >> closing/aborting the connection in case of a timeout, based on a
> > >> submitted Runnable implementation. On the other hand, it's possible
> > >> to ignore the executor and implement the
> > >> timeout-detection/cancellation logic with a Timer. Something like
> > >> the following (pseudo-code):
> > >>
> > >> ConnectionTimeoutTimerTask connectionTimeoutTimerTask = new
> > >> ConnectionTimeoutTimerTask(timeout);
> > >> timer.schedule(connectionTimeoutTimerTask, 0, REQUEST_TIMEOUT_PERIOD);
> > >> ...
> > >> JdbcResponse res = cliIo.sendRequest(req);
> > >> ...
> > >>
> > >> private class ConnectionTimeoutTimerTask extends TimerTask {
> > >> ...
> > >> @Override public void run() {
> > >> if (remainingConnectionTimeout <= 0)
> > >> close(); //connection.close();
> > >>
> > >> remainingConnectionTimeout -= REQUEST_TIMEOUT_PERIOD;
> > >> }
> > >> ...
> > >> }
> > >>
> > >> It is worth mentioning that the MSSQL JDBC driver doesn't use the
> > >> executor and PostgreSQL doesn't implement setNetworkTimeout() at all.
> > >>
> > >> From my point of view it might be better to ignore the executor; is
> > >> that suitable?
> > >>
> > >> Any ideas?
> > >
> > >
> > >
> > > --
> > > Best regards,
> > > Ivan Pavlukhin
> > >
> >
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
> >
>


[jira] [Created] (IGNITE-10504) If a client has a cache resource with a non-configured data region, it is stopped by the handler

2018-12-03 Thread ARomantsov (JIRA)
ARomantsov created IGNITE-10504:
---

 Summary: If a client has a cache resource with a non-configured data 
region, it is stopped by the handler
 Key: IGNITE-10504
 URL: https://issues.apache.org/jira/browse/IGNITE-10504
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7
Reporter: ARomantsov
 Fix For: 2.8






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5457: IGNITE-10352 Fix cache get request can be mapped ...

2018-12-03 Thread dgovorukhin
Github user dgovorukhin closed the pull request at:

https://github.com/apache/ignite/pull/5457


---


[GitHub] ignite pull request #5553: IGNITE-10205 add to utility command - ./control.s...

2018-12-03 Thread vldpyatkov
GitHub user vldpyatkov opened a pull request:

https://github.com/apache/ignite/pull/5553

IGNITE-10205 add to utility command - ./control.sh --cache idle_verify
--dump ability to exclude cache from output file

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10205

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5553.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5553


commit 84181467ec13e4dd5e671b24acea552474ec869f
Author: vd-pyatkov 
Date:   2018-12-03T08:43:09Z

IGNITE-10205 add to utility command - ./control.sh --cache idle_verify 
--dump ability to exclude cache from output file




---


[GitHub] ignite pull request #5552: IGNITE-10491 added test to investigate max thread...

2018-12-03 Thread akalash
GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/5552

IGNITE-10491 added test to investigate max threads



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10491

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5552.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5552


commit cc380b5c0aaf49fa6ef2bce0aaf50e303fbefa86
Author: Anton Kalashnikov 
Date:   2018-11-30T13:07:23Z

IGNITE-10491 added test for max threads

commit aa0a2a6755eafd6e456db7af3f69d6e47fa47de4
Author: Anton Kalashnikov 
Date:   2018-11-30T14:20:04Z

IGNITE-10491 added bash check

commit df89f0b81781114ca1ee6d704c1a87cd83eea789
Author: Anton Kalashnikov 
Date:   2018-11-30T14:42:31Z

IGNITE-10491 systemout to log

commit d1939f40987b04bccd7d41f50ff93d44fff0f869
Author: Anton Kalashnikov 
Date:   2018-12-03T08:30:18Z

IGNITE-10491 return threads test




---


Re: Apache Ignite 2.7. Last Mile

2018-12-03 Thread Nikolay Izhikov
Ivan, please clarify.

How is your investigation related to the 2.7 release?
Do you think it is a release blocker?
If yes, please describe the impact on users and how they can reproduce
this issue.

Mon, Dec 3, 2018, 9:30 Ivan Fedotov ivanan...@gmail.com:

> I've created the PR  which
> includes changes 
> just before integration MVCC with Continuous Query and from the TeamCity
> <
> https://ci.ignite.apache.org/viewLog.html?buildId=2434057=buildResultsDiv=IgniteTests24Java8_ContinuousQuery1
> >
> it is clear that before these changes the
> test testAtomicOnheapTwoBackupAsyncFullSync is green.
>
> Also, Roman Kondakov gave his view on this problem in the comments
> . Now the problem
> is more understandable, but the root cause is still unclear.
>
> Maybe some of you have suggestions as to why threads hang on the binary
> metadata registration future?
>
> Fri, Nov 30, 2018 at 13:48, Ivan Fedotov :
>
> > Igor, thank you for the explanation.
> >
> > Now it seems that when one thread tries to invoke
> > GridCacheMapEntry#touch, another one calls
> > GridCacheProcessor#stopCache. If I am wrong, please feel free to
> > correct me.
> >
> > But it is still not clear to me why this failure appears after the commit
> > <https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8>,
> > which is about MVCC. Moreover, the NPE appears only together with a
> > BinaryObjectException, and when the test is green, I cannot find an NPE
> > in the log.
> >
> > I have now run the test locally 1000 times on the version before MVCC
> > and could not reproduce the error for this particular case (but there is
> > another one
> > <https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryOrderingEventTest.java#L426>,
> > which is about an assertion on received events).
> >
> > Fri, Nov 30, 2018 at 13:37, Roman Kondakov  >:
> >
> >> Nikolay,
> >>
> >> I couldn't quickly find the root cause of this problem because I'm not
> >> an expert in the binary metadata flow. I think community should decide
> >> whether this is a release blocker or not.
> >>
> >>
> >> --
> >> Kind Regards
> >> Roman Kondakov
> >>
> >> On 30.11.2018 13:23, Nikolay Izhikov wrote:
> >> > Hello, Roman.
> >> >
> >> > Is this issue blocks the 2.7 release?
> >> >
> >> > Fri, Nov 30, 2018, 13:19 Roman Kondakov kondako...@mail.ru.invalid
> :
> >> >
> >> >> Hi all!
> >> >>
> >> >> I've reproduced this problem locally and attached the log to the
> ticket
> >> >> in my comment [1].
> >> >>
> >> >> As Igor noted, NPE there is caused by node stop in the end of the
> test.
> >> >> The real problem here seems to be in the binary metadata registration
> >> flow.
> >> >>
> >> >>
> >> >> [1]
> >> >>
> >> >>
> >>
> https://issues.apache.org/jira/browse/IGNITE-10376?focusedCommentId=16704510=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16704510
> >> >>
> >> >> --
> >> >> Kind Regards
> >> >> Roman Kondakov
> >> >>
> >> >> On 30.11.2018 11:56, Seliverstov Igor wrote:
> >> >>> The null pointer there is due to cache stop. Look at
> >> >>> GridCacheContext#cleanup (GridCacheContext.java:2050),
> >> >>> which is called by GridCacheProcessor#stopCache
> >> >>> (GridCacheProcessor.java:1372).
> >> >>>
> >> >>> That's why at the time GridCacheMapEntry#touch
> >> >>> (GridCacheMapEntry.java:5063) is invoked there is no eviction
> >> >>> manager.
> >> >>>
> >> >>> This is a result of the "normal" flow, because message processing
> >> >>> doesn't enter the cache gate the way the user API does.
> >> >>>
> >> >>> Fri, Nov 30, 2018 at 10:26, Nikolay Izhikov  >:
> >> >>>
> >>  Ivan, please provide a link to a ticket with the NPE stack trace
> >> attached.
> >> 
> >>  I've looked at IGNITE-10376 and can't see any attachments.
> >> 
> >>  Fri, Nov 30, 2018, 10:14 Ivan Fedotov ivanan...@gmail.com:
> >> 
> >> > Igor,
> >> > The NPE is available in the full log; I have now also attached it to
> >> > the ticket.
> >> >
> >> > IGNITE-7953
> >> > <https://github.com/apache/ignite/commit/51a202a4c48220fa919f47147bd4889033cd35a8>
> >> > was committed on October 15. I could not take a look at
> >> > testAtomicOnheapTwoBackupAsyncFullSync before that date, because the
> >> > oldest test run in the TC history dates to November 12.
> >> >
> >> > So, I tested it locally and could not reproduce the mentioned error.
> >> >
> >> >> Thu, Nov 29, 2018 at 20:07, Seliverstov Igor <
> >> gvvinbl...@gmail.com>:
> >> >
> >> >> Ivan,
> >> >>
> >> >> Could you provide a bit more details?
> >> >>
> >> >> I don't see any NPE among all available logs.
> >> >>
> >> >> I don't think the issue is caused by changes in scope of
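
As an aside, the race described in this thread (stopCache() clearing the eviction manager while message processing, which does not pass through the cache gate, may still call touch()) can be sketched minimally. The class and method names below are illustrative only, not Ignite's actual internals; the null check shows one way a touch on a stopped cache can degrade to a no-op instead of an NPE.

```java
// Illustrative sketch (not Ignite internals): cleanup() runs on cache
// stop and clears the manager; touch() may run concurrently because it
// does not enter the cache gate, so it must tolerate a cleared manager.
class EvictionManager {
    void touch(Object entry) {
        // Track the entry for eviction (details omitted).
    }
}

class CacheContext {
    private volatile EvictionManager evictMgr = new EvictionManager();

    // Called from cache stop; after this the manager is gone.
    void cleanup() {
        evictMgr = null;
    }

    // Returns true if the touch was forwarded, false if the cache was
    // already stopped. Reading the field once into a local and
    // null-checking avoids the NullPointerException.
    boolean touch(Object entry) {
        EvictionManager mgr = evictMgr;

        if (mgr == null)
            return false;

        mgr.touch(entry);

        return true;
    }

    public static void main(String[] args) {
        CacheContext ctx = new CacheContext();

        System.out.println("before stop: " + ctx.touch("entry"));

        ctx.cleanup();

        System.out.println("after stop: " + ctx.touch("entry"));
    }
}
```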
> 

[GitHub] ignite pull request #5448: IGNITE-10277 set prepared state before send prepa...

2018-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5448


---