[jira] [Reopened] (IGNITE-8121) Web console: import cluster from database doesn't work correctly for second import

2018-08-14 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reopened IGNITE-8121:
--

> Web console: import cluster from database doesn't work correctly for second 
> import
> --
>
> Key: IGNITE-8121
> URL: https://issues.apache.org/jira/browse/IGNITE-8121
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.7
>
> Attachments: import-1.json, import-2.json
>
>
> # initial state - no clusters exist
> # import from any DB - the first time, everything is imported correctly
> # import a second time - error "Cluster ... already exists"
> # refresh the cluster list - the second cluster appears
> After that the second imported cluster can't be downloaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-8121) Web console: import cluster from database doesn't work correctly for second import

2018-08-14 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov resolved IGNITE-8121.
--
   Resolution: Cannot Reproduce
Fix Version/s: (was: 2.7)

> Web console: import cluster from database doesn't work correctly for second 
> import
> --
>
> Key: IGNITE-8121
> URL: https://issues.apache.org/jira/browse/IGNITE-8121
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Attachments: import-1.json, import-2.json
>
>
> # initial state - no clusters exist
> # import from any DB - the first time, everything is imported correctly
> # import a second time - error "Cluster ... already exists"
> # refresh the cluster list - the second cluster appears
> After that the second imported cluster can't be downloaded.





[jira] [Closed] (IGNITE-8121) Web console: import cluster from database doesn't work correctly for second import

2018-08-14 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov closed IGNITE-8121.


> Web console: import cluster from database doesn't work correctly for second 
> import
> --
>
> Key: IGNITE-8121
> URL: https://issues.apache.org/jira/browse/IGNITE-8121
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Attachments: import-1.json, import-2.json
>
>
> # initial state - no clusters exist
> # import from any DB - the first time, everything is imported correctly
> # import a second time - error "Cluster ... already exists"
> # refresh the cluster list - the second cluster appears
> After that the second imported cluster can't be downloaded.





[jira] [Commented] (IGNITE-7687) SQL SELECT doesn't update TTL for Touched/AccessedExpiryPolicy

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579419#comment-16579419
 ] 

Vladimir Ozerov commented on IGNITE-7687:
-

[~satendra], hi. Unfortunately, I do not see any workarounds at the moment.

> SQL SELECT doesn't update TTL for Touched/AccessedExpiryPolicy
> --
>
> Key: IGNITE-7687
> URL: https://issues.apache.org/jira/browse/IGNITE-7687
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Stanislav Lukyanov
>Priority: Major
>
> SQL SELECT queries don't update TTLs when TouchedExpiryPolicy or 
> AccessedExpiryPolicy is used (unlike IgniteCache::get which does update the 
> TTLs).
> Example (modified SqlDmlExample):
> 
> CacheConfiguration<Long, Organization> orgCacheCfg = new 
> CacheConfiguration<>(ORG_CACHE)
> .setIndexedTypes(Long.class, Organization.class)
> .setExpiryPolicyFactory(TouchedExpiryPolicy.factoryOf(new 
> Duration(TimeUnit.SECONDS, 10)));
> 
> IgniteCache<Long, Organization> orgCache = 
> ignite.getOrCreateCache(orgCacheCfg);
> 
> SqlFieldsQuery qry = new SqlFieldsQuery("insert into Organization (_key, 
> id, name) values (?, ?, ?)");
> orgCache.query(qry.setArgs(1L, 1L, "ASF"));
> orgCache.query(qry.setArgs(2L, 2L, "Eclipse"));
> 
> SqlFieldsQuery qry1 = new SqlFieldsQuery("select id, name from 
> Organization as o");
> for (int i = 0; ; i++) {
> List<List<?>> res = orgCache.query(qry1).getAll();
> print("i = " + i);
> for (Object next : res)
> System.out.println(">>> " + next);
> U.sleep(5000);
> }
> 
> Output:
> >>> i = 0
> >>> [1, ASF]
> >>> [2, Eclipse]
> >>> i = 1
> >>> [1, ASF]
> >>> [2, Eclipse]
> >>> i = 2
> >>> i = 3
> ...
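The expected TouchedExpiryPolicy semantics (reads push the deadline out) versus the reported SELECT behavior can be illustrated with a toy model. This is not Ignite code; TouchedCache and its methods are hypothetical stand-ins, with get() modeling IgniteCache::get (touches the entry) and scan() modeling the SQL SELECT path (does not touch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy cache with touched-expiry: get() pushes the deadline out, scan() does not. */
class TouchedCache<K, V> {
    private final long ttlMillis;
    private final Map<K, V> vals = new ConcurrentHashMap<>();
    private final Map<K, Long> deadlines = new ConcurrentHashMap<>();

    TouchedCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(K key, V val) {
        vals.put(key, val);
        deadlines.put(key, System.currentTimeMillis() + ttlMillis);
    }

    /** Point read: refreshes the TTL, like IgniteCache::get under TouchedExpiryPolicy. */
    V get(K key) {
        expire(key);
        V v = vals.get(key);
        if (v != null)
            deadlines.put(key, System.currentTimeMillis() + ttlMillis); // touch
        return v;
    }

    /** Scan read: does NOT refresh the TTL - models the reported SQL SELECT behavior. */
    List<V> scan() {
        for (K key : new ArrayList<>(vals.keySet()))
            expire(key);
        return new ArrayList<>(vals.values());
    }

    private void expire(K key) {
        Long d = deadlines.get(key);
        if (d != null && System.currentTimeMillis() > d) {
            vals.remove(key);
            deadlines.remove(key);
        }
    }
}
```

Under this model an entry that is only ever read through scan() expires after one TTL, while get() keeps it alive indefinitely, which is exactly the asymmetry reported above.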





[jira] [Commented] (IGNITE-6055) SQL: Add String length constraint

2018-08-14 Thread Nikolay Izhikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580004#comment-16580004
 ] 

Nikolay Izhikov commented on IGNITE-6055:
-

[~vozerov]

All comments are fixed.
New tests added.

New run all - 
https://ci.ignite.apache.org/viewQueued.html?itemId=1655461&tab=queuedBuildOverviewTab

> SQL: Add String length constraint
> -
>
> Key: IGNITE-6055
> URL: https://issues.apache.org/jira/browse/IGNITE-6055
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: sql-engine
> Fix For: 2.7
>
>
> We should support {{CHAR(X)}} and {{VARCHAR(X)}} syntax. Currently, we ignore 
> it. First, it affects semantics. E.g., one can insert a string with greater 
> length into a cache/table without any problems. Second, it limits efficiency 
> of our default configuration. E.g., index inline cannot be applied to the 
> {{String}} data type as we cannot guess its length.
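What enforcing the declared length would mean at write time can be sketched as follows; VarcharConstraint is a hypothetical helper for illustration, not part of Ignite's SQL engine:

```java
/** Hypothetical check an SQL engine could apply for a column declared CHAR(X)/VARCHAR(X). */
final class VarcharConstraint {
    private final int precision; // the X in VARCHAR(X)

    VarcharConstraint(int precision) { this.precision = precision; }

    /** Returns the value if it fits; rejects it instead of silently ignoring X. */
    String validate(String column, String val) {
        if (val != null && val.length() > precision)
            throw new IllegalArgumentException(
                "Value for column '" + column + "' is too long: " +
                val.length() + " > " + precision);
        return val;
    }
}
```

Knowing X up front is also what would let a sensible default index inline size be derived for String columns.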





[jira] [Updated] (IGNITE-4680) Properly split batch atomic cache operations between stripes (putAll, removeAll, etc)

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-4680:

Labels: thread-per-partition  (was: )

> Properly split batch atomic cache operations between stripes (putAll, 
> removeAll, etc)
> -
>
> Key: IGNITE-4680
> URL: https://issues.apache.org/jira/browse/IGNITE-4680
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Priority: Major
>  Labels: thread-per-partition
>
> Need to revisit the key/request mapping procedure and map each update directly 
> to a stripe on the remote node.
> Here are some points:
> # The above will require adding a stripe-count attribute to the node's 
> attribute list
> # Need to make sure we keep all the benefits of biased locking and that 
> stripes never get mutually blocked
> # Locking all entries before processing a request can be removed, since it 
> adds little value for atomic caches
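As a sketch of the mapping step, a putAll/removeAll batch could be grouped into per-stripe sub-batches, assuming the remote node's stripe count is known from its attribute list (StripeMapper, partition() and the attribute itself are hypothetical names, not Ignite internals):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Splits a batch of keys into per-stripe sub-batches for one remote node. */
final class StripeMapper {
    /** Hypothetical partition function: key hash -> partition id. */
    static int partition(Object key, int partitions) {
        return Math.floorMod(key.hashCode(), partitions);
    }

    /**
     * Groups keys so each sub-batch targets exactly one stripe; the stripe
     * count would come from the remote node's advertised attributes.
     */
    static <K> Map<Integer, List<K>> splitByStripe(Iterable<K> keys, int partitions, int stripes) {
        Map<Integer, List<K>> res = new HashMap<>();
        for (K key : keys) {
            int stripe = partition(key, partitions) % stripes;
            res.computeIfAbsent(stripe, s -> new ArrayList<>()).add(key);
        }
        return res;
    }
}
```

Each sub-batch can then be applied by its stripe's thread without entry-level locking against the other stripes.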





[jira] [Updated] (IGNITE-9272) PureJavaCrc32 vs j.u.zip.CRC32 benchmark and probably replace.

2018-08-14 Thread Stanilovsky Evgeny (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-9272:
---
Description: 
I see that Ignite has its own CRC32 implementation, called PureJavaCrc32, whose 
description claims: _The current version is ~10x to 1.8x as fast as Sun's 
native java.util.zip.CRC32 in Java 1.6_. But my JMH tests show the opposite result.
If that is really so, backward compatibility looks easy: all that is needed is 
to take the lower 32 bits of the long returned by the zip.CRC32 implementation.

jmh results:
Benchmark   Mode  CntScoreError  Units
BenchmarkCRC.Crc32  avgt5  1521060.716 ±  44083.424  ns/op
BenchmarkCRC.pureJavaCrc32  avgt5  4657756.671 ± 177243.254  ns/op

# JMH version: 1.21
# VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
# VM invoker: /usr/lib/jvm/java-8-oracle/jre/bin/java
OS: Ubuntu 16.10

  was:
I see that Ignite has its own CRC32 implementation, called PureJavaCrc32, whose 
description claims: _The current version is ~10x to 1.8x as fast as Sun's 
native java.util.zip.CRC32 in Java 1.6_. But my JMH tests show the opposite result.
If that is really so, backward compatibility looks easy: all that is needed is 
to take the lower 32 bits of the long returned by the zip.CRC32 implementation.


> PureJavaCrc32 vs j.u.zip.CRC32 benchmark and probably replace.
> --
>
> Key: IGNITE-9272
> URL: https://issues.apache.org/jira/browse/IGNITE-9272
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.6
>Reporter: Stanilovsky Evgeny
>Priority: Major
> Attachments: BenchmarkCRC.java
>
>
> I see that Ignite has its own CRC32 implementation, called PureJavaCrc32, 
> whose description claims: _The current version is ~10x to 1.8x as fast as 
> Sun's native java.util.zip.CRC32 in Java 1.6_. But my JMH tests show the 
> opposite result.
> If that is really so, backward compatibility looks easy: all that is needed 
> is to take the lower 32 bits of the long returned by the zip.CRC32 implementation.
> jmh results:
> Benchmark   Mode  CntScoreError  Units
> BenchmarkCRC.Crc32  avgt5  1521060.716 ±  44083.424  ns/op
> BenchmarkCRC.pureJavaCrc32  avgt5  4657756.671 ± 177243.254  ns/op
> # JMH version: 1.21
> # VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
> # VM invoker: /usr/lib/jvm/java-8-oracle/jre/bin/java
> OS: Ubuntu 16.10
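The backward-compatibility idea — keeping only the low 32 bits of the long that java.util.zip.CRC32 returns — can be sketched with plain JDK code (ZipCrc32 is a hypothetical name for illustration, not an Ignite class):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

/** Computes a CRC32 as an int, as a drop-in for a PureJavaCrc32-style value. */
final class ZipCrc32 {
    static int crc32(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        // getValue() returns the checksum in the low 32 bits of a long;
        // the narrowing cast keeps exactly those bits.
        return (int) crc.getValue();
    }

    public static void main(String[] args) {
        byte[] check = "123456789".getBytes(StandardCharsets.US_ASCII);
        // 0xCBF43926 is the standard CRC-32 check value for "123456789".
        System.out.printf("crc32 = 0x%08X%n", crc32(check));
    }
}
```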





[jira] [Updated] (IGNITE-9272) PureJavaCrc32 vs j.u.zip.CRC32 benchmark and probably replace.

2018-08-14 Thread Stanilovsky Evgeny (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-9272:
---
Description: 
I see that Ignite has its own CRC32 implementation, called PureJavaCrc32, whose 
description claims: _The current version is ~10x to 1.8x as fast as Sun's 
native java.util.zip.CRC32 in Java 1.6_. But my JMH tests show the opposite result.
If that is really so, backward compatibility looks easy: all that is needed is 
to take the lower 32 bits of the long returned by the zip.CRC32 implementation.

jmh results:
Benchmark   Mode  CntScoreError  Units
BenchmarkCRC.Crc32  avgt5  1521060.716 ±  44083.424  ns/op
BenchmarkCRC.pureJavaCrc32  avgt5  4657756.671 ± 177243.254  ns/op

JMH version: 1.21
VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
VM invoker: /usr/lib/jvm/java-8-oracle/jre/bin/java
OS: Ubuntu 16.10

  was:
I see that Ignite has its own CRC32 implementation, called PureJavaCrc32, whose 
description claims: _The current version is ~10x to 1.8x as fast as Sun's 
native java.util.zip.CRC32 in Java 1.6_. But my JMH tests show the opposite result.
If that is really so, backward compatibility looks easy: all that is needed is 
to take the lower 32 bits of the long returned by the zip.CRC32 implementation.

jmh results:
Benchmark   Mode  CntScoreError  Units
BenchmarkCRC.Crc32  avgt5  1521060.716 ±  44083.424  ns/op
BenchmarkCRC.pureJavaCrc32  avgt5  4657756.671 ± 177243.254  ns/op

# JMH version: 1.21
# VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
# VM invoker: /usr/lib/jvm/java-8-oracle/jre/bin/java
OS: Ubuntu 16.10


> PureJavaCrc32 vs j.u.zip.CRC32 benchmark and probably replace.
> --
>
> Key: IGNITE-9272
> URL: https://issues.apache.org/jira/browse/IGNITE-9272
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.6
>Reporter: Stanilovsky Evgeny
>Priority: Major
> Attachments: BenchmarkCRC.java
>
>
> I see that Ignite has its own CRC32 implementation, called PureJavaCrc32, 
> whose description claims: _The current version is ~10x to 1.8x as fast as 
> Sun's native java.util.zip.CRC32 in Java 1.6_. But my JMH tests show the 
> opposite result.
> If that is really so, backward compatibility looks easy: all that is needed 
> is to take the lower 32 bits of the long returned by the zip.CRC32 implementation.
> jmh results:
> Benchmark   Mode  CntScoreError  Units
> BenchmarkCRC.Crc32  avgt5  1521060.716 ±  44083.424  ns/op
> BenchmarkCRC.pureJavaCrc32  avgt5  4657756.671 ± 177243.254  ns/op
> JMH version: 1.21
> VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
> VM invoker: /usr/lib/jvm/java-8-oracle/jre/bin/java
> OS: Ubuntu 16.10





[jira] [Commented] (IGNITE-9195) Split PDS 2 TC configuration.

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580223#comment-16580223
 ] 

ASF GitHub Bot commented on IGNITE-9195:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4496


> Split PDS 2 TC configuration.
> -
>
> Key: IGNITE-9195
> URL: https://issues.apache.org/jira/browse/IGNITE-9195
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.6
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
> Fix For: 2.7
>
>
> PDS 2 TC configuration takes too long to complete (avg >2h) and should 
> be split into two.





[jira] [Commented] (IGNITE-7701) SQL system view for node attributes

2018-08-14 Thread Aleksey Plekhanov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580009#comment-16580009
 ] 

Aleksey Plekhanov commented on IGNITE-7701:
---

About negative tests: there was already a test for a malformed UUID:
{code:sql}
 SELECT NODE_ID FROM IGNITE.NODE_ATTRIBUTES WHERE NODE_ID = '-' AND NAME = ?
{code}

> SQL system view for node attributes
> ---
>
> Key: IGNITE-7701
> URL: https://issues.apache.org/jira/browse/IGNITE-7701
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: iep-13, sql
> Fix For: 2.7
>
>
> Implement SQL system view to show attributes for each node in topology.





[jira] [Updated] (IGNITE-9271) Implement transaction commit using thread per partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-9271:

Labels: thread-per-partition  (was: )

> Implement transaction commit using thread per partition model
> -
>
> Key: IGNITE-9271
> URL: https://issues.apache.org/jira/browse/IGNITE-9271
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Major
>  Labels: thread-per-partition
> Fix For: 2.7
>
>
> Currently, we perform the commit of a transaction from a sys thread and do 
> write operations on multiple partitions.
> We should delegate such operations to the appropriate threads and wait for 
> the results.





[jira] [Updated] (IGNITE-4682) Need to finish transition to thread-per-partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-4682:

Labels: thread-per-partition  (was: cache)

> Need to finish transition to thread-per-partition model
> ---
>
> Key: IGNITE-4682
> URL: https://issues.apache.org/jira/browse/IGNITE-4682
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Pavel Kovalenko
>Priority: Major
>  Labels: thread-per-partition
>
> 1) Investigate performance on switching to tpp model and choose best solution 
> - https://issues.apache.org/jira/browse/IGNITE-9270
> 2) Implement transactions commit using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-9271
> 3) Implement putAll, removeAll on atomic caches using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-4680
> 4) Rebalance should work through tpp
> 5) Get rid of deleted entries buffer from DhtLocalPartition





[jira] [Updated] (IGNITE-4682) Need to finish transition to thread-per-partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-4682:

Labels: cache  (was: )

> Need to finish transition to thread-per-partition model
> ---
>
> Key: IGNITE-4682
> URL: https://issues.apache.org/jira/browse/IGNITE-4682
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Pavel Kovalenko
>Priority: Major
>  Labels: thread-per-partition
>
> 1) Investigate performance on switching to tpp model and choose best solution 
> - https://issues.apache.org/jira/browse/IGNITE-9270
> 2) Implement transactions commit using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-9271
> 3) Implement putAll, removeAll on atomic caches using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-4680
> 4) Rebalance should work through tpp
> 5) Get rid of deleted entries buffer from DhtLocalPartition





[jira] [Updated] (IGNITE-9270) Design thread per partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-9270:

Labels: thread-per-partition  (was: )

> Design thread per partition model
> -
>
> Key: IGNITE-9270
> URL: https://issues.apache.org/jira/browse/IGNITE-9270
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Major
>  Labels: thread-per-partition
> Fix For: 2.7
>
>
> A new model of executing cache partition operations (READ, CREATE, UPDATE, 
> DELETE) should satisfy the following conditions:
> 1) All modifying operations (CREATE, UPDATE, DELETE) on a given partition 
> must be performed by the same thread.
> 2) Read operations can be executed by any thread.
> 3) The ordering of modifying operations on primary and backup nodes must be 
> the same.
> We should investigate performance if we choose a dedicated executor service 
> for such operations; alternatively, we can use a messaging model that reuses 
> network threads to perform them.
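Condition 1 is essentially a striped executor: a fixed set of single-threaded stripes, with every modifying operation for a partition routed to the same stripe. A minimal sketch, not Ignite's implementation (PartitionExecutor is a hypothetical name):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Routes each partition's modifying operations to one fixed single-threaded stripe. */
final class PartitionExecutor {
    private final ExecutorService[] stripes;

    PartitionExecutor(int stripeCount) {
        stripes = new ExecutorService[stripeCount];
        for (int i = 0; i < stripeCount; i++)
            stripes[i] = Executors.newSingleThreadExecutor();
    }

    /** Same partition -> same stripe -> same thread, so updates stay ordered. */
    void submit(int partition, Runnable op) {
        stripes[Math.floorMod(partition, stripes.length)].execute(op);
    }

    void shutdown() throws InterruptedException {
        for (ExecutorService s : stripes) s.shutdown();
        for (ExecutorService s : stripes) s.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Because each stripe is single-threaded, operations submitted for one partition execute in submission order, which gives condition 3 on a single node; reads (condition 2) would simply bypass the stripes.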





[jira] [Updated] (IGNITE-9195) Split PDS 2 TC configuration.

2018-08-14 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-9195:
---
Ignite Flags:   (was: Docs Required)

> Split PDS 2 TC configuration.
> -
>
> Key: IGNITE-9195
> URL: https://issues.apache.org/jira/browse/IGNITE-9195
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.6
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
> Fix For: 2.7
>
>
> PDS 2 TC configuration takes too long to complete (avg >2h) and should 
> be split into two.





[jira] [Assigned] (IGNITE-4682) Need to finish transition to thread-per-partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko reassigned IGNITE-4682:
---

Assignee: Pavel Kovalenko  (was: Yakov Zhdanov)

> Need to finish transition to thread-per-partition model
> ---
>
> Key: IGNITE-4682
> URL: https://issues.apache.org/jira/browse/IGNITE-4682
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Pavel Kovalenko
>Priority: Major
>
> Need to create sub-tasks with proper description
> -atomic cache is almost done
> -tx cache - need to start working
> -rebalancing seems to be pretty easy to move to this approach
> -then we can remove deleted entries buffer





[jira] [Commented] (IGNITE-9195) Split PDS 2 TC configuration.

2018-08-14 Thread Eduard Shangareev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580099#comment-16580099
 ] 

Eduard Shangareev commented on IGNITE-9195:
---

I am absolutely OK with the change.

So, there are only a few TC configurations left that run for more than an hour!

> Split PDS 2 TC configuration.
> -
>
> Key: IGNITE-9195
> URL: https://issues.apache.org/jira/browse/IGNITE-9195
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.6
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
> Fix For: 2.7
>
>
> PDS 2 TC configuration takes too long to complete (avg >2h) and should 
> be split into two.





[jira] [Commented] (IGNITE-9044) Update scala dependency version in Apache Ignite

2018-08-14 Thread Dmitriy Pavlov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580159#comment-16580159
 ] 

Dmitriy Pavlov commented on IGNITE-9044:


Change looks good to me, I would like to merge it to master. But it has 
conflicts to be resolved.

[~zzzadruga] could you please rebase/merge master into your branch to resolve 
conflicts?

> Update scala dependency version in Apache Ignite
> 
>
> Key: IGNITE-9044
> URL: https://issues.apache.org/jira/browse/IGNITE-9044
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Dmitriy Pavlov
>Assignee: Nikolai Kulagin
>Priority: Major
> Fix For: 2.7
>
>
> *ignite-scalar*
> scala.library.version=2.11.8, needs to be at least 2.11.12 or newer.
> *ignite-scalar_2.10*
> scala210.library.version=2.10.6, needs to be at least 2.10.7, probably newer.
> *visor 2.10*
> scala210.jline.version=2.10.4, needs to be at least 2.10.7, probably newer.
> Probably the impact would be wider.
> We need at least to run Run All and a local build.sh, and optionally the 
> release candidate step on TC.





[jira] [Updated] (IGNITE-4682) Need to finish transition to thread-per-partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-4682:

Description: 
1) Investigate performance on switching to tpp model and choose best solution
2) Implement transactions commit using tpp model
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) 

Need to create sub-tasks with proper description
-atomic cache is almost done
-tx cache - need to start working
-rebalancing seems to be pretty easy to move to this approach
-then we can remove deleted entries buffer

  was:
Need to create sub-tasks with proper description
-atomic cache is almost done
-tx cache - need to start working
-rebalancing seems to be pretty easy to move to this approach
-then we can remove deleted entries buffer


> Need to finish transition to thread-per-partition model
> ---
>
> Key: IGNITE-4682
> URL: https://issues.apache.org/jira/browse/IGNITE-4682
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Pavel Kovalenko
>Priority: Major
>
> 1) Investigate performance on switching to tpp model and choose best solution
> 2) Implement transactions commit using tpp model
> 3) Implement putAll, removeAll on atomic caches using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-4680
> 4) 
> Need to create sub-tasks with proper description
> -atomic cache is almost done
> -tx cache - need to start working
> -rebalancing seems to be pretty easy to move to this approach
> -then we can remove deleted entries buffer





[jira] [Updated] (IGNITE-4682) Need to finish transition to thread-per-partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-4682:

Description: 
1) Investigate performance on switching to tpp model and choose best solution
2) Implement transactions commit using tpp model
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) Rebalance should work through tpp
5) Get rid of deleted entries buffer from DhtLocalPartition

Need to create sub-tasks with proper description
-atomic cache is almost done
-tx cache - need to start working
-rebalancing seems to be pretty easy to move to this approach
-then we can remove deleted entries buffer

  was:
1) Investigate performance on switching to tpp model and choose best solution
2) Implement transactions commit using tpp model
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) 

Need to create sub-tasks with proper description
-atomic cache is almost done
-tx cache - need to start working
-rebalancing seems to be pretty easy to move to this approach
-then we can remove deleted entries buffer


> Need to finish transition to thread-per-partition model
> ---
>
> Key: IGNITE-4682
> URL: https://issues.apache.org/jira/browse/IGNITE-4682
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Pavel Kovalenko
>Priority: Major
>
> 1) Investigate performance on switching to tpp model and choose best solution
> 2) Implement transactions commit using tpp model
> 3) Implement putAll, removeAll on atomic caches using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-4680
> 4) Rebalance should work through tpp
> 5) Get rid of deleted entries buffer from DhtLocalPartition
> Need to create sub-tasks with proper description
> -atomic cache is almost done
> -tx cache - need to start working
> -rebalancing seems to be pretty easy to move to this approach
> -then we can remove deleted entries buffer





[jira] [Commented] (IGNITE-9031) SpringCacheManager throws AssertionError during Spring initialization

2018-08-14 Thread Amir Akhmedov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580194#comment-16580194
 ] 

Amir Akhmedov commented on IGNITE-9031:
---

[~vkulichenko], I don't think it's possible, since {{ContextRefreshedEvent}} is 
published once (when all initialization is done) and from the application thread:

[https://github.com/spring-projects/spring-framework/blob/master/spring-context/src/main/java/org/springframework/context/support/AbstractApplicationContext.java#L873]

Besides, I looked at the documentation and found nothing about multi-threading.

> SpringCacheManager throws AssertionError during Spring initialization
> -
>
> Key: IGNITE-9031
> URL: https://issues.apache.org/jira/browse/IGNITE-9031
> Project: Ignite
>  Issue Type: Bug
>  Components: spring
>Affects Versions: 2.6
>Reporter: Joel Lang
>Assignee: Amir Akhmedov
>Priority: Major
>
> When initializing Ignite using an IgniteSpringBean and also having a 
> SpringCacheManager defined, the SpringCacheManager throws an AssertionError 
> in the onApplicationEvent() method due to it being called more than once.
> There is an "assert ignite == null" that fails after the first call.
> This is related to the changes in IGNITE-8740. This happened immediately when 
> I first tried to start Ignite after upgrading from 2.5 to 2.6.
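Whichever way the event ends up being delivered, one defensive option is to make the handler idempotent rather than asserting. A generic sketch, not the actual SpringCacheManager code (OnceInitializer is a hypothetical name):

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Event handler that initializes exactly once, even if the event is re-delivered. */
final class OnceInitializer {
    private final AtomicBoolean initialized = new AtomicBoolean();
    private volatile Object ignite; // stands in for the Ignite instance field

    /** Replaces an "assert ignite == null" with an idempotent guard. */
    void onApplicationEvent(Object event) {
        if (!initialized.compareAndSet(false, true))
            return; // already initialized; ignore the repeated delivery
        ignite = new Object(); // placeholder for starting/looking up Ignite
    }

    boolean isInitialized() { return ignite != null; }
}
```

With this shape a second delivery is a no-op instead of an AssertionError.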





[jira] [Updated] (IGNITE-4682) Need to finish transition to thread-per-partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-4682:

Description: 
1) Investigate performance on switching to tpp model and choose best solution
2) Implement transactions commit using tpp model
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) Rebalance should work through tpp
5) Get rid of deleted entries buffer from DhtLocalPartition

  was:
1) Investigate performance on switching to tpp model and choose best solution
2) Implement transactions commit using tpp model
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) Rebalance should work through tpp
5) Get rid of deleted entries buffer from DhtLocalPartition

Need to create sub-tasks with proper description
-atomic cache is almost done
-tx cache - need to start working
-rebalancing seems to be pretty easy to move to this approach
-then we can remove deleted entries buffer


> Need to finish transition to thread-per-partition model
> ---
>
> Key: IGNITE-4682
> URL: https://issues.apache.org/jira/browse/IGNITE-4682
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Pavel Kovalenko
>Priority: Major
>
> 1) Investigate performance on switching to tpp model and choose best solution
> 2) Implement transactions commit using tpp model
> 3) Implement putAll, removeAll on atomic caches using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-4680
> 4) Rebalance should work through tpp
> 5) Get rid of deleted entries buffer from DhtLocalPartition





[jira] [Updated] (IGNITE-4682) Need to finish transition to thread-per-partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-4682:

Description: 
1) Investigate performance on switching to tpp model and choose best solution - 
https://issues.apache.org/jira/browse/IGNITE-9270
2) Implement transactions commit using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-9271
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) Rebalance should work through tpp
5) Get rid of deleted entries buffer from DhtLocalPartition

  was:
1) Investigate performance on switching to tpp model and choose best solution - 
https://issues.apache.org/jira/browse/IGNITE-9270
2) Implement transactions commit using tpp model
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) Rebalance should work through tpp
5) Get rid of deleted entries buffer from DhtLocalPartition


> Need to finish transition to thread-per-partition model
> ---
>
> Key: IGNITE-4682
> URL: https://issues.apache.org/jira/browse/IGNITE-4682
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Pavel Kovalenko
>Priority: Major
>
> 1) Investigate performance on switching to tpp model and choose best solution 
> - https://issues.apache.org/jira/browse/IGNITE-9270
> 2) Implement transactions commit using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-9271
> 3) Implement putAll, removeAll on atomic caches using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-4680
> 4) Rebalance should work through tpp
> 5) Get rid of deleted entries buffer from DhtLocalPartition



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9270) Design thread per partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-9270:

Description: 
A new model of executing cache partition operations (READ, CREATE, UPDATE, 
DELETE) should satisfy the following conditions:
1) All modify operations (CREATE, UPDATE, DELETE) on a given partition must be 
performed by the same thread. 
2) Read operations can be executed by any thread.
3) The ordering of modify operations on primary and backup nodes should be the same.

We should investigate performance if we choose a dedicated executor service for 
such operations, or we can use a messaging model that reuses network threads to 
perform such operations.

  was:
A new model of executing cache partition operations (READ, CREATE, UPDATE, 
DELETE) should satisfy the following conditions:
1) All modify operations (CREATE, UPDATE, DELETE) on a given partition must be 
performed by the same thread. 
2) Read operations can be executed by any thread.

We should investigate performance if we choose a dedicated executor service for 
such operations, or we can use a messaging model that reuses network threads to 
perform such operations.


> Design thread per partition model
> -
>
> Key: IGNITE-9270
> URL: https://issues.apache.org/jira/browse/IGNITE-9270
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Major
> Fix For: 2.7
>
>
> A new model of executing cache partition operations (READ, CREATE, UPDATE, 
> DELETE) should satisfy the following conditions:
> 1) All modify operations (CREATE, UPDATE, DELETE) on a given partition must be 
> performed by the same thread. 
> 2) Read operations can be executed by any thread.
> 3) The ordering of modify operations on primary and backup nodes should be the same.
> We should investigate performance if we choose a dedicated executor service for 
> such operations, or we can use a messaging model that reuses network threads to 
> perform such operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9270) Design thread per partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-9270:
---

 Summary: Design thread per partition model
 Key: IGNITE-9270
 URL: https://issues.apache.org/jira/browse/IGNITE-9270
 Project: Ignite
  Issue Type: Sub-task
  Components: cache
Reporter: Pavel Kovalenko
Assignee: Pavel Kovalenko
 Fix For: 2.7


A new model of executing cache partition operations (READ, CREATE, UPDATE, 
DELETE) should satisfy the following conditions:
1) All modify operations (CREATE, UPDATE, DELETE) on a given partition must be 
performed by the same thread. 
2) Read operations can be executed by any thread.

We should investigate performance if we choose a dedicated executor service for 
such operations, or we can use a messaging model that reuses network threads to 
perform such operations.
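The conditions above can be sketched as a striped executor (illustrative names only, not Ignite's actual API): each partition maps to exactly one single-threaded stripe, so all modify operations for a partition execute on the same thread, in submission order.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only (not Ignite's actual API): a striped executor that
// routes every modify operation on a partition to the same worker thread.
// Per-partition ordering is preserved on primary and backup nodes as long as
// operations are submitted in the same order.
public class PartitionStripedExecutor {
    private final ExecutorService[] stripes;

    public PartitionStripedExecutor(int threads) {
        stripes = new ExecutorService[threads];
        for (int i = 0; i < threads; i++)
            stripes[i] = Executors.newSingleThreadExecutor();
    }

    /** Modify operations (CREATE/UPDATE/DELETE): always the same thread per partition. */
    public <T> Future<T> submitModify(int partition, Callable<T> op) {
        return stripes[partition % stripes.length].submit(op);
    }

    public void shutdown() {
        for (ExecutorService s : stripes)
            s.shutdown();
    }
}
```

Read operations simply bypass the stripes and run on the caller thread, matching condition 2.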



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-4682) Need to finish transition to thread-per-partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-4682:

Description: 
1) Investigate performance on switching to tpp model and choose best solution - 
https://issues.apache.org/jira/browse/IGNITE-9270
2) Implement transactions commit using tpp model
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) Rebalance should work through tpp
5) Get rid of deleted entries buffer from DhtLocalPartition

  was:
1) Investigate performance on switching to tpp model and choose best solution
2) Implement transactions commit using tpp model
3) Implement putAll, removeAll on atomic caches using tpp model - 
https://issues.apache.org/jira/browse/IGNITE-4680
4) Rebalance should work through tpp
5) Get rid of deleted entries buffer from DhtLocalPartition


> Need to finish transition to thread-per-partition model
> ---
>
> Key: IGNITE-4682
> URL: https://issues.apache.org/jira/browse/IGNITE-4682
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Pavel Kovalenko
>Priority: Major
>
> 1) Investigate performance on switching to tpp model and choose best solution 
> - https://issues.apache.org/jira/browse/IGNITE-9270
> 2) Implement transactions commit using tpp model
> 3) Implement putAll, removeAll on atomic caches using tpp model - 
> https://issues.apache.org/jira/browse/IGNITE-4680
> 4) Rebalance should work through tpp
> 5) Get rid of deleted entries buffer from DhtLocalPartition



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9271) Implement transaction commit using thread per partition model

2018-08-14 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-9271:
---

 Summary: Implement transaction commit using thread per partition 
model
 Key: IGNITE-9271
 URL: https://issues.apache.org/jira/browse/IGNITE-9271
 Project: Ignite
  Issue Type: Sub-task
  Components: cache
Reporter: Pavel Kovalenko
Assignee: Pavel Kovalenko
 Fix For: 2.7


Currently, we perform the commit of a transaction from a sys thread and do 
write operations on multiple partitions.
We should delegate such operations to the appropriate partition threads and 
wait for the results.
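That delegation might look like the following sketch (hypothetical names, not the actual Ignite code): the committing thread hands each partition's writes to that partition's dedicated thread and blocks until all of them complete.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Hypothetical sketch: instead of the sys thread writing to multiple
// partitions itself, each partition's writes are submitted to that
// partition's single thread, and the commit waits for all results.
public class TppCommit {
    public static void commit(Map<Integer, Runnable> writesByPartition,
                              Map<Integer, ExecutorService> partitionThreads)
        throws ExecutionException, InterruptedException {
        List<Future<?>> results = new ArrayList<>();

        // Fan out: every partition's write runs on its own dedicated thread.
        for (Map.Entry<Integer, Runnable> e : writesByPartition.entrySet())
            results.add(partitionThreads.get(e.getKey()).submit(e.getValue()));

        // Fan in: the committing thread waits for every partition to finish.
        for (Future<?> f : results)
            f.get();
    }
}
```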



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8220) Discovery worker termination in PDS test

2018-08-14 Thread Eduard Shangareev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580082#comment-16580082
 ] 

Eduard Shangareev commented on IGNITE-8220:
---

https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_RunAll_IgniteTests24Java8=pull%2F4386%2Fhead=buildTypeStatusDiv

> Discovery worker termination in PDS test
> 
>
> Key: IGNITE-8220
> URL: https://issues.apache.org/jira/browse/IGNITE-8220
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Assignee: Eduard Shangareev
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.7
>
>
> 3 suites failed 
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgnitePds1_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_PdsDirectIo1_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ActivateDeactivateCluster_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> Example of tests failed:
> - IgniteClusterActivateDeactivateTestWithPersistence.testActivateFailover3
> - IgniteClusterActivateDeactivateTestWithPersistence.testDeactivateFailover3  
> {noformat}
> [2018-04-11 
> 02:43:09,769][ERROR][tcp-disco-srvr-#2298%cache.IgniteClusterActivateDeactivateTestWithPersistence0%][IgniteTestResources]
>  Critical failure. Will be handled accordingly to configured handler 
> [hnd=class o.a.i.failure.NoOpFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2298%cache.IgniteClusterActivateDeactivateTestWithPersistence0%
>  is terminated unexpectedly.]] 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9272) PureJavaCrc32 vs j.u.zip.CRC32 benchmark and probably replace.

2018-08-14 Thread Stanilovsky Evgeny (JIRA)
Stanilovsky Evgeny created IGNITE-9272:
--

 Summary: PureJavaCrc32 vs j.u.zip.CRC32 benchmark and probably 
replace.
 Key: IGNITE-9272
 URL: https://issues.apache.org/jira/browse/IGNITE-9272
 Project: Ignite
  Issue Type: Improvement
  Components: general
Affects Versions: 2.6
Reporter: Stanilovsky Evgeny
 Attachments: BenchmarkCRC.java

I see that Ignite has its own CRC32 implementation, PureJavaCrc32, whose 
description claims: _The current version is ~10x to 1.8x as fast as Sun's 
native java.util.zip.CRC32 in Java 1.6_ But my JMH tests show the opposite 
result.
If it is really so, backward compatibility looks easy: all that is needed is to 
take the lower part of the long returned by the java.util.zip.CRC32 
implementation.
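The compatibility idea can be sketched as follows (a minimal illustration, not the attached benchmark): java.util.zip.CRC32 exposes the checksum as a long, and its lower 32 bits are exactly the value an int-based CRC32 implementation would produce.

```java
import java.util.zip.CRC32;

// Minimal sketch of the compatibility idea: java.util.zip.CRC32 returns the
// checksum in the lower 32 bits of a long, so casting to int yields the same
// bit pattern an int-based CRC32 implementation would produce.
public class CrcCompat {
    public static int crc32(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return (int)crc.getValue(); // keep only the lower 32 bits
    }
}
```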



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-9131) Upgrade guava version in Apache Ignite

2018-08-14 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov resolved IGNITE-9131.

Resolution: Fixed

Merged to master, 0a9d7f07f0b3e0cf13c9f096cf986ef03c206172 

> Upgrade guava version in Apache Ignite
> --
>
> Key: IGNITE-9131
> URL: https://issues.apache.org/jira/browse/IGNITE-9131
> Project: Ignite
>  Issue Type: Task
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Pavlov
>Priority: Major
> Fix For: 2.7
>
>
> In most cases Guava is used only for testing, but some modules use it in 
> production code.
> The current version is 18 
> (https://mvnrepository.com/artifact/com.google.guava/guava/18.0), which was 
> released in 2014.
> It is suggested to upgrade to a fresh version of the library, e.g. to 
> https://mvnrepository.com/artifact/com.google.guava/guava/25.1-jre 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9264) Lost partitions raised twice if node left during previous exchange

2018-08-14 Thread Pavel Vinokurov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580272#comment-16580272
 ] 

Pavel Vinokurov commented on IGNITE-9264:
-

[~agoncharuk] Please review

> Lost partitions raised twice if node left during previous exchange
> --
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> There is a possible situation where GridDhtPartitionTopologyImpl#update 
> receives a full map containing a node that left on a previous exchange that 
> fired lost events. This leads to raising the events twice.
> IgniteCachePartitionLossPolicySelfTest was changed to check that events are 
> raised for all lost partitions. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-9272) PureJavaCrc32 vs j.u.zip.CRC32 benchmark and probably replace.

2018-08-14 Thread Stanilovsky Evgeny (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny reassigned IGNITE-9272:
--

Assignee: Stanilovsky Evgeny

> PureJavaCrc32 vs j.u.zip.CRC32 benchmark and probably replace.
> --
>
> Key: IGNITE-9272
> URL: https://issues.apache.org/jira/browse/IGNITE-9272
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.6
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Attachments: BenchmarkCRC.java
>
>
> I see that Ignite has its own CRC32 implementation, PureJavaCrc32, whose 
> description claims: _The current version is ~10x to 1.8x as fast as Sun's 
> native java.util.zip.CRC32 in Java 1.6_ But my JMH tests show the opposite 
> result.
> If it is really so, backward compatibility looks easy: all that is needed is 
> to take the lower part of the long returned by the java.util.zip.CRC32 
> implementation.
> jmh results:
> Benchmark                   Mode  Cnt        Score        Error  Units
> BenchmarkCRC.Crc32          avgt    5  1521060.716 ±  44083.424  ns/op
> BenchmarkCRC.pureJavaCrc32  avgt    5  4657756.671 ± 177243.254  ns/op
> JMH version: 1.21
> VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
> VM invoker: /usr/lib/jvm/java-8-oracle/jre/bin/java
> op system : ubuntu 16.10



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9262) Web console: missed generation of query entities for imported domain models

2018-08-14 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-9262:
-

 Summary: Web console: missed generation of query entities for 
imported domain models
 Key: IGNITE-9262
 URL: https://issues.apache.org/jira/browse/IGNITE-9262
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Vasiliy Sisko
Assignee: Vasiliy Sisko


# Open configuration overview.
 # Import cluster from database.
 # Download generated project.

Downloaded project does not contain the generated QueryEntities that are 
visible in the project preview.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8919) Wrong documentation of exec methods in StartNodeCallableImpl class

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579478#comment-16579478
 ] 

ASF GitHub Bot commented on IGNITE-8919:


GitHub user 1vanan opened a pull request:

https://github.com/apache/ignite/pull/4535

IGNITE-8919 Wrong documentation of exec methods in StartNodeCallableImpl 
class



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/1vanan/ignite IGNITE-8919

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4535.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4535


commit 9dec47224b51e9cba53863a62c2b60c5acfbfb7b
Author: Fedotov 
Date:   2018-08-14T08:57:34Z

change description of exec method




> Wrong documentation of exec methods in StartNodeCallableImpl class
> --
>
> Key: IGNITE-8919
> URL: https://issues.apache.org/jira/browse/IGNITE-8919
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>
> It seems that in StartNodeCallableImpl class methods 
> {code:java}
> private String exec()
> {code}
>  have wrong documentation [1].
> It's necessary to change the documentation to be more appropriate.
> [1][https://github.com/apache/ignite/blob/master/modules/ssh/src/main/java/org/apache/ignite/internal/util/nodestart/StartNodeCallableImpl.java#L393]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9260) StandaloneWalRecordsIterator broken on WalSegmentTailReachedException not in work dir

2018-08-14 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579499#comment-16579499
 ] 

Alexey Goncharuk commented on IGNITE-9260:
--

Looks good to me, given that TC passes.

> StandaloneWalRecordsIterator broken on WalSegmentTailReachedException not in 
> work dir
> -
>
> Key: IGNITE-9260
> URL: https://issues.apache.org/jira/browse/IGNITE-9260
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.7
>
>
> After the implementation of IGNITE-9050, StandaloneWalRecordsIterator became 
> broken, because in standalone mode we can stop the iteration at any moment 
> once the last available segment is fully read. The validation implemented in 
> IGNITE-9050 is not applicable to standalone mode. We need to change the 
> behavior and validate that we stop the iteration in the last available WAL 
> segment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-602) [Test] GridToStringBuilder is vulnerable for StackOverflowError caused by infinite recursion

2018-08-14 Thread Ryabov Dmitrii (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579500#comment-16579500
 ] 

Ryabov Dmitrii commented on IGNITE-602:
---

[~agoncharuk], I made the 
[fix|https://issues.apache.org/jira/browse/IGNITE-9209], can you take a look?

> [Test] GridToStringBuilder is vulnerable for StackOverflowError caused by 
> infinite recursion
> 
>
> Key: IGNITE-602
> URL: https://issues.apache.org/jira/browse/IGNITE-602
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Artem Shutak
>Assignee: Ryabov Dmitrii
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.7
>
>
> See test 
> org.gridgain.grid.util.tostring.GridToStringBuilderSelfTest#_testToStringCheckAdvancedRecursionPrevention
>  and related TODO in same source file.
> Also take a look at 
> http://stackoverflow.com/questions/11300203/most-efficient-way-to-prevent-an-infinite-recursion-in-tostring
> Test should be unmuted on TC after fix.
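A common way to prevent such infinite recursion (a generic sketch, not GridToStringBuilder's actual implementation) is a thread-local identity set of the objects currently being stringified; re-entering an object yields a placeholder instead of recursing:

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;
import java.util.function.Function;

// Generic sketch of toString recursion prevention (not Ignite's actual code):
// a thread-local identity set tracks objects currently being printed; when an
// object is visited again up the same call stack, a placeholder is returned
// instead of recursing, so cyclic references cannot cause StackOverflowError.
public class SafeToString {
    private static final ThreadLocal<Set<Object>> IN_PROGRESS =
        ThreadLocal.withInitial(() -> Collections.newSetFromMap(new IdentityHashMap<>()));

    public static String print(Object obj, Function<Object, String> printer) {
        Set<Object> seen = IN_PROGRESS.get();

        if (!seen.add(obj))
            return "<recursion>"; // this object is already being printed up the stack

        try {
            return printer.apply(obj);
        }
        finally {
            seen.remove(obj);
        }
    }
}
```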



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-6055) SQL: Add String length constraint

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579609#comment-16579609
 ] 

Vladimir Ozerov edited comment on IGNITE-6055 at 8/14/18 10:22 AM:
---

[~NIzhikov], my comments:

1) {{IgniteQueryErrorCode}} - two new codes were added, but they are not 
converted to SqlState in the {{codeToSqlState}} method. In addition, there 
should be new JDBC tests confirming that the proper SQL state is returned.


 2) {{QueryUtils.buildBinaryProperty}} - unused parameter {{cacheName}}


 3) What is the purpose of the changes in the REST classes 
({{GridClientHandshakeResponse}}, {{GridTcpRestParser}})? I see that the passed 
version is not used. Also note that the REST client is not a thin client, and 
thus should not use thin client versioning.


 4) Something is wrong with {{QueryUtils.processBinaryMeta}} - why is validation 
of not-null fields handled inside {{QueryBinaryProperty.addProperty}}, while 
validation of precision is handled in 
{{QueryBinaryProperty.addValidateProperty}}? This way you may end up in a 
situation where the same property is added to the "validate" collection twice, 
if it is not-null and has precision. Then, if you use the {{DROP COLUMN}} 
command, only the first property will be removed from the "validate" 
collection, and *all* subsequent {{INSERT}} commands will fail. Please remove 
the {{addValidateProperty}} method and make sure that validation properties are 
populated in a single place. Also please add a test as follows: create a 
not-null column with precision -> test both constraints -> drop the column -> 
insert some row. The insert should pass.

5) {{ClientRequestHandler}} should not write server version back, as it doesn't 
make sense for a client. Our protocol works as follows: client proposes 
communication version to the server. If server accepted this version, then 
{{true}} is returned. Otherwise server proposes another version, and client 
re-tries handshake with alternative version if possible. In any case, current 
server version should never be used in any decision on the client side. 

6) Please find a way to avoid passing the version to {{IBinaryRawWriteAware}}. 
This is a general-purpose interface and we should not add irrelevant data to 
it. I would rather add an internal transient field to {{CacheConfiguration}} or 
so. Alternatively, you may add an extended version of {{IBinaryRawWriter}} 
(e.g. {{IBinaryRawWriterEx}}), which will expose the required version.

7) {{ClientSocket.cs}} - same as p.5. Server version should not be used. You 
should use version both server and client agreed upon.

 

Once these problems are addressed and tested, we should ask other community 
members to fix the ODBC, CPP thin client, Node.JS thin client, and Python thin 
client.


was (Author: vozerov):
[~NIzhikov], my comments:


 1) {{IgniteQueryErrorCode}} - two new codes were added, but they are not 
converted to SqlState in the {{codeToSqlState}} method. In addition, there 
should be new JDBC tests confirming that the proper SQL state is returned.
 2) {{QueryUtils.buildBinaryProperty}} - unused parameter {{cacheName}}
 3) What is the purpose of the changes in the REST classes 
({{GridClientHandshakeResponse}}, {{GridTcpRestParser}})? I see that the passed 
version is not used. Also note that the REST client is not a thin client, and 
thus should not use thin client versioning.
 4) Something is wrong with {{QueryUtils.processBinaryMeta}} - why is validation 
of not-null fields handled inside {{QueryBinaryProperty.addProperty}}, while 
validation of precision is handled in 
{{QueryBinaryProperty.addValidateProperty}}? This way you may end up in a 
situation where the same property is added to the "validate" collection twice, 
if it is not-null and has precision. Then, if you use the {{DROP COLUMN}} 
command, only the first property will be removed from the "validate" 
collection, and *all* subsequent {{INSERT}} commands will fail. Please remove 
the {{addValidateProperty}} method and make sure that validation properties are 
populated in a single place. Also please add a test as follows: create a 
not-null column with precision -> test both constraints -> drop the column -> 
insert some row. The insert should pass.
 5) {{ClientRequestHandler}} should not write server version back, as it 
doesn't make sense for a client. Our protocol works as follows: client proposes 
communication version to the server. If server accepted this version, then 
{{true}} is returned. Otherwise server proposes another version, and client 
re-tries handshake with alternative version if possible. In any case, current 
server version should never be used in any decision on the client side. 
 6) Please find a way to avoid passing the version to {{IBinaryRawWriteAware}}. 
This is a general-purpose interface and we should not add irrelevant data to 
it. I would rather add an internal transient field to {{CacheConfiguration}} or 
so. Alternatively, you may add an extended version of {{IBinaryRawWriter}} (e.g. 

[jira] [Commented] (IGNITE-8920) Node should be failed when during tx finish indices are corrupted.

2018-08-14 Thread Pavel Kovalenko (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579610#comment-16579610
 ] 

Pavel Kovalenko commented on IGNITE-8920:
-

[~agoncharuk] [~dpavlov] I've merged the latest master and re-run TC. Could you 
please look again?

> Node should be failed when during tx finish indices are corrupted.
> --
>
> Key: IGNITE-8920
> URL: https://issues.apache.org/jira/browse/IGNITE-8920
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.5
>Reporter: Ivan Daschinskiy
>Assignee: Pavel Kovalenko
>Priority: Major
> Fix For: 2.7
>
>
> While a transaction is processed after receiving a finish request 
> (IgniteTxHandler.finish), the node should be failed by the FailureHandler if 
> the page content of indices is corrupted. Currently this case is not handled 
> properly and causes long-running transactions over the grid. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8919) Wrong documentation of exec methods in StartNodeCallableImpl class

2018-08-14 Thread Ivan Fedotov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Fedotov updated IGNITE-8919:
-
Description: 
It seems that in StartNodeCallableImpl class methods
{code:java}
private String exec()
{code}
has wrong documentation [1].

It's necessary to change documentation to more appropriate.

[1][https://github.com/apache/ignite/blob/master/modules/ssh/src/main/java/org/apache/ignite/internal/util/nodestart/StartNodeCallableImpl.java#L393]

 

  was:
It seems that in StartNodeCallableImpl class methods 
{code:java}
private String exec()
{code}
 have wrong documentation [1].

It's necessary to change documentation to more appropriate.

[1][https://github.com/apache/ignite/blob/master/modules/ssh/src/main/java/org/apache/ignite/internal/util/nodestart/StartNodeCallableImpl.java#L393]

 


> Wrong documentation of exec methods in StartNodeCallableImpl class
> --
>
> Key: IGNITE-8919
> URL: https://issues.apache.org/jira/browse/IGNITE-8919
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>
> It seems that in StartNodeCallableImpl class methods
> {code:java}
> private String exec()
> {code}
> has wrong documentation [1].
> It's necessary to change documentation to more appropriate.
> [1][https://github.com/apache/ignite/blob/master/modules/ssh/src/main/java/org/apache/ignite/internal/util/nodestart/StartNodeCallableImpl.java#L393]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8915) NPE during executing local SqlQuery from client node

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579429#comment-16579429
 ] 

Vladimir Ozerov commented on IGNITE-8915:
-

[~NIzhikov], my comment about JDBC/ODBC was about thin drivers. Please see how 
it is executed here: 
{{org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler#executeQuery}}.
 That is, it bypasses cache objects, and goes directly to query processor.

> NPE during executing local SqlQuery from client node
> 
>
> Key: IGNITE-8915
> URL: https://issues.apache.org/jira/browse/IGNITE-8915
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Vyacheslav Daradur
>Assignee: Nikolay Izhikov
>Priority: Major
> Fix For: 2.7
>
> Attachments: IgniteCacheReplicatedClientLocalQuerySelfTest.java
>
>
> NPE when trying to execute {{SqlQuery}} with {{setLocal(true)}} from client 
> node.
> [Reproducer|^IgniteCacheReplicatedClientLocalQuerySelfTest.java].
> UPD:
> Right behavior:
> A local query should be forbidden and a sensible exception should be thrown 
> if it is executed on a client node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8915) NPE during executing local SqlQuery from client node

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579431#comment-16579431
 ] 

Vladimir Ozerov commented on IGNITE-8915:
-

Hi [~zstan], I am not sure I understand your question. What exactly concerns 
you?

> NPE during executing local SqlQuery from client node
> 
>
> Key: IGNITE-8915
> URL: https://issues.apache.org/jira/browse/IGNITE-8915
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Vyacheslav Daradur
>Assignee: Nikolay Izhikov
>Priority: Major
> Fix For: 2.7
>
> Attachments: IgniteCacheReplicatedClientLocalQuerySelfTest.java
>
>
> NPE when trying to execute {{SqlQuery}} with {{setLocal(true)}} from client 
> node.
> [Reproducer|^IgniteCacheReplicatedClientLocalQuerySelfTest.java].
> UPD:
> Right behavior:
> A local query should be forbidden and a sensible exception should be thrown 
> if it is executed on a client node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Dmitry Sherstobitov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Sherstobitov updated IGNITE-7165:

Attachment: (was: node-NO_REBALANCE-7165.log)

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Maxim Muzafarov
>Priority: Critical
>  Labels: rebalance
> Fix For: 2.7
>
>
> Re-balancing is cancelled if a client node joins. Re-balancing can take hours, 
> and each time a client node joins it starts again:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, partitionsCount=11, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=7c286481-7638-49e4-8c68-fa6aa65d8b76, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> so in clusters with a large amount of data and frequent client leave/join 
> events, this means that a new server will never receive its partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8915) NPE during executing local SqlQuery from client node

2018-08-14 Thread Nikolay Izhikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579580#comment-16579580
 ] 

Nikolay Izhikov commented on IGNITE-8915:
-

[~vozerov]

> goes directly to query processor.

Got it.
I moved checks to {{GridQueryProcessor#validateSqlFieldsQuery}}, so they will 
be executed for your case.

> What exactly concerns you?

For now, I propose disallowing execution of *local* SQL queries for all types 
of caches (including the LOCAL cache) to preserve API consistency.
Is it correct?
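For illustration, the proposed restriction could look roughly like the sketch below. The class and method names are hypothetical, not actual Ignite internals (the real check would live in {{GridQueryProcessor#validateSqlFieldsQuery}}); this only models the control flow.

```java
// Hypothetical sketch: reject local SQL queries executed from a client node.
// Names are illustrative, not the actual Ignite internals.
public class LocalQueryValidationSketch {
    /** Models the proposed check: local queries are disallowed on client nodes. */
    static void validateLocalQuery(boolean locQry, boolean clientNode) {
        if (locQry && clientNode)
            throw new IllegalStateException(
                "Local SQL queries are not supported on client nodes.");
    }

    public static void main(String[] args) {
        validateLocalQuery(false, true); // distributed query from a client: allowed

        try {
            validateLocalQuery(true, true); // local query from a client: rejected
        }
        catch (IllegalStateException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```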

> NPE during executing local SqlQuery from client node
> 
>
> Key: IGNITE-8915
> URL: https://issues.apache.org/jira/browse/IGNITE-8915
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Vyacheslav Daradur
>Assignee: Nikolay Izhikov
>Priority: Major
> Fix For: 2.7
>
> Attachments: IgniteCacheReplicatedClientLocalQuerySelfTest.java
>
>
> NPE when trying to execute {{SqlQuery}} with {{setLocal(true)}} from client 
> node.
> [Reproducer|^IgniteCacheReplicatedClientLocalQuerySelfTest.java].
> UPD:
> Correct behavior:
> A local query should be forbidden and a sensible exception should be thrown if 
> it is executed on a client node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Dmitry Sherstobitov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579579#comment-16579579
 ] 

Dmitry Sherstobitov commented on IGNITE-7165:
-

I'm afraid I cannot give you a correct reproducer in Java.

Attached log from node with cleared LFS [^node-NO_REBALANCE-7165.log]

There are some messages with "Skipping rebalancing (no affinity changes)" after 
a node joins the cluster, while in the previous version the following text 
appeared in the log:

{code:java}
[12:53:44,127][INFO][disco-event-worker-#61][GridDiscoveryManager] Topology 
snapshot [ver=18, servers=4, clients=0, CPUs=32, offheap=75.0GB, heap=120.0GB]
[12:53:44,127][INFO][disco-event-worker-#61][GridDiscoveryManager]   ^-- Node 
[id=61E12BC1-31A0-473A-BF79-DDD51C879722, clusterState=ACTIVE]
[12:53:44,127][INFO][disco-event-worker-#61][GridDiscoveryManager]   ^-- 
Baseline [id=0, size=4, online=4, offline=0]
[12:53:44,127][INFO][disco-event-worker-#61][GridDiscoveryManager] Data Regions 
Configured:
[12:53:44,128][INFO][disco-event-worker-#61][GridDiscoveryManager]   ^-- 
default [initSize=256.0 MiB, maxSize=18.8 GiB, persistenceEnabled=true]
[12:53:44,128][INFO][exchange-worker-#62][time] Started exchange init 
[topVer=AffinityTopologyVersion [topVer=18, minorTopVer=0], crd=false, 
evt=NODE_FAILED, evtNode=02e72065-13c8-4b47-a905-874d723cc3c1, customEvt=null, 
allowMerge=true]
[12:53:44,129][INFO][exchange-worker-#62][GridDhtPartitionsExchangeFuture] 
Finish exchange future [startVer=AffinityTopologyVersion [topVer=18, 
minorTopVer=0], resVer=AffinityTopologyVersion [topVer=18, minorTopVer=0], 
err=null]
[12:53:44,130][INFO][exchange-worker-#62][time] Finished exchange init 
[topVer=AffinityTopologyVersion [topVer=18, minorTopVer=0], crd=false]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_1_028], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_3_088], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_1_015], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_4_118], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_2_058], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext [grp=cache_group_6], 
topVer=AffinityTopologyVersion [topVer=17, minorTopVer=0], rebalanceId=6]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext [grp=cache_group_5], 
topVer=AffinityTopologyVersion [topVer=17, minorTopVer=0], rebalanceId=6]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext [grp=cache_group_4], 

[jira] [Issue Comment Deleted] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Dmitry Sherstobitov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Sherstobitov updated IGNITE-7165:

Comment: was deleted

(was: I'm afraid I cannot give you a correct reproducer in Java.

Attached log from node with cleared LFS [^node-NO_REBALANCE-7165.log]

There are some messages with "Skipping rebalancing (no affinity changes)" after 
a node joins the cluster, while in the previous version the following text 
appeared in the log:

{code:java}
[12:53:44,127][INFO][disco-event-worker-#61][GridDiscoveryManager] Topology 
snapshot [ver=18, servers=4, clients=0, CPUs=32, offheap=75.0GB, heap=120.0GB]
[12:53:44,127][INFO][disco-event-worker-#61][GridDiscoveryManager]   ^-- Node 
[id=61E12BC1-31A0-473A-BF79-DDD51C879722, clusterState=ACTIVE]
[12:53:44,127][INFO][disco-event-worker-#61][GridDiscoveryManager]   ^-- 
Baseline [id=0, size=4, online=4, offline=0]
[12:53:44,127][INFO][disco-event-worker-#61][GridDiscoveryManager] Data Regions 
Configured:
[12:53:44,128][INFO][disco-event-worker-#61][GridDiscoveryManager]   ^-- 
default [initSize=256.0 MiB, maxSize=18.8 GiB, persistenceEnabled=true]
[12:53:44,128][INFO][exchange-worker-#62][time] Started exchange init 
[topVer=AffinityTopologyVersion [topVer=18, minorTopVer=0], crd=false, 
evt=NODE_FAILED, evtNode=02e72065-13c8-4b47-a905-874d723cc3c1, customEvt=null, 
allowMerge=true]
[12:53:44,129][INFO][exchange-worker-#62][GridDhtPartitionsExchangeFuture] 
Finish exchange future [startVer=AffinityTopologyVersion [topVer=18, 
minorTopVer=0], resVer=AffinityTopologyVersion [topVer=18, minorTopVer=0], 
err=null]
[12:53:44,130][INFO][exchange-worker-#62][time] Finished exchange init 
[topVer=AffinityTopologyVersion [topVer=18, minorTopVer=0], crd=false]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_1_028], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_3_088], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_1_015], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_4_118], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext 
[grp=cache_group_2_058], topVer=AffinityTopologyVersion [topVer=17, 
minorTopVer=0], rebalanceId=6]
[12:53:44,141][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext [grp=cache_group_6], 
topVer=AffinityTopologyVersion [topVer=17, minorTopVer=0], rebalanceId=6]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext [grp=cache_group_5], 
topVer=AffinityTopologyVersion [topVer=17, minorTopVer=0], rebalanceId=6]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Cancelled 
rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=17, 
minorTopVer=0]]
[12:53:44,142][INFO][exchange-worker-#62][GridDhtPartitionDemander] Completed 
rebalance future: RebalanceFuture [grp=CacheGroupContext [grp=cache_group_4], 

[jira] [Assigned] (IGNITE-8100) jdbc getSchemas method could miss schemas for not started remote caches

2018-08-14 Thread Ilya Kasnacheev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-8100:
---

Assignee: (was: Ilya Kasnacheev)

> jdbc getSchemas method could miss schemas for not started remote caches
> ---
>
> Key: IGNITE-8100
> URL: https://issues.apache.org/jira/browse/IGNITE-8100
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Kuznetsov
>Priority: Major
>
> On jdbc side we have 
> org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata#getSchemas(java.lang.String,
>  java.lang.String)
> on the server side, the result is constructed like this:
> {noformat}
> for (String cacheName : ctx.cache().publicCacheNames()) {
> for (GridQueryTypeDescriptor table : ctx.query().types(cacheName)) {
> if (matches(table.schemaName(), schemaPtrn))
>schemas.add(table.schemaName());
> }
> }
> {noformat}
> see 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler#getSchemas
> If we haven't started a cache (with a table) on some remote node, we will 
> miss that schema.
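To make the failure mode concrete, here is a toy model of that loop using plain collections (all cache and schema names are made up; this is not Ignite code): only schemas of caches started on the local node are collected, so a schema defined only on a remote node is silently missed.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class MissedSchemaSketch {
    public static void main(String[] args) {
        // Schemas keyed by cache name; only caches started on the local node
        // are visible to the server-side loop.
        Map<String, String> localCaches = new HashMap<>();
        localCaches.put("cityCache", "PUBLIC");

        // A cache that exists only on a remote node is simply not iterated,
        // so its schema never makes it into the result.
        String remoteOnlySchema = "GEO";

        Set<String> schemas = new TreeSet<>();
        for (String cacheName : localCaches.keySet())
            schemas.add(localCaches.get(cacheName));

        System.out.println(schemas);                            // [PUBLIC]
        System.out.println(schemas.contains(remoteOnlySchema)); // false
    }
}
```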



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8060) Sqline creating tables on client nodes works incorrect in case of node's shutdown

2018-08-14 Thread Ilya Kasnacheev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-8060:
---

Assignee: Ilya Kasnacheev

> Sqline creating tables on client nodes works incorrect in case of node's 
> shutdown
> -
>
> Key: IGNITE-8060
> URL: https://issues.apache.org/jira/browse/IGNITE-8060
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc
>Affects Versions: 2.4
>Reporter: Andrey Aleksandrov
>Assignee: Ilya Kasnacheev
>Priority: Major
> Attachments: ignite-76cc6387.log, ignite-a1c90af9.log
>
>
> To reproduce (master branch):
> Start one local server node and one local client node, then follow these 
> instructions:
> 1)Connect to client node:
> sqlline.bat --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1:10801
> 2)Create new table on client node:
> CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR)WITH 
> "template=replicated";
> 3)Check that table exists from server node:
> !tables
> On this step table should be shown in the response.
> 4)Drop the client node
> 5)Create new client node
> 6)Connect to new client node:
> sqlline.bat --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1:10801
> 7)Check that table exists from server node:
> !tables
> *On this step there is no "city" table in the list.*
> 8)Try to drop the table:
>  DROP TABLE City;
>  java.sql.SQLException: Table doesn't exist: CITY
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
>  at sqlline.Commands.execute(Commands.java:823)
>  at sqlline.Commands.sql(Commands.java:733)
>  at sqlline.SqlLine.dispatch(SqlLine.java:795)
>  at sqlline.SqlLine.begin(SqlLine.java:668)
>  at sqlline.SqlLine.start(SqlLine.java:373)
>  at sqlline.SqlLine.main(SqlLine.java:265)
> 9)Try to create new table:
>  CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR)WITH 
> "template=replicated";
> java.sql.SQLException: Table already exists: CITY
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
>  at sqlline.Commands.execute(Commands.java:823)
>  at sqlline.Commands.sql(Commands.java:733)
>  at sqlline.SqlLine.dispatch(SqlLine.java:795)
>  at sqlline.SqlLine.begin(SqlLine.java:668)
>  at sqlline.SqlLine.start(SqlLine.java:373)
>  at sqlline.SqlLine.main(SqlLine.java:265)
> Update:
> Exceptions on CREATE/DROP are thrown only until the first SELECT is done.
>  !tables doesn't work even after SELECT.
>  SELECT works OK.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8060) Sqline creating tables on client nodes works incorrect in case of node's shutdown

2018-08-14 Thread Ilya Kasnacheev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-8060:
---

Assignee: (was: Ilya Kasnacheev)

> Sqline creating tables on client nodes works incorrect in case of node's 
> shutdown
> -
>
> Key: IGNITE-8060
> URL: https://issues.apache.org/jira/browse/IGNITE-8060
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc
>Affects Versions: 2.4
>Reporter: Andrey Aleksandrov
>Priority: Major
> Attachments: ignite-76cc6387.log, ignite-a1c90af9.log
>
>
> To reproduce (master branch):
> Start one local server node and one local client node, then follow these 
> instructions:
> 1)Connect to client node:
> sqlline.bat --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1:10801
> 2)Create new table on client node:
> CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR)WITH 
> "template=replicated";
> 3)Check that table exists from server node:
> !tables
> On this step table should be shown in the response.
> 4)Drop the client node
> 5)Create new client node
> 6)Connect to new client node:
> sqlline.bat --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1:10801
> 7)Check that table exists from server node:
> !tables
> *On this step there is no "city" table in the list.*
> 8)Try to drop the table:
>  DROP TABLE City;
>  java.sql.SQLException: Table doesn't exist: CITY
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
>  at sqlline.Commands.execute(Commands.java:823)
>  at sqlline.Commands.sql(Commands.java:733)
>  at sqlline.SqlLine.dispatch(SqlLine.java:795)
>  at sqlline.SqlLine.begin(SqlLine.java:668)
>  at sqlline.SqlLine.start(SqlLine.java:373)
>  at sqlline.SqlLine.main(SqlLine.java:265)
> 9)Try to create new table:
>  CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR)WITH 
> "template=replicated";
> java.sql.SQLException: Table already exists: CITY
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
>  at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
>  at sqlline.Commands.execute(Commands.java:823)
>  at sqlline.Commands.sql(Commands.java:733)
>  at sqlline.SqlLine.dispatch(SqlLine.java:795)
>  at sqlline.SqlLine.begin(SqlLine.java:668)
>  at sqlline.SqlLine.start(SqlLine.java:373)
>  at sqlline.SqlLine.main(SqlLine.java:265)
> Update:
> Exceptions on CREATE/DROP are thrown only until the first SELECT is done.
>  !tables doesn't work even after SELECT.
>  SELECT works OK.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8100) jdbc getSchemas method could miss schemas for not started remote caches

2018-08-14 Thread Ilya Kasnacheev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-8100:
---

Assignee: Ilya Kasnacheev

> jdbc getSchemas method could miss schemas for not started remote caches
> ---
>
> Key: IGNITE-8100
> URL: https://issues.apache.org/jira/browse/IGNITE-8100
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Kuznetsov
>Assignee: Ilya Kasnacheev
>Priority: Major
>
> On jdbc side we have 
> org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata#getSchemas(java.lang.String,
>  java.lang.String)
> on the server side, the result is constructed like this:
> {noformat}
> for (String cacheName : ctx.cache().publicCacheNames()) {
> for (GridQueryTypeDescriptor table : ctx.query().types(cacheName)) {
> if (matches(table.schemaName(), schemaPtrn))
>schemas.add(table.schemaName());
> }
> }
> {noformat}
> see 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler#getSchemas
> If we haven't started a cache (with a table) on some remote node, we will 
> miss that schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8919) Wrong documentation of exec methods in StartNodeCallableImpl class

2018-08-14 Thread Ryabov Dmitrii (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579533#comment-16579533
 ] 

Ryabov Dmitrii commented on IGNITE-8919:


Javadocs look good now. Moving to PA. [~dpavlov], can you merge it?

> Wrong documentation of exec methods in StartNodeCallableImpl class
> --
>
> Key: IGNITE-8919
> URL: https://issues.apache.org/jira/browse/IGNITE-8919
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>
> It seems that in the StartNodeCallableImpl class the methods 
> {code:java}
> private String exec()
> {code}
>  have wrong documentation [1].
> The documentation should be changed to something more appropriate.
> [1][https://github.com/apache/ignite/blob/master/modules/ssh/src/main/java/org/apache/ignite/internal/util/nodestart/StartNodeCallableImpl.java#L393]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Dmitry Sherstobitov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Sherstobitov updated IGNITE-7165:

Attachment: node-NO_REBALANCE-7165.log

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Maxim Muzafarov
>Priority: Critical
>  Labels: rebalance
> Fix For: 2.7
>
> Attachments: node-NO_REBALANCE-7165.log
>
>
> Re-balancing is canceled if a client node joins. Re-balancing can take hours, 
> and each time a client node joins, it starts over:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, partitionsCount=11, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=7c286481-7638-49e4-8c68-fa6aa65d8b76, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> so in clusters with a large amount of data and frequent client leave/join 
> events, this means that a new server will never receive its partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Dmitry Sherstobitov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Sherstobitov updated IGNITE-7165:

Attachment: node-NO_REBALANCE-7165.log

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Maxim Muzafarov
>Priority: Critical
>  Labels: rebalance
> Fix For: 2.7
>
> Attachments: node-NO_REBALANCE-7165.log
>
>
> Re-balancing is canceled if a client node joins. Re-balancing can take hours, 
> and each time a client node joins, it starts over:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, partitionsCount=11, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=7c286481-7638-49e4-8c68-fa6aa65d8b76, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> so in clusters with a large amount of data and frequent client leave/join 
> events, this means that a new server will never receive its partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7701) SQL system view for node attributes

2018-08-14 Thread Aleksey Plekhanov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579608#comment-16579608
 ] 

Aleksey Plekhanov commented on IGNITE-7701:
---

[~vozerov], thanks for review.
I made the changes according to your comments: {{IgniteParentChildIterator}} was 
replaced with {{F.concat()}}/{{F.iterator()}}, and {{getRows()}} now returns an 
{{Iterator}}. 
TC passed; please have a look again.

> SQL system view for node attributes
> ---
>
> Key: IGNITE-7701
> URL: https://issues.apache.org/jira/browse/IGNITE-7701
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: iep-13, sql
> Fix For: 2.7
>
>
> Implement SQL system view to show attributes for each node in topology.





[jira] [Commented] (IGNITE-6055) SQL: Add String length constraint

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579609#comment-16579609
 ] 

Vladimir Ozerov commented on IGNITE-6055:
-

[~NIzhikov], my comments:


 1) {{IgniteQueryErrorCode}} - two new codes were added, but they are not 
converted to SqlState in the {{codeToSqlState}} method. In addition, there 
should be new JDBC tests confirming that the proper SQL state is returned.
 2) {{QueryUtils.buildBinaryProperty}} - unused parameter {{cacheName}}
 3) What is the purpose of the changes in the REST classes 
({{GridClientHandshakeResponse}}, {{GridTcpRestParser}})? I see that the passed 
version is not used. Also note that the REST client is not a thin client, and 
thus should not use thin client versioning.
 4) Something is wrong with {{QueryUtils.processBinaryMeta}} - why is validation 
of not-null fields handled inside {{QueryBinaryProperty.addProperty}}, while 
validation of precision is handled in 
{{QueryBinaryProperty.addValidateProperty}}? This way you may end up in a 
situation where the same property is added to the "validate" collection twice, 
if it is not-null and has precision. Then, if you use the {{DROP COLUMN}} 
command, only the first property will be removed from the "validate" 
collection, and *all* subsequent {{INSERT}} commands will fail. Please remove 
the {{addValidateProperty}} method and make sure that validation properties are 
populated in a single place. Also please add a test as follows: create a 
not-null column with precision -> test both constraints -> drop the column -> 
insert some row. The insert should pass.
 5) {{ClientRequestHandler}} should not write the server version back, as it 
doesn't make sense for a client. Our protocol works as follows: the client 
proposes a communication version to the server. If the server accepts this 
version, {{true}} is returned. Otherwise the server proposes another version, 
and the client re-tries the handshake with the alternative version if possible. 
In any case, the current server version should never be used in any decision on 
the client side. 
 6) Please find a way to avoid passing the version to {{IBinaryRawWriteAware}}. 
This is a general-purpose interface and we should not add irrelevant data to 
it. I would rather add an internal transient field to {{CacheConfiguration}} or 
so. Alternatively, you may add an extended version of {{IBinaryRawWriter}} 
(e.g. {{IBinaryRawWriterEx}}), which will expose the required version.
 7) {{ClientSocket.cs}} - same as p.5. The server version should not be used. 
You should use the version both server and client agreed upon.

 

Once these problems are addressed and tested, we should ask other community 
members to fix the ODBC, C++ thin client, Node.JS thin client, and Python thin 
client.
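The negotiation flow from p.5 can be sketched as follows (hypothetical class and method names; this is an illustration of the described protocol, not the actual handshake code):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the version negotiation described in p.5:
// the client proposes a version; the server either accepts it or
// counter-proposes, and the client retries with the counter-proposal
// if it supports it. The server's own latest version never drives
// any decision on the client side.
public class HandshakeSketch {
    static final List<Integer> SERVER_SUPPORTED = Arrays.asList(1, 2, 3);
    static final List<Integer> CLIENT_SUPPORTED = Arrays.asList(2, 3, 4, 5);

    /** Server side: accept the proposed version or counter-propose its newest. */
    static int serverRespond(int proposed) {
        return SERVER_SUPPORTED.contains(proposed)
            ? proposed                                           // accepted
            : SERVER_SUPPORTED.get(SERVER_SUPPORTED.size() - 1); // counter-proposal
    }

    /** Client side: returns the agreed version, or -1 if negotiation fails. */
    static int negotiate() {
        int proposed = CLIENT_SUPPORTED.get(CLIENT_SUPPORTED.size() - 1); // start with newest
        int answer = serverRespond(proposed);

        if (answer == proposed)
            return proposed;

        // Retry once with the server's counter-proposal, if the client supports it.
        return CLIENT_SUPPORTED.contains(answer) ? serverRespond(answer) : -1;
    }

    public static void main(String[] args) {
        System.out.println(negotiate()); // 3: the newest version both sides support
    }
}
```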

> SQL: Add String length constraint
> -
>
> Key: IGNITE-6055
> URL: https://issues.apache.org/jira/browse/IGNITE-6055
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: sql-engine
> Fix For: 2.7
>
>
> We should support {{CHAR(X)}} and {{VARCHAR(X)}} syntax. Currently, we ignore 
> it. First, it affects semantics. E.g., one can insert a string with greater 
> length into a cache/table without any problems. Second, it limits the 
> efficiency of our default configuration. E.g., index inlining cannot be 
> applied to the {{String}} data type as we cannot guess its length.





[jira] [Comment Edited] (IGNITE-6055) SQL: Add String length constraint

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579609#comment-16579609
 ] 

Vladimir Ozerov edited comment on IGNITE-6055 at 8/14/18 10:22 AM:
---

[~NIzhikov], my comments:

1) {{IgniteQueryErrorCode}} - two new codes were added, but they are not 
converted to SqlState in the {{codeToSqlState}} method. In addition, there 
should be new JDBC tests confirming that the proper SQL state is returned.

2) {{QueryUtils.buildBinaryProperty}} - unused parameter {{cacheName}}

3) What is the purpose of the changes in the REST classes 
({{GridClientHandshakeResponse}}, {{GridTcpRestParser}})? I see that the passed 
version is not used. Also note that the REST client is not a thin client, and 
thus should not use thin client versioning.

4) Something is wrong with {{QueryUtils.processBinaryMeta}} - why is validation 
of not-null fields handled inside {{QueryBinaryProperty.addProperty}}, while 
validation of precision is handled in 
{{QueryBinaryProperty.addValidateProperty}}? This way you may end up in a 
situation where the same property is added to the "validate" collection twice, 
if it is not-null and has precision. Then, if you use the {{DROP COLUMN}} 
command, only the first property will be removed from the "validate" 
collection, and *all* subsequent {{INSERT}} commands will fail. Please remove 
the {{addValidateProperty}} method and make sure that validation properties are 
populated in a single place. Also please add a test as follows: create a 
not-null column with precision -> test both constraints -> drop the column -> 
insert some row. The insert should pass.

5) {{ClientRequestHandler}} should not write the server version back, as it 
doesn't make sense for a client. Our protocol works as follows: the client 
proposes a communication version to the server. If the server accepts this 
version, {{true}} is returned. Otherwise the server proposes another version, 
and the client re-tries the handshake with the alternative version if possible. 
In any case, the current server version should never be used in any decision on 
the client side.

6) Please find a way to avoid passing the version to {{IBinaryRawWriteAware}}. 
This is a general-purpose interface and we should not add irrelevant data to 
it. I would rather add an internal transient field to {{CacheConfiguration}} or 
so. Alternatively, you may add an extended version of {{IBinaryRawWriter}} 
(e.g. {{IBinaryRawWriterEx}}), which will expose the required version.

7) {{ClientSocket.cs}} - same as p.5. The server version should not be used. 
You should use the version both server and client agreed upon.

Once these problems are addressed and tested, we should ask other community 
members to fix the ODBC, C++ thin client, Node.JS thin client, and Python thin 
client.


[jira] [Commented] (IGNITE-8538) Web Console: Refactor redirecting to default state.

2018-08-14 Thread Ilya Borisov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580616#comment-16580616
 ] 

Ilya Borisov commented on IGNITE-8538:
--

[~kuaw26] the default state handling works as follows: the {{DefaultState}} 
service provides a method {{setRedirectTo}} which accepts a ui-router state 
redirect function. By default, it's configured to lead to 
{{base.configuration.overview}}, but this behavior can always be customized 
dynamically. To navigate to default state, use {{default-state}} ui-router 
state name. Regarding the console logo link, it checks whether the user is 
logged in and leads to either {{landing}} or {{default-state}}.

> Web Console: Refactor redirecting to default state.
> ---
>
> Key: IGNITE-8538
> URL: https://issues.apache.org/jira/browse/IGNITE-8538
> Project: Ignite
>  Issue Type: Improvement
>  Components: wizards
>Reporter: Alexey Kuznetsov
>Assignee: Ilya Borisov
>Priority: Major
> Fix For: 2.7
>
>
> We need to refactor and fix redirection to default state from Queries screen, 
> 40x screens and other similar places.





[jira] [Updated] (IGNITE-9209) GridDistributedTxMapping.toString() returns broken string

2018-08-14 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-9209:
---
Fix Version/s: 2.7

> GridDistributedTxMapping.toString() returns broken string
> -
>
> Key: IGNITE-9209
> URL: https://issues.apache.org/jira/browse/IGNITE-9209
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ryabov Dmitrii
>Assignee: Ryabov Dmitrii
>Priority: Minor
> Fix For: 2.7
>
>
> Something is wrong with `GridDistributedTxMapping` when we try to get a string 
> representation via `GridToStringBuilder`.
> It should look like
> {noformat}
> GridDistributedTxMapping [entries=LinkedHashSet [/*values here*/], 
> explicitLock=false, dhtVer=null, last=false, nearEntries=0,/*more text*/]
> {noformat}
> But currently it looks like
> {noformat}
> KeyCacheObjectImpl [part=1, val=1, hasValBytes=false]KeyCacheObjectImpl 
> [part=1, val=1, hasValBytes=false],// more text
> {noformat}
> Reproducer:
> {code:java}
> public class GridToStringBuilderSelfTest extends GridCommonAbstractTest {
> /**
>  * @throws Exception
>  */
> public void testGridDistributedTxMapping() throws Exception {
> IgniteEx ignite = startGrid(0);
> IgniteCache cache = 
> ignite.createCache(defaultCacheConfiguration());
> try (Transaction tx = ignite.transactions().txStart()) {
> cache.put(1, 1);
> GridDistributedTxMapping mapping = new 
> GridDistributedTxMapping(grid(0).localNode());
> assertTrue("Wrong string: " + mapping, 
> mapping.toString().startsWith("GridDistributedTxMapping ["));
> 
> mapping.add(((TransactionProxyImpl)tx).tx().txState().allEntries().stream().findAny().get());
> assertTrue("Wrong string: " + mapping, 
> mapping.toString().startsWith("GridDistributedTxMapping ["));
> }
> stopAllGrids();
> }
> {code}
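To make the two output shapes quoted above concrete, here is a minimal illustration (hypothetical code, not {{GridToStringBuilder}} itself) of a well-formed builder output versus the broken element-by-element concatenation:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration of the two output shapes quoted above.
public class ToStringShapeSketch {
    /** Expected shape: class name, brackets, and named fields wrap the entries. */
    static String wellFormed(List<String> entries) {
        return "GridDistributedTxMapping [entries=LinkedHashSet " + entries + "]";
    }

    /** Broken shape: element toString()s concatenated with no wrapper at all. */
    static String broken(List<String> entries) {
        StringBuilder sb = new StringBuilder();

        for (String e : entries)
            sb.append(e);

        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> entries = Arrays.asList("KeyCacheObjectImpl [part=1, val=1]");

        System.out.println(wellFormed(entries).startsWith("GridDistributedTxMapping [")); // true
        System.out.println(broken(entries).startsWith("GridDistributedTxMapping ["));     // false
    }
}
```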





[jira] [Updated] (IGNITE-9209) GridDistributedTxMapping.toString() returns broken string

2018-08-14 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-9209:
---
Ignite Flags:   (was: Docs Required)

> GridDistributedTxMapping.toString() returns broken string
> -
>
> Key: IGNITE-9209
> URL: https://issues.apache.org/jira/browse/IGNITE-9209
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ryabov Dmitrii
>Assignee: Ryabov Dmitrii
>Priority: Minor
> Fix For: 2.7
>
>
> Something is wrong with `GridDistributedTxMapping` when we try to get a string 
> representation via `GridToStringBuilder`.
> It should look like
> {noformat}
> GridDistributedTxMapping [entries=LinkedHashSet [/*values here*/], 
> explicitLock=false, dhtVer=null, last=false, nearEntries=0,/*more text*/]
> {noformat}
> But currently it looks like
> {noformat}
> KeyCacheObjectImpl [part=1, val=1, hasValBytes=false]KeyCacheObjectImpl 
> [part=1, val=1, hasValBytes=false],// more text
> {noformat}
> Reproducer:
> {code:java}
> public class GridToStringBuilderSelfTest extends GridCommonAbstractTest {
> /**
>  * @throws Exception
>  */
> public void testGridDistributedTxMapping() throws Exception {
> IgniteEx ignite = startGrid(0);
> IgniteCache cache = 
> ignite.createCache(defaultCacheConfiguration());
> try (Transaction tx = ignite.transactions().txStart()) {
> cache.put(1, 1);
> GridDistributedTxMapping mapping = new 
> GridDistributedTxMapping(grid(0).localNode());
> assertTrue("Wrong string: " + mapping, 
> mapping.toString().startsWith("GridDistributedTxMapping ["));
> 
> mapping.add(((TransactionProxyImpl)tx).tx().txState().allEntries().stream().findAny().get());
> assertTrue("Wrong string: " + mapping, 
> mapping.toString().startsWith("GridDistributedTxMapping ["));
> }
> stopAllGrids();
> }
> {code}





[jira] [Updated] (IGNITE-8919) Wrong documentation of exec methods in StartNodeCallableImpl class

2018-08-14 Thread Ivan Fedotov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Fedotov updated IGNITE-8919:
-
Description: 
It seems that the following methods in the StartNodeCallableImpl class
{code:java}
private String exec()
{code}
have wrong documentation [1].

It is necessary to change the documentation to something more appropriate.

[1][https://github.com/apache/ignite/blob/master/modules/ssh/src/main/java/org/apache/ignite/internal/util/nodestart/StartNodeCallableImpl.java#L393]

 

  was:
It seems that in StartNodeCallableImpl class methods
{code:java}
private String exec()
{code}
has wrong documentation [1].

It's necessary to change documentation to more appropriate.

[1][https://github.com/apache/ignite/blob/master/modules/ssh/src/main/java/org/apache/ignite/internal/util/nodestart/StartNodeCallableImpl.java#L393]

 


> Wrong documentation of exec methods in StartNodeCallableImpl class
> --
>
> Key: IGNITE-8919
> URL: https://issues.apache.org/jira/browse/IGNITE-8919
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>
> It seems that the following methods in the StartNodeCallableImpl class
> {code:java}
> private String exec()
> {code}
> have wrong documentation [1].
> It is necessary to change the documentation to something more appropriate.
> [1][https://github.com/apache/ignite/blob/master/modules/ssh/src/main/java/org/apache/ignite/internal/util/nodestart/StartNodeCallableImpl.java#L393]
>  





[jira] [Commented] (IGNITE-7701) SQL system view for node attributes

2018-08-14 Thread Aleksey Plekhanov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579910#comment-16579910
 ] 

Aleksey Plekhanov commented on IGNITE-7701:
---

[~vozerov] this issue was handled by the try/catch block, but yes, it produced 
an unnecessary warning in the log. I added an explicit null check, please have 
a look.

> SQL system view for node attributes
> ---
>
> Key: IGNITE-7701
> URL: https://issues.apache.org/jira/browse/IGNITE-7701
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: iep-13, sql
> Fix For: 2.7
>
>
> Implement SQL system view to show attributes for each node in topology.





[jira] [Commented] (IGNITE-4150) B-Tree index cannot be used efficiently with IN clause.

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579920#comment-16579920
 ] 

Vladimir Ozerov commented on IGNITE-4150:
-

One more TC before merge to master: 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_RunAllSql

> B-Tree index cannot be used efficiently with IN clause.
> ---
>
> Key: IGNITE-4150
> URL: https://issues.apache.org/jira/browse/IGNITE-4150
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 1.7
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>Priority: Major
>  Labels: performance, sql-stability
> Fix For: 2.7
>
>
> Consider the following query:
> {code}
> SELECT * FROM table
> WHERE a = ? AND b IN (?, ?)
> {code}
> If there is an index {{(a, b)}}, it will not be used properly: only column 
> {{a}} will be used. This will lead to multiple unnecessary comparisons.
> The most obvious way to fix that is to use a temporary table and {{JOIN}}. 
> However, this approach doesn't work well when there are multiple {{IN}}'s. 
> A proper solution would be to hack deeper into H2.





[jira] [Commented] (IGNITE-9209) GridDistributedTxMapping.toString() returns broken string

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579934#comment-16579934
 ] 

ASF GitHub Bot commented on IGNITE-9209:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4519


> GridDistributedTxMapping.toString() returns broken string
> -
>
> Key: IGNITE-9209
> URL: https://issues.apache.org/jira/browse/IGNITE-9209
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ryabov Dmitrii
>Assignee: Ryabov Dmitrii
>Priority: Minor
> Fix For: 2.7
>
>
> Something is wrong with `GridDistributedTxMapping` when we try to get a string 
> representation via `GridToStringBuilder`.
> It should look like
> {noformat}
> GridDistributedTxMapping [entries=LinkedHashSet [/*values here*/], 
> explicitLock=false, dhtVer=null, last=false, nearEntries=0,/*more text*/]
> {noformat}
> But currently it looks like
> {noformat}
> KeyCacheObjectImpl [part=1, val=1, hasValBytes=false]KeyCacheObjectImpl 
> [part=1, val=1, hasValBytes=false],// more text
> {noformat}
> Reproducer:
> {code:java}
> public class GridToStringBuilderSelfTest extends GridCommonAbstractTest {
> /**
>  * @throws Exception
>  */
> public void testGridDistributedTxMapping() throws Exception {
> IgniteEx ignite = startGrid(0);
> IgniteCache cache = 
> ignite.createCache(defaultCacheConfiguration());
> try (Transaction tx = ignite.transactions().txStart()) {
> cache.put(1, 1);
> GridDistributedTxMapping mapping = new 
> GridDistributedTxMapping(grid(0).localNode());
> assertTrue("Wrong string: " + mapping, 
> mapping.toString().startsWith("GridDistributedTxMapping ["));
> 
> mapping.add(((TransactionProxyImpl)tx).tx().txState().allEntries().stream().findAny().get());
> assertTrue("Wrong string: " + mapping, 
> mapping.toString().startsWith("GridDistributedTxMapping ["));
> }
> stopAllGrids();
> }
> {code}





[jira] [Created] (IGNITE-9266) Cache 2 TC configuration timeouts because of hangs on latch

2018-08-14 Thread Eduard Shangareev (JIRA)
Eduard Shangareev created IGNITE-9266:
-

 Summary: Cache 2 TC configuration timeouts because of hangs on 
latch
 Key: IGNITE-9266
 URL: https://issues.apache.org/jira/browse/IGNITE-9266
 Project: Ignite
  Issue Type: Bug
Reporter: Eduard Shangareev


Two threads hung waiting on a latch; because of this, 
GridCacheLocalMultithreadedSelfTest couldn't stop. 

{code}
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.171-b11 mixed mode):

"Attach Listener" #2048 daemon prio=9 os_prio=0 tid=0x7f09cc001000 
nid=0x6237 waiting on condition [0x]
   java.lang.Thread.State: RUNNABLE

"sys-#1820%local.GridCacheLocalMultithreadedSelfTest%" #2047 prio=5 os_prio=0 
tid=0x7f089401a000 nid=0x6207 waiting on condition [0x7f09e69ee000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xfad87890> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

"sys-#1819%local.GridCacheLocalMultithreadedSelfTest%" #2046 prio=5 os_prio=0 
tid=0x7f0894019000 nid=0x6206 waiting on condition [0x7f09e6aef000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xfad87890> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

"sys-#1818%local.GridCacheLocalMultithreadedSelfTest%" #2045 prio=5 os_prio=0 
tid=0x7f0894018000 nid=0x6205 waiting on condition [0x7f09e6bf]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xfad87890> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

"sys-#1817%local.GridCacheLocalMultithreadedSelfTest%" #2044 prio=5 os_prio=0 
tid=0x7f0894017000 nid=0x6204 waiting on condition [0x7f09e7ffe000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xfad87890> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

"sys-#1816%local.GridCacheLocalMultithreadedSelfTest%" #2043 prio=5 os_prio=0 
tid=0x7f0894015800 nid=0x6203 waiting on condition [0x7f09e6ff2000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)

[jira] [Assigned] (IGNITE-9266) Cache 2 TC configuration timeouts because of hangs on latch

2018-08-14 Thread Eduard Shangareev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eduard Shangareev reassigned IGNITE-9266:
-

Assignee: Eduard Shangareev

> Cache 2 TC configuration timeouts because of hangs on latch
> ---
>
> Key: IGNITE-9266
> URL: https://issues.apache.org/jira/browse/IGNITE-9266
> Project: Ignite
>  Issue Type: Bug
>Reporter: Eduard Shangareev
>Assignee: Eduard Shangareev
>Priority: Major
>
> Two threads hung waiting on a latch; because of this, 
> GridCacheLocalMultithreadedSelfTest couldn't stop. 
> {code}
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.171-b11 mixed mode):
> "Attach Listener" #2048 daemon prio=9 os_prio=0 tid=0x7f09cc001000 
> nid=0x6237 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "sys-#1820%local.GridCacheLocalMultithreadedSelfTest%" #2047 prio=5 os_prio=0 
> tid=0x7f089401a000 nid=0x6207 waiting on condition [0x7f09e69ee000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xfad87890> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>   at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> "sys-#1819%local.GridCacheLocalMultithreadedSelfTest%" #2046 prio=5 os_prio=0 
> tid=0x7f0894019000 nid=0x6206 waiting on condition [0x7f09e6aef000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xfad87890> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>   at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> "sys-#1818%local.GridCacheLocalMultithreadedSelfTest%" #2045 prio=5 os_prio=0 
> tid=0x7f0894018000 nid=0x6205 waiting on condition [0x7f09e6bf]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xfad87890> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>   at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> "sys-#1817%local.GridCacheLocalMultithreadedSelfTest%" #2044 prio=5 os_prio=0 
> tid=0x7f0894017000 nid=0x6204 waiting on condition [0x7f09e7ffe000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xfad87890> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>   at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>   at 
> 

[jira] [Created] (IGNITE-9267) Deadlock between unsuccessful client reconnecting and stopping.

2018-08-14 Thread Vitaliy Biryukov (JIRA)
Vitaliy Biryukov created IGNITE-9267:


 Summary: Deadlock between unsuccessful client reconnecting and 
stopping.
 Key: IGNITE-9267
 URL: https://issues.apache.org/jira/browse/IGNITE-9267
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6
Reporter: Vitaliy Biryukov
 Fix For: 2.7


Reconnecting thread:
{noformat}
"zk-internal.IgniteClientReconnectCacheTest3-EventThread" #593633 daemon prio=5 
os_prio=0 tid=0x7ff8e4063800 nid=0x478e waiting for monitor entry 
[0x7ff90f2f]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2565)
- waiting to lock <0xe9429280> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2557)
at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:374)
at org.apache.ignite.Ignition.stop(Ignition.java:229)
at org.apache.ignite.internal.IgniteKernal.close(IgniteKernal.java:3417)
at 
org.apache.ignite.internal.IgniteKernal.onReconnected(IgniteKernal.java:3904)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:831)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery(GridDiscoveryManager.java:590)
- locked <0xe9429468> (a java.lang.Object)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.processLocalJoin(ZookeeperDiscoveryImpl.java:2960)
- locked <0xe9429478> (a java.lang.Object)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.processBulkJoin(ZookeeperDiscoveryImpl.java:2760)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.processNewEvents(ZookeeperDiscoveryImpl.java:2623)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.processNewEvents(ZookeeperDiscoveryImpl.java:2598)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.access$2000(ZookeeperDiscoveryImpl.java:108)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl$ZkWatcher.processResult(ZookeeperDiscoveryImpl.java:4108)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient$DataCallbackWrapper.processResult(ZookeeperClient.java:1219)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:561)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
{noformat}

Stopping thread:

{noformat}
"main" #1 prio=5 os_prio=0 tid=0x7ffba000e000 nid=0x6aa3 waiting for 
monitor entry [0x7ffba8c83000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.stop0(ZookeeperDiscoveryImpl.java:3838)
- waiting to lock <0xe9429478> (a java.lang.Object)
at 
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.stop(ZookeeperDiscoveryImpl.java:3813)
at 
org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi.spiStop(ZookeeperDiscoverySpi.java:501)
at 
org.apache.ignite.internal.managers.GridManagerAdapter.stopSpi(GridManagerAdapter.java:330)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.stop(GridDiscoveryManager.java:1683)
at org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:2206)
at org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:2081)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2594)
- locked <0xe9429280> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2557)
at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:374)
at org.apache.ignite.Ignition.stop(Ignition.java:229)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:1153)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopAllGrids(GridAbstractTest.java:1193)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopAllGrids(GridAbstractTest.java:1174)
at 
org.apache.ignite.internal.IgniteClientReconnectCacheTest.afterTest(IgniteClientReconnectCacheTest.java:151)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.tearDown(GridAbstractTest.java:1763)
at 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.tearDown(GridCommonAbstractTest.java:503)
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9268) Hangs on await offheap read lock

2018-08-14 Thread Anton Kalashnikov (JIRA)
Anton Kalashnikov created IGNITE-9268:
-

 Summary: Hangs on await offheap read lock
 Key: IGNITE-9268
 URL: https://issues.apache.org/jira/browse/IGNITE-9268
 Project: Ignite
  Issue Type: Test
Reporter: Anton Kalashnikov
Assignee: Anton Kalashnikov


While a thread was awaiting a read lock, the node failed and the failure 
handler started stopping the node, but nothing wakes up the awaiting thread.
{noformat}
Lock 
[object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@65067d90,
 ownerName=null, ownerId=-1]
[12:24:51] : [Step 3/4] at sun.misc.Unsafe.park(Native Method)
[12:24:51] : [Step 3/4] at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
[12:24:51] : [Step 3/4] at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
[12:24:51] : [Step 3/4] at 
o.a.i.i.util.OffheapReadWriteLock.waitAcquireReadLock(OffheapReadWriteLock.java:435)
[12:24:51] : [Step 3/4] at 
o.a.i.i.util.OffheapReadWriteLock.readLock(OffheapReadWriteLock.java:142)
[12:24:51] : [Step 3/4] at 
o.a.i.i.pagemem.impl.PageMemoryNoStoreImpl.readLock(PageMemoryNoStoreImpl.java:463)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.util.PageHandler.readLock(PageHandler.java:185)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.util.PageHandler.readPage(PageHandler.java:157)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.DataStructure.read(DataStructure.java:334)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2348)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
[12:24:51] : [Step 3/4] at 
o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2360)
{noformat}
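The failure mode above can be sketched in plain Java (class and method names here are hypothetical, this is not Ignite's {{OffheapReadWriteLock}}): the fix direction is for the stop path to signal all waiters so they re-check a "stopped" flag instead of parking on the condition forever.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch only: a waiter that re-checks a "stopped" flag, and a stop handler
// that wakes every waiter, so a node failure cannot leave a thread parked.
class StoppableReadLock {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition cond = lock.newCondition();
    private boolean available;
    private boolean stopped;

    /** Returns true once the lock is available, false if the node stopped first. */
    boolean awaitReadLock(long timeoutMs) {
        lock.lock();
        try {
            while (!available) {
                if (stopped)
                    return false; // fail fast instead of hanging
                try {
                    // Timed wait also guards against a missed signal.
                    cond.await(timeoutMs, TimeUnit.MILLISECONDS);
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
                if (stopped)
                    return false;
            }
            return true;
        }
        finally {
            lock.unlock();
        }
    }

    /** Called by the failure handler while stopping the node: wake every waiter. */
    void stop() {
        lock.lock();
        try {
            stopped = true;
            cond.signalAll();
        }
        finally {
            lock.unlock();
        }
    }

    /** Normal unlock path. */
    void release() {
        lock.lock();
        try {
            available = true;
            cond.signalAll();
        }
        finally {
            lock.unlock();
        }
    }
}
```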






[jira] [Commented] (IGNITE-9141) SQL: Trace and test query mapping problems

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579673#comment-16579673
 ] 

ASF GitHub Bot commented on IGNITE-9141:


GitHub user SGrimstad opened a pull request:

https://github.com/apache/ignite/pull/4536

IGNITE-9141  Implemented



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9141

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4536.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4536


commit 26543e10f8143dbc2d313b870081d633baf4cd05
Author: SGrimstad 
Date:   2018-08-14T11:21:13Z

IGNITE-9141  Implemented




> SQL: Trace and test query mapping problems
> --
>
> Key: IGNITE-9141
> URL: https://issues.apache.org/jira/browse/IGNITE-9141
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.6
>Reporter: Vladimir Ozerov
>Assignee: Sergey Grimstad
>Priority: Major
>  Labels: sql-stability
> Fix For: 2.7
>
> Attachments: IGNITE-9141__Implemented_.patch
>
>
> One of the mandatory steps of SQL query execution is topology mapping - we need 
> to select the nodes where the required caches are located and make sure that their 
> partition distribution is valid for the given SQL query. Once nodes are 
> detected, we try to reserve the partitions of interest on mapper nodes to make 
> sure that they will not be evicted during query execution. 
> However, the mapping step may fail for many reasons, most often rebalance 
> or concurrent node failures. In this case we simply retry the whole query 
> execution from scratch. In IGNITE-9114 we ensured that the retry cycle is not 
> infinite and that the root cause of a remap is logged. However, the original 
> root cause of a remap is not propagated to the client node, making the problem 
> hard to debug for end users. We also do not have enough tests for remap 
> events. Let's fix this.
> Proposed implementation flow:
> 1) Add {{retryCause: String}} field to {{GridQueryNextPageResponse}} which 
> should be populated along with {{retry}} field on mapper node. See 
> {{GridMapQueryExecutor#sendRetry}} method to understand what may cause 
> retries (failed to reserve partitions or failed to execute non-collocated 
> join). Make sure that these error messages are as verbose as possible with 
> all necessary details (root cause, cache names, affected partitions, etc).
> 2) Make sure that root cause is set in {{ReduceQueryRun#state}} and then 
> propagated to user exception in case of retry timeout.
> 3) Evaluate all places inside 
> {{org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor#query}}
>  which may lead to re-try and make sure that root cause is verbose and 
> propagated to user exception in case of retry timeout. 
> 4) Add tests covering all re-try branches and ensure that query fails after 
> timeout and that error message is correct.
> *NB*: Once propagation of error message to reducer is implemented, we may 
> remove additional logging altogether.
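Steps 1-2 of the proposed flow can be sketched as follows (class and field names here are illustrative, not the actual {{GridQueryNextPageResponse}}/{{ReduceQueryRun}} code): the reducer records the cause string the mapper sends alongside its retry flag and surfaces the last cause in the user-visible error when the retry loop times out.

```java
// Sketch only: minimal state a reducer could keep to propagate the retry cause.
class RetryCauseState {
    private volatile String lastRetryCause;

    /** Invoked when a mapper node responds with retry=true and a cause string. */
    void onRetry(String cause) {
        lastRetryCause = cause;
    }

    /** Built when the retry cycle exceeds its timeout; includes the last cause. */
    RuntimeException timeoutError() {
        String suffix = lastRetryCause != null ? ": " + lastRetryCause : "";
        return new RuntimeException("Failed to map SQL query to topology after retries" + suffix);
    }
}
```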





[jira] [Commented] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Maxim Muzafarov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579688#comment-16579688
 ] 

Maxim Muzafarov commented on IGNITE-7165:
-

Dmitry,

I've checked the logs you provided. Actually, this is a common case for the 
rebalancing procedure, and it completes successfully (according to your logs).
We have a lot of tests covering this, e.g.:
* {{IgniteCacheGroupsTest#testRestartsAndCacheCreateDestroy}} – 10 caches, 10 
nodes (server + client), random put/get operations on caches.
* 
{{CacheLateAffinityAssignmentTest#testConcurrentStartStaticCachesWithClientNodes}}

So your issue is probably about {{LocalNodeMovingPartitionsCount}} metrics 
propagation for client nodes. 
I can check it additionally, but it would help me a lot if you could provide 
info about your test suite.

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Maxim Muzafarov
>Priority: Critical
>  Labels: rebalance
> Fix For: 2.7
>
> Attachments: node-NO_REBALANCE-7165.log
>
>
> Re-balancing is canceled if client node joins. Re-balancing can take hours 
> and each time when client node joins it starts again:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, partitionsCount=11, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> 

[jira] [Commented] (IGNITE-9258) NodeJS - Fail to handle more than one client in the same app

2018-08-14 Thread ekaterina.vergizova (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579755#comment-16579755
 ] 

ekaterina.vergizova commented on IGNITE-9258:
-

Thanks a lot for the fix. But the problem goes a bit deeper: 
BinaryTypeStorage should not be shared between different IgniteClients. I'm 
preparing the full fix; it will be ready soon.
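The fix direction can be sketched in Java (the real change is in the NodeJS thin client; these names are illustrative): each client instance owns its own binary type registry, so nothing is shared through static state.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: a per-client type registry instead of one shared static map.
class BinaryTypeRegistry {
    private final Map<Integer, String> typesById = new ConcurrentHashMap<>();

    void register(int typeId, String typeName) {
        typesById.put(typeId, typeName);
    }

    String name(int typeId) {
        return typesById.get(typeId);
    }
}

class ClientSketch {
    // Per-instance storage: one client's registrations never leak into another.
    final BinaryTypeRegistry types = new BinaryTypeRegistry();
}
```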

> NodeJS - Fail to handle more than one client in the same app
> 
>
> Key: IGNITE-9258
> URL: https://issues.apache.org/jira/browse/IGNITE-9258
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.6
>Reporter: Eran Betzalel
>Assignee: ekaterina.vergizova
>Priority: Major
>
> BinaryTypeStorage is initialized in a non-connected state, causing multiple 
> clients to fail.





[jira] [Created] (IGNITE-9264) Lost partitions raised twice

2018-08-14 Thread Pavel Vinokurov (JIRA)
Pavel Vinokurov created IGNITE-9264:
---

 Summary: Lost partitions raised twice 
 Key: IGNITE-9264
 URL: https://issues.apache.org/jira/browse/IGNITE-9264
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.4
Reporter: Pavel Vinokurov








[jira] [Updated] (IGNITE-9264) Lost partitions raised twice if node failed during exchange

2018-08-14 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-9264:

Summary: Lost partitions raised twice if node failed during exchange  (was: 
Lost partitions raised twice )

> Lost partitions raised twice if node failed during exchange
> ---
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Priority: Major
>






[jira] [Updated] (IGNITE-8927) Hangs when executing an SQL query when there are LOST partitions

2018-08-14 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8927:

Fix Version/s: 2.7

> Hangs when executing an SQL query when there are LOST partitions
> 
>
> Key: IGNITE-8927
> URL: https://issues.apache.org/jira/browse/IGNITE-8927
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Dmitriy Gladkikh
>Assignee: Vladimir Ozerov
>Priority: Major
>  Labels: sql-stability
> Fix For: 2.7
>
>
> If there are partitions in the LOST state, SQL query hang.





[jira] [Commented] (IGNITE-7293) "BinaryObjectException: Cannot find schema for object with compact footer" when "not null" field is defined

2018-08-14 Thread Stanilovsky Evgeny (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579869#comment-16579869
 ] 

Stanilovsky Evgeny commented on IGNITE-7293:


Can't reproduce in master:

{code:java}
execute("CREATE TABLE \"Person2\" (\"id\" int, \"city\" varchar," +
" \"name\" varchar, \"surname\" varchar, \"age\" int not null, 
PRIMARY KEY (\"id\", \"city\")) WITH " +
"wrap_key,wrap_value,\"template=cache,affinity_key='city'\"");
{code}

{code:java}
[2018-08-14 17:17:55,243][ERROR][main][root] Test failed.
class org.apache.ignite.internal.processors.query.IgniteSQLException: Null 
value is not allowed for column 'age'
at 
org.apache.ignite.internal.processors.query.QueryTypeDescriptorImpl.validateKeyAndValue(QueryTypeDescriptorImpl.java:547)
at 
org.apache.ignite.internal.processors.query.h2.dml.UpdatePlan.processRow(UpdatePlan.java:277)
{code}

> "BinaryObjectException: Cannot find schema for object with compact footer" 
> when "not null" field is defined
> ---
>
> Key: IGNITE-7293
> URL: https://issues.apache.org/jira/browse/IGNITE-7293
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, sql
>Affects Versions: 2.3
>Reporter: Kirill Shirokov
>Assignee: Vladimir Ozerov
>Priority: Major
>  Labels: sql-stability
> Fix For: 2.7
>
>
> If the following test:
> org.apache.ignite.internal.processors.cache.index.H2DynamicTableSelfTest#testAffinityKey
> is modified by adding "not null" constraint to "age" column definition in 
> Person2 table:
> {noformat}
> execute("CREATE TABLE \"Person2\" (\"id\" int, \"city\" 
> varchar," +
> " \"name\" varchar, \"surname\" varchar, \"age\" int not 
> null, PRIMARY KEY (\"id\", \"city\")) WITH " +
> 
> "wrap_key,wrap_value,\"template=cache,affinity_key='city'\"");}}
> {noformat}
> The test fails with the following stack trace during INSERT operation:
> {noformat}
> class org.apache.ignite.binary.BinaryObjectException: Cannot find schema for 
> object with compact footer [typeId=-1199546406, schemaId=0]
>   at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2020)
>   at 
> org.apache.ignite.internal.binary.BinaryObjectImpl.createSchema(BinaryObjectImpl.java:668)
>   at 
> org.apache.ignite.internal.binary.BinaryFieldImpl.fieldOrder(BinaryFieldImpl.java:284)
>   at 
> org.apache.ignite.internal.binary.BinaryFieldImpl.value(BinaryFieldImpl.java:106)
>   at 
> org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.fieldValue(QueryBinaryProperty.java:243)
>   at 
> org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.value(QueryBinaryProperty.java:139)
>   at 
> org.apache.ignite.internal.processors.query.QueryTypeDescriptorImpl.validateKeyAndValue(QueryTypeDescriptorImpl.java:512)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.rowToKeyValue(DmlStatementsProcessor.java:1031)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doInsert(DmlStatementsProcessor.java:877)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.processDmlSelectResult(DmlStatementsProcessor.java:438)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:420)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:194)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsDistributed(DmlStatementsProcessor.java:229)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1568)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1983)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1979)
>   at 
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2465)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1988)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1944)
>   at 
> org.apache.ignite.internal.processors.cache.index.H2DynamicTableSelfTest.checkAffinityKey(H2DynamicTableSelfTest.java:1375)

[jira] [Assigned] (IGNITE-9181) Continuous query with remote filter factory doesn't let nodes join

2018-08-14 Thread Denis Mekhanikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Mekhanikov reassigned IGNITE-9181:


Assignee: Denis Mekhanikov

> Continuous query with remote filter factory doesn't let nodes join 
> ---
>
> Key: IGNITE-9181
> URL: https://issues.apache.org/jira/browse/IGNITE-9181
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: Denis Mekhanikov
>Assignee: Denis Mekhanikov
>Priority: Major
> Attachments: ContinuousQueryNodeJoinTest.java
>
>
> When continuous query is registered, that has a remote filter factory 
> configured, and P2P class loading is enabled, then all new nodes fail with an 
> exception, which doesn't let them join the cluster.
> Exception:
> {noformat}
> [ERROR][tcp-disco-msg-worker-#15%continuous.ContinuousQueryNodeJoinTest1%][TestTcpDiscoverySpi]
>  Runtime error caught during grid runnable execution: GridWorker 
> [name=tcp-disco-msg-worker, 
> igniteInstanceName=continuous.ContinuousQueryNodeJoinTest1, finished=false, 
> hashCode=726450632, interrupted=false, 
> runner=tcp-disco-msg-worker-#15%continuous.ContinuousQueryNodeJoinTest1%], 
> nextNode=[null]
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandlerV2.getEventFilter(CacheContinuousQueryHandlerV2.java:108)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.register(CacheContinuousQueryHandler.java:330)
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.registerHandler(GridContinuousProcessor.java:1738)
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onDiscoDataReceived(GridContinuousProcessor.java:646)
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onGridDataReceived(GridContinuousProcessor.java:538)
>   at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:889)
>   at 
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1993)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4502)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2804)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2604)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7115)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2688)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7059)
>   at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> {noformat}
> Reproducer is in the attachment.





[jira] [Created] (IGNITE-9265) MVCC TX: Two rows with the same key in one MERGE statement produce an exception

2018-08-14 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-9265:


 Summary: MVCC TX: Two rows with the same key in one MERGE 
statement produce an exception
 Key: IGNITE-9265
 URL: https://issues.apache.org/jira/browse/IGNITE-9265
 Project: Ignite
  Issue Type: Bug
Reporter: Igor Seliverstov


When an operation like {{MERGE INTO INTEGER (_key, _val) KEY(_key) VALUES 
(1,1),(1,2)}} is called, an exception occurs.

Correct behavior: each next update on the same key overwrites the previous one.
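The expected last-write-wins semantics can be modeled with a plain map (this is only a model of the intended behavior, not the MVCC code path): rows of the MERGE are applied in statement order, so (1,1),(1,2) should leave a single row with _val=2 instead of raising an exception.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch only: apply MERGE rows in order; a later row on the same key overwrites.
class MergeSemantics {
    static Map<Integer, Integer> apply(List<int[]> rows) {
        Map<Integer, Integer> table = new LinkedHashMap<>();
        for (int[] row : rows)
            table.put(row[0], row[1]); // row[0] = _key, row[1] = _val
        return table;
    }
}
```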





[jira] [Updated] (IGNITE-9263) Extra batch param usage in GridCacheAdapter removeAll.

2018-08-14 Thread Stanilovsky Evgeny (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-9263:
---
Description: 
what does GG-11231 mean?

{code:java}
// TODO GG-11231 (workaround for GG-11231).
private static final int REMOVE_ALL_KEYS_BATCH = 1;
{code}

  was:
I don't understand why we need the batch code here:

{code:java}
for (Iterator it = 
ctx.offheap().cacheIterator(ctx.cacheId(), true, true, null);
it.hasNext() && keys.size() < REMOVE_ALL_KEYS_BATCH; )
keys.add((K)it.next().key());

...

// TODO GG-11231 (workaround for GG-11231).
private static final int REMOVE_ALL_KEYS_BATCH = 1;
{code}

what does GG-11231 mean?


> Extra batch param usage in GridCacheAdapter removeAll.
> --
>
> Key: IGNITE-9263
> URL: https://issues.apache.org/jira/browse/IGNITE-9263
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.6
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Minor
>
> what does GG-11231 mean?
> {code:java}
> // TODO GG-11231 (workaround for GG-11231).
> private static final int REMOVE_ALL_KEYS_BATCH = 1;
> {code}





[jira] [Commented] (IGNITE-9256) SQL: make sure that fetched results are cleared from iterator when last element is fetched

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579815#comment-16579815
 ] 

ASF GitHub Bot commented on IGNITE-9256:


GitHub user SGrimstad opened a pull request:

https://github.com/apache/ignite/pull/4540

IGNITE-9256 Implemented



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite IGNITE-9256

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4540.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4540


commit a357744a4539d245e9eceb743fa526089a9325b8
Author: SGrimstad 
Date:   2018-08-14T13:27:34Z

IGNITE-9256 Implemented




> SQL: make sure that fetched results are cleared from iterator when last 
> element is fetched
> --
>
> Key: IGNITE-9256
> URL: https://issues.apache.org/jira/browse/IGNITE-9256
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.6
>Reporter: Vladimir Ozerov
>Assignee: Sergey Grimstad
>Priority: Major
>  Labels: sql-stability
> Fix For: 2.7
>
> Attachments: IGNITE-9256__Implemented.patch
>
>
> In practice it is possible for a user to forget to nullify Ignite's result set 
> after iteration is finished, or to delay cleanup for some reason. 
> The problem is that we hold the whole H2 result set inside our iterator 
> even after all results are delivered to the user. 
> We should forcibly close and then nullify all H2 objects once all results are 
> returned.
> Key code pieces:
> {{IgniteH2Indexing.executeSqlQueryWithTimer}} - how we get result from H2
> {{H2ResultSetIterator}} - base iterator with H2 objects inside





[jira] [Commented] (IGNITE-9250) Replace CacheAffinitySharedManager.CachesInfo by ClusterCachesInfo

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579832#comment-16579832
 ] 

ASF GitHub Bot commented on IGNITE-9250:


GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/4541

IGNITE-9250 Save configuration during register cache descriptor.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9250-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4541.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4541


commit af67a9124533c8f0a927d310aab89b5469ad4841
Author: Anton Kalashnikov 
Date:   2018-08-13T08:47:49Z

IGNITE-9250 Save configuration during register cache descriptor.




> Replace CacheAffinitySharedManager.CachesInfo by ClusterCachesInfo
> --
>
> Key: IGNITE-9250
> URL: https://issues.apache.org/jira/browse/IGNITE-9250
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Major
>
> We now have duplicated registered caches (and groups). They are held in 
> ClusterCachesInfo, the main storage, and also in 
> CacheAffinitySharedManager.CachesInfo. This looks redundant and can lead to 
> inconsistency of the cache info.





[jira] [Commented] (IGNITE-7701) SQL system view for node attributes

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579841#comment-16579841
 ] 

Vladimir Ozerov commented on IGNITE-7701:
-

[~alex_pl], the only outstanding issue I see is 
{{org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemViewNodeAttributes#getRows}}
 - a missing null check for {{nodeId}} (in case of a malformed UUID). It would be 
great to have negative tests for a bad UUID here. The rest looks good to me.
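The null check under discussion can be sketched as a defensive parse helper (a hypothetical name, not the actual view code): a malformed UUID in the NODE_ID filter should yield no rows rather than an exception from {{getRows}}.

```java
import java.util.UUID;

// Sketch only: parse the NODE_ID filter defensively.
class NodeIdFilter {
    /** Returns the parsed UUID, or null if the input is not a valid UUID. */
    static UUID tryParse(String raw) {
        if (raw == null)
            return null;
        try {
            return UUID.fromString(raw);
        }
        catch (IllegalArgumentException e) {
            return null; // malformed UUID: caller should produce an empty row set
        }
    }
}
```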

> SQL system view for node attributes
> ---
>
> Key: IGNITE-7701
> URL: https://issues.apache.org/jira/browse/IGNITE-7701
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: iep-13, sql
> Fix For: 2.7
>
>
> Implement SQL system view to show attributes for each node in topology.





[jira] [Updated] (IGNITE-7293) "BinaryObjectException: Cannot find schema for object with compact footer" when "not null" field is defined

2018-08-14 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7293:

Labels: sql-stability  (was: )

> "BinaryObjectException: Cannot find schema for object with compact footer" 
> when "not null" field is defined
> ---
>
> Key: IGNITE-7293
> URL: https://issues.apache.org/jira/browse/IGNITE-7293
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, sql
>Affects Versions: 2.3
>Reporter: Kirill Shirokov
>Assignee: Vladimir Ozerov
>Priority: Major
>  Labels: sql-stability
> Fix For: 2.7
>
>
> If the following test:
> org.apache.ignite.internal.processors.cache.index.H2DynamicTableSelfTest#testAffinityKey
> is modified by adding "not null" constraint to "age" column definition in 
> Person2 table:
> {noformat}
> execute("CREATE TABLE \"Person2\" (\"id\" int, \"city\" 
> varchar," +
> " \"name\" varchar, \"surname\" varchar, \"age\" int not 
> null, PRIMARY KEY (\"id\", \"city\")) WITH " +
> 
> "wrap_key,wrap_value,\"template=cache,affinity_key='city'\"");}}
> {noformat}
> The test fails with the following stack trace during INSERT operation:
> {noformat}
> class org.apache.ignite.binary.BinaryObjectException: Cannot find schema for 
> object with compact footer [typeId=-1199546406, schemaId=0]
>   at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2020)
>   at 
> org.apache.ignite.internal.binary.BinaryObjectImpl.createSchema(BinaryObjectImpl.java:668)
>   at 
> org.apache.ignite.internal.binary.BinaryFieldImpl.fieldOrder(BinaryFieldImpl.java:284)
>   at 
> org.apache.ignite.internal.binary.BinaryFieldImpl.value(BinaryFieldImpl.java:106)
>   at 
> org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.fieldValue(QueryBinaryProperty.java:243)
>   at 
> org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.value(QueryBinaryProperty.java:139)
>   at 
> org.apache.ignite.internal.processors.query.QueryTypeDescriptorImpl.validateKeyAndValue(QueryTypeDescriptorImpl.java:512)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.rowToKeyValue(DmlStatementsProcessor.java:1031)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doInsert(DmlStatementsProcessor.java:877)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.processDmlSelectResult(DmlStatementsProcessor.java:438)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:420)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:194)
>   at 
> org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsDistributed(DmlStatementsProcessor.java:229)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1568)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1983)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1979)
>   at 
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2465)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1988)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1944)
>   at 
> org.apache.ignite.internal.processors.cache.index.H2DynamicTableSelfTest.checkAffinityKey(H2DynamicTableSelfTest.java:1375)
>   at 
> org.apache.ignite.internal.processors.cache.index.H2DynamicTableSelfTest.testAffinityKey(H2DynamicTableSelfTest.java:1318)
> {noformat}





[jira] [Commented] (IGNITE-8493) GridToStringBuilder fails with NPE deals with primitive arrays operations.

2018-08-14 Thread Stanilovsky Evgeny (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579842#comment-16579842
 ] 

Stanilovsky Evgeny commented on IGNITE-8493:


Thank you [~dpavlov], new TC run linked.

> GridToStringBuilder fails with NPE deals with primitive arrays operations.
> --
>
> Key: IGNITE-8493
> URL: https://issues.apache.org/jira/browse/IGNITE-8493
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.7
>
>
> GridToStringBuilder#arrayToString fails with NPE, if input is a primitive 
> array.
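As an illustration of the fix direction (a hedged sketch, not Ignite's actual GridToStringBuilder code), java.lang.reflect.Array lets a helper handle primitive arrays and null input uniformly, which a plain cast to Object[] cannot:

```java
import java.lang.reflect.Array;

public class ArrayToString {
    /**
     * Converts any array (Object[] or a primitive array like int[]) to a
     * string, handling null without an NPE. java.lang.reflect.Array works
     * for both primitive and object arrays.
     */
    public static String arrayToString(Object arr) {
        if (arr == null)
            return "null";

        int len = Array.getLength(arr); // Throws if arr is not an array.

        StringBuilder sb = new StringBuilder("[");

        for (int i = 0; i < len; i++) {
            if (i > 0)
                sb.append(", ");

            sb.append(Array.get(arr, i)); // Boxes primitives automatically.
        }

        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        System.out.println(arrayToString(new int[] {1, 2, 3})); // [1, 2, 3]
        System.out.println(arrayToString(null));                // null
    }
}
```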





[jira] [Updated] (IGNITE-7766) Ignite Queries 2: Test always failed IgniteCacheQueryNodeRestartTxSelfTest.testRestarts

2018-08-14 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7766:

Labels: MakeTeamcityGreenAgain sql-stability  (was: MakeTeamcityGreenAgain)

> Ignite Queries 2: Test always failed 
> IgniteCacheQueryNodeRestartTxSelfTest.testRestarts
> ---
>
> Key: IGNITE-7766
> URL: https://issues.apache.org/jira/browse/IGNITE-7766
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Dmitriy Pavlov
>Assignee: Evgenii Zagumennov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, sql-stability
>
> Ignite Queries 2 
>  IgniteBinaryCacheQueryTestSuite2: 
> IgniteCacheQueryNodeRestartTxSelfTest.testRestarts (fail rate 76,1%)
> IgniteCacheQueryNodeRestartTxSelfTest.testRestarts 
>  Current failure: refs/heads/master #345 No changes 11 Feb 18 03:03
> junit.framework.AssertionFailedError: On large page size must retry.
> Last runs fail with 100% probability.





[jira] [Updated] (IGNITE-9264) Lost partitions raised twice if node left during previous exchange

2018-08-14 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-9264:

Description: 
There is a possible situation where GridDhtPartitionTopologyImpl#update received a 
full map containing a node that left on a previous exchange that fired lost events. 
This leads to raising the events twice.
IgniteCachePartitionLossPolicySelfTest was changed to check that events are raised 
for all lost partitions.

  was:
There is possible situation that GridDhtPartitionTopologyImpl#update received 
full map with node that left on a previous exchange with firing lost events. It 
leads to raising evens twice.
IgniteCachePartitionLossPolicySelfTest was changed to check raising events for 
all lost partitions 


> Lost partitions raised twice if node left during previous exchange
> --
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> There is a possible situation where GridDhtPartitionTopologyImpl#update received 
> a full map containing a node that left on a previous exchange that fired lost 
> events. This leads to raising the events twice.
> IgniteCachePartitionLossPolicySelfTest was changed to check that events are 
> raised for all lost partitions.





[jira] [Commented] (IGNITE-7766) Ignite Queries 2: Test always failed IgniteCacheQueryNodeRestartTxSelfTest.testRestarts

2018-08-14 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579848#comment-16579848
 ] 

Vladimir Ozerov commented on IGNITE-7766:
-

[~ivan.glukos], unfortunately the tests still fail even after the IGNITE-8694 merge.

> Ignite Queries 2: Test always failed 
> IgniteCacheQueryNodeRestartTxSelfTest.testRestarts
> ---
>
> Key: IGNITE-7766
> URL: https://issues.apache.org/jira/browse/IGNITE-7766
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Dmitriy Pavlov
>Assignee: Evgenii Zagumennov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, sql-stability
>
> Ignite Queries 2 
>  IgniteBinaryCacheQueryTestSuite2: 
> IgniteCacheQueryNodeRestartTxSelfTest.testRestarts (fail rate 76,1%)
> IgniteCacheQueryNodeRestartTxSelfTest.testRestarts 
>  Current failure: refs/heads/master #345 No changes 11 Feb 18 03:03
> junit.framework.AssertionFailedError: On large page size must retry.
> Last runs fail with 100% probability.





[jira] [Commented] (IGNITE-9263) Extra batch param usage in GridCacheAdapter removeAll.

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579847#comment-16579847
 ] 

ASF GitHub Bot commented on IGNITE-9263:


GitHub user zstan opened a pull request:

https://github.com/apache/ignite/pull/4542

IGNITE-9263 replace weird commit code.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9263

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4542.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4542


commit 09d226e3a0d83b3ad27d5298df94329ab64be10c
Author: Evgeny Stanilovskiy 
Date:   2018-08-14T13:59:17Z

IGNITE-9263 replace weird commit code.




> Extra batch param usage in GridCacheAdapter removeAll.
> --
>
> Key: IGNITE-9263
> URL: https://issues.apache.org/jira/browse/IGNITE-9263
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.6
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Minor
>
> What does GG-11231 mean?
> {code:java}
> // TODO GG-11231 (workaround for GG-11231).
> private static final int REMOVE_ALL_KEYS_BATCH = 1;
> {code}





[jira] [Updated] (IGNITE-9264) Lost partitions raised twice if node left during previous exchange

2018-08-14 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-9264:

Ignite Flags:   (was: Docs Required)

> Lost partitions raised twice if node left during previous exchange
> --
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> There is a possible situation where GridDhtPartitionTopologyImpl#update received 
> a full map containing a node that left on a previous exchange that fired lost 
> events. This leads to raising the events twice.
> IgniteCachePartitionLossPolicySelfTest was changed to check that events are 
> raised for all lost partitions.





[jira] [Updated] (IGNITE-9264) Lost partitions raised twice if node left during previous exchange

2018-08-14 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-9264:

Description: 
There is a possible situation where GridDhtPartitionTopologyImpl#update receives a 
full map containing a node that left on a previous exchange that fired lost events. 
This leads to raising the events twice.
IgniteCachePartitionLossPolicySelfTest was changed to check that events are raised 
for all lost partitions.

  was:
There is possible situation that GridDhtPartitionTopologyImpl#update received 
full map with node that left on a previous exchange with firing lost events. It 
leads to raising events twice.
IgniteCachePartitionLossPolicySelfTest was changed to check raising events for 
all lost partitions 


> Lost partitions raised twice if node left during previous exchange
> --
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> There is a possible situation where GridDhtPartitionTopologyImpl#update receives 
> a full map containing a node that left on a previous exchange that fired lost 
> events. This leads to raising the events twice.
> IgniteCachePartitionLossPolicySelfTest was changed to check that events are 
> raised for all lost partitions.





[jira] [Commented] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Dmitry Sherstobitov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579657#comment-16579657
 ] 

Dmitry Sherstobitov commented on IGNITE-7165:
-

For now I have no reproducer in Java.
I've investigated the persistent store in my test and found that the rebalanced 
data is present in storage on the node with the cleared LFS, but the 
LocalNodeMovingPartitionsCount metric is definitely broken after a client node 
joins the cluster. If I remove the client-join event after the node is back, 
rebalancing finishes correctly.

Here is output from my test log (rebalancing didn't finish in 240 seconds, while 
in previous versions it completed in 10-15 seconds):

[13:14:17][:568 :617] Wait rebalance to finish 8/240Current metric state for 
cache cache_group_3_088 on node 2: 19

[13:18:04][:568 :617] Wait rebalance to finish 235/240Current metric state for 
cache cache_group_3_088 on node 2: 19
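The "Wait rebalance to finish X/240" polling above can be expressed as a generic poll-with-timeout helper (a sketch in plain Java with a hypothetical metric supplier, not the test's actual code):

```java
import java.util.function.LongSupplier;

public class AwaitMetric {
    /**
     * Polls a metric until it drops to zero or the timeout expires.
     * Returns true if the metric reached zero in time. In the test above
     * the metric would be LocalNodeMovingPartitionsCount, which should
     * become 0 once rebalancing finishes.
     */
    public static boolean awaitZero(LongSupplier metric, long timeoutMs, long pollMs)
        throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;

        while (System.currentTimeMillis() < deadline) {
            if (metric.getAsLong() == 0)
                return true;

            Thread.sleep(pollMs);
        }

        // One last read at the deadline.
        return metric.getAsLong() == 0;
    }
}
```

A metric stuck at a non-zero value, as in the log above (19 for the whole 240 seconds), makes this helper time out and return false.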

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Maxim Muzafarov
>Priority: Critical
>  Labels: rebalance
> Fix For: 2.7
>
> Attachments: node-NO_REBALANCE-7165.log
>
>
> Re-balancing is canceled if client node joins. Re-balancing can take hours 
> and each time when client node joins it starts again:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, 

[jira] [Comment Edited] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Maxim Muzafarov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579688#comment-16579688
 ] 

Maxim Muzafarov edited comment on IGNITE-7165 at 8/14/18 11:57 AM:
---

Dmitry,

I've checked the logs you provided. Actually it's a common case for the rebalancing 
procedure, and it completes successfully (according to your logs).
We have a lot of tests covering this, e.g.:
* {{IgniteCacheGroupsTest#testRestartsAndCacheCreateDestroy}} – 10 caches, 10 
nodes (servers + clients), random put/get operations on caches.
* 
{{CacheLateAffinityAssignmentTest#testConcurrentStartStaticCachesWithClientNodes}}

So your issue is probably about {{LocalNodeMovingPartitionsCount}} metrics 
propagation with client nodes. 
I can check it additionally, but it would help me a lot if you could provide info 
about your test suite.


was (Author: mmuzaf):
Dmitry,

I've checked the logs you provided. Actually it's a common case for the rebalancing 
procedure, and it completes successfully (according to your logs).
We have a lot of tests covering this, e.g.:
* {{IgniteCacheGroupsTest#testRestartsAndCacheCreateDestroy}} – 10 caches, 10 
nodes (servers + clients), random put/get operations on caches.
* 
{{CacheLateAffinityAssignmentTest#testConcurrentStartStaticCachesWithClientNodes}}

So your issue is probably about {{LocalNodeMovingPartitionsCount}} metrics 
propagation for client nodes. 
I can check it additionally, but it would help me a lot if you could provide info 
about your test suite.

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Maxim Muzafarov
>Priority: Critical
>  Labels: rebalance
> Fix For: 2.7
>
> Attachments: node-NO_REBALANCE-7165.log
>
>
> Re-balancing is canceled if client node joins. Re-balancing can take hours 
> and each time when client node joins it starts again:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=b3a8be53-e61f-4023-a906-a265923837ba, 

[jira] [Commented] (IGNITE-9260) StandaloneWalRecordsIterator broken on WalSegmentTailReachedException not in work dir

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579702#comment-16579702
 ] 

ASF GitHub Bot commented on IGNITE-9260:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4533


> StandaloneWalRecordsIterator broken on WalSegmentTailReachedException not in 
> work dir
> -
>
> Key: IGNITE-9260
> URL: https://issues.apache.org/jira/browse/IGNITE-9260
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.7
>
>
> After the implementation of IGNITE-9050, StandaloneWalRecordsIterator became 
> broken, because in standalone mode we can stop the iteration at any moment once 
> the last available segment has been fully read. The validation implemented in 
> IGNITE-9050 is not applicable to standalone mode. We need to change the behavior 
> and validate that we stop the iteration in the last available WAL segment.





[jira] [Commented] (IGNITE-602) [Test] GridToStringBuilder is vulnerable for StackOverflowError caused by infinite recursion

2018-08-14 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579724#comment-16579724
 ] 

Alexey Goncharuk commented on IGNITE-602:
-

Thanks, [~SomeFire], we will take a look at the fix shortly.

> [Test] GridToStringBuilder is vulnerable for StackOverflowError caused by 
> infinite recursion
> 
>
> Key: IGNITE-602
> URL: https://issues.apache.org/jira/browse/IGNITE-602
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Artem Shutak
>Assignee: Ryabov Dmitrii
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.7
>
>
> See test 
> org.gridgain.grid.util.tostring.GridToStringBuilderSelfTest#_testToStringCheckAdvancedRecursionPrevention
>  and related TODO in same source file.
> Also take a look at 
> http://stackoverflow.com/questions/11300203/most-efficient-way-to-prevent-an-infinite-recursion-in-tostring
> Test should be unmuted on TC after fix.





[jira] [Commented] (IGNITE-8189) Improve ZkDistributedCollectDataFuture#deleteFutureData implementation

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579725#comment-16579725
 ] 

ASF GitHub Bot commented on IGNITE-8189:


GitHub user NSAmelchev opened a pull request:

https://github.com/apache/ignite/pull/4537

IGNITE-8189

Improve batching deleteAll operations.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NSAmelchev/ignite ignite-8189

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4537.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4537


commit 422102ce298733bde2f10af26a4020c2710347e6
Author: NSAmelchev 
Date:   2018-08-14T12:16:19Z

Draft implementation




> Improve ZkDistributedCollectDataFuture#deleteFutureData implementation
> --
>
> Key: IGNITE-8189
> URL: https://issues.apache.org/jira/browse/IGNITE-8189
> Project: Ignite
>  Issue Type: Improvement
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Amelchev Nikita
>Priority: Major
>
> Three issues need to be improved in the implementation:
> * two more deleteIfExists calls within *deleteFutureData* should be included in 
> the batching *deleteAll* operation;
> * if a request exceeds the ZooKeeper max size limit, a fallback to one-by-one 
> deletion should be used (related ticket IGNITE-8188);
> * the ZookeeperClient#deleteAll implementation may throw NoNodeException in case 
> of a concurrent operation removing the same nodes; in this case a fallback to 
> one-by-one deletion should be used too.





[jira] [Commented] (IGNITE-9264) Lost partitions raised twice if node failed during exchange

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579767#comment-16579767
 ] 

ASF GitHub Bot commented on IGNITE-9264:


GitHub user pvinokurov opened a pull request:

https://github.com/apache/ignite/pull/4539

IGNITE-9264 Lost partitions raised twice if node failed during exchange



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9264

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4539.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4539


commit 246cc0dad76f5d9e361470cba533a620246057f1
Author: pvinokurov 
Date:   2018-08-14T12:59:31Z

IGNITE-9264 Lost partitions raised twice if node failed during exchange




> Lost partitions raised twice if node failed during exchange
> ---
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Priority: Major
>






[jira] [Updated] (IGNITE-9264) Lost partitions raised twice if node left during exchange

2018-08-14 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-9264:

Summary: Lost partitions raised twice if node left during exchange  (was: 
Lost partitions raised twice if node failed during exchange)

> Lost partitions raised twice if node left during exchange
> -
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>






[jira] [Commented] (IGNITE-9249) Tests hang when different threads try to start and stop nodes at the same time.

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579776#comment-16579776
 ] 

ASF GitHub Bot commented on IGNITE-9249:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4515


> Tests hang when different threads try to start and stop nodes at the same 
> time.
> ---
>
> Key: IGNITE-9249
> URL: https://issues.apache.org/jira/browse/IGNITE-9249
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ilya Lantukh
>Assignee: Ilya Lantukh
>Priority: Major
>
> An example of such test is 
> GridCachePartitionedNearDisabledOptimisticTxNodeRestartTest.testRestartWithPutFourNodesOneBackupsOffheapEvict().
> Hanged threads:
> {code}
> "restart-worker-1@63424" prio=5 tid=0x7f5e nid=NA waiting
>   java.lang.Thread.State: WAITING
> at java.lang.Object.wait(Object.java:-1)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:949)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:389)
> at 
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2002)
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:916)
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1754)
> at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1050)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2020)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1725)
> - locked <0xfc36> (a 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1153)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:651)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:920)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:858)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:846)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:812)
> at 
> org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractNodeRestartSelfTest.access$1000(GridCacheAbstractNodeRestartSelfTest.java:64)
> at 
> org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractNodeRestartSelfTest$2.run(GridCacheAbstractNodeRestartSelfTest.java:665)
> at java.lang.Thread.run(Thread.java:748)
> "restart-worker-0@63423" prio=5 tid=0x7f5d nid=NA waiting
>   java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Unsafe.java:-1)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at 
> org.apache.ignite.internal.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7584)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.grid(IgnitionEx.java:1666)
> at 
> org.apache.ignite.internal.IgnitionEx.allGrids(IgnitionEx.java:1284)
> at 
> org.apache.ignite.internal.IgnitionEx.allGrids(IgnitionEx.java:1262)
> at org.apache.ignite.Ignition.allGrids(Ignition.java:502)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.awaitTopologyChange(GridAbstractTest.java:2258)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:1158)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:1133)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:1433)
> at 
> org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractNodeRestartSelfTest.access$800(GridCacheAbstractNodeRestartSelfTest.java:64)
> at 
> org.apache.ignite.internal.processors.cache.distributed.GridCacheAbstractNodeRestartSelfTest$2.run(GridCacheAbstractNodeRestartSelfTest.java:661)
> at 

[jira] [Updated] (IGNITE-9264) Lost partitions raised twice if node left during exchange

2018-08-14 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-9264:

Description: 
There is a possible situation where GridDhtPartitionTopologyImpl#update receives a 
full map containing a node that left on a previous exchange, firing lost events. 
This leads to raising the events twice.
IgniteCachePartitionLossPolicySelfTest was changed to check that events are raised 
for all lost partitions.
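The deduplication the fix implies can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names, not Ignite's actual internals: the idea is to remember which partitions already had a lost event raised, so a stale full map from a node that left on a previous exchange cannot fire the same event again.

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical sketch of the guard described above: a lost event is raised
 * at most once per partition, even if a stale full map reports the loss again.
 */
public class LostPartitionGuard {
    /** Partitions for which a lost event was already raised. */
    private final Set<Integer> lostRaised = new HashSet<>();

    /**
     * @param part Partition id reported as lost by a (possibly stale) full map.
     * @return {@code true} if the event should be fired now, {@code false} if it
     *         was already fired on an earlier exchange.
     */
    public synchronized boolean tryRaiseLost(int part) {
        return lostRaised.add(part);
    }

    /** Clears the mark once the partition is recovered, so a future loss fires again. */
    public synchronized void onPartitionRecovered(int part) {
        lostRaised.remove(part);
    }
}
```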

> Lost partitions raised twice if node left during exchange
> -
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> There is a possible situation where GridDhtPartitionTopologyImpl#update receives 
> a full map containing a node that left on a previous exchange, firing lost 
> events. This leads to raising the events twice.
> IgniteCachePartitionLossPolicySelfTest was changed to check that events are 
> raised for all lost partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8673) Reconcile isClient* methods

2018-08-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579789#comment-16579789
 ] 

ASF GitHub Bot commented on IGNITE-8673:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4104


> Reconcile isClient* methods
> ---
>
> Key: IGNITE-8673
> URL: https://issues.apache.org/jira/browse/IGNITE-8673
> Project: Ignite
>  Issue Type: Bug
>Reporter: Eduard Shangareev
>Assignee: Eduard Shangareev
>Priority: Critical
> Fix For: 2.7
>
>
> Currently the semantics of the isClient* methods (Mode, Cache and so on) can 
> mean different things:
> -the same as IgniteConfiguration#setClientMode;
> -or the way a node is connected to the cluster (in the ring or not).
> In almost all cases we need the first, but the methods can actually return the 
> second.
> For example, ClusterNode.isClient means the second, but all of us use it as the 
> first.
> So, I propose to make all methods return the first.
> And if there are places which require the second, replace them with the usage of 
> forceClientMode.
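The two meanings being conflated can be illustrated with a small sketch. Names here are illustrative only (this is not Ignite's actual API): the configured client mode is kept separate from the topology-level fact of how the node joined, and the unified `isClient()` always reports the configured mode, as the issue proposes.

```java
/**
 * Illustrative sketch of the two "client" semantics described above.
 */
public class ClientSemantics {
    /** Meaning 1: what the user configured (IgniteConfiguration#setClientMode). */
    private final boolean configuredClient;

    /** Meaning 2: how the node is connected to the cluster (outside the ring or in it). */
    private final boolean joinedOutsideRing;

    public ClientSemantics(boolean configuredClient, boolean joinedOutsideRing) {
        this.configuredClient = configuredClient;
        this.joinedOutsideRing = joinedOutsideRing;
    }

    /** Proposed unified semantic: always report the configured mode. */
    public boolean isClient() {
        return configuredClient;
    }

    /** Callers that really need the topology view ask for it explicitly. */
    public boolean isConnectedOutsideRing() {
        return joinedOutsideRing;
    }
}
```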





[jira] [Updated] (IGNITE-9264) Lost partitions raised twice if node left during previous exchange

2018-08-14 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-9264:

Summary: Lost partitions raised twice if node left during previous exchange 
 (was: Lost partitions raised twice if node left during exchange)

> Lost partitions raised twice if node left during previous exchange
> --
>
> Key: IGNITE-9264
> URL: https://issues.apache.org/jira/browse/IGNITE-9264
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> There is a possible situation where GridDhtPartitionTopologyImpl#update receives 
> a full map containing a node that left on a previous exchange, firing lost 
> events. This leads to raising the events twice.
> IgniteCachePartitionLossPolicySelfTest was changed to check that events are 
> raised for all lost partitions.





[jira] [Comment Edited] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-08-14 Thread Dmitry Sherstobitov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579657#comment-16579657
 ] 

Dmitry Sherstobitov edited comment on IGNITE-7165 at 8/14/18 11:22 AM:
---

For now, I have no reproducer in Java.
 I've investigated the persistent store in my test and found that the rebalanced 
data is present in storage on the node with the cleared LFS, but the 
LocalNodeMovingPartitionsCount metric is definitely broken after a client node 
joins the cluster. If I remove the client join event after the node comes back, 
rebalance finishes correctly.

Here is output from my test log (rebalance didn't finish in 240 seconds, while in 
previous versions it completes in 10-15 seconds):

[13:14:17][:568 :617] Wait rebalance to finish 8/240
Current metric state for cache cache_group_3_088 on node 2: 19

[13:18:04][:568 :617] Wait rebalance to finish 235/240
Current metric state for cache cache_group_3_088 on node 2: 19

P.S. Test runs on a distributed environment, not on a single machine
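The wait loop behind log lines like the ones above can be sketched as follows. This is a hypothetical polling helper, not the actual test-framework code: it polls the LocalNodeMovingPartitionsCount metric (supplied as a callback) until it drops to zero or the timeout expires, which is exactly the loop that times out when the metric sticks at 19.

```java
import java.util.function.IntSupplier;

/**
 * Hypothetical sketch of a rebalance wait loop driven by the
 * LocalNodeMovingPartitionsCount metric.
 */
public class RebalanceAwait {
    /**
     * @param movingPartitions Supplier of the current metric value.
     * @param timeoutMs Total time to wait before giving up.
     * @param pollMs Poll interval.
     * @return {@code true} if the metric dropped to zero within the timeout.
     */
    public static boolean waitForRebalance(IntSupplier movingPartitions,
                                           long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;

        while (movingPartitions.getAsInt() > 0) {
            if (System.currentTimeMillis() >= deadline)
                return false; // Metric never dropped, as in the log above.

            try {
                Thread.sleep(pollMs);
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }

        return true; // Rebalance finished.
    }
}
```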


was (Author: qvad):
For now, I have no reproducer on Java.
I've investigated persistent store in my test and found that there is 
rebalanced data in storage on the node with cleared LFS, but metrics 
LocalNodeMovingPartitionsCount is definitely broken after client node joins the 
cluster. If I remove the client join event after the node is back - rebalance 
finished correctly.

Here is code from my test log: (Rebalance didn't finish in 240 seconds, while 
in previous versions it's done in 10-15 seconds)

[13:14:17][:568 :617] Wait rebalance to finish 8/240Current metric state for 
cache cache_group_3_088 on node 2: 19

[13:18:04][:568 :617] Wait rebalance to finish 235/240Current metric state for 
cache cache_group_3_088 on node 2: 19

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Maxim Muzafarov
>Priority: Critical
>  Labels: rebalance
> Fix For: 2.7
>
> Attachments: node-NO_REBALANCE-7165.log
>
>
> Re-balancing is cancelled if a client node joins. Re-balancing can take hours, 
> and each time a client node joins it starts again:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> 

[jira] [Updated] (IGNITE-9262) Web console: missed generation of query entities for imported domain models

2018-08-14 Thread Vasiliy Sisko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasiliy Sisko updated IGNITE-9262:
--
Description: 
# Open configuration overview.
 # Import cluster from database.
 # Download generated project.

The downloaded project does not contain the generated QueryEntities that are 
visible in the project preview.

Second problem:

 

  was:
# Open configuration overview.
 # Import cluster from database.
 # Download generated project.

The downloaded project does not contain the generated QueryEntities that are 
visible in the project preview.
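The missing output is Java configuration code built around QueryEntity (key/value types plus a field map). The sketch below is hypothetical (the class and the exact output format are illustrative, not the Web Console's actual generator): it shows the kind of snippet that should appear in the downloaded project but does not.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical sketch of generating the QueryEntity snippet that the
 * downloaded project is missing.
 */
public class QueryEntitySnippetGen {
    /**
     * @param keyType Fully-qualified key type of the imported table.
     * @param valType Fully-qualified value type of the imported table.
     * @param fields Ordered field name -> field type map.
     * @return Java source fragment configuring one QueryEntity.
     */
    public static String generate(String keyType, String valType,
                                  LinkedHashMap<String, String> fields) {
        StringBuilder sb = new StringBuilder();

        sb.append("QueryEntity qryEntity = new QueryEntity();\n");
        sb.append("qryEntity.setKeyType(\"").append(keyType).append("\");\n");
        sb.append("qryEntity.setValueType(\"").append(valType).append("\");\n");
        sb.append("LinkedHashMap<String, String> fields = new LinkedHashMap<>();\n");

        for (Map.Entry<String, String> e : fields.entrySet())
            sb.append("fields.put(\"").append(e.getKey()).append("\", \"")
              .append(e.getValue()).append("\");\n");

        sb.append("qryEntity.setFields(fields);\n");

        return sb.toString();
    }
}
```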


> Web console: missed generation of query entities for imported domain models
> 
>
> Key: IGNITE-9262
> URL: https://issues.apache.org/jira/browse/IGNITE-9262
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Vasiliy Sisko
>Assignee: Vasiliy Sisko
>Priority: Major
>
> # Open configuration overview.
>  # Import cluster from database.
>  # Download generated project.
> The downloaded project does not contain the generated QueryEntities that are 
> visible in the project preview.
> Second problem:
>  




