[jira] [Commented] (IGNITE-8787) Striped Executor thread failure is not processed by IgniteFailureProcessor

2018-06-13 Thread Andrew Medvedev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511969#comment-16511969
 ] 

Andrew Medvedev commented on IGNITE-8787:
-

relates to https://issues.apache.org/jira/browse/IGNITE-7772

> Striped Executor thread failure is not processed by IgniteFailureProcessor
> --
>
> Key: IGNITE-8787
> URL: https://issues.apache.org/jira/browse/IGNITE-8787
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Priority: Major
> Attachments: after.jstack, before.jstack
>
>
> org.apache.ignite.internal.util.StripedExecutor.Stripe#run currently does not 
> invoke IgniteFailureProcessor upon thread death. This can lead to all striped 
> threads dying on a running node. See the attached jstacks, taken before and 
> after killing all striped threads (via JMX).
>  
> If striped executor threads are considered critical, they should be processed 
> by IgniteFailureProcessor as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8787) Striped Executor thread failure is not processed by IgniteFailureProcessor

2018-06-13 Thread Andrew Medvedev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Medvedev updated IGNITE-8787:

Affects Version/s: 2.5

> Striped Executor thread failure is not processed by IgniteFailureProcessor
> --
>
> Key: IGNITE-8787
> URL: https://issues.apache.org/jira/browse/IGNITE-8787
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Priority: Major
> Attachments: after.jstack, before.jstack
>
>
> org.apache.ignite.internal.util.StripedExecutor.Stripe#run currently does not 
> invoke IgniteFailureProcessor upon thread death. This can lead to all striped 
> threads dying on a running node. See the attached jstacks, taken before and 
> after killing all striped threads (via JMX).
>  
> If striped executor threads are considered critical, they should be processed 
> by IgniteFailureProcessor as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8787) Striped Executor thread failure is not processed by IgniteFailureProcessor

2018-06-13 Thread Andrew Medvedev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Medvedev updated IGNITE-8787:

Description: 
org.apache.ignite.internal.util.StripedExecutor.Stripe#run currently does not 
invoke IgniteFailureProcessor upon thread death. This can lead to all striped 
threads dying on a running node. See the attached jstacks, taken before and 
after killing all striped threads (via JMX).

If striped executor threads are considered critical, they should be processed 
by IgniteFailureProcessor as well.

  was:
org.apache.ignite.internal.util.StripedExecutor.Stripe#run currently does not 
invoke IgniteFailureProcessor upon thread death. This can lead to dying all 
striped threads on a running node. see jstacks attach before and after killing 
all striped threads (via JMX).

 

If striped executor threads are considered critical, they should be processed 
by IgniteFailureProcessor as well


> Striped Executor thread failure is not processed by IgniteFailureProcessor
> --
>
> Key: IGNITE-8787
> URL: https://issues.apache.org/jira/browse/IGNITE-8787
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrew Medvedev
>Priority: Major
> Attachments: after.jstack, before.jstack
>
>
> org.apache.ignite.internal.util.StripedExecutor.Stripe#run currently does not 
> invoke IgniteFailureProcessor upon thread death. This can lead to all striped 
> threads dying on a running node. See the attached jstacks, taken before and 
> after killing all striped threads (via JMX).
>  
> If striped executor threads are considered critical, they should be processed 
> by IgniteFailureProcessor as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8787) Striped Executor thread failure is not processed by IgniteFailureProcessor

2018-06-13 Thread Andrew Medvedev (JIRA)
Andrew Medvedev created IGNITE-8787:
---

 Summary: Striped Executor thread failure is not processed by 
IgniteFailureProcessor
 Key: IGNITE-8787
 URL: https://issues.apache.org/jira/browse/IGNITE-8787
 Project: Ignite
  Issue Type: Bug
Reporter: Andrew Medvedev
 Attachments: after.jstack, before.jstack

org.apache.ignite.internal.util.StripedExecutor.Stripe#run currently does not 
invoke IgniteFailureProcessor upon thread death. This can lead to all striped 
threads dying on a running node. See the attached jstacks, taken before and 
after killing all striped threads (via JMX).

If striped executor threads are considered critical, they should be processed 
by IgniteFailureProcessor as well.
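For illustration only, here is a minimal sketch of the kind of guard this asks for: the stripe's run() reports its own unexpected termination to a failure callback. The FailureCallback type and method names below are hypothetical stand-ins, not Ignite's actual failure-handling API.

{code}
// Hedged sketch: a stripe-like worker that reports its own unexpected death.
interface FailureCallback {
    void onCriticalThreadDeath(Thread t, Throwable err);
}

final class Stripe implements Runnable {
    private final FailureCallback failureCb;

    /** Set when the node is being stopped deliberately. */
    private volatile boolean stopping;

    Stripe(FailureCallback failureCb) {
        this.failureCb = failureCb;
    }

    @Override public void run() {
        Throwable err = null;

        try {
            body(); // Actual stripe work loop (omitted).
        }
        catch (Throwable t) {
            err = t;
        }
        finally {
            // Report thread death unless the node is deliberately stopping.
            if (!stopping)
                failureCb.onCriticalThreadDeath(Thread.currentThread(), err);
        }
    }

    private void body() {
        // Work loop placeholder.
    }
}
{code}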



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8737) Improve checkpoint logging information

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511908#comment-16511908
 ] 

ASF GitHub Bot commented on IGNITE-8737:


GitHub user andrewmed opened a pull request:

https://github.com/apache/ignite/pull/4186

IGNITE-8737: Improve checkpoint logging information



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/andrewmed/ignite ignite-8737

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4186.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4186


commit f6469a5b7cdd0fe9eea4dfb6b78655268720a2cd
Author: AMedvedev 
Date:   2018-06-14T02:22:38Z

IGNITE-8737: Improve checkpoint logging information




> Improve checkpoint logging information
> --
>
> Key: IGNITE-8737
> URL: https://issues.apache.org/jira/browse/IGNITE-8737
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> 1) Move log rollover and log archiving events to INFO level
> 2) Make sure log rollover and archiving errors are logged
> 3) When a checkpoint finishes, we need to print out which segments were fully 
> covered by this checkpoint in the "Checkpoint finished ..." message



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8737) Improve checkpoint logging information

2018-06-13 Thread Andrew Medvedev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Medvedev reassigned IGNITE-8737:
---

Assignee: Andrew Medvedev

> Improve checkpoint logging information
> --
>
> Key: IGNITE-8737
> URL: https://issues.apache.org/jira/browse/IGNITE-8737
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Andrew Medvedev
>Priority: Major
> Fix For: 2.6
>
>
> 1) Move log rollover and log archiving events to INFO level
> 2) Make sure log rollover and archiving errors are logged
> 3) When a checkpoint finishes, we need to print out which segments were fully 
> covered by this checkpoint in the "Checkpoint finished ..." message



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8737) Improve checkpoint logging information

2018-06-13 Thread Andrew Medvedev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511910#comment-16511910
 ] 

Andrew Medvedev commented on IGNITE-8737:
-

TC https://ci.ignite.apache.org/viewQueued.html?itemId=1385433

> Improve checkpoint logging information
> --
>
> Key: IGNITE-8737
> URL: https://issues.apache.org/jira/browse/IGNITE-8737
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Andrew Medvedev
>Priority: Major
> Fix For: 2.6
>
>
> 1) Move log rollover and log archiving events to INFO level
> 2) Make sure log rollover and archiving errors are logged
> 3) When a checkpoint finishes, we need to print out which segments were fully 
> covered by this checkpoint in the "Checkpoint finished ..." message



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8322) Yardstick benchmark preloading option

2018-06-13 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511521#comment-16511521
 ] 

Andrey Gura commented on IGNITE-8322:
-

[~oleg-ostanin] LGTM. Merged to the master branch. Thanks for the contribution!

> Yardstick benchmark preloading option
> -
>
> Key: IGNITE-8322
> URL: https://issues.apache.org/jira/browse/IGNITE-8322
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Oleg Ostanin
>Assignee: Oleg Ostanin
>Priority: Major
> Fix For: 2.6
>
>
> Yardstick has no benchmarks with eviction to disk (PDS). For that purpose 
> we need the following:
> 1. Make a new configuration and put every cache into a separate data region:
> atomic, tx, atomic-index, query, compute
> 2. Add a new preload option for a benchmark: preload up to a size passed via 
> that option. There are two options:
>  * total size of the preloaded data (bytes)
>  * the size of data in memory relative to the total size (percent)
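Not part of the ticket, but as a rough sketch of what "preload up to a size" could mean in code, assuming a fixed per-entry payload and an already-started Ignite instance (the cache name and sizes are placeholders):

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

/** Hedged sketch: preload a cache up to an approximate total-size budget in bytes. */
final class PreloadSketch {
    /** Payload size per entry; the real benchmark would take this from a property. */
    private static final int PAYLOAD_SIZE = 1024;

    static void preload(Ignite ignite, String cacheName, long totalBytes) {
        try (IgniteDataStreamer<Integer, byte[]> streamer = ignite.dataStreamer(cacheName)) {
            long written = 0;

            for (int key = 0; written < totalBytes; key++) {
                streamer.addData(key, new byte[PAYLOAD_SIZE]);

                written += PAYLOAD_SIZE; // Rough estimate: ignores key and per-entry overhead.
            }
        }
    }
}
{code}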



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8786) session.removeAttribute does not work as expected

2018-06-13 Thread Dana Shaw (JIRA)
Dana Shaw created IGNITE-8786:
-

 Summary: session.removeAttribute does not work as expected
 Key: IGNITE-8786
 URL: https://issues.apache.org/jira/browse/IGNITE-8786
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5, 2.4
 Environment: Java 8, Java 9, Java 10

Ignite 2.4 and Ignite 2.5
Reporter: Dana Shaw


Sample project: [https://github.com/daynok/ignite-webapp]

 

What I'm noticing is that session.removeAttribute doesn't really remove the 
attribute; it only sets the value to null. I'm not sure if this is a setup 
issue on my end or what. I thought this might be related to JSF, but I removed 
JSF and the issue persists. 

Thanks in advance and please help! 

dshaw 

The closest existing issue that I could find to my particular problem was: 
[https://github.com/apache/ignite/pull/2243]

I patched my local Ignite repo with #2243 and redeployed to Tomcat and 2 
Ignite nodes, but I am seeing the same issue. 
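For reference, a minimal servlet sketch (not from the original report; the attribute name "probe" is arbitrary) that exercises the behavior described above:

{code}
import java.io.IOException;
import java.util.Collections;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

/** Checks whether removeAttribute really removes the attribute from the replicated session. */
public class RemoveAttributeCheckServlet extends HttpServlet {
    @Override protected void doGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
        HttpSession ses = req.getSession(true);

        ses.setAttribute("probe", "value");
        ses.removeAttribute("probe");

        // Expected: the attribute is completely gone. Observed per the report:
        // the value is null, but the attribute name can still show up.
        res.getWriter().println("value=" + ses.getAttribute("probe"));
        res.getWriter().println("names=" + Collections.list(ses.getAttributeNames()));
    }
}
{code}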



My setup (client/server): 
- Ignite 2.5.0 (2 node cluster) 
- Apache Tomcat 7 
- Java 9 

tomcat setenv.sh 
#!/bin/sh 
export JAVA_HOME=/opt/java/jdk-10.0.1 
export PATH=$JAVA_HOME/bin:$PATH 
export CATALINA_OPTS="--add-exports java.base/jdk.internal.misc=ALL-UNNAMED 
    --add-exports java.base/sun.nio.ch=ALL-UNNAMED 
    --add-exports java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED 
    --add-exports jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED 
    --add-modules java.xml.bind" 
export CATALINA_HOME=/opt/apache/apache-tomcat-7.0.86_node 


# Config used by 3 Tomcat nodes 
client-config.xml (used by Tomcat 7) 

(Spring beans XML not preserved in the archive; only the schema references and 
the TcpDiscoverySpi static IP finder addresses 172.24.2.156:47500..47509 and 
172.24.3.28:47500..47509 are recoverable.)


# Config used by 2 Ignite nodes 

(Spring beans XML not preserved in the archive; only the TcpDiscoverySpi static 
IP finder addresses 172.24.2.156:47500..47509 and 172.24.3.28:47500..47509 are 
recoverable.)


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8402) Long running transaction JMX

2018-06-13 Thread Ivan Kapralov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511486#comment-16511486
 ] 

Ivan Kapralov edited comment on IGNITE-8402 at 6/13/18 6:19 PM:


We are facing the need to monitor long-running transaction parameters on cluster 
nodes via JMX.

The solution implemented in IGN-7910 is unfortunately not suitable, because it 
cannot be used by automated monitoring systems.


was (Author: ivan kapralov):
Facing necessity to monitor long running transactions parameters on cluster 
nodes via JMX.
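For illustration, a generic sketch of the kind of automated JMX polling meant here, using only the standard javax.management API; the service URL and the ObjectName pattern are placeholders, not actual Ignite bean names:

{code}
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

/** Polls transaction-related MBeans so a monitoring system can scrape them. */
public class TxJmxPoller {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point it at the node's JMX port.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:49112/jmxrmi");

        JMXConnector connector = JMXConnectorFactory.connect(url);

        try {
            MBeanServerConnection con = connector.getMBeanServerConnection();

            // Placeholder pattern; the real bean names depend on the metric implemented for this ticket.
            Set<ObjectName> names = con.queryNames(new ObjectName("org.apache:group=Transactions,*"), null);

            for (ObjectName name : names)
                System.out.println(name + " -> " + con.getMBeanInfo(name).getDescription());
        }
        finally {
            connector.close();
        }
    }
}
{code}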

> Long running transaction JMX
> 
>
> Key: IGNITE-8402
> URL: https://issues.apache.org/jira/browse/IGNITE-8402
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.4
>Reporter: Ivan Kapralov
>Priority: Major
> Fix For: 2.5
>
>
> We need a JMX metric implementation for long-running transactions.
> Needed: transaction start time, node ID, duration, fully qualified cache name, 
> originator ID.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (IGNITE-8402) Long running transaction JMX

2018-06-13 Thread Ivan Kapralov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Kapralov reopened IGNITE-8402:
---

We are facing the need to monitor long-running transaction parameters on cluster 
nodes via JMX.

> Long running transaction JMX
> 
>
> Key: IGNITE-8402
> URL: https://issues.apache.org/jira/browse/IGNITE-8402
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.4
>Reporter: Ivan Kapralov
>Priority: Major
> Fix For: 2.5
>
>
> We need a JMX metric implementation for long-running transactions.
> Needed: transaction start time, node ID, duration, fully qualified cache name, 
> originator ID.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7700) SQL system view for list of nodes

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511416#comment-16511416
 ] 

ASF GitHub Bot commented on IGNITE-7700:


GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/4185

IGNITE-7700 SQL system view for list of nodes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-7700

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4185


commit f65eca5c41eeda35960d8d010b2640fdbf98bf46
Author: Aleksey Plekhanov 
Date:   2018-06-13T14:52:25Z

IGNITE-7700 SQL system view for list of nodes




> SQL system view for list of nodes
> -
>
> Key: IGNITE-7700
> URL: https://issues.apache.org/jira/browse/IGNITE-7700
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: iep-13, sql
> Fix For: 2.6
>
>
> Implement an SQL system view that shows the list of nodes in the topology.
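Once such a view exists, it could be queried over the JDBC thin driver roughly as follows (a sketch only; the schema and view name "IGNITE.NODES" are assumptions, not confirmed by this ticket):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Lists cluster nodes through an assumed system view over the JDBC thin driver. */
public class NodesViewQuery {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM IGNITE.NODES")) {
            while (rs.next())
                System.out.println(rs.getString(1)); // e.g. the node ID column.
        }
    }
}
{code}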



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8785) Node may hang indefinitely in CONNECTING state during cluster segmentation

2018-06-13 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8785:
---

 Summary: Node may hang indefinitely in CONNECTING state during 
cluster segmentation
 Key: IGNITE-8785
 URL: https://issues.apache.org/jira/browse/IGNITE-8785
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.5
Reporter: Pavel Kovalenko
 Fix For: 2.6


Affected test: 
org.apache.ignite.internal.processors.cache.IgniteTopologyValidatorGridSplitCacheTest#testTopologyValidatorWithCacheGroup

The node hangs with the following stack trace:

{noformat}
"grid-starter-testTopologyValidatorWithCacheGroup-22" #117619 prio=5 os_prio=0 
tid=0x7f17dd19b800 nid=0x304a in Object.wait() [0x7f16b19df000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:931)
- locked <0x000705ee4a60> (a java.lang.Object)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:373)
at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1948)
at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:915)
at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1739)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1046)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
- locked <0x000705995ec0> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:882)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:845)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:833)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:799)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$3.call(GridAbstractTest.java:742)
at 
org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
{noformat}

It seems that the node never receives an acknowledgment from the coordinator.

There was a failure before that:

{noformat}
[org.apache.ignite:ignite-core] [2018-06-10 04:59:18,876][WARN 
][grid-starter-testTopologyValidatorWithCacheGroup-22][IgniteCacheTopologySplitAbstractTest$SplitTcpDiscoverySpi]
 Node has not been connected to topology and will repeat join process. Check 
remote nodes logs for possible error messages. Note that large topology may 
require significant time to start. Increase 'TcpDiscoverySpi.networkTimeout' 
configuration property if getting this message on the starting nodes 
[networkTimeout=5000]
{noformat}
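The warning above points at TcpDiscoverySpi.networkTimeout as a tuning knob. For reference, a minimal sketch of raising it (the 20 000 ms value is an arbitrary example; this tunes the join behaviour and does not fix the hang itself):

{code}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

/** Raises the discovery network timeout mentioned in the warning (default 5000 ms). */
public class DiscoveryTimeoutConfig {
    public static IgniteConfiguration configure() {
        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();

        discoSpi.setNetworkTimeout(20_000);

        return new IgniteConfiguration().setDiscoverySpi(discoSpi);
    }
}
{code}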





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8386) SQL: Make sure PK index do not use wrapped object

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511394#comment-16511394
 ] 

ASF GitHub Bot commented on IGNITE-8386:


GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/4184

IGNITE-8386



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8386

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4184


commit ce3de793e46e3f77dadf44643fd7e5054b8f4fa3
Author: Alexander Paschenko 
Date:   2018-05-14T17:38:47Z

IGNITE-8384 First steps

commit 03617740ab06296558bd30b3e4bdcb79781f12b1
Author: Alexander Paschenko 
Date:   2018-05-16T17:37:41Z

IGNITE-8384 Test fixes.

commit 6a3bd7e97741301d55d309d2fb9ad1b7fc2ca4fc
Author: Alexander Paschenko 
Date:   2018-05-18T17:09:49Z

Test fixes

commit f5b75db8430a63024719f2f25c4f98fa2f1b1348
Author: Alexander Paschenko 
Date:   2018-05-29T12:51:54Z

Correct check for exhausted indexes.

commit d47caf7bd55fba83d8d9c27495678655a0750b82
Author: Alexander Paschenko 
Date:   2018-05-29T16:15:59Z

Don't rely on links on update

commit 42b9bac6bd3c415cd7e534bdf347c5971d60b0a0
Author: Alexander Paschenko 
Date:   2018-05-29T17:15:02Z

Merge remote-tracking branch 'apache/master' into ignite-8384

commit ad3f6f11b4e5c31b8eb1d84c02dafd6b1adc71ba
Author: Alexander Paschenko 
Date:   2018-05-30T15:23:44Z

Test fix

commit 8a8c906d918c6846a724379308f966a7a23d7df1
Author: Alexander Paschenko 
Date:   2018-06-01T18:08:04Z

Tests fix

commit f1d7fbbc886ea30904040e9796cfeec16046f449
Author: Alexander Paschenko 
Date:   2018-06-01T18:16:07Z

Minors

commit a3331b96f45683f88b500aec013ad429799ce564
Author: Alexander Paschenko 
Date:   2018-06-01T18:27:45Z

Revert "Minors"

This reverts commit f1d7fbb

commit 563f1ab8527d38902eeb697bf6ebf932930ba289
Author: Alexander Paschenko 
Date:   2018-06-01T18:44:59Z

Fix?

commit 5d8d8369ee435812b6f4058efe03be5cf840432b
Author: Alexander Paschenko 
Date:   2018-06-04T12:44:14Z

PK comparison fix.

commit 934f7205bc15e20df50e515ef4d29c32435a8bc4
Author: Alexander Paschenko 
Date:   2018-06-04T12:45:29Z

Merge remote-tracking branch 'apache/master' into ignite-8384

commit 5741ce50db6c1020dbfd7c4187b80a16874d2e36
Author: Alexander Paschenko 
Date:   2018-06-04T12:51:21Z

Minors

commit 405135d1c9e539e54181b2c712db0caf659e9c96
Author: Alexander Paschenko 
Date:   2018-06-04T13:56:02Z

Minor

commit 9f06e2ec50d18aa8cc3a053d2636329447744677
Author: Alexander Paschenko 
Date:   2018-06-04T14:56:02Z

Minor

commit bd87b61fc434eb0a0bf26981581a72426d82a3ba
Author: Alexander Paschenko 
Date:   2018-06-06T13:11:52Z

Non-inline idxs fix

commit f7a4a42d1b7e2f1105f734c6aeae30d14449b422
Author: Alexander Paschenko 
Date:   2018-06-06T13:12:47Z

Merge remote-tracking branch 'apache/master' into ignite-8384

commit 244d1243d8e8303b2e39788ddcb6d1a05ec0b5de
Author: Alexander Paschenko 
Date:   2018-06-06T15:30:16Z

Restore.

commit 2696eb87af15b4591a724ea85c5cf99ba8924ff9
Author: Alexander Paschenko 
Date:   2018-06-08T18:58:39Z

Index migration test

commit b6273d303ec5e84c550dba980c0b080ef48c7535
Author: Alexander Paschenko 
Date:   2018-06-09T11:39:52Z

Merge remote-tracking branch 'apache/master' into ignite-8384

commit 9fd0fabab0585727f5afd7ff4d55ffa2b00c7c64
Author: devozerov 
Date:   2018-06-09T13:17:39Z

Merge branch 'master' into ignite-8384

commit fb32f2389c60141fa5a27e5b0c3706c959ada7f0
Author: devozerov 
Date:   2018-06-09T14:19:35Z

Minors.

commit ad9c692ab99dd68e368cf1bee927b0bf0065a15c
Author: devozerov 
Date:   2018-06-09T14:20:29Z

Merge remote-tracking branch 'upstream/ignite-8384' into ignite-8384

commit 93b57df524e19c771127766ca9c68fe67ac77109
Author: Alexander Paschenko 
Date:   2018-06-09T17:37:03Z

IGNITE-8386 Contd

commit a9d65d48bcee042678aeb2e00ffc51c31179cf36
Author: Alexander Paschenko 
Date:   2018-06-13T15:52:31Z

JDBC test fixes

commit 61b64a2011d16a69d7d589cf11766a4452e4a296
Author: Alexander Paschenko 
Date:   2018-06-13T15:53:54Z

Merge remote-tracking branch 'apache/master' into ignite-8386

commit e0a366a3a85154790a929e88f960c6e65ff2a76f
Author: Alexander Paschenko 
Date:   2018-06-13T16:31:36Z

Merge remote-tracking branch 'origin/ignite-8384' into ignite-8386




> SQL: Make sure PK index do not use wrapped object
> -
>
> Key: IGNITE-8386
> URL: https://issues.apache.org/jira/browse/IGNITE-8386
> Project: Ignite
>  Issue Type: Task
>  

[jira] [Created] (IGNITE-8784) Deadlock during simultaneous client reconnect and node stop

2018-06-13 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8784:
---

 Summary: Deadlock during simultaneous client reconnect and node 
stop
 Key: IGNITE-8784
 URL: https://issues.apache.org/jira/browse/IGNITE-8784
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.5
Reporter: Pavel Kovalenko
 Fix For: 2.6



{noformat}
[18:48:22,665][ERROR][tcp-client-disco-msg-worker-#467%client%][IgniteKernal%client]
 Failed to reconnect, will stop node
class org.apache.ignite.IgniteException: Failed to wait for local node joined 
event (grid is stopping).
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.localJoin(GridDiscoveryManager.java:2193)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.onKernalStart(GridCachePartitionExchangeManager.java:583)
at 
org.apache.ignite.internal.processors.cache.GridCacheSharedContext.onReconnected(GridCacheSharedContext.java:396)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onReconnected(GridCacheProcessor.java:1159)
at 
org.apache.ignite.internal.IgniteKernal.onReconnected(IgniteKernal.java:3915)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:830)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery(GridDiscoveryManager.java:589)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.notifyDiscovery(ClientImpl.java:2423)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.notifyDiscovery(ClientImpl.java:2402)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.processNodeAddFinishedMessage(ClientImpl.java:2047)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.processDiscoveryMessage(ClientImpl.java:1896)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1788)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to wait for 
local node joined event (grid is stopping).
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.onKernalStop0(GridDiscoveryManager.java:1657)
at 
org.apache.ignite.internal.managers.GridManagerAdapter.onKernalStop(GridManagerAdapter.java:652)
at org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:2218)
at org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:2166)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2588)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2551)
at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:372)
at org.apache.ignite.Ignition.stop(Ignition.java:229)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:1088)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopAllGrids(GridAbstractTest.java:1128)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopAllGrids(GridAbstractTest.java:1109)
at 
org.gridgain.grid.internal.processors.cache.database.IgniteDbSnapshotNotStableTopologiesTest.afterTest(IgniteDbSnapshotNotStableTopologiesTest.java:250)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.tearDown(GridAbstractTest.java:1694)
at 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.tearDown(GridCommonAbstractTest.java:492)
at junit.framework.TestCase.runBare(TestCase.java:146)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[jira] [Assigned] (IGNITE-8783) Failover tests periodically cause hanging of the whole Data Structures suite on TC

2018-06-13 Thread Anton Vinogradov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov reassigned IGNITE-8783:


Assignee: Anton Vinogradov

> Failover tests periodically cause hanging of the whole Data Structures suite 
> on TC
> --
>
> Key: IGNITE-8783
> URL: https://issues.apache.org/jira/browse/IGNITE-8783
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Reporter: Ivan Rakov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> History of suite runs: 
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
> Chance of suite hang is 18% in master (based on previous 50 runs).
> Hang is always caused by one of the following failover tests:
> {noformat}
> GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
> GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8783) Failover tests periodically cause hanging of the whole Data Structures suite on TC

2018-06-13 Thread Ivan Rakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8783:
---
Description: 
History of suite runs: 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
Chance of suite hang is 18% in master (based on previous 50 runs).
Hang is always caused by one of the following failover tests:
{noformat}
GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
{noformat}

  was:
History of suite runs: 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
Chance of suite hang is 18% in master (based on previous 50 runs).
One of the following failover tests is always a reason of hang:
{noformat}
GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
{noformat}


> Failover tests periodically cause hanging of the whole Data Structures suite 
> on TC
> --
>
> Key: IGNITE-8783
> URL: https://issues.apache.org/jira/browse/IGNITE-8783
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Reporter: Ivan Rakov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> History of suite runs: 
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
> Chance of suite hang is 18% in master (based on previous 50 runs).
> Hang is always caused by one of the following failover tests:
> {noformat}
> GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
> GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8783) Failover tests periodically cause hanging of the whole Data Structures suite on TC

2018-06-13 Thread Ivan Rakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8783:
---
Description: 
History of suite runs: 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
Chance of suite hang is 18% in master (based on previous 50 runs).
One of the following failover tests is always a reason of hang:
{noformat}
GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
{noformat}

  was:
History of suite runs: 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
Chance of suite hang is 18% (based on previous 50 runs).
One of the following failover tests is always a reason of hang:
{noformat}
GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
{noformat}


> Failover tests periodically cause hanging of the whole Data Structures suite 
> on TC
> --
>
> Key: IGNITE-8783
> URL: https://issues.apache.org/jira/browse/IGNITE-8783
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Reporter: Ivan Rakov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> History of suite runs: 
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
> Chance of suite hang is 18% in master (based on previous 50 runs).
> One of the following failover tests is always a reason of hang:
> {noformat}
> GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
> GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8783) Failover tests periodically cause hanging of the whole Data Structures suite on TC

2018-06-13 Thread Ivan Rakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8783:
---
Labels: MakeTeamcityGreenAgain  (was: )

> Failover tests periodically cause hanging of the whole Data Structures suite 
> on TC
> --
>
> Key: IGNITE-8783
> URL: https://issues.apache.org/jira/browse/IGNITE-8783
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Reporter: Ivan Rakov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> History of suite runs: 
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
> Chance of suite hang is 18% in master (based on previous 50 runs).
> Hang is always caused by one of the following failover tests:
> {noformat}
> GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
> GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8783) Failover tests periodically cause hanging of the whole Data Structures suite on TC

2018-06-13 Thread Ivan Rakov (JIRA)
Ivan Rakov created IGNITE-8783:
--

 Summary: Failover tests periodically cause hanging of the whole 
Data Structures suite on TC
 Key: IGNITE-8783
 URL: https://issues.apache.org/jira/browse/IGNITE-8783
 Project: Ignite
  Issue Type: Bug
  Components: data structures
Reporter: Ivan Rakov


History of suite runs: 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E
Chance of suite hang is 18% (based on previous 50 runs).
One of the following failover tests is always a reason of hang:
{noformat}
GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange
GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8777) REST: metadata command failed on cluster of size 1.

2018-06-13 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511326#comment-16511326
 ] 

Alexey Kuznetsov commented on IGNITE-8777:
--

[~Chandresh Pancholi] You need to add more tests to the 
JettyRestProcessorAbstractSelfTest class, for example with the names 
testMetadataOneNode() and testMetadataManyNodes(), plus a helper method 
checkMetadata(int nodesCnt):
{code}
/** Checks the metadata REST command against a cluster of the given size. */
private void checkMetadata(int nodesCnt) {
    // ... test logic ...
}

/** Tests the metadata command on a single-node cluster. */
public void testMetadataOneNode() {
    checkMetadata(1);
}

/** Tests the metadata command on a multi-node cluster. */
public void testMetadataManyNodes() {
    checkMetadata(2);
}
{code}

Note that you will need to rework the int gridCount() method so that it can 
return the needed value.

> REST: metadata command failed on cluster of size 1.
> ---
>
> Key: IGNITE-8777
> URL: https://issues.apache.org/jira/browse/IGNITE-8777
> Project: Ignite
>  Issue Type: Improvement
>  Components: rest
>Affects Versions: 2.5
>Reporter: Alexey Kuznetsov
>Assignee: Chandresh Pancholi
>Priority: Major
>  Labels: newbie
>
> Start *only one* node.
> Execute the REST command: 
> http://localhost:8080/ignite?cmd=getorcreate=myNewPartionedCache=2
> The cache will be created.
> Execute 
> http://localhost:8080/ignite?cmd=metadata=myNewPartionedCache
> An error will be returned: {"successStatus":1,"error":"Failed to handle 
> request: [req=CACHE_METADATA, err=Failed to request meta data. 
> myNewPartionedCache is not found]","response":null,"sessionToken":null}
> After some debugging, I see this in GridCacheCommandHandler.MetadataTask#map:
> {code}
> ...
> for (int i = 1; i < subgrid.size(); i++) {
>  
> }
> if (map.isEmpty())
>     throw new IgniteException("Failed to request meta data. " 
>         + cacheName + " is not found");
> ...
> {code}
> So, in the case of a cluster with only one node, this code will throw an exception.
> I guess the fix should be to just replace "int i = 1" with "int i = 0".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8605) sqlline can't work with terminal on newer ncurses

2018-06-13 Thread Oleg Ostanin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511318#comment-16511318
 ] 

Oleg Ostanin commented on IGNITE-8605:
--

[~ilyak], the changes look good to me. [~vozerov], please merge.

> sqlline can't work with terminal on newer ncurses
> -
>
> Key: IGNITE-8605
> URL: https://issues.apache.org/jira/browse/IGNITE-8605
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.5
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>
> On Ubuntu 18.04:
> {code}
> ~/w/incubator-ignite/modules/sqlline% IGNITE_HOME=~/w/incubator-ignite . 
> bin/sqlline.sh
> [ERROR] Failed to construct terminal; falling back to unsupported
> java.lang.NumberFormatException: For input string: "0x100"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.valueOf(Integer.java:766)
> at jline.internal.InfoCmp.parseInfoCmp(InfoCmp.java:59)
> at jline.UnixTerminal.parseInfoCmp(UnixTerminal.java:242)
> at jline.UnixTerminal.<init>(UnixTerminal.java:65)
> at jline.UnixTerminal.<init>(UnixTerminal.java:50)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at jline.TerminalFactory.getFlavor(TerminalFactory.java:211)
> at jline.TerminalFactory.create(TerminalFactory.java:102)
> at jline.TerminalFactory.get(TerminalFactory.java:186)
> at jline.TerminalFactory.get(TerminalFactory.java:192)
> at sqlline.SqlLineOpts.<init>(SqlLineOpts.java:45)
> at sqlline.SqlLine.<init>(SqlLine.java:54)
> at sqlline.SqlLine.start(SqlLine.java:372)
> at sqlline.SqlLine.main(SqlLine.java:265)
> [ERROR] Failed to construct terminal; falling back to unsupported
> java.lang.NumberFormatException: For input string: "0x100"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.valueOf(Integer.java:766)
> at jline.internal.InfoCmp.parseInfoCmp(InfoCmp.java:59)
> at jline.UnixTerminal.parseInfoCmp(UnixTerminal.java:242)
> at jline.UnixTerminal.<init>(UnixTerminal.java:65)
> at jline.UnixTerminal.<init>(UnixTerminal.java:50)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at jline.TerminalFactory.getFlavor(TerminalFactory.java:211)
> at jline.TerminalFactory.create(TerminalFactory.java:102)
> at jline.TerminalFactory.create(TerminalFactory.java:51)
> at sqlline.SqlLine.getConsoleReader(SqlLine.java:705)
> at sqlline.SqlLine.begin(SqlLine.java:639)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
> sqlline version 1.3.0
> sqlline>
> ... and then history and command editing won't work
> {code}
> See also https://github.com/jline/jline2/issues/281
> I think we should manually pin the jline version to 2.14.4



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-602) [Test] GridToStringBuilder is vulnerable for StackOverflowError caused by infinite recursion

2018-06-13 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511300#comment-16511300
 ] 

Andrey Gura commented on IGNITE-602:


[~SomeFire] Could you please move the ticket to "Patch available" status?

> [Test] GridToStringBuilder is vulnerable for StackOverflowError caused by 
> infinite recursion
> 
>
> Key: IGNITE-602
> URL: https://issues.apache.org/jira/browse/IGNITE-602
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Artem Shutak
>Assignee: Ryabov Dmitrii
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.6
>
>
> See test 
> org.gridgain.grid.util.tostring.GridToStringBuilderSelfTest#_testToStringCheckAdvancedRecursionPrevention
>  and related TODO in same source file.
> Also take a look at 
> http://stackoverflow.com/questions/11300203/most-efficient-way-to-prevent-an-infinite-recursion-in-tostring
> Test should be unmuted on TC after fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-602) [Test] GridToStringBuilder is vulnerable for StackOverflowError caused by infinite recursion

2018-06-13 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511297#comment-16511297
 ] 

Andrey Gura commented on IGNITE-602:


[~SomeFire] Seems OK. But it would be better if somebody else reviewed it as 
well. [~agoncharuk], could you please take a look? (Actual PR: 
https://github.com/apache/ignite/pull/1558)

> [Test] GridToStringBuilder is vulnerable for StackOverflowError caused by 
> infinite recursion
> 
>
> Key: IGNITE-602
> URL: https://issues.apache.org/jira/browse/IGNITE-602
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Artem Shutak
>Assignee: Ryabov Dmitrii
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.6
>
>
> See test 
> org.gridgain.grid.util.tostring.GridToStringBuilderSelfTest#_testToStringCheckAdvancedRecursionPrevention
>  and related TODO in same source file.
> Also take a look at 
> http://stackoverflow.com/questions/11300203/most-efficient-way-to-prevent-an-infinite-recursion-in-tostring
> Test should be unmuted on TC after fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8777) REST: metadata command failed on cluster of size 1.

2018-06-13 Thread Chandresh Pancholi (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511294#comment-16511294
 ] 

Chandresh Pancholi commented on IGNITE-8777:


[~kuaw26] Could you please help me out with the unit test? 

> REST: metadata command failed on cluster of size 1.
> ---
>
> Key: IGNITE-8777
> URL: https://issues.apache.org/jira/browse/IGNITE-8777
> Project: Ignite
>  Issue Type: Improvement
>  Components: rest
>Affects Versions: 2.5
>Reporter: Alexey Kuznetsov
>Assignee: Chandresh Pancholi
>Priority: Major
>  Labels: newbie
>
> Start *only one* node.
> Execute the REST command: 
> http://localhost:8080/ignite?cmd=getorcreate=myNewPartionedCache=2
> The cache will be created.
> Execute 
> http://localhost:8080/ignite?cmd=metadata=myNewPartionedCache
> An error will be returned: {"successStatus":1,"error":"Failed to handle 
> request: [req=CACHE_METADATA, err=Failed to request meta data. 
> myNewPartionedCache is not found]","response":null,"sessionToken":null}
> After some debugging, I see this in GridCacheCommandHandler.MetadataTask#map:
> {code}
> ...
> for (int i = 1; i < subgrid.size(); i++) {
>  
> }
> if (map.isEmpty())
>     throw new IgniteException("Failed to request meta data. " 
>         + cacheName + " is not found");
> ...
> {code}
> So, in the case of a cluster with only one node, this code will throw an exception.
> I guess the fix should be to just replace "int i = 1" with "int i = 0".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8782) Wrong message may be printed during simultaneous deactivation and rebalance

2018-06-13 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8782:
---

 Summary: Wrong message may be printed during simultaneous 
deactivation and rebalance
 Key: IGNITE-8782
 URL: https://issues.apache.org/jira/browse/IGNITE-8782
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.4
Reporter: Pavel Kovalenko
 Fix For: 2.6


A message located at GridCachePartitionExchangeManager.java:394 may be printed 
out if the cache group no longer exists while the rebalance process is still 
finishing. This may happen after deactivation during rebalance.
We should put this logging under an if (grp != null) block and print a different 
message if the cache group was actually stopped.
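A rough sketch of the proposed guard (illustrative only; the variable names and the log text are assumptions, not the actual code at that line):

{code}
import org.apache.ignite.IgniteLogger;
import org.apache.ignite.internal.processors.cache.CacheGroupContext;

/** Guards the rebalance-finished message against a concurrently stopped cache group. */
final class RebalanceLogSketch {
    static void logRebalanceFinished(IgniteLogger log, CacheGroupContext grp) {
        if (grp != null)
            log.info("Completed rebalancing [grp=" + grp.cacheOrGroupName() + ']');
        else
            log.info("Completed rebalancing for an already stopped cache group.");
    }
}
{code}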




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-2313) Need to add a mode to fail atomic operations within a transaction

2018-06-13 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-2313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511263#comment-16511263
 ] 

Alexey Goncharuk commented on IGNITE-2313:
--

[~SomeFire], unfortunately, the fix did not make it into the Ignite 2.0 timeframe, 
and now it would be a breaking change for current Ignite users. 

I will bump up the discussion to check in with the community regarding this change.

> Need to add a mode to fail atomic operations within a transaction
> -
>
> Key: IGNITE-2313
> URL: https://issues.apache.org/jira/browse/IGNITE-2313
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Dmitriy Setrakyan
>Assignee: Ryabov Dmitrii
>Priority: Major
> Fix For: 2.6
>
>
> Currently atomic operations within a transaction succeed without alerting the 
> user that no transaction really occurs. We should add a mode that fails such 
> operations (this mode should be turned off by default).
> New transaction configuration flag (default is {{false}}):
> {code}TransactionConfiguration.isAllowAtomicUpdatesInTransaction(){code}
> If the flag is violated, we should throw an exception with the following 
> error message: {{Transaction spans operations on atomic cache (consider 
> setting TransactionConfiguration.isAllowAtomicUpdatesInTransaction() flag to 
> true)}}
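For illustration, the scenario the proposed flag is meant to reject looks roughly like this (a sketch only; the flag itself does not exist yet, and the cache name is a placeholder for a cache in ATOMIC mode):

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

/** Demonstrates an atomic-cache update inside an explicit transaction. */
final class AtomicInTxExample {
    static void run(Ignite ignite) {
        IgniteCache<Integer, String> atomicCache = ignite.cache("atomicCache");

        try (Transaction tx = ignite.transactions().txStart()) {
            // Today this silently executes outside the transaction; with the proposed
            // flag left at false it should throw an exception instead.
            atomicCache.put(1, "value");

            tx.commit();
        }
    }
}
{code}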



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8751) Possible race on node segmentation.

2018-06-13 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511245#comment-16511245
 ] 

Alexey Goncharuk commented on IGNITE-8751:
--

Changes look good to me.

> Possible race on node segmentation.
> ---
>
> Key: IGNITE-8751
> URL: https://issues.apache.org/jira/browse/IGNITE-8751
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Mashenkov
>Assignee: Andrey Gura
>Priority: Major
> Fix For: 2.6
>
>
> Segmentation policy may be ignored, probably due to a race.
> See [1] for details.
>  [1] 
> [http://apache-ignite-users.70518.x6.nabble.com/Node-pause-for-no-obvious-reason-td21923.html]
> Logs from the segmented node:
> [08:42:42,290][INFO][tcp-disco-sock-reader-#15][TcpDiscoverySpi] Finished 
> serving remote node connection [rmtAddr=/10.29.42.45:38712, rmtPort=38712 
> [08:42:42,290][WARNING][disco-event-worker-#161][GridDiscoveryManager] Local 
> node SEGMENTED: TcpDiscoveryNode [id=8333aa56-8bf4-4558-a387-809b1d2e2e5b, 
> addrs=[10.29.42.44, 127.0.0.1], sockAddrs=[sap-datanode1/10.29.42.44:49500, 
> /127.0.0.1:49500], discPort=49500, order=1, intOrder=1, 
> lastExchangeTime=1528447362286, loc=true, ver=2.5.0#20180523-sha1:86e110c7, 
> isClient=false] 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] Critical system error detected. 
> Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 
> java.lang.IllegalStateException: Thread tcp-disco-srvr-#2 is terminated 
> unexpectedly. 
>         at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:5686)
>  
>         at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] JVM will be halted immediately 
> due to the failure: [failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8747) Remove\RemoveAll method should not count expired entry as removed.

2018-06-13 Thread Ivan Rakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8747:
---
Fix Version/s: 2.6

> Remove\RemoveAll method should not count expired entry as removed.
> --
>
> Key: IGNITE-8747
> URL: https://issues.apache.org/jira/browse/IGNITE-8747
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, tck, test-failure
> Fix For: 2.6
>
>
> We have 2 TCK 1.0 tests that pass only because we have eagerTtl=true by 
> default.
> The reason is that remove() returns true even if an expired entry was removed.
> It seems we have to evict the expired entry from the cache on remove(), but 
> not count it as removed.
> java.lang.AssertionError
>  at 
> org.jsr107.tck.expiry.CacheExpiryTest.expire_whenAccessed(CacheExpiryTest.java:326)
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.jsr107.tck.expiry.CacheExpiryTest.testCacheStatisticsRemoveAll(CacheExpiryTest.java:160)
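A minimal sketch of the expected semantics (not a TCK test; the cache name and the expiry duration are placeholders):

{code}
import java.util.concurrent.TimeUnit;

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

/** Removing an already-expired entry should return false and not count as a removal. */
final class ExpiredRemoveCheck {
    static void check(Ignite ignite) throws InterruptedException {
        IgniteCache<Integer, String> cache = ignite.<Integer, String>cache("someCache")
            .withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MILLISECONDS, 100)));

        cache.put(1, "v");

        Thread.sleep(500); // Let the entry expire.

        // The entry may still be evicted internally, but the user-visible result
        // should be false and the "removals" statistic should stay unchanged.
        boolean removed = cache.remove(1);

        assert !removed;
    }
}
{code}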



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8503) Fix wrong GridCacheMapEntry startVersion initialization.

2018-06-13 Thread Ivan Rakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8503:
---
Fix Version/s: 2.6

> Fix wrong GridCacheMapEntry startVersion initialization.
> 
>
> Key: IGNITE-8503
> URL: https://issues.apache.org/jira/browse/IGNITE-8503
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, tck
> Fix For: 2.6
>
>
> GridCacheMapEntry initializes startVersion in the wrong way.
> This leads to the IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure; the 
> reason is "Entry which should be expired by TTL policy is available after 
> grid restart."
>  
> The test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
> development.
> The test restarts the grid and checks that none of the entries are present in the grid.
> But with high probability, one of the 7000 entries that should be expired is 
> resurrected instead and returned by a cache get.
> {noformat}
> After timeout {{
> >>> 
> >>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>  Cache size: 0
> >>>  Cache partition topology stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
> >>> 
> >>> Cache event manager memory stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
> >>> stats=N/A]
> >>>
> >>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   threadsSize: 0
> >>>   futsSize: 0
> >>>
> >>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   pendingEntriesSize: 0
> }} After timeout
> {noformat}
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=5798755758125626876=testDetails_IgniteTests24Java8=%3Cdefault%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8780) File I/O operations must be retried if buffer hasn't read/written completely

2018-06-13 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8780:
---

 Summary: File I/O operations must be retried if buffer hasn't 
read/written completely
 Key: IGNITE-8780
 URL: https://issues.apache.org/jira/browse/IGNITE-8780
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.5
Reporter: Pavel Kovalenko
 Fix For: 2.6


Currently we don't actually ensure that we write or read a buffer completely:
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager#writeCheckpointEntry
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager#nodeStart

As a result, we may not write the actual data to disk, and after a node restart we 
can get a BufferUnderflowException like this:

{noformat}
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506)
at java.nio.HeapByteBuffer.getLong(HeapByteBuffer.java:412)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readPointer(GridCacheDatabaseSharedManager.java:1915)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readCheckpointStatus(GridCacheDatabaseSharedManager.java:1892)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readMetastore(GridCacheDatabaseSharedManager.java:565)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.start0(GridCacheDatabaseSharedManager.java:525)
at 
org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter.start(GridCacheSharedManagerAdapter.java:61)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.start(GridCacheProcessor.java:700)
at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1738)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:985)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:671)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:596)
at org.apache.ignite.Ignition.start(Ignition.java:327)
at org.apache.ignite.ci.db.TcHelperDb.start(TcHelperDb.java:67)
at 
org.apache.ignite.ci.web.CtxListener.contextInitialized(CtxListener.java:37)
at 
org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:890)
at 
org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
at 
org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:853)
at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1501)
at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1463)
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:785)
at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start(Server.java:452)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:419)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.ignite.ci.web.Launcher.runServer(Launcher.java:68)
at 
org.apache.ignite.ci.TcHelperJettyLauncher.main(TcHelperJettyLauncher.java:10)
{noformat}

and the node ends up in an unrecoverable state.
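
For illustration, a minimal sketch of a write/read-fully helper (this is not the actual GridCacheDatabaseSharedManager code; the helper names are made up for the example):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class FullyIoExample {
    /** Retries write() until the whole buffer is drained, then forces it to disk. */
    static void writeFully(FileChannel ch, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining())
            ch.write(buf); // a single write() may transfer fewer bytes than remaining()

        ch.force(true);
    }

    /** Retries read() until the buffer is full; fails on an unexpected EOF. */
    static void readFully(FileChannel ch, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            if (ch.read(buf) < 0)
                throw new IOException("Unexpected end of file, " + buf.remaining() + " bytes left");
        }
    }
}
{code}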



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8781) nio-acceptor threads are indistinguishable in GridNioServer

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511237#comment-16511237
 ] 

ASF GitHub Bot commented on IGNITE-8781:


GitHub user agura opened a pull request:

https://github.com/apache/ignite/pull/4183

IGNITE-8781 GridNioServer accepter threads should have different names



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/agura/incubator-ignite ignite-8781

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4183


commit d9f4078276c6086700c5dd6ee9bc663b8fd5dfbf
Author: Andrey Gura 
Date:   2018-06-13T14:30:18Z

IGNITE-8781 GridNioServer accepter threads should have different names




> nio-acceptor threads are indistinguishable in GridNioServer
> ---
>
> Key: IGNITE-8781
> URL: https://issues.apache.org/jira/browse/IGNITE-8781
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Gura
>Assignee: Andrey Gura
>Priority: Major
> Fix For: 2.6
>
>
> nio-acceptor threads are indistinguishable in {{GridNioServer}}. All threads 
> have exactly the same name.
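
For illustration only (not the actual GridNioServer code), appending a per-acceptor index to the thread name is enough to tell the threads apart in a thread dump:

{code:java}
public class AcceptorNamingExample {
    public static void main(String[] args) {
        int acceptorCnt = 4; // illustrative value

        for (int i = 0; i < acceptorCnt; i++) {
            Runnable acceptLoop = () -> {
                // accept incoming connections here
            };

            // The index suffix makes each acceptor distinguishable: nio-acceptor-0, nio-acceptor-1, ...
            new Thread(acceptLoop, "nio-acceptor-" + i).start();
        }
    }
}
{code}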



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8781) nio-acceptor threads are indistinguishable in GridNioServer

2018-06-13 Thread Andrey Gura (JIRA)
Andrey Gura created IGNITE-8781:
---

 Summary: nio-acceptor threads are indistinguishable in 
GridNioServer
 Key: IGNITE-8781
 URL: https://issues.apache.org/jira/browse/IGNITE-8781
 Project: Ignite
  Issue Type: Improvement
Reporter: Andrey Gura
Assignee: Andrey Gura
 Fix For: 2.6


nio-acceptor threads are indistinguishable in {{GridNioServer}}. All threads 
have exactly the same name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8777) REST: metadata command failed on cluster of size 1.

2018-06-13 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511211#comment-16511211
 ] 

Alexey Kuznetsov commented on IGNITE-8777:
--

[~Chandresh Pancholi] Please also add an appropriate test for this bug.
I think it should be added to 
org.apache.ignite.internal.processors.rest.JettyRestProcessorAbstractSelfTest.

> REST: metadata command failed on cluster of size 1.
> ---
>
> Key: IGNITE-8777
> URL: https://issues.apache.org/jira/browse/IGNITE-8777
> Project: Ignite
>  Issue Type: Improvement
>  Components: rest
>Affects Versions: 2.5
>Reporter: Alexey Kuznetsov
>Assignee: Chandresh Pancholi
>Priority: Major
>  Labels: newbie
>
> Start *only one* node.
> Execute REST command: 
> http://localhost:8080/ignite?cmd=getorcreate=myNewPartionedCache=2
> Cache will be created.
> Execute 
> http://localhost:8080/ignite?cmd=metadata=myNewPartionedCache
> Error will be returned:  {“successStatus”:1,“error”:“Failed to handle 
> request: [req=CACHE_METADATA, err=Failed to request meta data. 
> myNewPartionedCache is not found]“,”response”:null,“sessionToken”:null}
> After some debug, I see in code GridCacheCommandHandler.MetadataTask#map:
> {code}
> ...
> for (int i = 1; i < subgrid.size(); i++) {
>  
> }
> if (map.isEmpty())
> throw new IgniteException("Failed to request meta data. " 
> + cacheName + " is not found");
> ...
> {code}
> So, in the case of a cluster with only one node this code will throw an exception.
> I guess the fix should be to just replace "int i = 1" with "int i = 0".
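
A self-contained illustration of the off-by-one (this is not the real MetadataTask code; a list of strings just stands in for the subgrid):

{code:java}
import java.util.Collections;
import java.util.List;

public class SingleNodeLoopExample {
    public static void main(String[] args) {
        List<String> subgrid = Collections.singletonList("node-0"); // one-node cluster

        int visited = 0;

        for (int i = 1; i < subgrid.size(); i++) // current code: skips the only node
            visited++;

        int visitedFixed = 0;

        for (int i = 0; i < subgrid.size(); i++) // proposed fix: visits it
            visitedFixed++;

        System.out.println(visited + " vs " + visitedFixed); // prints "0 vs 1"
    }
}
{code}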



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-7595) Find and switch to alternate documentation engine

2018-06-13 Thread Denis Magda (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441620#comment-16441620
 ] 

Denis Magda edited comment on IGNITE-7595 at 6/13/18 2:17 PM:
--

Eventually, we'll give Jekyll a try because it seems more flexible and 
easier to use than Docusaurus. 

Things to do next:
# The docs will be stored in {{docs}} folder of the main Ignite repository. 
That's a standard approach adopted by Kafka, Spark, Flink, Cassandra, Storm and 
other ASF projects.
# A future version of the docs will be stored in the master branch. Specific 
branches will be used for specific documentation versions. If we'd like to 
update the docs of version X after Ignite X is released, then we create an 
"ignite-X-docs" branch after the release, make the changes there and regenerate 
the HTML.
# Jekyll will be used to generate HTML pages from the markdown stored in the Ignite 
Git repository. We can apply CSS and JS of our choice. 
# HTML pages, CSS styles, and JS scripts are hosted on ignite.apache.org - 
which is an SVN repository.

The following scripts are needed:
* A script to migrate readme.io content to standard (Jekyll) markdown.
* A script that calls Jekyll to generate the HTML, applies the CSS and JS 
snippets, and then merges the changes into the Ignite site SVN repository.
* An option for the script above that does the same but only for the subset of 
changes in a specific Git commit.
* A script that merges changes into the docs master and the listed branches of 
previous Ignite versions.


was (Author: dmagda):
Eventually, we'll give a try to Jekyll because it seems to be more flexible and 
easy to use than Docusaurus. 

Things to do next:
# Export readme.io content to markdown format using a script. The script has to 
turn readme specific markdown (code, tables, etc.) to the standard one.
# The docs will be stored in {{docs}} folder of the main Ignite repository. 
That's a standard approach adopted by Kafka, Spark, Flink, Cassandra, Storm and 
other ASF projects.
# A future version of the docs will be stored in the master branch. Specific 
branches will be used for specific documentation versions. If we'd like to 
update docs of version X after Ignite X is released, then we create 
"ignite-X-docs" branch after the release, make the changes there and regenerate 
the HTML.
# Jekyll will be used to generate HTML pages from the markdown stored in Ignite 
GIT repository. We can apply CSS and JS of our choice. 
# HTML pages, CSS styles, and JS scripts are hosted on ignite.apache.org - 
which is an SVN repository.

The following scripts are needed:
* Readme to the standard markdown (Jekyll) migration script.
* Script that calls Jekyll to generate the HTML then applies CSS and JS 
snippets and then merges changes to Ignite site svn repository.
* An option to the script above that does all the same but for a subset of the 
changes of a specific GIT commit.
* Script that will merge changes to the docs master and listed branches of 
previous Ignite versions.

> Find and switch to alternate documentation engine
> -
>
> Key: IGNITE-7595
> URL: https://issues.apache.org/jira/browse/IGNITE-7595
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Denis Magda
>Assignee: Prachi Garg
>Priority: Critical
> Fix For: 2.6
>
> Attachments: Docusaurus-GitBook comparison.docx, 
> readme-markdown-mapping.xlsx
>
>
> Current readme.io documentation has many drawbacks that make the life of 
> Ignite technical writers hard. Some of the problems are:
>  * Each "version" is just a copy of the previous one. When fixing something, 
> you have to update
> all the versions.
>  * No good way to review changes.
>  * "Propose edit" functionality is a not suitable for review. You can only 
> accept or reject an
> edit, no way to communicate with a contributor, etc
>  * There is no way to prevent Google from indexing old documentation 
> versions. Thus, it's common to come across old doc version in a google 
> search. 
> We might consider GitHub based documentation or another approach. The 
> discussion is here:
> http://apache-ignite-developers.2346864.n4.nabble.com/Move-documentation-from-readme-io-to-GitHub-pages-td16409.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8736) Add transaction label to CU.txString() method output

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511213#comment-16511213
 ] 

ASF GitHub Bot commented on IGNITE-8736:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4152


> Add transaction label to CU.txString() method output
> 
>
> Key: IGNITE-8736
> URL: https://issues.apache.org/jira/browse/IGNITE-8736
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Sergey Kosarev
>Priority: Major
> Fix For: 2.6
>
>
> This information may be useful when printing out deadlocked and forcibly 
> rolled back transactions



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8712) [Test Failed] IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes in master.

2018-06-13 Thread Ryabov Dmitrii (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511167#comment-16511167
 ] 

Ryabov Dmitrii commented on IGNITE-8712:


Looks good.

> [Test Failed] IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded 
> fails sometimes in master.
> --
>
> Key: IGNITE-8712
> URL: https://issues.apache.org/jira/browse/IGNITE-8712
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.5
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Minor
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8==testDetails=5920780021361517364=TEST_STATUS_DESC_IgniteTests24Java8=%3Cdefault%3E=10
> Typical output:
> {noformat}
> junit.framework.AssertionFailedError: expected: org.apache.ignite.internal.processors.datastructures.GridCacheSetProxy> but 
> was: org.apache.ignite.internal.processors.datastructures.GridCacheAtomicStampedImpl>
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueName(IgniteDataStructureUniqueNameTest.java:385)
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueNameMultithreaded(IgniteDataStructureUniqueNameTest.java:85)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6552) The ability to set WAL history size in time units

2018-06-13 Thread Anton Kalashnikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511157#comment-16511157
 ] 

Anton Kalashnikov commented on IGNITE-6552:
---

UP - [https://reviews.ignite.apache.org/ignite/review/IGNT-CR-503]

> The ability to set WAL history size in time units
> -
>
> Key: IGNITE-6552
> URL: https://issues.apache.org/jira/browse/IGNITE-6552
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Affects Versions: 2.2
>Reporter: Vladislav Pyatkov
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.6
>
>
> We can set the size of the WAL history as a number of checkpoints:
> {code}
> org.apache.ignite.configuration.PersistentStoreConfiguration#setWalHistorySize
> {code}
> But this is not convenient for the end user. Nobody can say how many checkpoints 
> occur over several minutes.
> I think it would be better to have the ability to set the WAL history size in 
> time units (milliseconds, for example).
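
For reference, a minimal configuration sketch (assuming the 2.x PersistentStoreConfiguration API; the time-based setter in the comment is only the proposed, not yet existing, method):

{code:java}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class WalHistoryConfigExample {
    public static IgniteConfiguration configure() {
        PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

        // Today: the WAL history size is expressed as a number of checkpoints.
        psCfg.setWalHistorySize(20);

        // Proposed (hypothetical setter, not an existing API):
        // psCfg.setWalHistoryDuration(30 * 60 * 1000L); // keep ~30 minutes of WAL history

        return new IgniteConfiguration().setPersistentStoreConfiguration(psCfg);
    }
}
{code}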



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8293) BinaryUtils#isCustomJavaSerialization fails when only readObject is declared in a class

2018-06-13 Thread MihkelJ (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511143#comment-16511143
 ] 

MihkelJ commented on IGNITE-8293:
-

It seems this method was a barrier to keep enums from being marked as having 
custom serialization. I've updated the pull request to add an explicit check.

Of the listed tests, this change solves everything but 
{{BinaryMarshallerNonCompactSelfTest.testWriteReplaceInheritable}}. That test 
assumes guava's ImmutableList doesn't use custom serialization, which isn't 
really true.

> BinaryUtils#isCustomJavaSerialization fails when only readObject is declared 
> in a class
> ---
>
> Key: IGNITE-8293
> URL: https://issues.apache.org/jira/browse/IGNITE-8293
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.4
>Reporter: MihkelJ
>Assignee: MihkelJ
>Priority: Minor
> Fix For: 2.6
>
>
> Consider this class:
>  
> {code:java}
> public class Test implements Serializable {
> private transient AtomicBoolean dirty = new AtomicBoolean(false);
> private void readObject(java.io.ObjectInputStream in) throws IOException, 
> ClassNotFoundException {
> dirty = new AtomicBoolean(false);
> }
> //methods to check and mark class as dirty
> }{code}
> {{isCustomJavaSerialization}} will get a {{NoSuchMethodException}} when 
> trying to grab the {{writeObject}} method and falsely conclude that Test 
> doesn't use custom serialization.
>  
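
For illustration, a minimal reflection-based sketch (not the actual BinaryUtils code) where readObject and writeObject are checked independently, so a missing writeObject cannot hide a declared readObject:

{code:java}
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class CustomSerializationCheckExample {
    /** True if the class declares readObject or writeObject. */
    static boolean hasCustomJavaSerialization(Class<?> cls) {
        return declares(cls, "writeObject", ObjectOutputStream.class)
            || declares(cls, "readObject", ObjectInputStream.class);
    }

    /** Checks one method at a time; NoSuchMethodException only means this method is absent. */
    private static boolean declares(Class<?> cls, String name, Class<?> argType) {
        try {
            cls.getDeclaredMethod(name, argType);

            return true;
        }
        catch (NoSuchMethodException ignored) {
            return false;
        }
    }
}
{code}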



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-4210) CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose data.

2018-06-13 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511060#comment-16511060
 ] 

Alexey Kuznetsov edited comment on IGNITE-4210 at 6/13/18 1:28 PM:
---

[~agura] 
Do I understand you correctly?
1) The user starts 1 grid.
2) The user initiates cache store loading.
3) Additional grids connect to the cluster. During PME they observe the cache store 
loading in progress (a certain future was created on the initiator), 
cancel the cache store loading and pass an exception to the user.
4) The user receives an exception during the mass node start. The cache contains some 
values loaded from the store.

I have only one question left:
Should we also cancel cache store loading if the PME was initiated because a node left, 
a new cache was created, etc.? I think yes. 


was (Author: alexey kuznetsov):
[~agura] 
Am I understand you correctly ? :
1) User starts 1 grid
2) User initiates cache store loading
3) Additional grids connect to cluster. During PME they observe cache store 
loading progress(certain future was created on initiator), 
they cancel cache store loading, pass exception to user.
4) User receives exception during mass node start. Cache contains some values, 
loaded from store.

> CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose 
> data.
> 
>
> Key: IGNITE-4210
> URL: https://issues.apache.org/jira/browse/IGNITE-4210
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Vinogradov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTest#testLoadCacheFromStore
>  sometimes fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7526) SQL: Introduce memory region for reducer merge results with disk offload

2018-06-13 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1652#comment-1652
 ] 

Vladimir Ozerov commented on IGNITE-7526:
-

Another H2 facility for row offload: {{org.h2.result.RowList}}

> SQL: Introduce memory region for reducer merge results with disk offload
> 
>
> Key: IGNITE-7526
> URL: https://issues.apache.org/jira/browse/IGNITE-7526
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>Priority: Major
>
> Currently all results received from map nodes are stored in the reducer's 
> heap memory. What is worse, for complex queries, such as those with sorts 
> or groupings, all results from the mappers have to be collected before final 
> processing can be applied. For a big result set (or intermediate 
> results) this can easily lead to an OOME on the reducer. 
> To mitigate this we should introduce a special memory area where intermediate 
> results can be stored. All final processing should use the same 
> area as well. This area should be of limited size and should be able to 
> offload results to disk in case of overflow.
> We could start with our B+Tree and free list and store results in some K-V 
> form. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7526) SQL: Introduce memory region for reducer merge results with disk offload

2018-06-13 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511100#comment-16511100
 ] 

Vladimir Ozerov commented on IGNITE-7526:
-

Created a prototype with a persistent database and was able to execute an SQL query 
successfully.

> SQL: Introduce memory region for reducer merge results with disk offload
> 
>
> Key: IGNITE-7526
> URL: https://issues.apache.org/jira/browse/IGNITE-7526
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>Priority: Major
>
> Currently all results received from map nodes are stored in the reducer's 
> heap memory. What is worse, for complex queries, such as those with sorts 
> or groupings, all results from the mappers have to be collected before final 
> processing can be applied. For a big result set (or intermediate 
> results) this can easily lead to an OOME on the reducer. 
> To mitigate this we should introduce a special memory area where intermediate 
> results can be stored. All final processing should use the same 
> area as well. This area should be of limited size and should be able to 
> offload results to disk in case of overflow.
> We could start with our B+Tree and free list and store results in some K-V 
> form. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8657) Simultaneous start of bunch of client nodes may lead to some clients hangs

2018-06-13 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511085#comment-16511085
 ] 

Sergey Chugunov commented on IGNITE-8657:
-

It turned out that the *forceServerMode* setting applied to clients makes things 
harder to fix: since in this mode client nodes join the ring themselves (like server 
nodes), they cannot reconnect.

In that case we should add logic that fails client nodes that didn't join 
the cluster on the first attempt.

> Simultaneous start of bunch of client nodes may lead to some clients hangs
> --
>
> Key: IGNITE-8657
> URL: https://issues.apache.org/jira/browse/IGNITE-8657
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.6
>
>
> h3. Description
> PartitionExchangeManager uses a system property, 
> *IGNITE_EXCHANGE_HISTORY_SIZE*, to manage the maximum number of exchange objects and 
> optimize memory consumption.
> The default value of the property is 1000, but in scenarios with many caches and 
> partitions it is reasonable to set the exchange history size to a smaller value, 
> around a few dozen.
> Then, if the user starts more client nodes at once than the history size, some 
> clients may hang because their exchange information was preempted and is no 
> longer available.
> h3. Workarounds
> Two workarounds are possible: 
> * Do not start more clients at once than the history size.
> * Restart the hanging client node.
> h3. Solution
> Forcing a client node to reconnect when the server detects the loss of its exchange 
> information prevents client nodes from hanging.
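
For reference, a minimal sketch of how the property can be raised for such a scenario (the value 100 is only an example; it must cover the number of clients started at once):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ExchangeHistorySizeExample {
    public static void main(String[] args) {
        // Equivalent to passing -DIGNITE_EXCHANGE_HISTORY_SIZE=100 to the JVM.
        // Must be set before the node starts.
        System.setProperty("IGNITE_EXCHANGE_HISTORY_SIZE", "100");

        Ignite ignite = Ignition.start();
    }
}
{code}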



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8131) ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC

2018-06-13 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511075#comment-16511075
 ] 

Sergey Chugunov commented on IGNITE-8131:
-

[~garus.d.g],

You can find logs here: [^ZK_client_reconnect_failure.log]  
[^ZK_client_reconnect_success.log]

> ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC
> 
>
> Key: IGNITE-8131
> URL: https://issues.apache.org/jira/browse/IGNITE-8131
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
> Attachments: ZK_client_reconnect_failure.log, 
> ZK_client_reconnect_success.log
>
>
> Two tests always fail on TC with the assertion
> {noformat}
> junit.framework.AssertionFailedError: Failed to wait for disconnect/reconnect 
> event.
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.waitReconnectEvent(ZookeeperDiscoverySpiTest.java:4221)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.reconnectClientNodes(ZookeeperDiscoverySpiTest.java:4183)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.clientReconnectSessionExpire(ZookeeperDiscoverySpiTest.java:2231)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testClientReconnectSessionExpire1_1(ZookeeperDiscoverySpiTest.java:2206)
> {noformat}
> from the client disconnect/reconnect events check. Obviously the client doesn't 
> generate these events as it is supposed to.
> (TC runs can be found 
> [here|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgniteZooKeeperDiscovery_IgniteTests24Java8=pull%2F3730%2Fhead=buildTypeStatusDiv]).
> It is possible to reproduce the test failure locally as well, but with low 
> probability: one failure per 50 or even 300 successful executions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8131) ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC

2018-06-13 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-8131:

Attachment: ZK_client_reconnect_success.log
ZK_client_reconnect_failure.log

> ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC
> 
>
> Key: IGNITE-8131
> URL: https://issues.apache.org/jira/browse/IGNITE-8131
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
> Attachments: ZK_client_reconnect_failure.log, 
> ZK_client_reconnect_success.log
>
>
> Two tests always fail on TC with the assertion
> {noformat}
> junit.framework.AssertionFailedError: Failed to wait for disconnect/reconnect 
> event.
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.waitReconnectEvent(ZookeeperDiscoverySpiTest.java:4221)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.reconnectClientNodes(ZookeeperDiscoverySpiTest.java:4183)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.clientReconnectSessionExpire(ZookeeperDiscoverySpiTest.java:2231)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testClientReconnectSessionExpire1_1(ZookeeperDiscoverySpiTest.java:2206)
> {noformat}
> from the client disconnect/reconnect events check. Obviously the client doesn't 
> generate these events as it is supposed to.
> (TC runs can be found 
> [here|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgniteZooKeeperDiscovery_IgniteTests24Java8=pull%2F3730%2Fhead=buildTypeStatusDiv]).
> It is possible to reproduce the test failure locally as well, but with low 
> probability: one failure per 50 or even 300 successful executions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-4210) CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose data.

2018-06-13 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511060#comment-16511060
 ] 

Alexey Kuznetsov commented on IGNITE-4210:
--

[~agura] 
Do I understand you correctly?
1) The user starts 1 grid.
2) The user initiates cache store loading.
3) Additional grids connect to the cluster. During PME they observe the cache store 
loading in progress (a certain future was created on the initiator), 
cancel the cache store loading and pass an exception to the user.
4) The user receives an exception during the mass node start. The cache contains some 
values loaded from the store.

> CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose 
> data.
> 
>
> Key: IGNITE-4210
> URL: https://issues.apache.org/jira/browse/IGNITE-4210
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Vinogradov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTest#testLoadCacheFromStore
>  sometimes fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7752) Update Ignite KafkaStreamer to use new KafkaConsmer configuration.

2018-06-13 Thread Chandresh Pancholi (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandresh Pancholi reassigned IGNITE-7752:
--

Assignee: Chandresh Pancholi

> Update Ignite KafkaStreamer to use new KafkaConsmer configuration.
> --
>
> Key: IGNITE-7752
> URL: https://issues.apache.org/jira/browse/IGNITE-7752
> Project: Ignite
>  Issue Type: Task
>  Components: streaming
>Reporter: Andrew Mashenkov
>Assignee: Chandresh Pancholi
>Priority: Major
>  Labels: newbie
> Fix For: 2.6
>
>
> It seems that, for now, it is impossible to use the new-style KafkaConsumer configuration 
> in KafkaStreamer.
> The issue here is that Ignite uses the 
> kafka.consumer.Consumer.createJavaConsumerConnector() method, which creates the 
> old consumer (ZookeeperConsumerConnector).
> We should create a new KafkaConsumer instead, which appears to support both 
> old- and new-style configs.
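
For reference, a minimal sketch of the new-style consumer (assuming a 1.x kafka-clients API; the broker address, group id and topic name are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();

        // New-style configuration talks to the brokers directly, not to ZooKeeper.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "ignite-streamer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("ignite-topic"));

            ConsumerRecords<String, String> records = consumer.poll(500);

            for (ConsumerRecord<String, String> rec : records)
                System.out.println(rec.key() + " -> " + rec.value());
        }
    }
}
{code}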



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8293) BinaryUtils#isCustomJavaSerialization fails when only readObject is declared in a class

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511032#comment-16511032
 ] 

ASF GitHub Bot commented on IGNITE-8293:


Github user agura closed the pull request at:

https://github.com/apache/ignite/pull/4074


> BinaryUtils#isCustomJavaSerialization fails when only readObject is declared 
> in a class
> ---
>
> Key: IGNITE-8293
> URL: https://issues.apache.org/jira/browse/IGNITE-8293
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.4
>Reporter: MihkelJ
>Assignee: MihkelJ
>Priority: Minor
> Fix For: 2.6
>
>
> Consider this class:
>  
> {code:java}
> public class Test implements Serializable {
> private transient AtomicBoolean dirty = new AtomicBoolean(false);
> private void readObject(java.io.ObjectInputStream in) throws IOException, 
> ClassNotFoundException {
> dirty = new AtomicBoolean(false);
> }
> //methods to check and mark class as dirty
> }{code}
> {{isCustomJavaSerialization}} will get a {{NoSuchMethodException}} when 
> trying to grab the {{writeObject}} method and falsely conclude that Test 
> doesn't use custom serialization.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8293) BinaryUtils#isCustomJavaSerialization fails when only readObject is declared in a class

2018-06-13 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511026#comment-16511026
 ] 

Andrey Gura edited comment on IGNITE-8293 at 6/13/18 12:13 PM:
---

[~MihkelJ] Unfortunately, the TC run has tests that fail due to your change. At least 
the following tests failed:

{noformat}
org.apache.ignite.testsuites.IgniteBinaryObjectsTestSuite: 
org.apache.ignite.internal.binary (12)
BinaryEnumsSelfTest.testDeclaredBodyEnumNotRegistered   
BinaryEnumsSelfTest.testDeclaredBodyEnumRegistered  
BinaryEnumsSelfTest.testNestedBuilderNotRegistered  
BinaryEnumsSelfTest.testNestedNotRegistered 
BinaryEnumsSelfTest.testSimpleArrayNotRegistered
BinaryEnumsSelfTest.testSimpleArrayRegistered   
BinaryEnumsSelfTest.testSimpleNotRegistered 
BinaryEnumsSelfTest.testSimpleRegistered
BinaryMarshallerSelfTest.testWriteReplaceInheritable
BinaryObjectBuilderAdditionalSelfTest.testEnum  
BinaryObjectBuilderAdditionalSelfTest.testMetadataChanging  
BinaryObjectBuilderAdditionalSelfTest.testSimpleTypeFieldOverride   

org.apache.ignite.testsuites.IgniteBinaryObjectsTestSuite: 
org.apache.ignite.internal.binary.noncompact (4)

BinaryMarshallerNonCompactSelfTest.testWriteReplaceInheritable  
BinaryObjectBuilderAdditionalNonCompactSelfTest.testEnum
BinaryObjectBuilderAdditionalNonCompactSelfTest.testMetadataChanging
BinaryObjectBuilderAdditionalNonCompactSelfTest.testSimpleTypeFieldOverride 
 
{noformat}




was (Author: agura):
[~MihkelJ] Unfortunately, TC run has failed tests due to your change. At least 
following changes are failed:

{noformat}
org.apache.ignite.testsuites.IgniteBinaryObjectsTestSuite: 
org.apache.ignite.internal.binary (12)
BinaryEnumsSelfTest.testDeclaredBodyEnumNotRegistered   
BinaryEnumsSelfTest.testDeclaredBodyEnumRegistered  
BinaryEnumsSelfTest.testNestedBuilderNotRegistered  
BinaryEnumsSelfTest.testNestedNotRegistered 
BinaryEnumsSelfTest.testSimpleArrayNotRegistered
BinaryEnumsSelfTest.testSimpleArrayRegistered   
BinaryEnumsSelfTest.testSimpleNotRegistered 
BinaryEnumsSelfTest.testSimpleRegistered
BinaryMarshallerSelfTest.testWriteReplaceInheritable
BinaryObjectBuilderAdditionalSelfTest.testEnum  
BinaryObjectBuilderAdditionalSelfTest.testMetadataChanging  
BinaryObjectBuilderAdditionalSelfTest.testSimpleTypeFieldOverride   

org.apache.ignite.testsuites.IgniteBinaryObjectsTestSuite: 
org.apache.ignite.internal.binary.noncompact (4)

BinaryMarshallerNonCompactSelfTest.testWriteReplaceInheritable  
BinaryObjectBuilderAdditionalNonCompactSelfTest.testEnum
BinaryObjectBuilderAdditionalNonCompactSelfTest.testMetadataChanging
BinaryObjectBuilderAdditionalNonCompactSelfTest.testSimpleTypeFieldOverride 
 
{noformat}



> BinaryUtils#isCustomJavaSerialization fails when only readObject is declared 
> in a class
> ---
>
> Key: IGNITE-8293
> URL: https://issues.apache.org/jira/browse/IGNITE-8293
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.4
>Reporter: MihkelJ
>Assignee: MihkelJ
>Priority: Minor
> Fix For: 2.6
>
>
> Consider this class:
>  
> {code:java}
> public class Test implements Serializable {
> private transient AtomicBoolean dirty = new AtomicBoolean(false);
> private void readObject(java.io.ObjectInputStream in) throws IOException, 
> ClassNotFoundException {
> dirty = new AtomicBoolean(false);
> }
> //methods to check and mark class as dirty
> }{code}
> {{isCustomJavaSerialization}} will get a {{NoSuchMethodException}} when 
> trying to grab the {{writeObject}} method and falsely conclude that Test 
> doesn't use custom serialization.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8293) BinaryUtils#isCustomJavaSerialization fails when only readObject is declared in a class

2018-06-13 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511026#comment-16511026
 ] 

Andrey Gura commented on IGNITE-8293:
-

[~MihkelJ] Unfortunately, TC run has failed tests due to your change. At least 
following changes are failed:

{noformat}
org.apache.ignite.testsuites.IgniteBinaryObjectsTestSuite: 
org.apache.ignite.internal.binary (12)
BinaryEnumsSelfTest.testDeclaredBodyEnumNotRegistered   
BinaryEnumsSelfTest.testDeclaredBodyEnumRegistered  
BinaryEnumsSelfTest.testNestedBuilderNotRegistered  
BinaryEnumsSelfTest.testNestedNotRegistered 
BinaryEnumsSelfTest.testSimpleArrayNotRegistered
BinaryEnumsSelfTest.testSimpleArrayRegistered   
BinaryEnumsSelfTest.testSimpleNotRegistered 
BinaryEnumsSelfTest.testSimpleRegistered
BinaryMarshallerSelfTest.testWriteReplaceInheritable
BinaryObjectBuilderAdditionalSelfTest.testEnum  
BinaryObjectBuilderAdditionalSelfTest.testMetadataChanging  
BinaryObjectBuilderAdditionalSelfTest.testSimpleTypeFieldOverride   

org.apache.ignite.testsuites.IgniteBinaryObjectsTestSuite: 
org.apache.ignite.internal.binary.noncompact (4)

BinaryMarshallerNonCompactSelfTest.testWriteReplaceInheritable  
BinaryObjectBuilderAdditionalNonCompactSelfTest.testEnum
BinaryObjectBuilderAdditionalNonCompactSelfTest.testMetadataChanging
BinaryObjectBuilderAdditionalNonCompactSelfTest.testSimpleTypeFieldOverride 
 
{noformat}



> BinaryUtils#isCustomJavaSerialization fails when only readObject is declared 
> in a class
> ---
>
> Key: IGNITE-8293
> URL: https://issues.apache.org/jira/browse/IGNITE-8293
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.4
>Reporter: MihkelJ
>Assignee: MihkelJ
>Priority: Minor
> Fix For: 2.6
>
>
> Consider this class:
>  
> {code:java}
> public class Test implements Serializable {
> private transient AtomicBoolean dirty = new AtomicBoolean(false);
> private void readObject(java.io.ObjectInputStream in) throws IOException, 
> ClassNotFoundException {
> dirty = new AtomicBoolean(false);
> }
> //methods to check and mark class as dirty
> }{code}
> {{isCustomJavaSerialization}} will get a {{NoSuchMethodException}} when 
> trying to grab the {{writeObject}} method and falsely conclude that Test 
> doesn't use custom serialization.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8757) idle_verify utility doesn't show both update counter and hash conflicts

2018-06-13 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511019#comment-16511019
 ] 

Ivan Rakov commented on IGNITE-8757:


TC: 
https://ci.ignite.apache.org/viewLog.html?buildId=1374429=buildResultsDiv=IgniteTests24Java8_RunAll

> idle_verify utility doesn't show both update counter and hash conflicts
> ---
>
> Key: IGNITE-8757
> URL: https://issues.apache.org/jira/browse/IGNITE-8757
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>
> If there are two partitions in the cluster, one with different update counters 
> and one with different data, idle_verify will show only the partition with broken 
> counters. We should show both for better visibility. 
> We should also notify the user about rebalancing partitions that were 
> excluded from the analysis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8641) SpringDataExample should use example-ignite.xml config

2018-06-13 Thread Chandresh Pancholi (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandresh Pancholi reassigned IGNITE-8641:
--

Assignee: Chandresh Pancholi

> SpringDataExample should use example-ignite.xml config
> --
>
> Key: IGNITE-8641
> URL: https://issues.apache.org/jira/browse/IGNITE-8641
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey Gura
>Assignee: Chandresh Pancholi
>Priority: Major
>  Labels: newbie
> Fix For: 2.6
>
>
> {{SpringDataExample}} uses 
> {{org.apache.ignite.examples.springdata.SpringAppCfg}} as Spring 
> configuration while all other examples use {{example-ignite.xml}} 
> configuration file.
> This leads to inconsistent behaviour across the examples.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8777) REST: metadata command failed on cluster of size 1.

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511013#comment-16511013
 ] 

ASF GitHub Bot commented on IGNITE-8777:


GitHub user chandresh-pancholi opened a pull request:

https://github.com/apache/ignite/pull/4181

IGNITE-8777: REST: metadata command failed on cluster of size 1

Signed-off-by: Chandresh Pancholi 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chandresh-pancholi/ignite master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4181.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4181


commit 682ed08ca9980c8f267c85358bb2ee829b4b3553
Author: Chandresh Pancholi 
Date:   2018-06-13T12:00:18Z

IGNITE-8777: REST: metadata command failed on cluster of size 1
Signed-off-by: Chandresh Pancholi 




> REST: metadata command failed on cluster of size 1.
> ---
>
> Key: IGNITE-8777
> URL: https://issues.apache.org/jira/browse/IGNITE-8777
> Project: Ignite
>  Issue Type: Improvement
>  Components: rest
>Affects Versions: 2.5
>Reporter: Alexey Kuznetsov
>Assignee: Chandresh Pancholi
>Priority: Major
>  Labels: newbie
>
> Start *only one* node.
> Execute REST command: 
> http://localhost:8080/ignite?cmd=getorcreate=myNewPartionedCache=2
> Cache will be created.
> Execute 
> http://localhost:8080/ignite?cmd=metadata=myNewPartionedCache
> Error will be returned:  {“successStatus”:1,“error”:“Failed to handle 
> request: [req=CACHE_METADATA, err=Failed to request meta data. 
> myNewPartionedCache is not found]“,”response”:null,“sessionToken”:null}
> After some debug, I see in code GridCacheCommandHandler.MetadataTask#map:
> {code}
> ...
> for (int i = 1; i < subgrid.size(); i++) {
>  
> }
> if (map.isEmpty())
> throw new IgniteException("Failed to request meta data. " 
> + cacheName + " is not found");
> ...
> {code}
> So, in the case of a cluster with only one node this code will throw an exception.
> I guess the fix should be to just replace "int i = 1" with "int i = 0".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8777) REST: metadata command failed on cluster of size 1.

2018-06-13 Thread Chandresh Pancholi (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandresh Pancholi reassigned IGNITE-8777:
--

Assignee: Chandresh Pancholi

> REST: metadata command failed on cluster of size 1.
> ---
>
> Key: IGNITE-8777
> URL: https://issues.apache.org/jira/browse/IGNITE-8777
> Project: Ignite
>  Issue Type: Improvement
>  Components: rest
>Affects Versions: 2.5
>Reporter: Alexey Kuznetsov
>Assignee: Chandresh Pancholi
>Priority: Major
>  Labels: newbie
>
> Start *only one* node.
> Execute REST command: 
> http://localhost:8080/ignite?cmd=getorcreate=myNewPartionedCache=2
> Cache will be created.
> Execute 
> http://localhost:8080/ignite?cmd=metadata=myNewPartionedCache
> Error will be returned:  {“successStatus”:1,“error”:“Failed to handle 
> request: [req=CACHE_METADATA, err=Failed to request meta data. 
> myNewPartionedCache is not found]“,”response”:null,“sessionToken”:null}
> After some debug, I see in code GridCacheCommandHandler.MetadataTask#map:
> {code}
> ...
> for (int i = 1; i < subgrid.size(); i++) {
>  
> }
> if (map.isEmpty())
> throw new IgniteException("Failed to request meta data. " 
> + cacheName + " is not found");
> ...
> {code}
> So, in the case of a cluster with only one node this code will throw an exception.
> I guess the fix should be to just replace "int i = 1" with "int i = 0".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8774) Daemon moves cluster to compatibility mode when joins

2018-06-13 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk reassigned IGNITE-8774:


Assignee: (was: Alexey Goncharuk)

> Daemon moves cluster to compatibility mode when joins
> -
>
> Key: IGNITE-8774
> URL: https://issues.apache.org/jira/browse/IGNITE-8774
> Project: Ignite
>  Issue Type: Bug
>Reporter: Stanislav Lukyanov
>Priority: Major
> Fix For: 2.6
>
>
> When a daemon node joins, the cluster seems to switch to compatibility mode 
> (allowing nodes without baseline support). This prevents baseline nodes from 
> being restarted.
> Example:
> {code}
> Ignite ignite1 = 
> IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml",
>  "srv1");
> Ignite ignite2 = 
> IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml",
>  "srv2");
> ignite2.cluster().active(true);
> IgnitionEx.setClientMode(true);
> IgnitionEx.setDaemon(true);
> Ignite daemon = 
> IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml",
>  "daemon");
> IgnitionEx.setClientMode(false);
> IgnitionEx.setDaemon(false);
> ignite2.close();
> IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml",
>  "srv2");
> {code}
> The attempt to restart ignite2 throws an exception:
> {code}
> [2018-06-11 18:45:25,766][ERROR][tcp-disco-msg-worker-#39%srv2%][root] 
> Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler, 
> failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class 
> o.a.i.IgniteException: Node with BaselineTopology cannot join mixed cluster 
> running in compatibility mode]]
> class org.apache.ignite.IgniteException: Node with BaselineTopology cannot 
> join mixed cluster running in compatibility mode
>   at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onGridDataReceived(GridClusterStateProcessor.java:714)
>   at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:883)
>   at 
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1939)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4354)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2744)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621)
>   at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8722) Issue in REST API 2.5

2018-06-13 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510950#comment-16510950
 ] 

Alexey Kuznetsov edited comment on IGNITE-8722 at 6/13/18 11:05 AM:


[~skymania] I confirm this bug was introduced in IGNITE-7803 for data 
structures that have references to their own type.
The fix is simple (remove one "else" keyword in 
IGNITE_BINARY_OBJECT_SERIALIZER) + add a test.
I will fix this today/tomorrow.


was (Author: kuaw26):
[~skymania] I confirm this bug was introduced in IGNITE-7803 for data 
structures that has references.
The fix is simple (remove one "else" key word in 
IGNITE_BINARY_OBJECT_SERIALIZER) + add test.
I will fix this today/tomorrow.

> Issue in REST API 2.5
> -
>
> Key: IGNITE-8722
> URL: https://issues.apache.org/jira/browse/IGNITE-8722
> Project: Ignite
>  Issue Type: Bug
>  Components: rest
>Affects Versions: 2.5
>Reporter: Denis Dijak
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: rest
> Fix For: 2.6
>
> Attachments: rest.api.zip
>
>
> In 2.5 the Ignite REST API doesn't show the cache value structure correctly
> rest-api 2.4
> "0013289414": {
>  "timeFrom": 1527166800,
>  "timeTo": 1528199550,
>  "results": ["BUSINESS-EU"],
>  "child":
> { "timeFrom": 1527166800, "timeTo": 10413788400, "results": ["BUSINESS-EU"], 
> "child": null }
> }
>  
>  rest-api2.5
> "0013289414":
> { "timeFrom": 1527166800, "timeTo": 1528199550, "results": ["BUSINESS-EU"] }
> As you can see, the child is missing. If I switch back to the 2.4 REST API, 
> everything works as expected. 
> The above structure is the class ValidityNode, and the child that is missing in 
> 2.5 is also a ValidityNode. The structure is meant to be a parent-child 
> implementation.
> public class ValidityNode {
>  private long timeFrom;
>  private long timeTo; 
>  private ArrayList results = null;
>  private ValidityNode child = null;
> public ValidityNode()
> { // default constructor }
> public long getTimeFrom()
> { return timeFrom; }
> public void setTimeFrom(long timeFrom)
> { this.timeFrom = timeFrom; }
> public long getTimeTo()
> { return timeTo; }
> public void setTimeTo(long timeTo)
> { this.timeTo = timeTo; }
> public ArrayList getResults()
> { return results; }
> public void setResults(ArrayList results)
> { this.results = results; }
> public ValidityNode getChild()
> { return child; }
> public void setChild(ValidityNode child)
> { this.child = child; }
> @Override
>  public String toString()
> { return "ValidityNode [timeFrom=" + timeFrom + ", timeTo=" + timeTo + ", 
> results=" + results + ", child=" + child + "]"; }
> Is this issue maybe related to the keyType and valueType that were introduced in 
> 2.5?
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8722) Issue in REST API 2.5

2018-06-13 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510950#comment-16510950
 ] 

Alexey Kuznetsov commented on IGNITE-8722:
--

[~skymania] I confirm this bug was introduced in IGNITE-7803 for data 
structures that have references.
The fix is simple (remove one "else" keyword in 
IGNITE_BINARY_OBJECT_SERIALIZER) + add a test.
I will fix this today/tomorrow.

> Issue in REST API 2.5
> -
>
> Key: IGNITE-8722
> URL: https://issues.apache.org/jira/browse/IGNITE-8722
> Project: Ignite
>  Issue Type: Bug
>  Components: rest
>Affects Versions: 2.5
>Reporter: Denis Dijak
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: rest
> Fix For: 2.6
>
> Attachments: rest.api.zip
>
>
> In 2.5 the Ignite REST API doesn't show the cache value structure correctly
> rest-api 2.4
> "0013289414": {
>  "timeFrom": 1527166800,
>  "timeTo": 1528199550,
>  "results": ["BUSINESS-EU"],
>  "child":
> { "timeFrom": 1527166800, "timeTo": 10413788400, "results": ["BUSINESS-EU"], 
> "child": null }
> }
>  
>  rest-api2.5
> "0013289414":
> { "timeFrom": 1527166800, "timeTo": 1528199550, "results": ["BUSINESS-EU"] }
> As you can see, the child is missing. If I switch back to the 2.4 REST API, 
> everything works as expected. 
> The above structure is the class ValidityNode, and the child that is missing in 
> 2.5 is also a ValidityNode. The structure is meant to be a parent-child 
> implementation.
> public class ValidityNode {
>  private long timeFrom;
>  private long timeTo; 
>  private ArrayList results = null;
>  private ValidityNode child = null;
> public ValidityNode()
> { // default constructor }
> public long getTimeFrom()
> { return timeFrom; }
> public void setTimeFrom(long timeFrom)
> { this.timeFrom = timeFrom; }
> public long getTimeTo()
> { return timeTo; }
> public void setTimeTo(long timeTo)
> { this.timeTo = timeTo; }
> public ArrayList getResults()
> { return results; }
> public void setResults(ArrayList results)
> { this.results = results; }
> public ValidityNode getChild()
> { return child; }
> public void setChild(ValidityNode child)
> { this.child = child; }
> @Override
>  public String toString()
> { return "ValidityNode [timeFrom=" + timeFrom + ", timeTo=" + timeTo + ", 
> results=" + results + ", child=" + child + "]"; }
> Is this issue maybe related to the keyType and valueType that were introduced in 
> 2.5?
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8722) Issue in REST API 2.5

2018-06-13 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8722:
-
Fix Version/s: 2.6

> Issue in REST API 2.5
> -
>
> Key: IGNITE-8722
> URL: https://issues.apache.org/jira/browse/IGNITE-8722
> Project: Ignite
>  Issue Type: Bug
>  Components: rest
>Affects Versions: 2.5
>Reporter: Denis Dijak
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: rest
> Fix For: 2.6
>
> Attachments: rest.api.zip
>
>
> In 2.5 the Ignite REST API doesn't show the cache value structure correctly
> rest-api 2.4
> "0013289414": {
>  "timeFrom": 1527166800,
>  "timeTo": 1528199550,
>  "results": ["BUSINESS-EU"],
>  "child":
> { "timeFrom": 1527166800, "timeTo": 10413788400, "results": ["BUSINESS-EU"], 
> "child": null }
> }
>  
>  rest-api2.5
> "0013289414":
> { "timeFrom": 1527166800, "timeTo": 1528199550, "results": ["BUSINESS-EU"] }
> As you can see, the child is missing. If I switch back to the 2.4 REST API, 
> everything works as expected. 
> The above structure is the class ValidityNode, and the child that is missing in 
> 2.5 is also a ValidityNode. The structure is meant to be a parent-child 
> implementation.
> public class ValidityNode {
>  private long timeFrom;
>  private long timeTo; 
>  private ArrayList results = null;
>  private ValidityNode child = null;
> public ValidityNode()
> { // default constructor }
> public long getTimeFrom()
> { return timeFrom; }
> public void setTimeFrom(long timeFrom)
> { this.timeFrom = timeFrom; }
> public long getTimeTo()
> { return timeTo; }
> public void setTimeTo(long timeTo)
> { this.timeTo = timeTo; }
> public ArrayList getResults()
> { return results; }
> public void setResults(ArrayList results)
> { this.results = results; }
> public ValidityNode getChild()
> { return child; }
> public void setChild(ValidityNode child)
> { this.child = child; }
> @Override
>  public String toString()
> { return "ValidityNode [timeFrom=" + timeFrom + ", timeTo=" + timeTo + ", 
> results=" + results + ", child=" + child + "]"; }
> Is this issue maybe related to the keyType and valueType that were introduced in 
> 2.5?
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-5971) Ignite Continuous Query 2: Flaky failure of #testMultiThreadedFailover

2018-06-13 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-5971:


Assignee: Alexey Kuznetsov

> Ignite Continuous Query 2: Flaky failure of #testMultiThreadedFailover
> --
>
> Key: IGNITE-5971
> URL: https://issues.apache.org/jira/browse/IGNITE-5971
> Project: Ignite
>  Issue Type: Test
>Affects Versions: 2.1
>Reporter: Ivan Rakov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> A bunch of tests inherited from CacheContinuousQueryFailoverAbstractSelfTest 
> have a flaky #testMultiThreadedFailover test. It fails from time to time in all 
> inherited test classes.
> CacheContinuousQueryAsyncFailoverAtomicSelfTest.testFailoverStartStopBackup 
> fails with a problem that looks the same.
> {noformat}
> junit.framework.AssertionFailedError: Lose events, see log for details.
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.TestCase.fail(TestCase.java:227)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverAbstractSelfTest.checkEvents(CacheContinuousQueryFailoverAbstractSelfTest.java:1225)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverAbstractSelfTest.access$3600(CacheContinuousQueryFailoverAbstractSelfTest.java:117)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverAbstractSelfTest$22$3.run(CacheContinuousQueryFailoverAbstractSelfTest.java:1962)
>   at java.util.concurrent.CyclicBarrier.dowait(CyclicBarrier.java:220)
>   at java.util.concurrent.CyclicBarrier.await(CyclicBarrier.java:435)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryFailoverAbstractSelfTest$23.run(CacheContinuousQueryFailoverAbstractSelfTest.java:2025)
>   at 
> org.apache.ignite.testframework.GridTestUtils$9.call(GridTestUtils.java:1236)
>   at 
> org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8131) ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC

2018-06-13 Thread Denis Garus (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510932#comment-16510932
 ] 

Denis Garus commented on IGNITE-8131:
-

[~sergey-chugunov], I've executed the tests 300 times locally and didn't get any 
errors. Could you share the logs, please?

> ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC
> 
>
> Key: IGNITE-8131
> URL: https://issues.apache.org/jira/browse/IGNITE-8131
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Two tests always fail on TC with the assertion
> {noformat}
> junit.framework.AssertionFailedError: Failed to wait for disconnect/reconnect 
> event.
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.waitReconnectEvent(ZookeeperDiscoverySpiTest.java:4221)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.reconnectClientNodes(ZookeeperDiscoverySpiTest.java:4183)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.clientReconnectSessionExpire(ZookeeperDiscoverySpiTest.java:2231)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testClientReconnectSessionExpire1_1(ZookeeperDiscoverySpiTest.java:2206)
> {noformat}
> from the client disconnect/reconnect events check. Obviously the client doesn't 
> generate these events as it is supposed to.
> (TC runs can be found 
> [here|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgniteZooKeeperDiscovery_IgniteTests24Java8=pull%2F3730%2Fhead=buildTypeStatusDiv]).
> It is possible to reproduce the test failure locally as well, but with low 
> probability: one failure per 50 or even 300 successful executions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7526) SQL: Introduce memory region for reducer merge results with disk offload

2018-06-13 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510929#comment-16510929
 ] 

Vladimir Ozerov commented on IGNITE-7526:
-

Looks like we can try to piggyback on H2 external storage as follows:
1) Make our H2 instance persistable and allow setting the maximum number of 
in-memory rows; this way H2 will store intermediate results on disk
2) Make our merge table use the same infrastructure to get rid of OOME on the 
client
3) Create a patch for H2 to offload GROUP BY results to disk
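For illustration, a minimal standalone sketch (not Ignite code) of the H2 behavior 
referred to in step 1: with a file-based database and a MAX_MEMORY_ROWS limit, H2 
buffers large result sets to disk by itself (GROUP BY offload would still need the 
patch from step 3). The table and database path are made up for the example.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2DiskOffloadSketch {
    public static void main(String[] args) throws Exception {
        // Persistent (file-based) H2 instance; an in-memory database cannot spill to disk.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./h2-offload-example");
             Statement stmt = conn.createStatement()) {
            // Keep at most 10 000 result rows in memory; larger results are buffered to disk.
            stmt.execute("SET MAX_MEMORY_ROWS 10000");

            stmt.execute("CREATE TABLE IF NOT EXISTS t(id INT PRIMARY KEY, val VARCHAR)");

            // A query whose result may exceed the in-memory limit.
            try (ResultSet rs = stmt.executeQuery("SELECT id, val FROM t ORDER BY val")) {
                while (rs.next())
                    System.out.println(rs.getInt(1) + " -> " + rs.getString(2));
            }
        }
    }
}
{code}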

> SQL: Introduce memory region for reducer merge results with disk offload
> 
>
> Key: IGNITE-7526
> URL: https://issues.apache.org/jira/browse/IGNITE-7526
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>Priority: Major
>
> Currently all results received from map nodes are stored inside the reducer's 
> heap memory. What is worse, for complex queries, such as those with sorts or 
> groupings, we need to collect all results from the mappers before final 
> processing can be applied. For big result sets (or intermediate results) this 
> can easily lead to OOME on the reducer. 
> To mitigate this we should introduce a special memory area where intermediate 
> results can be stored. The results of final processing should be kept in the 
> same area as well. This area should be of limited size and should be able to 
> offload results to disk in case of overflow.
> We could start with our B+Tree and free list and store results in some K-V 
> form. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8751) Possible race on node segmentation.

2018-06-13 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510928#comment-16510928
 ] 

Andrey Gura commented on IGNITE-8751:
-

TC looks good: 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8_IgniteTests24Java8=pull%2F4171%2Fhead
Please review.

> Possible race on node segmentation.
> ---
>
> Key: IGNITE-8751
> URL: https://issues.apache.org/jira/browse/IGNITE-8751
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Mashenkov
>Assignee: Andrey Gura
>Priority: Major
> Fix For: 2.6
>
>
> Segmentation policy may be ignored, probably due to a race.
> See [1] for details.
>  [1] 
> [http://apache-ignite-users.70518.x6.nabble.com/Node-pause-for-no-obvious-reason-td21923.html]
> Logs from segmented node.
> [08:42:42,290][INFO][tcp-disco-sock-reader-#15][TcpDiscoverySpi] Finished 
> serving remote node connection [rmtAddr=/10.29.42.45:38712, rmtPort=38712 
> [08:42:42,290][WARNING][disco-event-worker-#161][GridDiscoveryManager] Local 
> node SEGMENTED: TcpDiscoveryNode [id=8333aa56-8bf4-4558-a387-809b1d2e2e5b, 
> addrs=[10.29.42.44, 127.0.0.1], sockAddrs=[sap-datanode1/10.29.42.44:49500, 
> /127.0.0.1:49500], discPort=49500, order=1, intOrder=1, 
> lastExchangeTime=1528447362286, loc=true, ver=2.5.0#20180523-sha1:86e110c7, 
> isClient=false] 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] Critical system error detected. 
> Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 
> java.lang.IllegalStateException: Thread tcp-disco-srvr-#2 is terminated 
> unexpectedly. 
>         at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:5686)
>  
>         at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] JVM will be halted immediately 
> due to the failure: [failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 
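For context, this is how the segmentation policy that appears to be ignored here is 
configured on a node; a minimal sketch using public API only, with RESTART_JVM as an 
example value.
{code:java}
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.plugin.segmentation.SegmentationPolicy;

public class SegmentationPolicyExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // What the node should do when it detects it is segmented from the cluster.
        cfg.setSegmentationPolicy(SegmentationPolicy.RESTART_JVM);

        Ignition.start(cfg);
    }
}
{code}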



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8509) A lot of "Execution timeout" result for Cache 6 suite

2018-06-13 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510924#comment-16510924
 ] 

Alexey Kuznetsov commented on IGNITE-8509:
--

[~ascherbakov], hi.
Are you going to fix the test 
TxRollbackOnTimeoutNearCacheTest#testRandomMixedTxConfigurations in this ticket?
Or should it be fixed within https://issues.apache.org/jira/browse/IGNITE-8297 ?

> A lot of "Execution timeout" result for Cache 6 suite
> -
>
> Key: IGNITE-8509
> URL: https://issues.apache.org/jira/browse/IGNITE-8509
> Project: Ignite
>  Issue Type: Task
>Reporter: Maxim Muzafarov
>Assignee: Alexei Scherbakov
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> *Summary*
> Suite Cache 6 fails with an execution timeout:
> {code:java}
> [org.apache.ignite:ignite-core] [2018-05-15 02:35:14,143][WARN 
> ][grid-timeout-worker-#71656%transactions.TxRollbackOnTimeoutNearCacheTest0%][diagnostic]
>  Found long running transaction [startTime=02:32:57.989, 
> curTime=02:35:14.136, tx=GridDhtTxRemote
> {code}
> *Please refer to the following for more details:* 
> [https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache6=1=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E]
> *Statistics Cache 6 Suite*
>  Recent fails : 42,0% [21 fails / 50 runs]; 
>  Critical recent fails: 10,0% [5 fails / 50 runs];
> Last month (15.04 – 15.05)
> Execution timeout: 21,0% [84 fails / 400 runs];



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8779) Web console: simplify E2E test runner

2018-06-13 Thread Ilya Borisov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Borisov updated IGNITE-8779:
-
Description: 
The way we run E2E tests now significantly overlaps with a runner provided by 
TestCafe CLI. This happens because:
# Ignite TestCafe runner uses environment variables to pass parameters.
# We need to start a custom backend environment.

I think we can do better; the optimized result might look like a simple npm 
script:
{{concurrently "npm run env" "testcafe --runner=teamcity"}}

  was:
The way we run E2E tests now significantly overlaps with a runner provided by 
TestCafe CLI. This happens because:
# Ignite TestCafe runner uses environment variables to pass parameters.
# We need to start a custom backend environment.

I think we can do better; the optimized result might look like a simple npm 
script:
{{ concurrently "npm run env" "testcafe --runner=teamcity" }}


> Web console: simplify E2E test runner
> -
>
> Key: IGNITE-8779
> URL: https://issues.apache.org/jira/browse/IGNITE-8779
> Project: Ignite
>  Issue Type: Improvement
>  Components: wizards
>Reporter: Ilya Borisov
>Assignee: Alexander Kalinin
>Priority: Minor
>
> The way we run E2E tests now significantly overlaps with a runner provided by 
> TestCafe CLI. This happens because:
> # Ignite TestCafe runner uses environment variables to pass parameters.
> # We need to start a custom backend environment.
> I think we can do better; the optimized result might look like a simple npm 
> script:
> {{concurrently "npm run env" "testcafe --runner=teamcity"}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8749) Exception for "no space left" situation should be propagated to FailureHandler

2018-06-13 Thread Dmitriy Sorokin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Sorokin reassigned IGNITE-8749:
---

Assignee: Dmitriy Sorokin  (was: Andrey Gura)

> Exception for "no space left" situation should be propagated to FailureHandler
> --
>
> Key: IGNITE-8749
> URL: https://issues.apache.org/jira/browse/IGNITE-8749
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Sergey Chugunov
>Assignee: Dmitriy Sorokin
>Priority: Major
> Fix For: 2.6
>
>
> For now, if a "no space left" situation is detected in the 
> FileWriteAheadLogManager#formatFile method and the corresponding exception is 
> thrown, the exception doesn't get propagated to the FailureHandler and the node 
> continues working.
> As "no space left" is a critical situation, the corresponding exception should 
> be propagated to the handler so that the necessary actions can be taken.
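For reference, a minimal sketch of the receiving side, using only the public 
failure-handling API: once the exception is propagated as a critical failure, a 
handler configured like this gets a chance to react. The handler body is just an 
example.
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.FailureContext;
import org.apache.ignite.failure.FailureHandler;

public class NoSpaceLeftHandlerExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setFailureHandler(new FailureHandler() {
            @Override public boolean onFailure(Ignite ignite, FailureContext failureCtx) {
                // React to the critical failure (alerting, diagnostics, etc.).
                System.err.println("Critical failure: " + failureCtx);

                return true; // The node will be invalidated (stopped).
            }
        });

        Ignition.start(cfg);
    }
}
{code}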



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8779) Web console: simplify E2E test runner

2018-06-13 Thread Ilya Borisov (JIRA)
Ilya Borisov created IGNITE-8779:


 Summary: Web console: simplify E2E test runner
 Key: IGNITE-8779
 URL: https://issues.apache.org/jira/browse/IGNITE-8779
 Project: Ignite
  Issue Type: Improvement
  Components: wizards
Reporter: Ilya Borisov
Assignee: Alexander Kalinin


The way we run E2E tests now significantly overlaps with a runner provided by 
TestCafe CLI. This happens because:
# Ignite TestCafe runner uses environment variables to pass parameters.
# We need to start a custom backend environment.

I think we can do better; the optimized result might look like a simple npm 
script:
{{ concurrently "npm run env" "testcafe --runner=teamcity" }}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8771) OutOfMemory in Cache2 suite in master branch on TC

2018-06-13 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-8771:


Assignee: (was: Alexey Kuznetsov)

> OutOfMemory in Cache2 suite in master branch on TC
> --
>
> Key: IGNITE-8771
> URL: https://issues.apache.org/jira/browse/IGNITE-8771
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Blocker
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> OutOfMemory error happened in Cache2 suite for the first time in a while: 
> [https://ci.ignite.apache.org/viewLog.html?buildId=1372380=buildResultsDiv=IgniteTests24Java8_Cache2]
> Recent history doesn't contain any OOMs or execution timeouts for this suite: 
> [TC 
> link|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache2_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6055) SQL: Add String length constraint

2018-06-13 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510892#comment-16510892
 ] 

Vladimir Ozerov commented on IGNITE-6055:
-

[~NIzhikov], my comments:
1) 
{{org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor#toQueryEntity}}
 - this check is redundant; we should check whether QueryEntity is valid on 
cache start, not inside DDL (to handle both DDL and cache creation through the 
API in the same place). See 
org.apache.ignite.internal.processors.query.QueryUtils#validateQueryEntity
2) I am not very happy with the precision/scale/maxLength properties - too many 
of them. Instead, we can re-use "precision" for String length. H2 works this way 
(see their docs), so it should be OK for us as well. In fact, this is why 
GridSqlColumn.maxLength() returns the same value as GridSqlColumn.precision()
3) QueryEntity - getters should return the current value, without wrapping it 
into an unmodifiable collection, because this is how users frequently use us - 
QueryEntity.getNotNullFields().add(...)
4) Let's split the single decimal property into "scale" and "precision", because 
precision alone will be needed not only for strings, but for other data types 
as well (e.g. DOUBLE, REAL, BINARY)
5) I do not see how compatibility is handled in .NET - new fields are 
serialized unconditionally, meaning that we cannot talk to a previous version. 
Am I wrong?
6) QueryBinaryProperty, QueryTypeDescriptorImpl, QueryUtils - checks for "_KEY" 
and "_VAL" are illegal, because key/val field names could be overridden with 
QueryEntity.keyFieldName/valFieldName. These checks should be more generic
7) QueryBinaryProperty.value() - the new logic around key/val should be removed, 
as it breaks the invariant that only nested fields can be read/written here. 
Please find a way to perform the check on key/value without changing the 
central field extraction logic.
8) We need more tests for the cache API. E.g., I do not see tests for invoke().
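Regarding item 3, a short sketch of the usage pattern in question (the "Person" 
value type and "name" field are hypothetical): if the getter wraps the set into an 
unmodifiable collection, this user code throws UnsupportedOperationException.
{code:java}
import java.util.HashSet;
import org.apache.ignite.cache.QueryEntity;

public class QueryEntityMutationExample {
    public static void main(String[] args) {
        QueryEntity entity = new QueryEntity("java.lang.Integer", "Person");

        entity.setNotNullFields(new HashSet<>());

        // Users frequently mutate the returned collection in place,
        // so the getter must not wrap it into an unmodifiable set.
        entity.getNotNullFields().add("name");
    }
}
{code}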

> SQL: Add String length constraint
> -
>
> Key: IGNITE-6055
> URL: https://issues.apache.org/jira/browse/IGNITE-6055
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: sql-engine
>
> We should support {{CHAR(X)}} and {{VARCHAR(X)}} syntax. Currently, we ignore 
> it. First, it affects semantics. E.g., one can insert a string of greater 
> length into a cache/table without any problems. Second, it limits the 
> efficiency of our default configuration. E.g., index inlining cannot be applied 
> to the {{String}} data type as we cannot guess its length.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-4210) CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose data.

2018-06-13 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510877#comment-16510877
 ] 

Andrey Gura edited comment on IGNITE-4210 at 6/13/18 9:47 AM:
--

[~Alexey Kuznetsov] From my point of view it's not a good solution, because it 
will block partition map exchange until loadCache finishes data loading and 
will then lead to massive data rebalancing. A better way, I believe, is to pass 
an exception to user code in case of topology changes; then the user will be 
able to manage initial data loading from the cache store (e.g. the user can 
split the whole data set into blocks and retry loading of the block on which 
the topology changed).


was (Author: agura):
[~Alexey Kuznetsov] From my point of view it's not a good solution, because it 
will block partition map exchange until loadCache finishes data loading and 
will then lead to massive data rebalancing. A better way, I believe, is to pass 
an exception to user code in case of topology changes; then the user will be 
able to manage initial data loading (e.g. the user can split the whole data set 
into blocks and retry loading of the block on which the topology changed).

> CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose 
> data.
> 
>
> Key: IGNITE-4210
> URL: https://issues.apache.org/jira/browse/IGNITE-4210
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Vinogradov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTest#testLoadCacheFromStore
>  sometimes have failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-4210) CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose data.

2018-06-13 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510877#comment-16510877
 ] 

Andrey Gura commented on IGNITE-4210:
-

[~Alexey Kuznetsov] From my point of view it's not a good solution, because it 
will block partition map exchange until loadCache finishes data loading and 
will then lead to massive data rebalancing. A better way, I believe, is to pass 
an exception to user code in case of topology changes; then the user will be 
able to manage initial data loading (e.g. the user can split the whole data set 
into blocks and retry loading of the block on which the topology changed).
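A rough sketch of that approach from the user's perspective (the block arguments 
and the way the cache store interprets them are assumptions, not an existing API): 
loading is split into blocks and a block is retried if it fails because the 
topology changed.
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class BlockedCacheLoadSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache("data");

            long blockSize = 100_000;

            for (long block = 0; block < 10; block++) {
                for (int attempt = 0; attempt < 3; attempt++) {
                    try {
                        // The hypothetical cache store reads the [from, to) key range from its args.
                        cache.loadCache(null, block * blockSize, (block + 1) * blockSize);

                        break;
                    }
                    catch (Exception e) {
                        // Assume the failure indicates a topology change; retry this block only.
                        System.out.println("Retrying block " + block + ": " + e.getMessage());
                    }
                }
            }
        }
    }
}
{code}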

> CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose 
> data.
> 
>
> Key: IGNITE-4210
> URL: https://issues.apache.org/jira/browse/IGNITE-4210
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Vinogradov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTest#testLoadCacheFromStore
>  sometimes have failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8778) Cache tests fail due short timeout

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510848#comment-16510848
 ] 

ASF GitHub Bot commented on IGNITE-8778:


GitHub user mcherkasov opened a pull request:

https://github.com/apache/ignite/pull/4180

IGNITE-8778 Cache tests fail due short timeout



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-master-8778

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4180.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4180


commit 1237a019197653c2e12c8d6bafccaefe6def67ac
Author: mcherkasov 
Date:   2018-06-13T09:27:03Z

IGNITE-8778 Cache tests fail due short timeout




> Cache tests fail due short timeout
> --
>
> Key: IGNITE-8778
> URL: https://issues.apache.org/jira/browse/IGNITE-8778
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>Priority: Major
>
> Cache tests can fail due time out:
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-6515019727174930828=testDetails]
>  
> Usually it passes; the tests take ~50 seconds, which is close to the timeout. 
> If TC is overloaded, the tests can take >60 sec, which leads to false failures.
>  
> We need to increase the timeout to avoid this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8771) OutOfMemory in Cache2 suite in master branch on TC

2018-06-13 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-8771:


Assignee: Alexey Kuznetsov

> OutOfMemory in Cache2 suite in master branch on TC
> --
>
> Key: IGNITE-8771
> URL: https://issues.apache.org/jira/browse/IGNITE-8771
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Alexey Kuznetsov
>Priority: Blocker
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> OutOfMemory error happened in Cache2 suite for the first time in a while: 
> [https://ci.ignite.apache.org/viewLog.html?buildId=1372380=buildResultsDiv=IgniteTests24Java8_Cache2]
> Recent history doesn't contain any OOMs or execution timeouts for this suite: 
> [TC 
> link|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache2_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8131) ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC

2018-06-13 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510835#comment-16510835
 ] 

Sergey Chugunov edited comment on IGNITE-8131 at 6/13/18 9:25 AM:
--

[~garus.d.g], thanks for your efforts.

I double checked your result and managed to reproduce the issue locally: I've 
run this single test with "Until failure" configuration and observed failure on 
120th execution.

I found another detail about failure: in all successful executions ZooKeeper 
client is closed because of session timeout reported by this line in logs:
{noformat}
[WARN ][zk-client-timer-internal.ZookeeperDiscoverySpiTest1][ZookeeperClient] 
Failed to establish ZooKeeper connection, close client [timeout=2000]
{noformat}

But in failed execution I don't see this line in the logs. It looks worth 
investigating what causes this behavior and its implications.

Could you please try to reproduce the failure and figure out what's going on? 
Or if it is more convenient for you I can share full logs from my local run for 
both successful and failed executions.


was (Author: sergey-chugunov):
[~garus.d.g], thanks for your efforts.

I double checked your result and managed to reproduce the issue locally: I've 
run this single test with "Until failure" configuration and observed failure on 
120th execution.

I found another detail about failure: in all successful executions ZooKeeper 
client is closed because of session timeout reported by this line in logs:
{noformat}
[WARN ][zk-client-timer-internal.ZookeeperDiscoverySpiTest1][ZookeeperClient] 
Failed to establish ZooKeeper connection, close client [timeout=2000]
{noformat}

But in failed execution I don't see this line in the logs. It looks worth 
investigating what causes this behavior and its implications.

> ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC
> 
>
> Key: IGNITE-8131
> URL: https://issues.apache.org/jira/browse/IGNITE-8131
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Two tests always fail on TC with the assertion
> {noformat}
> junit.framework.AssertionFailedError: Failed to wait for disconnect/reconnect 
> event.
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.waitReconnectEvent(ZookeeperDiscoverySpiTest.java:4221)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.reconnectClientNodes(ZookeeperDiscoverySpiTest.java:4183)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.clientReconnectSessionExpire(ZookeeperDiscoverySpiTest.java:2231)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testClientReconnectSessionExpire1_1(ZookeeperDiscoverySpiTest.java:2206)
> {noformat}
> from the client disconnect/reconnect events check. Obviously the client doesn't 
> generate these events as it is supposed to.
> (TC runs can be found 
> [here|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgniteZooKeeperDiscovery_IgniteTests24Java8=pull%2F3730%2Fhead=buildTypeStatusDiv]).
> It is possible to reproduce the test failure locally as well, but with low 
> probability: one failure per 50 or even 300 successful executions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8131) ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC

2018-06-13 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510835#comment-16510835
 ] 

Sergey Chugunov edited comment on IGNITE-8131 at 6/13/18 9:24 AM:
--

[~garus.d.g], thanks for your efforts.

I double checked your result and managed to reproduce the issue locally: I've 
run this single test with "Until failure" configuration and observed failure on 
120th execution.

I found another detail about failure: in all successful executions ZooKeeper 
client is closed because of session timeout reported by this line in logs:
{noformat}
[WARN ][zk-client-timer-internal.ZookeeperDiscoverySpiTest1][ZookeeperClient] 
Failed to establish ZooKeeper connection, close client [timeout=2000]
{noformat}

But in failed execution I don't see this line in the logs. It looks worth 
investigating what causes this behavior and its implications.


was (Author: sergey-chugunov):
[~garus.d.g],

I double checked your result and managed to reproduce the issue locally: I've 
run this single test with "Until failure" configuration and observed failure on 
120th execution.

I found another detail about failure: in all successful executions ZooKeeper 
client is closed because of session timeout reported by this line in logs:
{noformat}
[WARN ][zk-client-timer-internal.ZookeeperDiscoverySpiTest1][ZookeeperClient] 
Failed to establish ZooKeeper connection, close client [timeout=2000]
{noformat}

But in failed execution I don't see this line in the logs. It looks worth 
investigating what causes this behavior and its implications.

> ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC
> 
>
> Key: IGNITE-8131
> URL: https://issues.apache.org/jira/browse/IGNITE-8131
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Two tests always fail on TC with the assertion
> {noformat}
> junit.framework.AssertionFailedError: Failed to wait for disconnect/reconnect 
> event.
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.waitReconnectEvent(ZookeeperDiscoverySpiTest.java:4221)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.reconnectClientNodes(ZookeeperDiscoverySpiTest.java:4183)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.clientReconnectSessionExpire(ZookeeperDiscoverySpiTest.java:2231)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testClientReconnectSessionExpire1_1(ZookeeperDiscoverySpiTest.java:2206)
> {noformat}
> from the client disconnect/reconnect events check. Obviously the client doesn't 
> generate these events as it is supposed to.
> (TC runs can be found 
> [here|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgniteZooKeeperDiscovery_IgniteTests24Java8=pull%2F3730%2Fhead=buildTypeStatusDiv]).
> It is possible to reproduce the test failure locally as well, but with low 
> probability: one failure per 50 or even 300 successful executions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8131) ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC

2018-06-13 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510835#comment-16510835
 ] 

Sergey Chugunov commented on IGNITE-8131:
-

[~garus.d.g],

I double checked your result and managed to reproduce the issue locally: I've 
run this single test with "Until failure" configuration and observed failure on 
120th execution.

I found another detail about failure: in all successful executions ZooKeeper 
client is closed because of session timeout reported by this line in logs:
{noformat}
[WARN ][zk-client-timer-internal.ZookeeperDiscoverySpiTest1][ZookeeperClient] 
Failed to establish ZooKeeper connection, close client [timeout=2000]
{noformat}

But in failed execution I don't see this line in the logs. It looks worth 
investigating what causes this behavior and its implications.

> ZookeeperDiscoverySpiTest#testClientReconnectSessionExpire* tests fail on TC
> 
>
> Key: IGNITE-8131
> URL: https://issues.apache.org/jira/browse/IGNITE-8131
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Two tests always fail on TC with the assertion
> {noformat}
> junit.framework.AssertionFailedError: Failed to wait for disconnect/reconnect 
> event.
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.waitReconnectEvent(ZookeeperDiscoverySpiTest.java:4221)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.reconnectClientNodes(ZookeeperDiscoverySpiTest.java:4183)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.clientReconnectSessionExpire(ZookeeperDiscoverySpiTest.java:2231)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testClientReconnectSessionExpire1_1(ZookeeperDiscoverySpiTest.java:2206)
> {noformat}
> from the client disconnect/reconnect events check. Obviously the client doesn't 
> generate these events as it is supposed to.
> (TC runs can be found 
> [here|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgniteZooKeeperDiscovery_IgniteTests24Java8=pull%2F3730%2Fhead=buildTypeStatusDiv]).
> It is possible to reproduce the test failure locally as well, but with low 
> probability: one failure per 50 or even 300 successful executions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8184) ZookeeperDiscoverySpiTest#testTopologyChangeMultithreaded_RestartZk* tests fail on TC

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510813#comment-16510813
 ] 

ASF GitHub Bot commented on IGNITE-8184:


GitHub user dgarus opened a pull request:

https://github.com/apache/ignite/pull/4179

IGNITE-8184 
ZookeeperDiscoverySpiTest#testTopologyChangeMultithreaded_RestartZk* tests fail



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dgarus/ignite ignite-8184

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4179.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4179


commit 366a664c55ec461b4effe48e15500796277f5042
Author: Garus Denis 
Date:   2018-06-13T09:04:19Z

IGNITE-8184. check TC fail




> ZookeeperDiscoverySpiTest#testTopologyChangeMultithreaded_RestartZk* tests 
> fail on TC
> -
>
> Key: IGNITE-8184
> URL: https://issues.apache.org/jira/browse/IGNITE-8184
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Tests fail on TC but pass locally.
> There are some errors in logs like this:
> {noformat}
> class org.apache.ignite.IgniteCheckedException: Failed to start manager: 
> GridManagerAdapter [enabled=true, 
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1698)
> at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1007)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1977)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1720)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1148)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:646)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:882)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:845)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:833)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:799)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$3.call(GridAbstractTest.java:742)
> at 
> org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> SPI: ZookeeperDiscoverySpi [zkRootPath=/apacheIgnite, 
> zkConnectionString=127.0.0.1:45822,127.0.0.1:46661,127.0.0.1:43724, 
> joinTimeout=0, sesTimeout=3, clientReconnectDisabled=false, 
> internalLsnr=null]
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:905)
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1693)
> ... 11 more
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to 
> initialize Zookeeper nodes
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.initZkNodes(ZookeeperDiscoveryImpl.java:827)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoin(ZookeeperDiscoveryImpl.java:957)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.joinTopology(ZookeeperDiscoveryImpl.java:775)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoinAndWait(ZookeeperDiscoveryImpl.java:693)
> at 
> org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi.spiStart(ZookeeperDiscoverySpi.java:471)
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> ... 13 more
> Caused by: 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperClientFailedException: 
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /apacheIgnite
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient.onZookeeperError(ZookeeperClient.java:808)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient.exists(ZookeeperClient.java:276)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.initZkNodes(ZookeeperDiscoveryImpl.java:789)
> ... 18 more
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> 

[jira] [Created] (IGNITE-8778) Cache tests fail due short timeout

2018-06-13 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-8778:
-

 Summary: Cache tests fail due short timeout
 Key: IGNITE-8778
 URL: https://issues.apache.org/jira/browse/IGNITE-8778
 Project: Ignite
  Issue Type: Bug
  Components: general
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov


Cache tests can fail due time out:

[https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-6515019727174930828=testDetails]

 

Usually it passes; the tests take ~50 seconds, which is close to the timeout. If 
TC is overloaded, the tests can take >60 sec, which leads to false failures.

We need to increase the timeout to avoid this.
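One possible way to raise it, assuming the affected tests extend Ignite's common 
test base class and use its per-test timeout (an assumption; the ticket does not 
say where the change is applied):
{code:java}
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

public class CacheOperationsLongTimeoutSelfTest extends GridCommonAbstractTest {
    /** {@inheritDoc} */
    @Override protected long getTestTimeout() {
        // Raise the per-test timeout so an overloaded TC agent does not cause false failures.
        return 5 * 60 * 1000;
    }
}
{code}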



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8384) SQL: Secondary indexes should sort entries by links rather than keys

2018-06-13 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510801#comment-16510801
 ] 

Vladimir Ozerov commented on IGNITE-8384:
-

[~al.psc], the patch looks good, but we need to do the following:
1) Add {{IgniteSecondaryIndexesMigrationToLinksComparisonTest}} to the PDS test 
suite and re-run compatibility tests on TC
2) Run the standard benchmark set as well as additional SQL benchmarks (JDBC, 
DML, put-indexed-8) to verify that there is no performance drop.

> SQL: Secondary indexes should sort entries by links rather than keys
> 
>
> Key: IGNITE-8384
> URL: https://issues.apache.org/jira/browse/IGNITE-8384
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.4
>Reporter: Vladimir Ozerov
>Assignee: Alexander Paschenko
>Priority: Major
>  Labels: iep-19, performance
> Fix For: 2.6
>
>
> Currently we sort entries in secondary indexes as {{(idx_cols, KEY)}}. The 
> key itself is not stored in the index in the general case. It means that we 
> need to perform a lookup to the data page to find the correct insertion point 
> for an index entry.
> This could be fixed easily by sorting entries a bit differently - {{(idx_cols, 
> link)}}. This is all we need.
> UPD: If we have an affinity key, then the affinity column will be added to the 
> secondary index as well. 
>  So, we'll have the secondary index as {{(idx_cols, KEY, AFF_COL)}}
> Comparison occurs here: 
> {{org.apache.ignite.internal.processors.query.h2.database.H2Tree#compare}}
> What we need is to avoid adding the PK and affinity key columns to every 
> secondary index and compare links instead in this method.
> Probably we need to preserve the old behavior for compatibility purposes.
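An illustrative sketch of the proposed ordering (not the actual H2Tree code; the 
index column is simplified to a single long): rows compare by the indexed columns 
first and fall back to the link, so no data-page lookup is needed to find the 
insertion point.
{code:java}
import java.util.Comparator;

public class LinkOrderSketch {
    /** Simplified secondary-index row: one indexed column plus the link to the data row. */
    static final class IdxRow {
        final long idxCol;
        final long link;

        IdxRow(long idxCol, long link) {
            this.idxCol = idxCol;
            this.link = link;
        }
    }

    /** New order: (idx_cols, link) instead of (idx_cols, KEY). */
    static final Comparator<IdxRow> LINK_ORDER =
        Comparator.<IdxRow>comparingLong(r -> r.idxCol).thenComparingLong(r -> r.link);

    public static void main(String[] args) {
        // Equal index columns are ordered by link only; no key comparison is involved.
        System.out.println(LINK_ORDER.compare(new IdxRow(1, 100), new IdxRow(1, 200)) < 0);
    }
}
{code}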



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8763) java.nio.file.AccessDeniedException is not handled with default failure handler

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510784#comment-16510784
 ] 

ASF GitHub Bot commented on IGNITE-8763:


GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/4178

IGNITE-8763 java.nio.file.AccessDeniedException is not handled with default 
failure handler



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-8763

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4178.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4178


commit 18d8fb7148c1197402e0e89e9b7797a1ee38b492
Author: Aleksey Plekhanov 
Date:   2018-06-09T15:05:28Z

IGNITE-8763 Reproducer

commit c55c74425e9af807d2c521778e1185fc1e726520
Author: Aleksey Plekhanov 
Date:   2018-06-09T15:41:54Z

IGNITE-8763 Fix

commit 3cf690ffe5e59ae8e9bb54b7266e33a35e7ada89
Author: Aleksey Plekhanov 
Date:   2018-06-09T16:16:56Z

IGNITE-8763 Fix 2




> java.nio.file.AccessDeniedException is not handled with default failure 
> handler
> ---
>
> Key: IGNITE-8763
> URL: https://issues.apache.org/jira/browse/IGNITE-8763
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrey Gura
>Assignee: Aleksey Plekhanov
>Priority: Major
> Fix For: 2.6
>
>
> java.nio.file.AccessDeniedException is not handled with default failure 
> handler
> 1. Start a cluster (4 nodes).
> 2. Upload some data.
> 3. Make the files in the metastore read-only.
> 4. Deactivate the grid.
> 5. Activate the grid.
> At this step I see java.nio.file.AccessDeniedException:
> {noformat}
> [17:55:40,035][INFO][exchange-worker-#62][GridCacheDatabaseSharedManager] 
> Read checkpoint status 
> [startMarker=/storage/ssd/avolkov/tiden/iep_14-180517-175425/test_iep_14/ignite.server.1/work/db/node1/cp/1526568907638-46128a87-562a-45fc-8d73-75ccb1490d63-START.bin,
>  
> endMarker=/storage/ssd/avolkov/tiden/iep_14-180517-175425/test_iep_14/ignite.server.1/work/db/node1/cp/1526568907638-46128a87-562a-45fc-8d73-75ccb1490d63-END.bin]
> [17:55:40,037][SEVERE][exchange-worker-#62][GridDhtPartitionsExchangeFuture] 
> Failed to activate node components 
> [nodeId=bd7115d5-1f95-4673-9f40-47056b0b1a58, client=false, 
> topVer=AffinityTopologyVersion [topVer=4, minorTopVer=5]]
> class org.apache.ignite.IgniteCheckedException: Error while creating file 
> page store 
> [file=/storage/ssd/avolkov/tiden/iep_14-180517-175425/test_iep_14/ignite.server.1/work/db/node1/metastorage/part-0.bin]:
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.FileVersionCheckingFactory.createPageStore(FileVersionCheckingFactory.java:98)
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.initDir(FilePageStoreManager.java:463)
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.initializeForMetastorage(FilePageStoreManager.java:234)
> at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readCheckpointAndRestoreMemory(GridCacheDatabaseSharedManager.java:743)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onClusterStateChangeRequest(GridDhtPartitionsExchangeFuture.java:896)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:643)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.nio.file.AccessDeniedException: 
> /storage/ssd/avolkov/tiden/iep_14-180517-175425/test_iep_14/ignite.server.1/work/db/node1/metastorage/part-0.bin
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
> at 
> sun.nio.fs.UnixFileSystemProvider.newAsynchronousFileChannel(UnixFileSystemProvider.java:196)
> at 
> 

[jira] [Assigned] (IGNITE-8722) Issue in REST API 2.5

2018-06-13 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-8722:


Assignee: Alexey Kuznetsov

> Issue in REST API 2.5
> -
>
> Key: IGNITE-8722
> URL: https://issues.apache.org/jira/browse/IGNITE-8722
> Project: Ignite
>  Issue Type: Bug
>  Components: rest
>Affects Versions: 2.5
>Reporter: Denis Dijak
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: rest
> Attachments: rest.api.zip
>
>
> In 2.5 the Ignite REST API doesn't show the cache value structure correctly.
> rest-api 2.4
> "0013289414": {
>  "timeFrom": 1527166800,
>  "timeTo": 1528199550,
>  "results": ["BUSINESS-EU"],
>  "child":
> { "timeFrom": 1527166800, "timeTo": 10413788400, "results": ["BUSINESS-EU"], 
> "child": null }
> }
>  
>  rest-api2.5
> "0013289414":
> { "timeFrom": 1527166800, "timeTo": 1528199550, "results": ["BUSINESS-EU"] }
> As you can see, the child is missing. If I switch back to the 2.4 REST API, 
> everything works as expected. 
> The above structure is the class ValidityNode, and the child that is missing in 
> 2.5 is also a ValidityNode. The structure is meant to be a parent-child 
> implementation.
> public class ValidityNode {
> private long timeFrom;
> private long timeTo;
> private ArrayList results = null;
> private ValidityNode child = null;
> public ValidityNode() { /* default constructor */ }
> public long getTimeFrom() { return timeFrom; }
> public void setTimeFrom(long timeFrom) { this.timeFrom = timeFrom; }
> public long getTimeTo() { return timeTo; }
> public void setTimeTo(long timeTo) { this.timeTo = timeTo; }
> public ArrayList getResults() { return results; }
> public void setResults(ArrayList results) { this.results = results; }
> public ValidityNode getChild() { return child; }
> public void setChild(ValidityNode child) { this.child = child; }
> @Override
> public String toString() {
> return "ValidityNode [timeFrom=" + timeFrom + ", timeTo=" + timeTo +
> ", results=" + results + ", child=" + child + "]";
> }
> }
> Is this issue maybe related to keyType and valueType that were introduced in 
> 2.5?
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-584) Need to make sure that scan query returns consistent results on topology changes

2018-06-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510684#comment-16510684
 ] 

ASF GitHub Bot commented on IGNITE-584:
---

Github user zstan closed the pull request at:

https://github.com/apache/ignite/pull/2705


> Need to make sure that scan query returns consistent results on topology 
> changes
> 
>
> Key: IGNITE-584
> URL: https://issues.apache.org/jira/browse/IGNITE-584
> Project: Ignite
>  Issue Type: Sub-task
>  Components: data structures
>Affects Versions: 1.9, 2.0, 2.1
>Reporter: Artem Shutak
>Assignee: Stanilovsky Evgeny
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.6
>
> Attachments: tc1.png
>
>
> Consistent results on topology changes were implemented for SQL queries, but it 
> looks like this still does not work for scan queries.
> This affects the 'cache set' tests, since the set uses a scan query for set 
> iteration (to be unmuted on TC): 
> GridCacheSetAbstractSelfTest testNodeJoinsAndLeaves and 
> testNodeJoinsAndLeavesCollocated; 
> Also see the TODOs in GridCacheSetFailoverAbstractSelfTest



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8761) WAL fsync at rollover should be asynchronous in LOG_ONLY and BACKGROUND modes

2018-06-13 Thread Vladislav Pyatkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov reassigned IGNITE-8761:
-

Assignee: Vladislav Pyatkov

> WAL fsync at rollover should be asynchronous in LOG_ONLY and BACKGROUND modes
> -
>
> Key: IGNITE-8761
> URL: https://issues.apache.org/jira/browse/IGNITE-8761
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Ivan Rakov
>Assignee: Vladislav Pyatkov
>Priority: Major
> Fix For: 2.6
>
>
> Transactions may periodically hang for a few seconds in LOG_ONLY or 
> BACKGROUND persistent modes. Thread dumps show that threads are hanging on 
> syncing the previous WAL segment during rollover:
> {noformat}
>   java.lang.Thread.State: RUNNABLE
>at java.nio.MappedByteBuffer.force0(MappedByteBuffer.java:-1)
>at java.nio.MappedByteBuffer.force(MappedByteBuffer.java:203)
>at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileWriteHandle.close(FileWriteAheadLogManager.java:2843)
>at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileWriteHandle.access$600(FileWriteAheadLogManager.java:2483)
>at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.rollOver(FileWriteAheadLogManager.java:1094)
> {noformat}
> Waiting for this fsync is not a necessary action to ensure crash recovery 
> guarantees. Instead, we should just perform fsyncs asynchronously and 
> ensure that they are completed prior to the next checkpoint start.
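A conceptual sketch of that idea (not the FileWriteAheadLogManager code; all names 
are made up): the fsync of a closed segment is submitted to a background executor 
at rollover, and the checkpointer waits for all pending fsyncs before starting the 
next checkpoint.
{code:java}
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncWalFsyncSketch {
    private final ExecutorService fsyncPool = Executors.newSingleThreadExecutor();
    private final Queue<CompletableFuture<Void>> pending = new ConcurrentLinkedQueue<>();

    /** Called at segment rollover: the writer thread does not block on force(). */
    void onRollover(Path closedSegment) {
        pending.add(CompletableFuture.runAsync(() -> {
            try (FileChannel ch = FileChannel.open(closedSegment, StandardOpenOption.WRITE)) {
                ch.force(true); // fsync the previous segment in the background
            }
            catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, fsyncPool));
    }

    /** Called before the next checkpoint starts: all rollover fsyncs must have completed. */
    void beforeCheckpoint() {
        for (CompletableFuture<Void> f; (f = pending.poll()) != null; )
            f.join();
    }
}
{code}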



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)