[jira] [Created] (IGNITE-5918) Adding and searching objects in index tree produce a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5918:
-

 Summary: Adding and searching objects in index tree produce a lot 
of garbage
 Key: IGNITE-5918
 URL: https://issues.apache.org/jira/browse/IGNITE-5918
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produces a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Description: 
Adding and searching objects in the index tree produces a lot of garbage, which 
can lead to long GC pauses.
Tests that stream objects with five string indexes show that the Ignite server 
spends about 15-25% of CPU time in GC.

The problem is that Ignite deserializes objects for comparison.


  was:
Adding and searching objects in the index tree produces a lot of garbage, which 
can lead to long GC pauses.
Tests that stream objects with five string indexes show that the Ignite server 
spends about 15-25% of CPU time in GC.



> Adding and searching objects in index tree produces a lot of garbage
> 
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> Adding and searching objects in the index tree produces a lot of garbage, which 
> can lead to long GC pauses.
> Tests that stream objects with five string indexes show that the Ignite server 
> spends about 15-25% of CPU time in GC.
> The problem is that Ignite deserializes objects for comparison.





[jira] [Created] (IGNITE-5921) Reduce contention for free list access

2017-08-03 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5921:
-

 Summary: Reduce contention for free list access
 Key: IGNITE-5921
 URL: https://issues.apache.org/jira/browse/IGNITE-5921
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.1
 Environment: Reduce contention for free list access.
Reporter: Mikhail Cherkasov
Assignee: Igor Seliverstov


Reduce contention for free list access.





[jira] [Assigned] (IGNITE-5918) Adding and searching objects in index tree produces a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-5918:
-

Assignee: Igor Seliverstov  (was: Mikhail Cherkasov)

> Adding and searching objects in index tree produces a lot of garbage
> 
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Igor Seliverstov
>
> Adding and searching objects in the index tree produces a lot of garbage, which 
> can lead to long GC pauses.
> Tests that stream objects with five string indexes show that the Ignite server 
> spends about 15-25% of CPU time in GC.
> The problem is that Ignite deserializes objects for comparison, while for 
> primitive types and strings the comparison can be implemented directly on the 
> serialized bytes.





[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produces a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Description: 
Adding and searching objects in the index tree produces a lot of garbage, which 
can lead to long GC pauses.
Tests that stream objects with five string indexes show that the Ignite server 
spends about 15-25% of CPU time in GC.

The problem is that Ignite deserializes objects for comparison, while for 
primitive types and strings the comparison can be implemented directly on the 
serialized bytes.


  was:
Adding and searching objects in the index tree produces a lot of garbage, which 
can lead to long GC pauses.
Tests that stream objects with five string indexes show that the Ignite server 
spends about 15-25% of CPU time in GC.

The problem is that Ignite deserializes objects for comparison.



> Adding and searching objects in index tree produces a lot of garbage
> 
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> Adding and searching objects in the index tree produces a lot of garbage, which 
> can lead to long GC pauses.
> Tests that stream objects with five string indexes show that the Ignite server 
> spends about 15-25% of CPU time in GC.
> The problem is that Ignite deserializes objects for comparison, while for 
> primitive types and strings the comparison can be implemented directly on the 
> serialized bytes.





[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produces a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Affects Version/s: 2.1

> Adding and searching objects in index tree produces a lot of garbage
> 
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Assignee: Igor Seliverstov
>
> Adding and searching objects in the index tree produces a lot of garbage, which 
> can lead to long GC pauses.
> Tests that stream objects with five string indexes show that the Ignite server 
> spends about 15-25% of CPU time in GC.
> The problem is that Ignite deserializes objects for comparison, while for 
> primitive types and strings the comparison can be implemented directly on the 
> serialized bytes.



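The byte-level comparison proposed in IGNITE-5918 can be sketched in plain Java. This is an illustrative sketch, not Ignite's actual implementation; the class and method names are hypothetical. For ASCII strings, an unsigned comparison of the UTF-8 bytes yields the same ordering as String.compareTo, so no string objects need to be materialized.

```java
import java.nio.charset.StandardCharsets;

/** Sketch of byte-level comparison for string index columns (hypothetical names). */
public class BinaryCompare {
    /** Lexicographic comparison of two byte arrays as unsigned bytes. */
    static int compareBytes(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (cmp != 0)
                return cmp;
        }
        // A shorter array that is a prefix of the longer one sorts first.
        return a.length - b.length;
    }

    /** Compares two strings via their UTF-8 bytes, without deserializing
     *  any intermediate objects beyond the byte arrays themselves. */
    static int compareUtf8(String s1, String s2) {
        return compareBytes(s1.getBytes(StandardCharsets.UTF_8),
                            s2.getBytes(StandardCharsets.UTF_8));
    }
}
```

In an index tree the two byte arrays would already be the serialized column values, so the comparison produces no garbage at all.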


[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produce a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Issue Type: Improvement  (was: Bug)

> Adding and searching objects in index tree produce a lot of garbage
> ---
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>






[jira] [Updated] (IGNITE-5756) Ignite with spark fails with class not found

2017-08-16 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5756:
--
Fix Version/s: 2.2

> Ignite with spark fails with class not found
> 
>
> Key: IGNITE-5756
> URL: https://issues.apache.org/jira/browse/IGNITE-5756
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 1.9
> Environment: Apache ignite 1.9 with CDH 5.9 and spark 1.6
>Reporter: Rajesh
>Assignee: Mikhail Cherkasov
>Priority: Minor
>  Labels: starter
> Fix For: 2.2
>
>
> I’m using Ignite 1.9 with CDH 5.9. I’m unable to run sample Spark jobs; they 
> fail with the exception below. I have followed the steps mentioned in the documentation.
> Type :help for more information.
> Spark context available as sc (master = yarn-client, app id = 
> application_1499940258814_0024).
> SQL context available as sqlContext.
> scala> import org.apache.ignite.spark._
> import org.apache.ignite.spark._
> scala> import org.apache.ignite.configuration._
> import org.apache.ignite.configuration._
> scala> import javax.cache.configuration.MutableConfiguration
> import javax.cache.configuration.MutableConfiguration
> scala> val ic = new IgniteContext(sc, "config/default-config.xml")
> class org.apache.ignite.IgniteCheckedException: Failed to create Ignite 
> component (consider adding ignite-spring module to classpath) 
> [component=SPRING, 
> cls=org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl]
> at 
> org.apache.ignite.internal.IgniteComponentType.componentException(IgniteComponentType.java:320)
> at 
> org.apache.ignite.internal.IgniteComponentType.create0(IgniteComponentType.java:296)
> at 
> org.apache.ignite.internal.IgniteComponentType.create(IgniteComponentType.java:207)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:637)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:678)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:717)
> at 
> org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:84)
> at 
> org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:84)
> at org.apache.ignite.spark.Once.apply(IgniteContext.scala:197)
> at 
> org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:137)
> at 
> org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:58)
> at 
> org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:84)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
> at $iwC$$iwC$$iwC.<init>(<console>:55)
> at $iwC$$iwC.<init>(<console>:57)
> at $iwC.<init>(<console>:59)
> at <init>(<console>:61)
> at .<init>(<console>:65)
> at .<clinit>(<console>)
> at .<init>(<console>:7)
> at .<clinit>(<console>)
> at $print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
> at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
> at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
> at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
> at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
> at 
> org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
> at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
> at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>

[jira] [Updated] (IGNITE-6044) SQL insert waits for transaction commit, but it must be executed right away

2017-08-16 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6044:
--
Description: 
The documentation says:

"Presently, DML supports the atomic mode only meaning that if there is a DML 
query that is executed as a part of an Ignite transaction then it will not be 
enlisted in the transaction's writing queue and will be executed right away."

https://apacheignite.readme.io/docs/dml#section-transactional-support

However, the data is added to the cache only after the transaction commits.

> SQL insert waits for transaction commit, but it must be executed right away
> ---
>
> Key: IGNITE-6044
> URL: https://issues.apache.org/jira/browse/IGNITE-6044
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>
> The documentation says:
> "Presently, DML supports the atomic mode only meaning that if there is a DML 
> query that is executed as a part of an Ignite transaction then it will not be 
> enlisted in the transaction's writing queue and will be executed right away."
> https://apacheignite.readme.io/docs/dml#section-transactional-support
> However, the data is added to the cache only after the transaction commits.



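The gap between the documented and the observed behavior could be reproduced with a sketch along these lines. This is illustrative only, not a verified test: it assumes a running node bound to the variable ignite, a transactional cache named "person" with a Person table, and the standard Ignite transaction and SQL query API.

```java
// Hypothetical reproducer sketch for IGNITE-6044.
try (Transaction tx = ignite.transactions().txStart()) {
    IgniteCache<Long, Person> cache = ignite.cache("person");

    // Per the quoted docs, this DML statement should NOT be enlisted in the
    // transaction's writing queue and should execute right away...
    cache.query(new SqlFieldsQuery(
        "INSERT INTO Person (_key, name) VALUES (?, ?)").setArgs(1L, "John"))
        .getAll();

    // ...so a reader outside this transaction should already see the row here.
    // The issue reports that the row becomes visible only after tx.commit().
    tx.commit();
}
```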


[jira] [Updated] (IGNITE-6044) SQL insert waits for transaction commit, but it must be executed right away

2017-08-16 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6044:
--
Environment: (was: Doc says:

""Presently, DML supports the atomic mode only meaning that if there is a DML 
query that is executed as a part of an Ignite transaction then it will not be 
enlisted in the transaction's writing queue and will be executed right away.""

https://apacheignite.readme.io/docs/dml#section-transactional-support

However the data will be added to cache only after transaction commit.)

> SQL insert waits for transaction commit, but it must be executed right away
> ---
>
> Key: IGNITE-6044
> URL: https://issues.apache.org/jira/browse/IGNITE-6044
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>






[jira] [Created] (IGNITE-6044) SQL insert waits for transaction commit, but it must be executed right away

2017-08-11 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6044:
-

 Summary: SQL insert waits for transaction commit, but it must be 
executed right away
 Key: IGNITE-6044
 URL: https://issues.apache.org/jira/browse/IGNITE-6044
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
 Environment: Doc says:

""Presently, DML supports the atomic mode only meaning that if there is a DML 
query that is executed as a part of an Ignite transaction then it will not be 
enlisted in the transaction's writing queue and will be executed right away.""

https://apacheignite.readme.io/docs/dml#section-transactional-support

However the data will be added to cache only after transaction commit.
Reporter: Mikhail Cherkasov








[jira] [Updated] (IGNITE-5940) Datastreamer does not propagate OOME

2017-08-11 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5940:
--
Description: If an OutOfMemoryError occurs on a server node, DataStreamer 
throws an exception saying it is closed instead of propagating the OOME.

>  Datastreamer does not propagate OOME
> -
>
> Key: IGNITE-5940
> URL: https://issues.apache.org/jira/browse/IGNITE-5940
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> If an OutOfMemoryError occurs on a server node, DataStreamer throws an 
> exception saying it is closed instead of propagating the OOME.





[jira] [Updated] (IGNITE-5940) Datastreamer does not propagate OOME

2017-08-11 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5940:
--
Summary:  Datastreamer does not propagate OOME  (was: DataStreamer throws 
exception as it's closed if OOM occurs on server node.)

>  Datastreamer does not propagate OOME
> -
>
> Key: IGNITE-5940
> URL: https://issues.apache.org/jira/browse/IGNITE-5940
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>






[jira] [Assigned] (IGNITE-5756) Ignite with spark fails with class not found

2017-07-14 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-5756:
-

Assignee: Mikhail Cherkasov

> Ignite with spark fails with class not found
> 
>
> Key: IGNITE-5756
> URL: https://issues.apache.org/jira/browse/IGNITE-5756
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 1.9
> Environment: Apache ignite 1.9 with CDH 5.9 and spark 1.6
>Reporter: Rajesh
>Assignee: Mikhail Cherkasov
>Priority: Minor
>  Labels: starter
>
> I’m using Ignite 1.9 with CDH 5.9. I’m unable to run sample Spark jobs; they 
> fail with the exception below. I have followed the steps mentioned in the documentation.
> Type :help for more information.
> Spark context available as sc (master = yarn-client, app id = 
> application_1499940258814_0024).
> SQL context available as sqlContext.
> scala> import org.apache.ignite.spark._
> import org.apache.ignite.spark._
> scala> import org.apache.ignite.configuration._
> import org.apache.ignite.configuration._
> scala> import javax.cache.configuration.MutableConfiguration
> import javax.cache.configuration.MutableConfiguration
> scala> val ic = new IgniteContext(sc, "config/default-config.xml")
> class org.apache.ignite.IgniteCheckedException: Failed to create Ignite 
> component (consider adding ignite-spring module to classpath) 
> [component=SPRING, 
> cls=org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl]
> at 
> org.apache.ignite.internal.IgniteComponentType.componentException(IgniteComponentType.java:320)
> at 
> org.apache.ignite.internal.IgniteComponentType.create0(IgniteComponentType.java:296)
> at 
> org.apache.ignite.internal.IgniteComponentType.create(IgniteComponentType.java:207)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:637)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:678)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:717)
> at 
> org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:84)
> at 
> org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:84)
> at org.apache.ignite.spark.Once.apply(IgniteContext.scala:197)
> at 
> org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:137)
> at 
> org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:58)
> at 
> org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:84)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
> at $iwC$$iwC$$iwC.<init>(<console>:55)
> at $iwC$$iwC.<init>(<console>:57)
> at $iwC.<init>(<console>:59)
> at <init>(<console>:61)
> at .<init>(<console>:65)
> at .<clinit>(<console>)
> at .<init>(<console>:7)
> at .<clinit>(<console>)
> at $print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
> at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
> at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
> at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
> at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
> at 
> org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
> at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
> at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
> at 
> 

[jira] [Resolved] (IGNITE-5554) ServiceProcessor may process failed reassignments in timeout thread

2017-07-07 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov resolved IGNITE-5554.
---
Resolution: Fixed

> ServiceProcessor may process failed reassignments in timeout thread
> ---
>
> Key: IGNITE-5554
> URL: https://issues.apache.org/jira/browse/IGNITE-5554
> Project: Ignite
>  Issue Type: Bug
>  Components: managed services
>Affects Versions: 1.7
>Reporter: Alexey Goncharuk
>Assignee: Mikhail Cherkasov
> Fix For: 2.1
>
>
> The following parts of GridServiceProcessor look wrong to me: 
> In GridServiceProcessor.TopologyListener#onReassignmentFailed
> {code}
> @Override public void onTimeout() {
> onReassignmentFailed(topVer, retries);
> }
> {code}
> And in GridServiceProcessor#onDeployment
> {code}
> @Override public void onTimeout() {
> .
> // Try again.
> onDeployment(dep, topVer);
> }
> {code}
> The rest of ServiceProcessor relies on the deployments being processed in a 
> single thread, while this code will be executed in the timeout processor 
> thread. Not only can it take a lot of time to reassign, which will stall the 
> timeout thread, but it may also break the service assignment logic.
> The corresponding calls should be wrapped in runnables and submitted to 
> depExe.
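The proposed fix, handing the callback off to the deployment executor instead of running it on the timeout-processor thread, can be sketched with plain JDK primitives. This is an illustrative sketch: depExe mirrors the executor named in the description, but the class and everything else here is hypothetical, not Ignite's code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

/** Sketch: route timeout callbacks onto a single-threaded deployment executor. */
public class TimeoutHandoff {
    // Single thread, mirroring the assumption that deployments are
    // processed sequentially (like Ignite's depExe).
    static final ExecutorService depExe = Executors.newSingleThreadExecutor();

    // Records which thread actually did the (stand-in) reassignment work.
    static final AtomicReference<String> processedBy = new AtomicReference<>();

    /** Instead of doing the reassignment inline (which would run on the
     *  timeout-processor thread), wrap it in a runnable and submit it. */
    static void onTimeout() {
        depExe.submit(() -> processedBy.set(Thread.currentThread().getName()));
    }

    /** Fires the callback once and waits for the executor to drain. */
    static String runOnce() {
        onTimeout();
        depExe.shutdown();
        try {
            depExe.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processedBy.get();
    }
}
```

The timeout thread only enqueues the work and returns immediately, so a slow reassignment can no longer stall it, and the single-threaded executor preserves the sequential-processing invariant.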





[jira] [Resolved] (IGNITE-5756) Ignite with spark fails with class not found

2017-07-14 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov resolved IGNITE-5756.
---
Resolution: Not A Bug

https://stackoverflow.com/questions/45100574/caused-by-java-lang-classnotfoundexception-org-apache-ignite-internal-util-spr/45100640#45100640

> Ignite with spark fails with class not found
> 
>
> Key: IGNITE-5756
> URL: https://issues.apache.org/jira/browse/IGNITE-5756
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 1.9
> Environment: Apache ignite 1.9 with CDH 5.9 and spark 1.6
>Reporter: Rajesh
>Assignee: Mikhail Cherkasov
>Priority: Minor
>  Labels: starter
>
> I’m using Ignite 1.9 with CDH 5.9. I’m unable to run sample Spark jobs; they 
> fail with the exception below. I have followed the steps mentioned in the documentation.
> Type :help for more information.
> Spark context available as sc (master = yarn-client, app id = 
> application_1499940258814_0024).
> SQL context available as sqlContext.
> scala> import org.apache.ignite.spark._
> import org.apache.ignite.spark._
> scala> import org.apache.ignite.configuration._
> import org.apache.ignite.configuration._
> scala> import javax.cache.configuration.MutableConfiguration
> import javax.cache.configuration.MutableConfiguration
> scala> val ic = new IgniteContext(sc, "config/default-config.xml")
> class org.apache.ignite.IgniteCheckedException: Failed to create Ignite 
> component (consider adding ignite-spring module to classpath) 
> [component=SPRING, 
> cls=org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl]
> at 
> org.apache.ignite.internal.IgniteComponentType.componentException(IgniteComponentType.java:320)
> at 
> org.apache.ignite.internal.IgniteComponentType.create0(IgniteComponentType.java:296)
> at 
> org.apache.ignite.internal.IgniteComponentType.create(IgniteComponentType.java:207)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:637)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:678)
> at 
> org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:717)
> at 
> org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:84)
> at 
> org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:84)
> at org.apache.ignite.spark.Once.apply(IgniteContext.scala:197)
> at 
> org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:137)
> at 
> org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:58)
> at 
> org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:84)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
> at 
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
> at $iwC$$iwC$$iwC.<init>(<console>:55)
> at $iwC$$iwC.<init>(<console>:57)
> at $iwC.<init>(<console>:59)
> at <init>(<console>:61)
> at .<init>(<console>:65)
> at .<clinit>(<console>)
> at .<init>(<console>:7)
> at .<clinit>(<console>)
> at $print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
> at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
> at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
> at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
> at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
> at 
> org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
> at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
> at 
> 

[jira] [Created] (IGNITE-5773) Scheduler throwing NullPointerException

2017-07-18 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5773:
-

 Summary: Scheduler throwing NullPointerException
 Key: IGNITE-5773
 URL: https://issues.apache.org/jira/browse/IGNITE-5773
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.0
 Environment: Oracle Hotspot 1.8_121
Ignite 2.0.0
Springmix 4.3.7

Reporter: Mikhail Cherkasov
Assignee: Alexey Goncharuk
Priority: Critical
 Fix For: 2.1


An NPE occurs while deploying a service as a cluster singleton. The Ignite 
scheduler is used as a cron-like trigger for this purpose; however, an NPE 
occurs with Ignite version 2.0.0.

Below is the log information for the exception:

2017-06-06 13:21:08 ERROR GridServiceProcessor:495 - Failed to initialize 
service (service will not be deployed): AVxezSbWNphcxa1CYjfP
java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.schedule.ScheduleFutureImpl.schedule(ScheduleFutureImpl.java:299)
at 
org.apache.ignite.internal.processors.schedule.IgniteScheduleProcessor.schedule(IgniteScheduleProcessor.java:56)
at 
org.apache.ignite.internal.IgniteSchedulerImpl.scheduleLocal(IgniteSchedulerImpl.java:109)
at 
com.mypackage.state.services.MyService.startScheduler(MyService.scala:172)
at com.mypackage.state.services.MyService.init(MyService.scala:149)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.redeploy(GridServiceProcessor.java:1097)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.processAssignment(GridServiceProcessor.java:1698)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.onSystemCacheUpdated(GridServiceProcessor.java:1372)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.access$300(GridServiceProcessor.java:117)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor$ServiceEntriesListener$1.run0(GridServiceProcessor.java:1339)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:1753)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2017-06-06 13:21:08:868 ERROR application - Unable to initialise GRID:
class org.apache.ignite.IgniteException: null
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:949)
at 
org.apache.ignite.internal.IgniteServicesImpl.deployClusterSingleton(IgniteServicesImpl.java:122)
at 
com.mypackage.state.mypackage1.InitialiseGrid$$anonfun$apply$1.apply(InitialiseGrid.scala:22)
at 
com.mypackage.state.mypackage1.InitialiseGrid$$anonfun$apply$1.apply(InitialiseGrid.scala:19)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
com.mypackage.state.mypackage1.InitialiseGrid$.apply(InitialiseGrid.scala:19)
at com.mypackage.state.Application$.main(Application.scala:54)
at com.mypackage.state.Application.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sbt.Run.invokeMain(Run.scala:67)
at sbt.Run.run0(Run.scala:61)
at sbt.Run.sbt$Run$$execute$1(Run.scala:51)
at sbt.Run$$anonfun$run$1.apply$mcV$sp(Run.scala:55)
at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
at sbt.Logger$$anon$4.apply(Logger.scala:85)
at sbt.TrapExit$App.run(TrapExit.scala:248)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException: null
at 
org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7242)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:258)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:189)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:139)
at 
org.apache.ignite.internal.AsyncSupportAdapter.saveOrGet(AsyncSupportAdapter.java:112)
at 
org.apache.ignite.internal.IgniteServicesImpl.deployClusterSingleton(IgniteServicesImpl.java:119)
... 20 more
Caused by: java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.schedule.ScheduleFutureImpl.schedule(ScheduleFutureImpl.java:299)
at 

[jira] [Assigned] (IGNITE-5767) Web console: use byte array type instead of java.lang.Object for binary JDBC types

2017-07-19 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-5767:
-

Assignee: Andrey Novikov

> Web console: use byte array type instead of java.lang.Object for binary JDBC 
> types
> --
>
> Key: IGNITE-5767
> URL: https://issues.apache.org/jira/browse/IGNITE-5767
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Denis Kholodov
>Assignee: Andrey Novikov
> Fix For: 2.2
>
>
> Schema importer should use {{[B}} query entity field type instead of 
> {{java.lang.Object}} for the following SQL types: {{BINARY}}, {{VARBINARY}} 
> and {{LONGVARBINARY}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5790) Xml config can not be used in jdbc and user code simultaneously

2017-07-20 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5790:
--
Fix Version/s: (was: 2.1)
   2.2

> Xml config can not be used in jdbc and user code simultaneously
> ---
>
> Key: IGNITE-5790
> URL: https://issues.apache.org/jira/browse/IGNITE-5790
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.2
>
>
> when a user uses the same xml config for the JDBC driver and for his own ignite 
> instance, there can be:
> java.sql.SQLException: Failed to start Ignite node.
> Caused by: class org.apache.ignite.IgniteCheckedException: Ignite instance 
> with this name has already been started: CustomeIgniteName
> because JDBC creates a separate ignite instance, while the user already has one 
> with the same name.
> Of course this can easily be worked around: the user can keep two configs, or 
> create the JDBC connection first and then use Ignition.getOrStart().
> However, it's inconvenient for the user and should be treated as a usability issue.
> I see 2 solutions:
> 1) the JDBC driver should use Ignition.getOrStart()
> 2) the JDBC driver should use the connection string as the ignite instance name.
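Solution (1) can be modeled with a get-or-start idiom: look up a running instance by name and start one only if absent. A minimal plain-Java sketch of the idea follows; the `NodeRegistry` and `Node` types are hypothetical stand-ins for `Ignition` and its instance registry, not Ignite's actual code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of Ignition.getOrStart(): a second caller with the same
// name gets the running instance back instead of "instance with this name
// has already been started".
class NodeRegistry {
    static final AtomicInteger starts = new AtomicInteger();

    static final class Node {
        final String name;
        Node(String name) { this.name = name; starts.incrementAndGet(); }
    }

    private static final ConcurrentMap<String, Node> INSTANCES = new ConcurrentHashMap<>();

    // Start a node with the given name, or return the one already running.
    static Node getOrStart(String name) {
        return INSTANCES.computeIfAbsent(name, Node::new);
    }
}
```

With this idiom, the JDBC driver and user code calling `getOrStart("CustomeIgniteName")` from the same config would share one node instead of colliding.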



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5644) Metrics collection must be removed from discovery thread.

2017-06-30 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5644:
-

 Summary: Metrics collection must be removed from discovery thread.
 Key: IGNITE-5644
 URL: https://issues.apache.org/jira/browse/IGNITE-5644
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov
 Fix For: 2.1


Cache metrics are copied in discovery worker threads. This looks a bit risky 
because slow metrics collection may stall the whole cluster. We need to 
make sure that when the heartbeat message is processed, we already have a 
metrics snapshot ready
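The snapshot approach described above can be sketched in plain Java: a background worker periodically publishes an immutable metrics snapshot, and the discovery thread only reads the latest published reference instead of collecting metrics itself. The class and field names below are illustrative, not Ignite's:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative snapshot holder: collection happens off the critical thread;
// the discovery thread only swaps in a pre-built immutable map.
class MetricsSnapshotHolder {
    private final AtomicReference<Map<String, Long>> snapshot =
        new AtomicReference<>(Collections.emptyMap());

    // Called periodically by a background metrics worker.
    void collect(Map<String, Long> liveCounters) {
        snapshot.set(Collections.unmodifiableMap(new HashMap<>(liveCounters)));
    }

    // Called from the discovery thread when a heartbeat is processed:
    // O(1), no collection work, cannot stall the cluster.
    Map<String, Long> latest() {
        return snapshot.get();
    }
}
```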



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5521) Large near caches lead to cluster instability with metrics enabled

2017-06-29 Thread Mikhail Cherkasov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068090#comment-16068090
 ] 

Mikhail Cherkasov commented on IGNITE-5521:
---

changeset: 
https://github.com/apache/ignite/commit/f6cbba3f50668bc5dedd5b3e4b3a98ab94956492

> Large near caches lead to cluster instability with metrics enabled
> --
>
> Key: IGNITE-5521
> URL: https://issues.apache.org/jira/browse/IGNITE-5521
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, general
>Affects Versions: 1.7
>Reporter: Alexey Goncharuk
>Assignee: Mikhail Cherkasov
>Priority: Critical
>  Labels: important
> Fix For: 2.1
>
>
> We have two issues in the way cache metrics are working:
> 1) Near cache size is calculated using full iteration over the near entries. 
> Perhaps this is done because near entries may be invalidated by a primary 
> node change; however, we should give a less strict metric rather than pay O(N) 
> time to compute the cache size
> 2) Cache metrics are copied in discovery worker threads. This looks a bit 
> risky because an error like the one described before may stall the whole 
> cluster. We need to make sure that when the heartbeat message is processed, 
> we already have a metrics snapshot enabled



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produce a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Description: Adding and searching objects in index tree produce a lot of 
garbage 

> Adding and searching objects in index tree produce a lot of garbage
> ---
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> Adding and searching objects in index tree produce a lot of garbage 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produces a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Summary: Adding and searching objects in index tree produces a lot of 
garbage  (was: Adding and searching objects in index tree produce a lot of 
garbage)

> Adding and searching objects in index tree produces a lot of garbage
> 
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> Adding and searching objects in index tree produce a lot of garbage 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produces a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Description: 
Adding and searching objects in index tree produces a lot of garbage and this 
can lead to big GC pauses.
Tests with data streaming of object with 5 string indexes show that ignite 
server spends about 15-25% CPU time in GC.


  was:
Adding and searching objects in index tree produces a lot of garbage and this 
can lead to big GC pauses.
Tests with data streaming of object with 5 string indexes shows that ignite 
server spends about 15-25% cpu time in GC.



> Adding and searching objects in index tree produces a lot of garbage
> 
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> Adding and searching objects in index tree produces a lot of garbage and this 
> can lead to big GC pauses.
> Tests with data streaming of object with 5 string indexes show that ignite 
> server spends about 15-25% CPU time in GC.
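One way to cut this garbage, sketched below, is to compare index entries directly on their serialized bytes instead of deserializing both sides on every tree operation. This is an assumption about the kind of fix, not Ignite's actual implementation; for strings serialized as UTF-8, unsigned lexicographic byte order matches code-point order, so no String objects need to be materialized:

```java
import java.nio.charset.StandardCharsets;

// Illustrative comparator: orders two UTF-8 encoded keys without
// deserializing them, so tree searches allocate nothing per comparison.
class BytesComparator {
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            // Compare as unsigned bytes: required for correct UTF-8 ordering.
            int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (cmp != 0)
                return cmp;
        }
        return a.length - b.length; // shorter prefix sorts first
    }
}
```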



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produces a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Description: 
Adding and searching objects in index tree produces a lot of garbage and this 
can lead to big GC pauses.
Tests with data streaming of object with 5 string indexes shows that ignite 
server spends about 15-25% cpu time in GC.


  was:Adding and searching objects in index tree produce a lot of garbage 


> Adding and searching objects in index tree produces a lot of garbage
> 
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> Adding and searching objects in index tree produces a lot of garbage and this 
> can lead to big GC pauses.
> Tests with data streaming of object with 5 string indexes shows that ignite 
> server spends about 15-25% cpu time in GC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5942) Python3 pylibmc does not work with Ignite memcache mode

2017-08-04 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5942:
-

 Summary: Python3 pylibmc does not work with Ignite memcache mode
 Key: IGNITE-5942
 URL: https://issues.apache.org/jira/browse/IGNITE-5942
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Cherkasov


Example from:
https://apacheignite.readme.io/v2.0/docs/memcached-support#python
doesn't work with Python 3.6.
There's an exception on the following call:

client.set("key", "val")

It was tested with another Python library, which works, so the problem looks to 
be in the pylibmc/libmemcached integration with Ignite.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5944) Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system

2017-08-04 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5944:
--
Description: (was: I can't run example)

> Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system
> 
>
> Key: IGNITE-5944
> URL: https://issues.apache.org/jira/browse/IGNITE-5944
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.9
>Reporter: Mikhail Cherkasov
> Attachments: IGFSExample.java, pradeep-config.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5944) Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system

2017-08-04 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5944:
-

 Summary: Ignite 1.9 can't be started with configured IGFS and 
Hadoop secondary system
 Key: IGNITE-5944
 URL: https://issues.apache.org/jira/browse/IGNITE-5944
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.9
Reporter: Mikhail Cherkasov






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5944) Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system

2017-08-04 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5944:
--
Attachment: pradeep-config.xml
IGFSExample.java

> Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system
> 
>
> Key: IGNITE-5944
> URL: https://issues.apache.org/jira/browse/IGNITE-5944
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.9
>Reporter: Mikhail Cherkasov
> Attachments: IGFSExample.java, pradeep-config.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5944) Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system

2017-08-04 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5944:
--
Description: I can't run example

> Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system
> 
>
> Key: IGNITE-5944
> URL: https://issues.apache.org/jira/browse/IGNITE-5944
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.9
>Reporter: Mikhail Cherkasov
> Attachments: IGFSExample.java, pradeep-config.xml
>
>
> I can't run example



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (IGNITE-5944) Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system

2017-08-04 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov resolved IGNITE-5944.
---
Resolution: Cannot Reproduce

It works fine with 2.0.0.

> Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system
> 
>
> Key: IGNITE-5944
> URL: https://issues.apache.org/jira/browse/IGNITE-5944
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.9
>Reporter: Mikhail Cherkasov
> Attachments: IGFSExample.java, pradeep-config.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5227) StackOverflowError in GridCacheMapEntry#checkOwnerChanged()

2017-05-16 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5227:
--
Description: 
A simple test reproducing this error:
{code}
/**
 * @throws Exception if failed.
 */
public void testBatchUnlock() throws Exception {
    startGrid(0);
    grid(0).createCache(new CacheConfiguration(DEFAULT_CACHE_NAME)
        .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

    try {
        final CountDownLatch releaseLatch = new CountDownLatch(1);

        IgniteInternalFuture fut = GridTestUtils.runAsync(new Callable() {
            @Override public Object call() throws Exception {
                IgniteCache cache = grid(0).cache(null);

                Lock lock = cache.lock("key");

                try {
                    lock.lock();

                    releaseLatch.await();
                }
                finally {
                    lock.unlock();
                }

                return null;
            }
        });

        Map putMap = new LinkedHashMap<>();

        putMap.put("key", "trigger");

        for (int i = 0; i < 10_000; i++)
            putMap.put("key-" + i, "value");

        IgniteCache asyncCache = grid(0).cache(null).withAsync();

        asyncCache.putAll(putMap);

        IgniteFuture resFut = asyncCache.future();

        Thread.sleep(1000);

        releaseLatch.countDown();

        fut.get();

        resFut.get();
    }
    finally {
        stopAllGrids();
    }
}
{code}
We should replace a recursive call with a simple iteration over the linked list.

  was:
A simple test reproducing this error:
{code}
/**
 * @throws Exception if failed.
 */
public void testBatchUnlock() throws Exception {
    startGrid(0);

    try {
        final CountDownLatch releaseLatch = new CountDownLatch(1);

        IgniteInternalFuture fut = GridTestUtils.runAsync(new Callable() {
            @Override public Object call() throws Exception {
                IgniteCache cache = grid(0).cache(null);

                Lock lock = cache.lock("key");

                try {
                    lock.lock();

                    releaseLatch.await();
                }
                finally {
                    lock.unlock();
                }

                return null;
            }
        });

        Map putMap = new LinkedHashMap<>();

        putMap.put("key", "trigger");

        for (int i = 0; i < 10_000; i++)
            putMap.put("key-" + i, "value");

        IgniteCache asyncCache = grid(0).cache(null).withAsync();

        asyncCache.putAll(putMap);

        IgniteFuture resFut = asyncCache.future();

        Thread.sleep(1000);

        releaseLatch.countDown();

        fut.get();

        resFut.get();
    }
    finally {
        stopAllGrids();
    }
}
{code}
We should replace a recursive call with a simple iteration over the linked list.


> StackOverflowError in GridCacheMapEntry#checkOwnerChanged()
> ---
>
> Key: IGNITE-5227
> URL: https://issues.apache.org/jira/browse/IGNITE-5227
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Alexey Goncharuk
>Assignee: Mikhail Cherkasov
>Priority: Critical
> Fix For: 2.1
>
>
> A simple test reproducing this error:
> {code}
> /**
>  * @throws Exception if failed.
>  */
> public void testBatchUnlock() throws Exception {
>startGrid(0);
>grid(0).createCache(new CacheConfiguration Integer>(DEFAULT_CACHE_NAME)
> .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
> try {
> final CountDownLatch releaseLatch = new CountDownLatch(1);
> IgniteInternalFuture fut = GridTestUtils.runAsync(new 
> Callable() {
> @Override public Object call() throws Exception {
> IgniteCache cache = grid(0).cache(null);
> Lock lock = cache.lock("key");
> try {
> lock.lock();
> releaseLatch.await();
> }
> finally {
> lock.unlock();
> }
> return null;
> }
> });
> Map putMap = new LinkedHashMap<>();
> putMap.put("key", "trigger");
> for (int i = 0; 

[jira] [Assigned] (IGNITE-5227) StackOverflowError in GridCacheMapEntry#checkOwnerChanged()

2017-05-16 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-5227:
-

Assignee: Mikhail Cherkasov

> StackOverflowError in GridCacheMapEntry#checkOwnerChanged()
> ---
>
> Key: IGNITE-5227
> URL: https://issues.apache.org/jira/browse/IGNITE-5227
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Alexey Goncharuk
>Assignee: Mikhail Cherkasov
>Priority: Critical
> Fix For: 2.1
>
>
> A simple test reproducing this error:
> {code}
> /**
>  * @throws Exception if failed.
>  */
> public void testBatchUnlock() throws Exception {
> startGrid(0);
> try {
> final CountDownLatch releaseLatch = new CountDownLatch(1);
> IgniteInternalFuture fut = GridTestUtils.runAsync(new 
> Callable() {
> @Override public Object call() throws Exception {
> IgniteCache cache = grid(0).cache(null);
> Lock lock = cache.lock("key");
> try {
> lock.lock();
> releaseLatch.await();
> }
> finally {
> lock.unlock();
> }
> return null;
> }
> });
> Map putMap = new LinkedHashMap<>();
> putMap.put("key", "trigger");
> for (int i = 0; i < 10_000; i++)
> putMap.put("key-" + i, "value");
> IgniteCache asyncCache = 
> grid(0).cache(null).withAsync();
> asyncCache.putAll(putMap);
> IgniteFuture resFut = asyncCache.future();
> Thread.sleep(1000);
> releaseLatch.countDown();
> fut.get();
> resFut.get();
> }
> finally {
> stopAllGrids();
> }
> {code}
> We should replace a recursive call with a simple iteration over the linked 
> list.
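The suggested fix — replacing the recursive call with iteration over the linked list — follows a standard pattern: the self-call on the next entry becomes a loop over the `next` pointer, so a long chain like the 10,000-key batch above no longer consumes one stack frame per entry. A minimal sketch, where `Entry` is a hypothetical stand-in rather than `GridCacheMapEntry`:

```java
// Hypothetical singly linked list of entries whose owners must be checked.
class Entry {
    final int id;
    Entry next;
    Entry(int id) { this.id = id; }
}

class OwnerCheck {
    // Recursive shape: one stack frame per entry, so a chain of many
    // thousands of entries throws StackOverflowError.
    static int checkRecursive(Entry e) {
        if (e == null)
            return 0;
        return 1 + checkRecursive(e.next);
    }

    // Iterative shape: constant stack depth regardless of chain length.
    static int checkIterative(Entry e) {
        int checked = 0;
        for (Entry cur = e; cur != null; cur = cur.next)
            checked++;
        return checked;
    }
}
```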



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5575) Ignite returns wrong CacheMetrics for cluster group

2017-06-22 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5575:
-

 Summary: Ignite returns wrong CacheMetrics for cluster group
 Key: IGNITE-5575
 URL: https://issues.apache.org/jira/browse/IGNITE-5575
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.1
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5575) Ignite returns wrong CacheMetrics for cluster group

2017-06-22 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5575:
--
Attachment: MultiNodeMetricsTest.java
ignite-cfg.xml
ignite-cfg2.xml

> Ignite returns wrong CacheMetrics for cluster group
> ---
>
> Key: IGNITE-5575
> URL: https://issues.apache.org/jira/browse/IGNITE-5575
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Attachments: ignite-cfg2.xml, ignite-cfg.xml, 
> MultiNodeMetricsTest.java
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5575) Ignite returns wrong CacheMetrics for cluster group

2017-06-22 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5575:
--
Description: 
CacheMetrics metrics = 
cache.metrics(cluster.forCacheNodes(DEFAULT_CACHE_NAME)); returns cache metrics 
only for the local node.
Looks like cache metrics exchange is broken.

> Ignite returns wrong CacheMetrics for cluster group
> ---
>
> Key: IGNITE-5575
> URL: https://issues.apache.org/jira/browse/IGNITE-5575
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Attachments: ignite-cfg2.xml, ignite-cfg.xml, 
> MultiNodeMetricsTest.java
>
>
> CacheMetrics metrics = 
> cache.metrics(cluster.forCacheNodes(DEFAULT_CACHE_NAME)); returns cache 
> metrics only for the local node.
> Looks like cache metrics exchange is broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5554) ServiceProcessor may process failed reassignments in timeout thread

2017-06-20 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-5554:
-

Assignee: Mikhail Cherkasov

> ServiceProcessor may process failed reassignments in timeout thread
> ---
>
> Key: IGNITE-5554
> URL: https://issues.apache.org/jira/browse/IGNITE-5554
> Project: Ignite
>  Issue Type: Bug
>  Components: managed services
>Affects Versions: 1.7
>Reporter: Alexey Goncharuk
>Assignee: Mikhail Cherkasov
> Fix For: 2.1
>
>
> The following parts of GridServiceProcessor look wrong to me: 
> In GridServiceProcessor.TopologyListener#onReassignmentFailed
> {code}
> @Override public void onTimeout() {
> onReassignmentFailed(topVer, retries);
> }
> {code}
> And in GridServiceProcessor#onDeployment
> {code}
> @Override public void onTimeout() {
> .
> // Try again.
> onDeployment(dep, topVer);
> }
> {code}
> The rest of ServiceProcessor relies on the deployments being processed in a 
> single thread, while this code will be executed in the timeout processor 
> thread. Not only can it take a lot of time to reassign, which will stall the 
> timeout thread, but it may also break the service assignment logic.
> The corresponding calls should be wrapped in runnables and submitted to the 
> depExe.
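The proposed fix can be sketched as follows: the timeout callback does nothing but submit the real work to a dedicated single-threaded executor. Here `depExe` is modeled by a plain `ExecutorService`; this illustrates the pattern only, not Ignite's actual code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

class ReassignmentScheduler {
    // A single-threaded executor preserves the "deployments processed in a
    // single thread" invariant the rest of the processor relies on.
    private final ExecutorService depExe = Executors.newSingleThreadExecutor();

    final AtomicReference<String> processedBy = new AtomicReference<>();
    final CountDownLatch done = new CountDownLatch(1);

    // Timeout callback: cheap and non-blocking, so the timeout processor
    // thread is never stalled by reassignment work.
    void onTimeout() {
        depExe.submit(() -> {
            // The expensive reassignment logic would run here.
            processedBy.set(Thread.currentThread().getName());
            done.countDown();
        });
    }

    void shutdown() { depExe.shutdown(); }
}
```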



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5521) Large near caches lead to cluster instability with metrics enabled

2017-06-20 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-5521:
-

Assignee: Mikhail Cherkasov

> Large near caches lead to cluster instability with metrics enabled
> --
>
> Key: IGNITE-5521
> URL: https://issues.apache.org/jira/browse/IGNITE-5521
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, general
>Affects Versions: 1.7
>Reporter: Alexey Goncharuk
>Assignee: Mikhail Cherkasov
>Priority: Critical
>  Labels: important
> Fix For: 2.1
>
>
> We have two issues in the way cache metrics are working:
> 1) Near cache size is calculated using full iteration over the near entries. 
> Perhaps this is done because near entries may be invalidated by a primary 
> node change; however, we should give a less strict metric rather than pay O(N) 
> time to compute the cache size
> 2) Cache metrics are copied in discovery worker threads. This looks a bit 
> risky because an error like the one described before may stall the whole 
> cluster. We need to make sure that when the heartbeat message is processed, 
> we already have a metrics snapshot enabled



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5030) Support Spring @Cacheable(sync=true) annotation

2017-05-25 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-5030:
-

Assignee: Mikhail Cherkasov

> Support Spring @Cacheable(sync=true) annotation
> ---
>
> Key: IGNITE-5030
> URL: https://issues.apache.org/jira/browse/IGNITE-5030
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Mikhail Cherkasov
>
> @Cacheable(sync=true) guarantee that only one thread (across the cluster) 
> will fetch value for a key on get, even in case of some simultaneous gets.
> So, 
> org.apache.ignite.cache.spring.SpringCache#get(java.lang.Object, 
> java.util.concurrent.Callable) 
> should be implemented to provide such guarantee.
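Within a single node, the guarantee can be sketched with a per-key future: the first caller installs the computation, and later callers wait on the same future, so the loader runs exactly once. This is a local-JVM illustration only; the cross-cluster guarantee additionally needs a distributed lock, and SpringCache's real implementation may differ:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sync get-or-load: computeIfAbsent guarantees the loader is
// invoked at most once per key; concurrent callers join the same future.
class SyncCache<K, V> {
    private final ConcurrentMap<K, CompletableFuture<V>> map = new ConcurrentHashMap<>();

    V get(K key, Callable<V> loader) {
        return map.computeIfAbsent(key, k -> {
            try {
                return CompletableFuture.completedFuture(loader.call());
            }
            catch (Exception e) {
                throw new RuntimeException(e);
            }
        }).join();
    }
}
```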



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5364) Remove contention on DS creation or removing

2017-05-31 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5364:
-

 Summary: Remove contention on DS creation or removing
 Key: IGNITE-5364
 URL: https://issues.apache.org/jira/browse/IGNITE-5364
 Project: Ignite
  Issue Type: Improvement
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov
 Fix For: 2.1


All DSs are stored in one Map, which itself is stored in utilityCache; this 
creates high contention on DS creation or removal, since it requires a lock on the 
key and manipulation of the Map under that lock. So all threads in the cluster 
must wait for this lock to create or remove a DS.
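The contention described above — one global map guarded by one lock — can be removed by keying each data structure individually, so that creating or removing different structures locks different keys. A plain-Java sketch of the per-name approach (illustrative only, not Ignite's actual fix):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative registry: instead of one Map mutated under one lock, each
// data structure has its own entry, so operations on different names only
// contend within the concurrent map's internal bins.
class DataStructureRegistry {
    static final class DsInfo {
        final String name;
        DsInfo(String name) { this.name = name; }
    }

    private final ConcurrentMap<String, DsInfo> infos = new ConcurrentHashMap<>();

    // Create the structure if absent; concurrent creators of the SAME name
    // still get one instance, creators of DIFFERENT names don't block each other.
    DsInfo create(String name) {
        return infos.computeIfAbsent(name, DsInfo::new);
    }

    boolean remove(String name) {
        return infos.remove(name) != null;
    }
}
```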



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5461) Visor show wrong statistics for off heap memory

2017-06-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5461:
--
Attachment: visor-config.xml
CreateCache.java

> Visor show wrong statistics for off heap memory
> ---
>
> Key: IGNITE-5461
> URL: https://issues.apache.org/jira/browse/IGNITE-5461
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Kuznetsov
> Attachments: CreateCache.java, visor-config.xml
>
>
> Visor shows that data is stored in Heap, while the data is in off heap:
> Total: 1
> Heap: 1
> Off-Heap: 0
> Off-Heap Memory: 0
> while:
> cache.localPeek("Key1", ONHEAP) == null
> cache.localPeek("Key1", OFFHEAP) == Value
> reproducer is attached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5461) Visor shows wrong statistics for off heap memory

2017-06-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5461:
--
Fix Version/s: 2.1

> Visor shows wrong statistics for off heap memory
> 
>
> Key: IGNITE-5461
> URL: https://issues.apache.org/jira/browse/IGNITE-5461
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Kuznetsov
> Fix For: 2.1
>
> Attachments: CreateCache.java, visor-config.xml
>
>
> Visor shows that data is stored in Heap, while the data is in off heap:
> Total: 1
> Heap: 1
> Off-Heap: 0
> Off-Heap Memory: 0
> while:
> cache.localPeek("Key1", ONHEAP) == null
> cache.localPeek("Key1", OFFHEAP) == Value
> reproducer is attached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5461) Visor shows wrong statistics for off heap memory

2017-06-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5461:
--
Summary: Visor shows wrong statistics for off heap memory  (was: Visor show 
wrong statistics for off heap memory)

> Visor shows wrong statistics for off heap memory
> 
>
> Key: IGNITE-5461
> URL: https://issues.apache.org/jira/browse/IGNITE-5461
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Kuznetsov
> Attachments: CreateCache.java, visor-config.xml
>
>
> Visor shows that data is stored in Heap, while the data is in off heap:
> Total: 1
> Heap: 1
> Off-Heap: 0
> Off-Heap Memory: 0
> while:
> cache.localPeek("Key1", ONHEAP) == null
> cache.localPeek("Key1", OFFHEAP) == Value
> reproducer is attached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5461) Visor show wrong statistics for off heap memory

2017-06-09 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5461:
-

 Summary: Visor show wrong statistics for off heap memory
 Key: IGNITE-5461
 URL: https://issues.apache.org/jira/browse/IGNITE-5461
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Cherkasov
Assignee: Alexey Kuznetsov


Visor shows that data is stored in Heap, while the data is in off heap:

Total: 1
Heap: 1
Off-Heap: 0
Off-Heap Memory: 0

while:
cache.localPeek("Key1", ONHEAP) == null
cache.localPeek("Key1", OFFHEAP) == Value

reproducer is attached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5461) Visor shows wrong statistics for off heap memory

2017-06-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5461:
--
Affects Version/s: 2.0

> Visor shows wrong statistics for off heap memory
> 
>
> Key: IGNITE-5461
> URL: https://issues.apache.org/jira/browse/IGNITE-5461
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Kuznetsov
> Attachments: CreateCache.java, visor-config.xml
>
>
> Visor shows that data is stored in Heap, while the data is in off heap:
> Total: 1
> Heap: 1
> Off-Heap: 0
> Off-Heap Memory: 0
> while:
> cache.localPeek("Key1", ONHEAP) == null
> cache.localPeek("Key1", OFFHEAP) == Value
> reproducer is attached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-5461) Visor shows wrong statistics for off heap memory

2017-06-09 Thread Mikhail Cherkasov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044691#comment-16044691
 ] 

Mikhail Cherkasov commented on IGNITE-5461:
---

"cache.metrics().getOffHeapEntriesCount();" returns 0, so it looks like the 
problem is not related to Visor.

> Visor shows wrong statistics for off heap memory
> 
>
> Key: IGNITE-5461
> URL: https://issues.apache.org/jira/browse/IGNITE-5461
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Kuznetsov
> Fix For: 2.1
>
> Attachments: CreateCache.java, visor-config.xml
>
>
> Visor shows that data is stored in Heap, while the data is in off heap:
> Total: 1
> Heap: 1
> Off-Heap: 0
> Off-Heap Memory: 0
> while:
> cache.localPeek("Key1", ONHEAP) == null
> cache.localPeek("Key1", OFFHEAP) == Value
> reproducer is attached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5484) DataStructuresCacheKey and DataStructureInfoKey should have GridCacheInternal marker

2017-06-15 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5484:
--
Description: 
All internal keys and objects stored in a cache should have the GridCacheInternal 
marker to avoid eviction; also, the processing of these internal keys and objects 
can differ from that of regular user keys and objects.


  was:DataStructuresCacheKey and DataStructureInfoKey should have 
GridCacheInternal marker.


> DataStructuresCacheKey and DataStructureInfoKey should have GridCacheInternal 
> marker
> 
>
> Key: IGNITE-5484
> URL: https://issues.apache.org/jira/browse/IGNITE-5484
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> All internal keys and objects stored in a cache should have the GridCacheInternal 
> marker to avoid eviction; also, the processing of these internal keys and objects 
> can differ from that of regular user keys and objects.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5484) DataStructuresCacheKey and DataStructureInfoKey should have GridCacheInternal marker

2017-06-14 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5484:
-

 Summary: DataStructuresCacheKey and DataStructureInfoKey should 
have GridCacheInternal marker
 Key: IGNITE-5484
 URL: https://issues.apache.org/jira/browse/IGNITE-5484
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov


DataStructuresCacheKey and DataStructureInfoKey should have GridCacheInternal 
marker



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5364) Remove contention on DataStructure creation or removing

2017-05-31 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5364:
--
Summary: Remove contention on DataStructure creation or removing  (was: 
Remove contention on DS creation or removing)

> Remove contention on DataStructure creation or removing
> ---
>
> Key: IGNITE-5364
> URL: https://issues.apache.org/jira/browse/IGNITE-5364
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.1
>
>
> All data structures are stored in one Map which itself is stored in utilityCache. 
> This causes high contention on data structure creation and removal: each operation 
> requires a lock on the key and manipulation of the Map under that lock, so every 
> thread in the cluster must wait for this lock to create or remove a data structure.
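The contention pattern described above can be sketched in plain Java. This is an illustration only, not Ignite internals: `DsRegistry`, `createCoarse`, and `createFine` are hypothetical names. It contrasts a single map behind one monitor (every caller serializes) with `ConcurrentHashMap`, which locks only the bin of the affected key.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DsRegistry {
    // Pattern described in the issue: all structures live in one map guarded
    // by a single lock, so every create/remove in the cluster serializes here.
    private final Map<String, Object> all = new HashMap<>();

    synchronized Object createCoarse(String name) {
        return all.computeIfAbsent(name, n -> new Object());
    }

    // Finer-grained alternative: ConcurrentHashMap.computeIfAbsent locks only
    // the bin of the affected key, so creating or removing unrelated
    // structures does not block other threads.
    private final ConcurrentHashMap<String, Object> perKey = new ConcurrentHashMap<>();

    Object createFine(String name) {
        return perKey.computeIfAbsent(name, n -> new Object());
    }

    void removeFine(String name) {
        perKey.remove(name);
    }
}
```

Both methods return the same instance for repeated calls with the same name; the difference is only in how much of the registry each call locks.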



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-6437) DataStructure can not be obtained on client if it is created on server node.

2017-09-19 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6437:
-

 Summary: DataStructure can not be obtained on client if it is 
created on server node.
 Key: IGNITE-6437
 URL: https://issues.apache.org/jira/browse/IGNITE-6437
 Project: Ignite
  Issue Type: Bug
  Components: data structures
Affects Versions: 2.1
Reporter: Mikhail Cherkasov
Priority: Critical
 Fix For: 2.3


DataStructure can not be obtained on client if it is created on server node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6437) DataStructure can not be obtained on client if it is created on server node.

2017-09-19 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6437:
--
Attachment: NoQueueOnClientNodeTest.java

> DataStructure can not be obtained on client if it is created on server node.
> 
>
> Key: IGNITE-6437
> URL: https://issues.apache.org/jira/browse/IGNITE-6437
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Priority: Critical
> Fix For: 2.3
>
> Attachments: NoQueueOnClientNodeTest.java
>
>
> DataStructure can not be obtained on client if it is created on server node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction

2017-10-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-6580:
-

Assignee: Alexey Goncharuk

> Cluster can fail during concurrent re-balancing and cache destruction
> -
>
> Key: IGNITE-6580
> URL: https://issues.apache.org/jira/browse/IGNITE-6580
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.2
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Goncharuk
> Fix For: 2.3
>
>
> The following exceptions can be observed during concurrent re-balancing and 
> cache destruction:
> 1.
> {noformat}
> [00:01:27,135][ERROR][sys-#4375%null%][GridDhtPreloader] Partition eviction 
> failed, this can cause grid hang.
> org.apache.ignite.IgniteException: Runtime failure on search row: 
> Row@6be51c3d[ **REMOVED SENSITIVE INFORMATION** ]
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:226)
>  ~[ignite-indexing-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:523)
>  ~[ignite-indexing-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:416)
>  ~[ignite-indexing-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:574)
>  ~[ignite-indexing-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2172)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:451)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1462)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1425)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:383)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3224)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
>  [ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
>  [ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629)
>  [ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
>  [ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.1.4.jar:2.1.4]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_131]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
> Caused by: java.lang.IllegalStateException: Item not found: 1
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.findIndirectItemIndex(DataPageIO.java:346)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.getDataOffset(DataPageIO.java:446)
>  ~[ignite-core-2.1.4.jar:2.1.4]
>   at 
> 

[jira] [Updated] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction

2017-10-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6580:
--
Priority: Major  (was: Critical)


[jira] [Updated] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction

2017-10-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6580:
--
Affects Version/s: 2.2


[jira] [Updated] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction

2017-10-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6580:
--
Fix Version/s: 2.3


[jira] [Commented] (IGNITE-5195) DataStreamer can fails if non-data node enter\leave the grid.

2017-10-16 Thread Mikhail Cherkasov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205695#comment-16205695
 ] 

Mikhail Cherkasov commented on IGNITE-5195:
---

I set the priority to major because many users have faced this problem, and in an 
active cluster where a lot of clients join and leave, it makes the data streamer 
useless.

> DataStreamer can fails if non-data node enter\leave the grid.
> -
>
> Key: IGNITE-5195
> URL: https://issues.apache.org/jira/browse/IGNITE-5195
> Project: Ignite
>  Issue Type: Bug
>  Components: streaming
>Affects Versions: 1.8
>Reporter: Andrew Mashenkov
>Assignee: Evgenii Zhuravlev
> Fix For: 2.4
>
> Attachments: DataStreamerFailure.java
>
>
> DataStreamer fails with a "too many remaps" message even if only a non-data node 
> enters\leaves the topology.
> PFA repro attached. 
> It seems we should ignore topology changes when a non-data node enters\leaves the 
> grid. 
> We also need to make sure that remapping doesn't occur when there are no data 
> nodes left in the grid, as remapping then makes no sense and a more suitable 
> message should be logged.
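The proposed behavior can be sketched as a small decision helper. This is purely illustrative and not the actual DataStreamer code; `RemapPolicy` and `remapNeeded` are hypothetical names. Client (non-data) nodes hold no partitions, so their join/leave should not change key placement.

```java
import java.util.Set;

public class RemapPolicy {
    // Decide whether a topology change can affect key-to-node mapping.
    // nodeIsDataNode: whether the node that joined/left holds cache data.
    // remainingDataNodes: data nodes still present in the grid.
    static boolean remapNeeded(boolean nodeIsDataNode, Set<String> remainingDataNodes) {
        if (!nodeIsDataNode)
            return false; // non-data node: partition mapping is unchanged
        if (remainingDataNodes.isEmpty())
            return false; // nowhere to remap; fail fast with a clear message instead
        return true;      // data node changed: batches must be remapped
    }
}
```

With such a check the streamer would only count remaps against its limit when a data node actually enters or leaves.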



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5195) DataStreamer can fails if non-data node enter\leave the grid.

2017-10-16 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5195:
--
Priority: Major  (was: Minor)

> DataStreamer can fails if non-data node enter\leave the grid.
> -
>
> Key: IGNITE-5195
> URL: https://issues.apache.org/jira/browse/IGNITE-5195
> Project: Ignite
>  Issue Type: Bug
>  Components: streaming
>Affects Versions: 1.8
>Reporter: Andrew Mashenkov
>Assignee: Evgenii Zhuravlev
> Fix For: 2.4
>
> Attachments: DataStreamerFailure.java
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6639) Ignite node can try to join to itself

2017-10-16 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6639:
-

 Summary: Ignite node can try to join to itself
 Key: IGNITE-6639
 URL: https://issues.apache.org/jira/browse/IGNITE-6639
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
  Components: general
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov
 Fix For: 2.4






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-6280) Cassandra ignores AffinityKeyMapped annotation in parent classes.

2017-10-16 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-6280:
-

Assignee: Mikhail Cherkasov

> Cassandra ignores AffinityKeyMapped annotation in parent classes.
> -
>
> Key: IGNITE-6280
> URL: https://issues.apache.org/jira/browse/IGNITE-6280
> Project: Ignite
>  Issue Type: Bug
>  Components: cassandra
>Affects Versions: 2.1
>Reporter: Andrew Mashenkov
>Assignee: Mikhail Cherkasov
> Attachments: CassandraConfigTest.java
>
>
> By default, using the @AffinityKeyMapped annotation forces Ignite to override the 
> user's _keyPersistence_ configuration, which may cause confusing results.
> PFA repro attached.
> h3. Description
> 1. Suppose there are two keys, A and B, that have the same fields with one 
> difference: key A has its affinity key in a parent class. So, it looks like this.
> {code}
> class BaseKey {
> @AffinityKeyMapped
>  Object affinityKey
> }
> {code}
> {code}
> class A extends BaseKey {
>  int id;
> }
> {code}
> {code}
> class B {
> @AffinityKeyMapped
>  Object affinityKey;
>  int uid;
> }
> {code}
> 2. Suppose we define a different affinity mapping for the Cassandra store, which 
> looks like a valid case
> {code:xml}
> 
> 
>  
>  
>
> 
> {code}
> 3. These similar cases behave differently, which confuses users.
> For key A this works fine and the expected DDL is generated.
> For key B we get different DDL, as Ignite removes the "_uid_" field from 
> "_partitionKey_".
> So, we should either not allow Ignite to override the key mapping, or force 
> Ignite to check whether parent classes have the @AffinityKeyMapped annotation.
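The superclass check suggested here could look roughly like the following. The annotation below is a local stand-in for Ignite's @AffinityKeyMapped, and all class and method names are illustrative; the point is that scanning must walk `getSuperclass()` rather than only `getDeclaredFields()` of the concrete key class.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class AffinityScan {
    // Hypothetical stand-in for Ignite's @AffinityKeyMapped.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface AffinityKeyMapped {}

    static class BaseKey { @AffinityKeyMapped Object affinityKey; }
    static class A extends BaseKey { int id; }

    /** Collects annotated fields from the class and all of its superclasses. */
    static List<Field> affinityFields(Class<?> cls) {
        List<Field> res = new ArrayList<>();
        for (Class<?> c = cls; c != null && c != Object.class; c = c.getSuperclass())
            for (Field f : c.getDeclaredFields())
                if (f.isAnnotationPresent(AffinityKeyMapped.class))
                    res.add(f);
        return res;
    }

    public static void main(String[] args) {
        // A.class.getDeclaredFields() alone would miss the inherited field;
        // walking the hierarchy finds it.
        System.out.println(affinityFields(A.class).size());
    }
}
```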



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6654) Ignite client can hang in case IgniteOOM on server

2017-10-17 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6654:
-

 Summary: Ignite client can hang in case IgniteOOM on server
 Key: IGNITE-6654
 URL: https://issues.apache.org/jira/browse/IGNITE-6654
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
  Components: cache, general
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov


Ignite client can hang in case IgniteOOM on server



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6323) Ignite node not stopping after segmentation

2017-09-08 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6323:
--
Affects Version/s: 2.1

> Ignite node not stopping after segmentation
> ---
>
> Key: IGNITE-6323
> URL: https://issues.apache.org/jira/browse/IGNITE-6323
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
> Fix For: 2.3
>
>
> The problem was found by a user and described in user list:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-node-not-stopping-after-segmentation-td16773.html
> copy of the message:
> """
> I have follow up question on segmentation from my previous post. The issue I 
> am trying to resolve is that ignite node does not stop on the segmented node. 
> Here is brief information on my application.
>  
> I have embedded Ignite into my application and using it for distributed 
> caches. I am running Ignite cluster in my lab environment. I have two nodes 
> in the cluster. In current setup, the application receives about 1 million 
> data points every minute. I am putting the data into ignite distributed cache 
> using data streamer. This way data gets distributed among members and each 
> member further processes the data. The application also uses other 
> distributed caches while processing the data.
>  
> When a member node gets segmented, it does not stop. I get BEFORE_NODE_STOP 
> event but nothing happens after that. Node hangs in some unstable state. I am 
> suspecting that when node is trying to stop there are data in buffers of 
> streamer which needs sent to other members. Because the node is segmented, it 
> is not able to flush/drop the data. The application is also trying to access 
> caches while node is stopping, that also causes deadlock situation.
>  
> I have tried few things to make it work,
> Letting node stop after segmentation which is the default behavior. But the 
> node gets stuck.
> Setting segmentation policy to NOOP. Plan was to stop the node manually after 
> some clean up.
> This way when I get segmented event, I first try to close data streamer 
> instance and cache instance. But when I trying to close data streamer, the 
> close() call gets stuck. I was calling close with true to drop everything is 
> streamer. But that did not help.
> On receiving segmentation event, restrict the application from accessing any 
> caches. Then stop the node. Even then the node gets stuck.
>  
> I have attached few thread dumps here. In each of them one thread is trying 
> to stop the node, but gets into waiting state.
> """



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6323) Ignite node not stopping after segmentation

2017-09-08 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6323:
-

 Summary: Ignite node not stopping after segmentation
 Key: IGNITE-6323
 URL: https://issues.apache.org/jira/browse/IGNITE-6323
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Cherkasov


The problem was found by a user and described in user list:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-node-not-stopping-after-segmentation-td16773.html





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6323) Ignite node not stopping after segmentation

2017-09-08 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6323:
--
Attachment: thread-dump-9-1.txt
thread-dump-9-2.txt
thread-dump-9-4.txt

> Ignite node not stopping after segmentation
> ---
>
> Key: IGNITE-6323
> URL: https://issues.apache.org/jira/browse/IGNITE-6323
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
> Fix For: 2.3
>
> Attachments: thread-dump-9-1.txt, thread-dump-9-2.txt, 
> thread-dump-9-4.txt
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6323) Ignite node not stopping after segmentation

2017-09-08 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6323:
--
Component/s: general

> Ignite node not stopping after segmentation
> ---
>
> Key: IGNITE-6323
> URL: https://issues.apache.org/jira/browse/IGNITE-6323
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
> Fix For: 2.3
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6323) Ignite node not stopping after segmentation

2017-09-08 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6323:
--
Fix Version/s: 2.3

> Ignite node not stopping after segmentation
> ---
>
> Key: IGNITE-6323
> URL: https://issues.apache.org/jira/browse/IGNITE-6323
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
> Fix For: 2.3
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6120) Web Console: Propagate "lazy" flag on Query screen

2017-09-06 Thread Mikhail Cherkasov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155479#comment-16155479
 ] 

Mikhail Cherkasov commented on IGNITE-6120:
---

Why was it closed if there are no changes in the master repo?

> Web Console: Propagate "lazy" flag on Query screen
> --
>
> Key: IGNITE-6120
> URL: https://issues.apache.org/jira/browse/IGNITE-6120
> Project: Ignite
>  Issue Type: Task
>  Components: sql, wizards
>Reporter: Alexey Kuznetsov
>Assignee: Pavel Konstantinov
> Fix For: 2.3
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4750) SQL: Support GROUP_CONCAT function

2017-09-06 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-4750:
--
Component/s: sql

> SQL: Support GROUP_CONCAT function
> --
>
> Key: IGNITE-4750
> URL: https://issues.apache.org/jira/browse/IGNITE-4750
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Denis Magda
>
> GROUP_CONCAT function is not supported at the moment. Makes sense to fill 
> this gap:
> http://apache-ignite-users.70518.x6.nabble.com/GROUP-CONCAT-function-is-unsupported-td10757.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6352) ignite-indexing is not compatible to OSGI

2017-09-12 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6352:
-

 Summary: ignite-indexing is not compatible to OSGI
 Key: IGNITE-6352
 URL: https://issues.apache.org/jira/browse/IGNITE-6352
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
Reporter: Mikhail Cherkasov
 Fix For: 2.3


The issue was reported by a user; here is his message:

When trying to start Ignite in an OSGi context I get the following exception:
Caused by: java.lang.NoClassDefFoundError: org/h2/server/Service
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at 
org.apache.ignite.internal.IgniteComponentType.inClassPath(IgniteComponentType.java:153)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1832)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1648)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1076)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:506)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:482)
at org.apache.ignite.Ignition.start(Ignition.java:304)

That is because the h2 bundle (jar) is properly OSGified, but does NOT export 
the package org.h2.server, so it isn't visible to my code's classloader.
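A hedged sketch of the kind of manifest change the user is describing: an OSGi-friendly h2 bundle would need to list the missing package in its `Export-Package` header so that other bundles' classloaders can resolve it (the version number below is illustrative, not taken from the report).

```
Export-Package: org.h2;version="1.4.195",
 org.h2.server;version="1.4.195"
```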



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (IGNITE-6323) Ignite node not stopping after segmentation

2017-09-12 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov resolved IGNITE-6323.
---
   Resolution: Cannot Reproduce
Fix Version/s: (was: 2.3)

The issue can't be reproduced with Ignite 2.1

> Ignite node not stopping after segmentation
> ---
>
> Key: IGNITE-6323
> URL: https://issues.apache.org/jira/browse/IGNITE-6323
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.0
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Attachments: thread-dump-9-1.txt, thread-dump-9-2.txt, 
> thread-dump-9-4.txt
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-6323) Ignite node not stopping after segmentation

2017-09-12 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-6323:
-

Assignee: Mikhail Cherkasov

> Ignite node not stopping after segmentation
> ---
>
> Key: IGNITE-6323
> URL: https://issues.apache.org/jira/browse/IGNITE-6323
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.0
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.3
>
> Attachments: thread-dump-9-1.txt, thread-dump-9-2.txt, 
> thread-dump-9-4.txt
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6323) Ignite node not stopping after segmentation

2017-09-12 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6323:
--
Affects Version/s: (was: 2.1)
   2.0

> Ignite node not stopping after segmentation
> ---
>
> Key: IGNITE-6323
> URL: https://issues.apache.org/jira/browse/IGNITE-6323
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.0
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.3
>
> Attachments: thread-dump-9-1.txt, thread-dump-9-2.txt, 
> thread-dump-9-4.txt
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5918) Adding and searching objects in index tree produces a lot of garbage

2017-09-12 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-5918:
--
Fix Version/s: 2.3

> Adding and searching objects in index tree produces a lot of garbage
> 
>
> Key: IGNITE-5918
> URL: https://issues.apache.org/jira/browse/IGNITE-5918
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Assignee: Igor Seliverstov
>  Labels: performance
> Fix For: 2.3
>
>
> Adding and searching objects in index tree produces a lot of garbage and this 
> can lead to big GC pauses.
> Tests with data streaming of an object with 5 string indexes show that the 
> Ignite server spends about 15-25% of CPU time in GC.
> The problem is that Ignite deserializes objects for comparison, while for 
> primitive types and strings the comparison could be implemented over the raw 
> bytes.
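The byte-level comparison idea above can be sketched as follows. This is a self-contained illustration, not Ignite's actual index code: for ASCII strings, comparing the serialized UTF-8 bytes as unsigned values yields the same ordering as comparing the deserialized strings, so no per-comparison String objects need to be allocated.

```java
import java.nio.charset.StandardCharsets;

public class ByteCompare {
    // Compare two serialized UTF-8 strings lexicographically by unsigned bytes,
    // without deserializing them back into String objects.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (cmp != 0)
                return cmp;
        }
        // A shorter array is a prefix of the longer one and sorts first.
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        byte[] x = "apple".getBytes(StandardCharsets.UTF_8);
        byte[] y = "banana".getBytes(StandardCharsets.UTF_8);

        // For ASCII strings the byte-wise sign agrees with String.compareTo.
        System.out.println(Integer.signum(compareBytes(x, y))
            == Integer.signum("apple".compareTo("banana"))); // prints "true"
    }
}
```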



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6360) NPE occurs if object with null indexed field is added

2017-09-12 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6360:
-

 Summary: NPE occurs if object with null indexed field is added
 Key: IGNITE-6360
 URL: https://issues.apache.org/jira/browse/IGNITE-6360
 Project: Ignite
  Issue Type: Bug
 Environment: NPE occurs if object with null indexed field is added
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov
 Fix For: 2.3






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6528) Warning if no table for BinaryObject

2017-09-28 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6528:
--
Fix Version/s: 2.3

> Warning if no table for BinaryObject
> 
>
> Key: IGNITE-6528
> URL: https://issues.apache.org/jira/browse/IGNITE-6528
> Project: Ignite
>  Issue Type: Improvement
>  Components: binary, cache, sql
>Reporter: Mikhail Cherkasov
> Fix For: 2.3
>
>
> I've seen several times that due to a wrong cache configuration people can't 
> find data in the cache and then blame Ignite for being buggy and not working.
> And it's very difficult to find the error in the code, especially if you 
> don't have rich experience with Ignite.
> The problem is that we don't have strong typing when defining a QueryEntity: 
> a user can use an arbitrary string id to define a type, but he should use 
> the same string id to obtain a binary object builder; however, people 
> sometimes confuse this.
> So the user can define a QueryEntity with value type 
> queryEntity.setValueType("MyCoolName") and later put the following binary 
> object to the cache: ignite.binary.toBinary(value). But this object won't be 
> indexed, because ignite.binary.toBinary uses the class name as the string 
> id, while indexing expects to find "MyCoolName" as the id.
> The example is simple and the error is obvious when you see these two lines 
> close to each other; however, in real life, cache definition and data 
> ingestion are separated by tons of code.
> We can save a lot of man-hours for our users if Ignite prints a warning when 
> a cache has a configured QueryEntity and the user puts a BinaryObject with a 
> typeName that doesn't correspond to any QueryEntity.
> The warning should be printed only once, something like:
> [WARN] No table is found for %typeName% binary object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6528) Warning if no table for BinaryObject

2017-09-28 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6528:
-

 Summary: Warning if no table for BinaryObject
 Key: IGNITE-6528
 URL: https://issues.apache.org/jira/browse/IGNITE-6528
 Project: Ignite
  Issue Type: Improvement
  Components: binary, cache, sql
Reporter: Mikhail Cherkasov


I've seen several times that due to a wrong cache configuration people can't 
find data in the cache and then blame Ignite for being buggy and not working.

And it's very difficult to find the error in the code, especially if you don't 
have rich experience with Ignite.

The problem is that we don't have strong typing when defining a QueryEntity: a 
user can use an arbitrary string id to define a type, but he should use the 
same string id to obtain a binary object builder; however, people sometimes 
confuse this.
So the user can define a QueryEntity with value type 
queryEntity.setValueType("MyCoolName") and later put the following binary 
object to the cache: ignite.binary.toBinary(value). But this object won't be 
indexed, because ignite.binary.toBinary uses the class name as the string id, 
while indexing expects to find "MyCoolName" as the id.

The example is simple and the error is obvious when you see these two lines 
close to each other; however, in real life, cache definition and data 
ingestion are separated by tons of code.

We can save a lot of man-hours for our users if Ignite prints a warning when a 
cache has a configured QueryEntity and the user puts a BinaryObject with a 
typeName that doesn't correspond to any QueryEntity.

The warning should be printed only once, something like:
[WARN] No table is found for %typeName% binary object.
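The "warn only once per type" behavior can be sketched with a concurrent set keyed by type name. This is a standalone illustration of the proposed logging logic, not Ignite code:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TypeWarner {
    // Type names we have already warned about; thread-safe for concurrent puts.
    private final Set<String> warned = ConcurrentHashMap.newKeySet();

    // Logs the warning and returns true only the first time a type is seen;
    // Set.add returns false for every subsequent call with the same name.
    public boolean warnOnce(String typeName) {
        if (warned.add(typeName)) {
            System.out.println("[WARN] No table is found for " + typeName + " binary object.");
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TypeWarner warner = new TypeWarner();
        warner.warnOnce("MyCoolName"); // prints the warning
        warner.warnOnce("MyCoolName"); // silent: already warned for this type
    }
}
```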



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6360) NPE occurs if object with null indexed field is added

2017-09-26 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6360:
--
Component/s: sql

> NPE occurs if object with null indexed field is added
> -
>
> Key: IGNITE-6360
> URL: https://issues.apache.org/jira/browse/IGNITE-6360
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.3
> Environment: NPE occurs if object with null indexed field is added
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.3
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6360) NPE occurs if object with null indexed field is added

2017-09-26 Thread Mikhail Cherkasov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180619#comment-16180619
 ] 

Mikhail Cherkasov commented on IGNITE-6360:
---

IGNITE-5918 introduced a bug that is fixed by this one.

> NPE occurs if object with null indexed field is added
> -
>
> Key: IGNITE-6360
> URL: https://issues.apache.org/jira/browse/IGNITE-6360
> Project: Ignite
>  Issue Type: Bug
> Environment: NPE occurs if object with null indexed field is added
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.3
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (IGNITE-5795) @AffinityKeyMapped ignored if QueryEntity used

2017-09-04 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reopened IGNITE-5795:
---

It's still an issue for users; how should they get to know about this 
workaround?

One of the users spent plenty of time trying to fix broken affinity; he asked 
us for help, and I spent plenty of time too and found the solution only after 
consulting with Vova O.

We need to mention this in the docs at least, as a big red warning, or even as 
a warning in Ignite's output.





> @AffinityKeyMapped ignored if QueryEntity used
> --
>
> Key: IGNITE-5795
> URL: https://issues.apache.org/jira/browse/IGNITE-5795
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Dmitry Karachentsev
>Assignee: Alexey Kukushkin
> Fix For: 2.3
>
>
> When a cache is configured with a QueryEntity and the key type has an 
> @AffinityKeyMapped field, the field will be ignored and a wrong partition 
> calculated. This happens because QueryEntity processing precedes key type 
> registration in the binary meta cache. On that step 
> CacheObjectBinaryProcessorImpl#affinityKeyField is called and is unable to 
> resolve the type, so null is returned and null is put into affKeyFields.
> On the next put/get operation CacheObjectBinaryProcessorImpl#affinityKeyField 
> will return null from affKeyFields, while it should return the affinity key 
> field.
> A test that reproduces the problem is in [PR 
> 2330|https://github.com/apache/ignite/pull/2330]
> To work around the issue, set IgniteConfiguration#setKeyConfiguration(); it 
> will force registering the key.
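The workaround above can be sketched in Spring XML as well. This is a hedged example: the key class `com.example.MyKey` and the field name `affKey` are placeholders, and in recent Ignite versions the corresponding property/setter is `cacheKeyConfiguration` / `setCacheKeyConfiguration`:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="cacheKeyConfiguration">
        <list>
            <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
                <!-- Key type name (placeholder). -->
                <constructor-arg value="com.example.MyKey"/>
                <!-- Name of the @AffinityKeyMapped field (placeholder). -->
                <constructor-arg value="affKey"/>
            </bean>
        </list>
    </property>
</bean>
```

Registering the key type up front this way forces the affinity key field to be known before any QueryEntity processing.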



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5795) @AffinityKeyMapped ignored if QueryEntity used

2017-09-05 Thread Mikhail Cherkasov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153478#comment-16153478
 ] 

Mikhail Cherkasov commented on IGNITE-5795:
---

[~kukushal] please, discuss this again with Vova, he said that we should treat 
this as a bug and fix it.

> @AffinityKeyMapped ignored if QueryEntity used
> --
>
> Key: IGNITE-5795
> URL: https://issues.apache.org/jira/browse/IGNITE-5795
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.0
>Reporter: Dmitry Karachentsev
>Assignee: Alexey Kukushkin
>  Labels: usability
> Fix For: 2.3
>
>
> When a cache is configured with a QueryEntity and the key type has an 
> @AffinityKeyMapped field, the field will be ignored and a wrong partition 
> calculated. This happens because QueryEntity processing precedes key type 
> registration in the binary meta cache. On that step 
> CacheObjectBinaryProcessorImpl#affinityKeyField is called and is unable to 
> resolve the type, so null is returned and null is put into affKeyFields.
> On the next put/get operation CacheObjectBinaryProcessorImpl#affinityKeyField 
> will return null from affKeyFields, while it should return the affinity key 
> field.
> A test that reproduces the problem is in [PR 
> 2330|https://github.com/apache/ignite/pull/2330]
> To work around the issue, set IgniteConfiguration#setKeyConfiguration(); it 
> will force registering the key.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction

2017-10-09 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6580:
-

 Summary: Cluster can fail during concurrent re-balancing and cache 
destruction
 Key: IGNITE-6580
 URL: https://issues.apache.org/jira/browse/IGNITE-6580
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Mikhail Cherkasov
Priority: Critical


The following exceptions can be observed during concurrent re-balancing and 
cache destruction:
1.
{noformat}

[00:01:27,135][ERROR][sys-#4375%null%][GridDhtPreloader] Partition eviction 
failed, this can cause grid hang.
org.apache.ignite.IgniteException: Runtime failure on search row: Row@6be51c3d[ 
**REMOVED SENSITIVE INFORMATION** ]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:226)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:523)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:416)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:574)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2172)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:451)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1462)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1425)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:383)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3224)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
[ignite-core-2.1.4.jar:2.1.4]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.IllegalStateException: Item not found: 1
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.findIndirectItemIndex(DataPageIO.java:346)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.getDataOffset(DataPageIO.java:446)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.readPayload(DataPageIO.java:488)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:149)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:101)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 

[jira] [Updated] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction

2017-10-09 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6580:
--
Description: 
The following exceptions can be observed during concurrent re-balancing and 
cache destruction:
1.
{noformat}

[00:01:27,135][ERROR][sys-#4375%null%][GridDhtPreloader] Partition eviction 
failed, this can cause grid hang.
org.apache.ignite.IgniteException: Runtime failure on search row: Row@6be51c3d[ 
**REMOVED SENSITIVE INFORMATION** ]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:226)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:523)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:416)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:574)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2172)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:451)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1462)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1425)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:383)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3224)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
[ignite-core-2.1.4.jar:2.1.4]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.IllegalStateException: Item not found: 1
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.findIndirectItemIndex(DataPageIO.java:346)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.getDataOffset(DataPageIO.java:446)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.readPayload(DataPageIO.java:488)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:149)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:101)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:62)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 

[jira] [Created] (IGNITE-6665) Client node re-joins only to the list from disco configuration and ignores the rest of the nodes

2017-10-18 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6665:
-

 Summary: Client node re-joins only to the list from disco 
configuration and ignores the rest of the nodes
 Key: IGNITE-6665
 URL: https://issues.apache.org/jira/browse/IGNITE-6665
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
  Components: general
Affects Versions: 2.2
Reporter: Mikhail Cherkasov
 Fix For: 2.4


The client node re-joins only to the list from the discovery configuration and 
ignores the rest of the nodes.
If a cluster has 3 server nodes and the client discovery configuration 
mentions only one of them, then when that server node leaves the cluster, the 
client will try to re-join only through that node and will ignore the other 2 
server nodes.

Reproducer is attached.
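Until the re-join logic is fixed, a mitigation sketch is to list every server address in the client's IP finder so that re-join does not depend on a single node; the addresses below are placeholders:

```xml
<!-- Mitigation sketch: enumerate all server nodes in the client's discovery
     configuration. Addresses are placeholders. -->
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
            <property name="addresses">
                <list>
                    <value>10.0.0.1:47500..47509</value>
                    <value>10.0.0.2:47500..47509</value>
                    <value>10.0.0.3:47500..47509</value>
                </list>
            </property>
        </bean>
    </property>
</bean>
```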





[jira] [Updated] (IGNITE-6665) Client node re-joins only to the list from disco configuration and ignores the rest of the nodes

2017-10-18 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6665:
--
Attachment: ClientReJoinTest.java

> Client node re-joins only to the list from disco configuration and ignores 
> the rest of the nodes
> -
>
> Key: IGNITE-6665
> URL: https://issues.apache.org/jira/browse/IGNITE-6665
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: general
>Affects Versions: 2.2
>Reporter: Mikhail Cherkasov
> Fix For: 2.4
>
> Attachments: ClientReJoinTest.java
>
>
> The client node re-joins only to the list from the discovery configuration 
> and ignores the rest of the nodes.
> If a cluster has 3 server nodes and the client discovery configuration 
> mentions only one of them, then when that server node leaves the cluster, 
> the client will try to re-join only through that node and will ignore the 
> other 2 server nodes.
> Reproducer is attached.





[jira] [Created] (IGNITE-7050) Add support for spring3

2017-11-28 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7050:
-

 Summary: Add support for spring3
 Key: IGNITE-7050
 URL: https://issues.apache.org/jira/browse/IGNITE-7050
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.3
 Environment: there are still users who use spring3 and hence can't use 
ignite which depends on spring4. I think we can create separate modules which 
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov
 Fix For: 2.4








[jira] [Issue Comment Deleted] (IGNITE-7050) Add support for spring3

2017-11-29 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-7050:
--
Comment: was deleted

(was: there are still users who use spring3 and hence can't use ignite which 
depends on spring4. I think we can create separate modules for spring3 support, 
like it was done for hibernate 4/5.)

> Add support for spring3
> ---
>
> Key: IGNITE-7050
> URL: https://issues.apache.org/jira/browse/IGNITE-7050
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.4
>
>
> There are still users who use Spring 3 and hence can't use Ignite, which 
> depends on Spring 4. I think we can create separate modules for Spring 3 
> support, like it was done for Hibernate 4/5.





[jira] [Commented] (IGNITE-7050) Add support for spring3

2017-11-29 Thread Mikhail Cherkasov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270942#comment-16270942
 ] 

Mikhail Cherkasov commented on IGNITE-7050:
---

There are still users who use Spring 3 and hence can't use Ignite, which 
depends on Spring 4. I think we can create separate modules for Spring 3 
support, like it was done for Hibernate 4/5.

> Add support for spring3
> ---
>
> Key: IGNITE-7050
> URL: https://issues.apache.org/jira/browse/IGNITE-7050
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.4
>
>






[jira] [Updated] (IGNITE-7050) Add support for spring3

2017-11-29 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-7050:
--
Description: There are still users who use Spring 3 and hence can't use 
Ignite, which depends on Spring 4. I think we can create separate modules for 
Spring 3 support, like it was done for Hibernate 4/5.

> Add support for spring3
> ---
>
> Key: IGNITE-7050
> URL: https://issues.apache.org/jira/browse/IGNITE-7050
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.4
>
>
> There are still users who use Spring 3 and hence can't use Ignite, which 
> depends on Spring 4. I think we can create separate modules for Spring 3 
> support, like it was done for Hibernate 4/5.





[jira] [Updated] (IGNITE-7050) Add support for spring3

2017-11-29 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-7050:
--
Environment: (was: there are still users who use spring3 and hence 
can't use ignite which depends on spring4. I think we can create separate 
modules which )

> Add support for spring3
> ---
>
> Key: IGNITE-7050
> URL: https://issues.apache.org/jira/browse/IGNITE-7050
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.4
>
>






[jira] [Created] (IGNITE-7028) Memcached does not set type flags for response

2017-11-27 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7028:
-

 Summary: Memcached does not set type flags for response
 Key: IGNITE-7028
 URL: https://issues.apache.org/jira/browse/IGNITE-7028
 Project: Ignite
  Issue Type: Bug
  Components: rest
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
 Fix For: 2.4


Memcached does not set type flags for response:
http://apache-ignite-users.70518.x6.nabble.com/Memcached-doesn-t-store-flags-td18403.html





[jira] [Created] (IGNITE-7019) Cluster can not survive after IgniteOOM

2017-11-27 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7019:
-

 Summary: Cluster can not survive after IgniteOOM
 Key: IGNITE-7019
 URL: https://issues.apache.org/jira/browse/IGNITE-7019
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
Priority: Critical
 Fix For: 2.4


Even with full sync mode and a transactional cache, the cluster cannot recover 
by adding new nodes after an IgniteOOM: after new nodes join and re-balancing 
starts, the old nodes can't evict partitions:

[2017-11-17 20:02:24,588][ERROR][sys-#65%DR1%][GridDhtPreloader] Partition 
eviction failed, this can cause grid hang.
class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Not enough 
memory allocated [policyName=100MB_Region_Eviction, size=104.9 MB]
Consider increasing memory policy size, enabling evictions, adding more nodes 
to the cluster, reducing number of backups or reducing model size.
at 
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:294)
at 
org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePageNoReuse(DataStructure.java:117)
at 
org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePage(DataStructure.java:105)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.addStripe(PagesList.java:413)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.getPageForPut(PagesList.java:528)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.put(PagesList.java:617)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.addForRecycle(FreeListImpl.java:582)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.reuseFreePages(BPlusTree.java:3847)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.releaseAll(BPlusTree.java:4106)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6900(BPlusTree.java:3166)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1782)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1567)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1387)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:374)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3233)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:892)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:750)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6639)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
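The error message above suggests increasing the memory policy size or enabling evictions. A configuration sketch of that advice for Ignite 2.x memory policies follows; the region name is taken from the log above, while the 512 MB size is an illustrative value, not a recommendation from this issue:

```xml
<!-- Sketch: enlarge the data region from the log and enable page eviction.
     The 512 MB value is illustrative. -->
<bean class="org.apache.ignite.configuration.MemoryConfiguration">
    <property name="memoryPolicies">
        <list>
            <bean class="org.apache.ignite.configuration.MemoryPolicyConfiguration">
                <property name="name" value="100MB_Region_Eviction"/>
                <!-- maxSize is in bytes: 512 * 1024 * 1024 -->
                <property name="maxSize" value="536870912"/>
                <property name="pageEvictionMode" value="RANDOM_LRU"/>
            </bean>
        </list>
    </property>
</bean>
```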





[jira] [Created] (IGNITE-7021) IgniteOOM is not propagated to client in case of implicit transaction

2017-11-27 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7021:
-

 Summary: IgniteOOM is not propagated to client in case of 
implicit transaction
 Key: IGNITE-7021
 URL: https://issues.apache.org/jira/browse/IGNITE-7021
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
Priority: Critical
 Fix For: 2.4


it's related to https://issues.apache.org/jira/browse/IGNITE-7019
When a transaction fails due to IgniteOOM, Ignite tries to roll back the 
transaction, and the rollback fails too because it can't add free pages to the 
free list due to a new IgniteOOM:

[2017-11-27 
12:47:37,539][ERROR][sys-stripe-2-#4%cache.IgniteOutOfMemoryPropagationTest0%][GridNearTxLocal]
 Heuristic transaction failure.
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:835)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.localFinish(GridDhtTxLocalAdapter.java:774)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.localFinish(GridDhtTxLocal.java:555)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.finishTx(GridDhtTxLocal.java:441)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitDhtLocalAsync(GridDhtTxLocal.java:489)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitAsync(GridDhtTxLocal.java:498)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:727)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:104)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:451)
at 
org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:285)
at 
org.apache.ignite.internal.util.future.GridCompoundFuture.markInitialized(GridCompoundFuture.java:276)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1246)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:666)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1040)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.prepareAsync(GridDhtTxLocal.java:398)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:519)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:150)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest(IgniteTxHandler.java:135)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$000(IgniteTxHandler.java:97)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:177)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:175)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:499)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Runtime failure on search 
row: org.apache.ignite.internal.processors.cache.tree.SearchRow@2b17e5c8

[jira] [Created] (IGNITE-7231) Cassandra-sessions-pool is running after Ignition.stop

2017-12-18 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7231:
-

 Summary: Cassandra-sessions-pool is running after Ignition.stop
 Key: IGNITE-7231
 URL: https://issues.apache.org/jira/browse/IGNITE-7231
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov
 Fix For: 2.4


Cassandra-sessions-pool is running after Ignition.stop.






[jira] [Updated] (IGNITE-7231) Cassandra-sessions-pool is running after Ignition.stop

2017-12-18 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-7231:
--
Attachment: ThreadDump.txt

> Cassandra-sessions-pool is running after Ignition.stop
> --
>
> Key: IGNITE-7231
> URL: https://issues.apache.org/jira/browse/IGNITE-7231
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
> Fix For: 2.4
>
> Attachments: ThreadDump.txt
>
>
> Cassandra-sessions-pool is running after Ignition.stop.





[jira] [Assigned] (IGNITE-7021) IgniteOOM is not propagated to client in case of implicit transaction

2017-12-13 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-7021:
-

Assignee: Mikhail Cherkasov

> IgniteOOM is not propagated to client in case of implicit transaction
> --
>
> Key: IGNITE-7021
> URL: https://issues.apache.org/jira/browse/IGNITE-7021
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>Priority: Critical
> Fix For: 2.4
>
>
> it's related to https://issues.apache.org/jira/browse/IGNITE-7019
> When a transaction fails due to IgniteOOM, Ignite tries to roll back the 
> transaction, and the rollback fails too because it can't add free pages to 
> the free list due to a new IgniteOOM:
> [2017-11-27 
> 12:47:37,539][ERROR][sys-stripe-2-#4%cache.IgniteOutOfMemoryPropagationTest0%][GridNearTxLocal]
>  Heuristic transaction failure.
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:835)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.localFinish(GridDhtTxLocalAdapter.java:774)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.localFinish(GridDhtTxLocal.java:555)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.finishTx(GridDhtTxLocal.java:441)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitDhtLocalAsync(GridDhtTxLocal.java:489)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitAsync(GridDhtTxLocal.java:498)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:727)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:104)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:451)
>   at 
> org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:285)
>   at 
> org.apache.ignite.internal.util.future.GridCompoundFuture.markInitialized(GridCompoundFuture.java:276)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1246)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:666)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1040)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.prepareAsync(GridDhtTxLocal.java:398)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:519)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:150)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest(IgniteTxHandler.java:135)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$000(IgniteTxHandler.java:97)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:177)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:175)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
>   at 
> 

[jira] [Created] (IGNITE-7196) Exchange can get stuck waiting while a new node restores state from disk and starts caches

2017-12-13 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7196:
-

 Summary: Exchange can get stuck waiting while a new node restores 
state from disk and starts caches
 Key: IGNITE-7196
 URL: https://issues.apache.org/jira/browse/IGNITE-7196
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
Priority: Critical
 Fix For: 2.4


The exchange can get stuck waiting while a new node restores state from disk 
and starts caches. Here is a log snippet from a newly joined node that shows 
the issue:

[21:36:13,023][INFO][exchange-worker-#62%statement_grid%][time] Started 
exchange init [topVer=AffinityTopologyVersion [topVer=57, minorTopVer=0], 
crd=false, evt=NODE_JOINED, evtNode=3ac1160e-0de4-41bc-a366-59292c9f03c1, 
customEvt=null, allowMerge=true]
[21:36:13,023][INFO][exchange-worker-#62%statement_grid%][FilePageStoreManager] 
Resolved page store work directory: 
/mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463
[21:36:13,024][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager]
 Resolved write ahead log work directory: 
/mnt/wal/WAL/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463
[21:36:13,024][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager]
 Resolved write ahead log archive directory: 
/mnt/wal/WAL_archive/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463
[21:36:13,046][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager]
 Started write-ahead log manager [mode=DEFAULT]
[21:36:13,065][INFO][exchange-worker-#62%statement_grid%][PageMemoryImpl] 
Started page memory [memoryAllocated=100.0 MiB, pages=6352, tableSize=373.4 
KiB, checkpointBuffer=100.0 MiB]
[21:36:13,105][INFO][exchange-worker-#62%statement_grid%][PageMemoryImpl] 
Started page memory [memoryAllocated=32.0 GiB, pages=2083376, tableSize=119.6 
MiB, checkpointBuffer=896.0 MiB]
[21:36:13,428][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager]
 Read checkpoint status 
[startMarker=/mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463/cp/1512930965253-306c0895-1f5f-4237-bebf-8bf2b49682af-START.bin,
 
endMarker=/mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463/cp/1512930869357-1c24b6dc-d64c-4b83-8166-11edf1bfdad3-END.bin]
[21:36:13,429][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager]
 Checking memory state [lastValidPos=FileWALPointer [idx=3582, 
fileOffset=59186076, len=9229, forceFlush=false], lastMarked=FileWALPointer 
[idx=3629, fileOffset=50829700, len=9229, forceFlush=false], 
lastCheckpointId=306c0895-1f5f-4237-bebf-8bf2b49682af]
[21:36:13,429][WARNING][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager]
 Ignite node stopped in the middle of checkpoint. Will restore memory state and 
finish checkpoint on node start.
[21:36:18,312][INFO][grid-nio-worker-tcp-comm-0-#41%statement_grid%][TcpCommunicationSpi]
 Accepted incoming communication connection [locAddr=/172.31.20.209:48100, 
rmtAddr=/172.31.17.115:57148]
[21:36:21,619][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager]
 Found last checkpoint marker [cpId=306c0895-1f5f-4237-bebf-8bf2b49682af, 
pos=FileWALPointer [idx=3629, fileOffset=50829700, len=9229, forceFlush=false]]
[21:36:21,620][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager]
 Finished applying memory changes [changesApplied=165103, time=8189ms]
[21:36:22,403][INFO][grid-nio-worker-tcp-comm-1-#42%statement_grid%][TcpCommunicationSpi]
 Accepted incoming communication connection [locAddr=/172.31.20.209:48100, 
rmtAddr=/172.31.28.10:47964]
[21:36:23,414][INFO][grid-nio-worker-tcp-comm-2-#43%statement_grid%][TcpCommunicationSpi]
 Accepted incoming communication connection [locAddr=/172.31.20.209:48100, 
rmtAddr=/172.31.27.101:46000]
[21:36:33,019][WARNING][main][GridCachePartitionExchangeManager] Failed to wait 
for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
[21:36:53,021][WARNING][main][GridCachePartitionExchangeManager] Still waiting 
for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture 
[firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=3ac1160e-0de4-41bc-a366-59292c9f03c1, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 
172.31.20.209], sockAddrs=[/0:0:0:0:0:0:0:1%lo:48500, /127.0.0.1:48500, 
ip-172-31-20-209.eu-central-1.compute.internal/172.31.20.209:48500], 
discPort=48500, order=57, intOrder=36, lastExchangeTime=1512931012268, 
loc=true, ver=2.3.1#20171129-sha1:4b1ec0fe, isClient=false], topVer=57, 
nodeId8=3ac1160e, msg=null, type=NODE_JOINED, tstamp=1512930972992], 
crd=TcpDiscoveryNode [id=56c97317-26cf-43d2-bf76-0cab59c6fa5f, 
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.31.27.101], 
sockAddrs=[/0:0:0:0:0:0:0:1%lo:48500, /127.0.0.1:48500, 

[jira] [Resolved] (IGNITE-5940) Datastreamer does not propagate OOME

2017-11-10 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov resolved IGNITE-5940.
---
Resolution: Duplicate

This one is fixed by IGNITE-6654.

>  Datastreamer does not propagate OOME
> -
>
> Key: IGNITE-5940
> URL: https://issues.apache.org/jira/browse/IGNITE-5940
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>
> DataStreamer throws an exception indicating it is closed if an OOME occurs on 
> a server node, instead of propagating the OOME.
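The issue is about surfacing the real server-side failure instead of a generic "closed" error. A minimal sketch (not Ignite code; all names here are illustrative) of how a client could recover the underlying OOME by walking the exception cause chain, assuming the remote error is attached as a cause:

```java
// Sketch only: illustrates cause-chain propagation, not the Ignite API.
public class CauseChainSketch {
    /** Returns the root cause of a throwable chain. */
    static Throwable rootCause(Throwable t) {
        while (t.getCause() != null && t.getCause() != t)
            t = t.getCause();
        return t;
    }

    public static void main(String[] args) {
        // Simulate a server-side OOME wrapped in a client-side "closed" error.
        IllegalStateException closed = new IllegalStateException(
            "Data streamer has been closed.",
            new OutOfMemoryError("Java heap space"));
        System.out.println(rootCause(closed).getClass().getSimpleName());
        // prints "OutOfMemoryError"
    }
}
```

If the streamer never attaches the remote OOME as a cause, no amount of unwrapping on the client helps, which is what the bug report describes.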



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6960) ContinuousQuery fails if deploymentMode is set to Private

2017-11-20 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6960:
-

 Summary: ContinuousQuery fails if deploymentMode is set to Private
 Key: IGNITE-6960
 URL: https://issues.apache.org/jira/browse/IGNITE-6960
 Project: Ignite
  Issue Type: Bug
  Components: cache, general
Reporter: Mikhail Cherkasov
Priority: Critical


user list: 
http://apache-ignite-users.70518.x6.nabble.com/Scan-query-failed-if-set-deploymentMode-to-Private-tt18244.html





[jira] [Updated] (IGNITE-6960) ContinuousQuery fails if deploymentMode is set to Private

2017-11-20 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6960:
--
Affects Version/s: 2.3

> ContinuousQuery fails if deploymentMode is set to Private
> ---
>
> Key: IGNITE-6960
> URL: https://issues.apache.org/jira/browse/IGNITE-6960
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, general
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Priority: Critical
> Fix For: 2.4
>
>
> user list: 
> http://apache-ignite-users.70518.x6.nabble.com/Scan-query-failed-if-set-deploymentMode-to-Private-tt18244.html





[jira] [Updated] (IGNITE-6960) ContinuousQuery fails if deploymentMode is set to Private

2017-11-20 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-6960:
--
Fix Version/s: 2.4

> ContinuousQuery fails if deploymentMode is set to Private
> ---
>
> Key: IGNITE-6960
> URL: https://issues.apache.org/jira/browse/IGNITE-6960
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, general
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Priority: Critical
> Fix For: 2.4
>
>
> user list: 
> http://apache-ignite-users.70518.x6.nabble.com/Scan-query-failed-if-set-deploymentMode-to-Private-tt18244.html





[jira] [Created] (IGNITE-6942) Auto re-connect to another node in case of failure of the current one

2017-11-16 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6942:
-

 Summary: Auto re-connect to another node in case of failure of the 
current one
 Key: IGNITE-6942
 URL: https://issues.apache.org/jira/browse/IGNITE-6942
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
  Components: sql
Reporter: Mikhail Cherkasov
 Fix For: 2.4


It would be great to have a re-connect feature for the thin driver: in case of 
server failure, it should choose another server node from a list of server nodes.
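The requested behavior could look like the following sketch. All names here are hypothetical and not part of the Ignite thin driver API; it only shows one reasonable failover order: try the remaining configured nodes round-robin, starting after the one that failed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical failover helper; illustrative only, not Ignite driver code.
public class FailoverSketch {
    /**
     * Addresses to try after the node at failedIdx fails: the remaining
     * nodes in round-robin order, starting just after the failed one.
     */
    static List<String> failoverOrder(List<String> addrs, int failedIdx) {
        List<String> order = new ArrayList<>();
        for (int i = 1; i < addrs.size(); i++)
            order.add(addrs.get((failedIdx + i) % addrs.size()));
        return order;
    }
}
```

A driver implementing this could loop over `failoverOrder(...)` on connection failure and attempt `DriverManager.getConnection(...)` against each candidate URL until one succeeds.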




