[jira] [Assigned] (IGNITE-8201) Refactor REST API for authentication

2018-04-12 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-8201:


Assignee: Andrey Novikov  (was: Alexey Kuznetsov)

[~anovikov], please review my changes.

> Refactor REST API for authentication
> 
>
> Key: IGNITE-8201
> URL: https://issues.apache.org/jira/browse/IGNITE-8201
> Project: Ignite
>  Issue Type: Task
>  Components: rest
>Reporter: Alexey Kuznetsov
>Assignee: Andrey Novikov
>Priority: Major
> Fix For: 2.5
>
>
> # Introduce "authenticate" command.
>  # All subsequent commands should be executed with session token from step 1.
>  
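For illustration, a hypothetical request flow for these two steps (the command name comes from the ticket; parameter and field names are assumptions, not the final API):

{noformat}
# Step 1: authenticate once and obtain a session token.
GET /ignite?cmd=authenticate&user=ignite&password=ignite
-> { "successStatus": 0, "sessionToken": "C1B403C08D3B4467B4EC3CC09E0C108D" }

# Step 2: pass the token with every subsequent command instead of credentials.
GET /ignite?cmd=top&sessionToken=C1B403C08D3B4467B4EC3CC09E0C108D
{noformat}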



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8242) Remove method GAGridUtils.getGenesForChromosome() as problematic when Chromosome contains duplicate genes.

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436746#comment-16436746
 ] 

ASF GitHub Bot commented on IGNITE-8242:


GitHub user techbysample opened a pull request:

https://github.com/apache/ignite/pull/3813

IGNITE-8242: Remove method GAGridUtils.getGenesForChromosome() as pro…

Remove method GAGridUtils.getGenesForChromosome() as problematic when 
Chromosome contains duplicate genes. GAGridUtils.getGenesInOrderForChromosome() 
will be used instead.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/techbysample/ignite ignite-8242

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3813.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3813


commit bf08d83546d9f11333894f88527d34cad2c5418b
Author: Turik Campbell 
Date:   2018-04-13T01:27:57Z

IGNITE-8242: Remove method GAGridUtils.getGenesForChromosome() as 
problematic when Chromosome contains duplicate genes.
 GAGridUtils.getGenesInOrderForChromosome() will be used 
instead.




> Remove method GAGridUtils.getGenesForChromosome() as problematic when 
> Chromosome contains duplicate genes.
> --
>
> Key: IGNITE-8242
> URL: https://issues.apache.org/jira/browse/IGNITE-8242
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Reporter: Turik Campbell
>Assignee: Turik Campbell
>Priority: Minor
> Fix For: 2.5
>
>
> Remove method GAGridUtils.getGenesForChromosome() as problematic when 
> Chromosome contains duplicate genes.
> GAGridUtils.getGenesInOrderForChromosome() will be used instead.
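A minimal, self-contained sketch of the underlying problem (illustrative only; GAGrid's actual retrieval logic may differ): a set-based gene lookup collapses duplicate gene keys, while an in-order, per-key lookup preserves them.

{code}
import java.util.*;

public class DuplicateGenesDemo {
    public static void main(String[] args) {
        // Gene storage keyed by id, standing in for the gene cache.
        Map<Long, String> genes = new HashMap<>();
        genes.put(1L, "A");
        genes.put(2L, "B");

        // A chromosome that references gene 1 twice.
        List<Long> chromosome = Arrays.asList(1L, 2L, 1L);

        // Set-based retrieval (getGenesForChromosome-style): one copy of gene 1 is lost.
        Set<Long> keys = new HashSet<>(chromosome);
        System.out.println("set-based: " + keys.size() + " genes"); // 2

        // In-order retrieval (getGenesInOrderForChromosome-style): duplicates survive.
        List<String> inOrder = new ArrayList<>();
        for (Long k : chromosome)
            inOrder.add(genes.get(k));
        System.out.println("in-order: " + inOrder); // [A, B, A]
    }
}
{code}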



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8242) Remove method GAGridUtils.getGenesForChromosome() as problematic when Chromosome contains duplicate genes.

2018-04-12 Thread Turik Campbell (JIRA)
Turik Campbell created IGNITE-8242:
--

 Summary: Remove method GAGridUtils.getGenesForChromosome() as 
problematic when Chromosome contains duplicate genes.
 Key: IGNITE-8242
 URL: https://issues.apache.org/jira/browse/IGNITE-8242
 Project: Ignite
  Issue Type: Bug
  Components: ml
Reporter: Turik Campbell
Assignee: Turik Campbell
 Fix For: 2.5


Remove method GAGridUtils.getGenesForChromosome() as problematic when 
Chromosome contains duplicate genes.

GAGridUtils.getGenesInOrderForChromosome() will be used instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8241) Docs: Triggering automatic rebalancing if the whole baseline topology is not recovered

2018-04-12 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-8241:
---

 Summary: Docs: Triggering automatic rebalancing if the whole 
baseline topology is not recovered
 Key: IGNITE-8241
 URL: https://issues.apache.org/jira/browse/IGNITE-8241
 Project: Ignite
  Issue Type: Task
  Components: documentation
Affects Versions: 2.4
Reporter: Denis Magda
Assignee: Denis Magda
 Fix For: 2.5


The ticket is created as a result of the following discussion:
http://apache-ignite-developers.2346864.n4.nabble.com/Triggering-rebalancing-on-timeout-or-manually-if-the-baseline-topology-is-not-reassembled-td29299.html

Rebalancing doesn't happen if one of the nodes goes down, thus shrinking the 
baseline topology. This complies with our assumption that the node should be 
recovered soon, so there is no need to waste the cluster's CPU/memory/network 
resources shifting the data around. 

However, there are always edge cases. I was reasonably asked how to trigger 
the rebalancing within the baseline topology manually or on a timeout if: 
* It's not expected that the failed node will be resurrected any time soon, and 
* It's not likely that the node will be replaced by another one. 

Until we embed special facilities into the baseline topology that handle 
such situations, we can document the following workaround (sketched below). A user 
application/tool/script has to subscribe to NODE_LEFT events and remove the 
failed node from the baseline topology after some time. Once the node is removed, 
the baseline topology will be changed, and the rebalancing will be kicked off.
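A minimal sketch of such a tool, assuming NODE_LEFT/NODE_FAILED events are enabled via IgniteConfiguration#setIncludeEventTypes (a production version would schedule the baseline change on a timeout rather than shrinking it inline):

{code}
import java.util.ArrayList;
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.BaselineNode;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.EventType;

public class BaselineShrinker {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        ignite.events().localListen(evt -> {
            Object failedId = ((DiscoveryEvent)evt).eventNode().consistentId();

            // Rebuild the baseline without the failed node.
            Collection<BaselineNode> newBaseline = new ArrayList<>();

            for (BaselineNode n : ignite.cluster().currentBaselineTopology()) {
                if (!n.consistentId().equals(failedId))
                    newBaseline.add(n);
            }

            // Changing the baseline topology kicks off rebalancing.
            ignite.cluster().setBaselineTopology(newBaseline);

            return true; // keep listening
        }, EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }
}
{code}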

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8078) Add new metrics for data storage

2018-04-12 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436418#comment-16436418
 ] 

Denis Magda commented on IGNITE-8078:
-

[~DmitriyGovorukhin]

* Can we get the size of a particular secondary index with a method like 
{{getIndexSize(indexName)}}? [~vozerov], it should be feasible, right?
* The new {{DataRegionMXBean}} metrics list is not the same as that of the 
{{DataRegionMetricsMXBean}} interface. Why is that so, and what's the difference 
between such similar interfaces?
* I wouldn't do this: "Deprecate CacheMetrics.getRebalancingPartitionsCount() 
and move to CacheGroupMetricsMXBean.getRebalancingPartitionsCount()". If we 
redesign the way we store our data within data pages in the future, then 
{{CacheMetrics.getRebalancingPartitionsCount()}} would make sense.

> Add new metrics for data storage
> 
>
> Key: IGNITE-8078
> URL: https://issues.apache.org/jira/browse/IGNITE-8078
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Major
>  Labels: iep-6
> Fix For: 2.5
>
>
> 1. Create new MXbean for each index, IndexMxBean
> {code}
> class IndexMxBean{
> /** The number of PUT operations on the index. */
> long getProcessedPuts();
> /** The number of GET operations on the index. */
> long getProcessedGets();
> /** The total index size in bytes. */
> long getIndexSize();
> /** Index name.*/
> String getName();
> }
> {code}
> 2. Add new metrics for data storage and cache group.
> {code}
> class CacheGroupMetricsMXBean{
> /** The total index size in bytes */
> long getIndexesSize();
> /** Total size in bytes for primary key indexes. */
> long getPKIndexesSize();
> /** Total size in bytes for reuse list.*/
> long getReuseListSize();
> /** Total size in bytes. */
> long getTotalSize();
> /** Total size in bytes for pure data.*/
> long getDataSize();
> /** Total size in bytes for data pages.*/
> long getDataPagesSize();
> /** CacheGroup type. PARTITIONED, REPLICATED, LOCAL.*/
> String getType();
> /** Partitions currently assigned to the local node in this cache group. */
> int[] getPartitions();
> }
> {code}
> {code}
> class DataRegionMXBean{
> /** Total size in bytes for indexes. */
> long getIndexesSize();
> /** Total size in bytes for primary key indexes. */
> long getPKIndexesSize();
> /** Total size in bytes. */
> long getTotalSize();
> /** Total size in bytes for pure data.*/
> long getDataSize();
> /** Total size in bytes for data pages.*/
> long getDataPagesSize();
> /** Total used offheap size in bytes. */
> long getOffheapUsedSize();
> /** The number of read pages from last restart. */
> long getPagesRead();
> /** The number of written pages from last restart. */
> long getPagesWritten();
> /** The number of replaced pages from last restart . */
> long getPagesReplaced();
> /** Total dirty pages for the next checkpoint. */
> long getDirtyPagesForNextCheckpoint();
> }
> {code}
> {code}
> class DataStorageMXbean{
> /** Total size in bytes for indexes. */
> long getIndexesSize();
> /** Total size in bytes for primary key indexes. */
> long getPKIndexesSize();
> /** Total size in bytes for all storages. */
> long getTotalSize();
> /** Total offheap size in bytes. */
> long getOffHeapSize();
> /** Total used offheap size in bytes for all data regions. */
> long getOffheapUsedSize();
> /** Total size in bytes for pure data.*/
> long getDataSize();
> /** The number of read pages from last restart. */
> long getPagesRead();
> /** The number of written pages from last restart. */
> long getPagesWritten();
> /** The number of replaced pages from last restart. */
> long getPagesReplaced();
> /** Total checkpoint time from last restart. */
> long getCheckpointTotalTime();
> /** Total dirty pages for the next checkpoint. */
> long getDirtyPagesForNextCheckpoint();
> /** Total size in bytes for storage wal files. */
> long getWalTotalSize();
> /** Time of the last WAL segment rollover. */
> long getWalLastSwitchTime();
> }
> {code}
> {code}
> class IgniteMxBean {
> /** Returns string containing Node ID, Consistent ID, Node Order */
> String getCurrentCoordinator();
> }
> {code}
> Deprecate CacheMetrics.getRebalancingPartitionsCount(); and move to 
> CacheGroupMetricsMXBean.getRebalancingPartitionsCount();
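For context, metrics like these are normally consumed through JMX, whichever shape the beans take; a generic read sketch (the ObjectName below is hypothetical, since the proposal doesn't fix bean names):

{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ReadMetricDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer srv = ManagementFactory.getPlatformMBeanServer();

        // Hypothetical bean name; the real one depends on the final implementation.
        ObjectName name = new ObjectName("org.apache:group=DataRegionMXBean,name=default");

        // An MXBean getter like getTotalSize() surfaces as the "TotalSize" attribute.
        long totalSize = (Long)srv.getAttribute(name, "TotalSize");

        System.out.println("Data region total size: " + totalSize + " bytes");
    }
}
{code}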



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8240) .NET: Use default scheduler when starting Tasks

2018-04-12 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436392#comment-16436392
 ] 

Pavel Tupitsyn commented on IGNITE-8240:


Waiting for TC: https://ci.ignite.apache.org/viewQueued.html?itemId=1199392

> .NET: Use default scheduler when starting Tasks
> ---
>
> Key: IGNITE-8240
> URL: https://issues.apache.org/jira/browse/IGNITE-8240
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
> Fix For: 2.5
>
>
> Default scheduler should be specified explicitly when starting new tasks to 
> avoid deadlocks: 
> http://blog.stephencleary.com/2013/10/continuewith-is-dangerous-too.html
> This applies to {{StartNew}}, {{ContinueWith}}, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8240) .NET: Use default scheduler when starting Tasks

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436388#comment-16436388
 ] 

ASF GitHub Bot commented on IGNITE-8240:


GitHub user ptupitsyn opened a pull request:

https://github.com/apache/ignite/pull/3812

IGNITE-8240 .NET: Use default scheduler when starting Tasks



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ptupitsyn/ignite ignite-8240

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3812.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3812


commit 7746efcebe2bbda3fcb1bcaf4b3015489221532b
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:31:10Z

IGNITE-8240 .NET: Use default scheduler when starting Tasks

commit e411695b65cbe0b9fca129927cd9a5b238b2c753
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:39:31Z

cleanup

commit 35d1f543d8f3c1177d9400d02d2d5999589bf481
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:41:37Z

Fix gitignore

commit 8b3472954affc107bdbb97eb48e2434a3677a16b
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:44:32Z

ContinueWith fixed

commit b979841ab43b724d2efa26f6877e2bd1c111b9d0
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:49:01Z

fixing StartNew

commit 3f2a3561c15772334ae7bc93514b47cc0c705526
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:51:42Z

fixing StartNew

commit 04dd8f6ba285214a1d6d935e9bd8d609f01d1e17
Author: Pavel Tupitsyn 
Date:   2018-04-12T22:03:54Z

StartNew fixed




> .NET: Use default scheduler when starting Tasks
> ---
>
> Key: IGNITE-8240
> URL: https://issues.apache.org/jira/browse/IGNITE-8240
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
> Fix For: 2.5
>
>
> Default scheduler should be specified explicitly when starting new tasks to 
> avoid deadlocks: 
> http://blog.stephencleary.com/2013/10/continuewith-is-dangerous-too.html
> This applies to {{StartNew}}, {{ContinueWith}}, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8240) .NET: Use default scheduler when starting Tasks

2018-04-12 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436388#comment-16436388
 ] 

Pavel Tupitsyn edited comment on IGNITE-8240 at 4/12/18 10:18 PM:
--

GitHub user ptupitsyn opened a pull request:

https://github.com/apache/ignite/pull/3812

IGNITE-8240 .NET: Use default scheduler when starting Tasks



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ptupitsyn/ignite ignite-8240

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3812.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3812



was (Author: githubbot):
GitHub user ptupitsyn opened a pull request:

https://github.com/apache/ignite/pull/3812

IGNITE-8240 .NET: Use default scheduler when starting Tasks



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ptupitsyn/ignite ignite-8240

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3812.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3812


commit 7746efcebe2bbda3fcb1bcaf4b3015489221532b
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:31:10Z

IGNITE-8240 .NET: Use default scheduler when starting Tasks

commit e411695b65cbe0b9fca129927cd9a5b238b2c753
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:39:31Z

cleanup

commit 35d1f543d8f3c1177d9400d02d2d5999589bf481
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:41:37Z

Fix gitignore

commit 8b3472954affc107bdbb97eb48e2434a3677a16b
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:44:32Z

ContinueWith fixed

commit b979841ab43b724d2efa26f6877e2bd1c111b9d0
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:49:01Z

fixing StartNew

commit 3f2a3561c15772334ae7bc93514b47cc0c705526
Author: Pavel Tupitsyn 
Date:   2018-04-12T21:51:42Z

fixing StartNew

commit 04dd8f6ba285214a1d6d935e9bd8d609f01d1e17
Author: Pavel Tupitsyn 
Date:   2018-04-12T22:03:54Z

StartNew fixed




> .NET: Use default scheduler when starting Tasks
> ---
>
> Key: IGNITE-8240
> URL: https://issues.apache.org/jira/browse/IGNITE-8240
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
> Fix For: 2.5
>
>
> Default scheduler should be specified explicitly when starting new tasks to 
> avoid deadlocks: 
> http://blog.stephencleary.com/2013/10/continuewith-is-dangerous-too.html
> This applies to {{StartNew}}, {{ContinueWith}}, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8240) .NET: Use default scheduler when starting Tasks

2018-04-12 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-8240:
--

 Summary: .NET: Use default scheduler when starting Tasks
 Key: IGNITE-8240
 URL: https://issues.apache.org/jira/browse/IGNITE-8240
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.5


Default scheduler should be specified explicitly when starting new tasks to 
avoid deadlocks: 
http://blog.stephencleary.com/2013/10/continuewith-is-dangerous-too.html

This applies to {{StartNew}}, {{ContinueWith}}, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8239) SQL TX: Do not use skipReducer flag for MVCC DML requests

2018-04-12 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-8239:


 Summary: SQL TX: Do not use skipReducer flag for MVCC DML requests
 Key: IGNITE-8239
 URL: https://issues.apache.org/jira/browse/IGNITE-8239
 Project: Ignite
  Issue Type: Task
Reporter: Igor Seliverstov


Currently we explicitly set the skipReducer flag to true to get an UpdatePlan with 
DmlDistributedPlanInfo. We should check whether MVCC is enabled instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7983) NPE in TxRollbackOnTimeoutNearCacheTest.testRandomMixedTxConfigurations

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436138#comment-16436138
 ] 

ASF GitHub Bot commented on IGNITE-7983:


Github user andrey-kuznetsov closed the pull request at:

https://github.com/apache/ignite/pull/3675


> NPE in TxRollbackOnTimeoutNearCacheTest.testRandomMixedTxConfigurations
> ---
>
> Key: IGNITE-7983
> URL: https://issues.apache.org/jira/browse/IGNITE-7983
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.4
>Reporter: Andrey Kuznetsov
>Assignee: Andrey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> {{get}} inside transaction sometimes returns {{null}}. This should be 
> impossible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7983) NPE in TxRollbackOnTimeoutNearCacheTest.testRandomMixedTxConfigurations

2018-04-12 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436102#comment-16436102
 ] 

Andrey Gura commented on IGNITE-7983:
-

[~andrey-kuznetsov] LGTM! Merged to the master branch. Thanks for the contribution!

> NPE in TxRollbackOnTimeoutNearCacheTest.testRandomMixedTxConfigurations
> ---
>
> Key: IGNITE-7983
> URL: https://issues.apache.org/jira/browse/IGNITE-7983
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.4
>Reporter: Andrey Kuznetsov
>Assignee: Andrey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> {{get}} inside transaction sometimes returns {{null}}. This should be 
> impossible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7718) Collections.singleton() and Collections.singletonMap() are not properly serialized by binary marshaller

2018-04-12 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436089#comment-16436089
 ] 

Dmitriy Pavlov commented on IGNITE-7718:


[~pvinokurov], I've triggered several reruns.

> Collections.singleton() and Collections.singletonMap() are not properly 
> serialized by binary marshaller
> ---
>
> Key: IGNITE-7718
> URL: https://issues.apache.org/jira/browse/IGNITE-7718
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> After deserialization, collections obtained via Collections.singleton() and 
> Collections.singletonMap() do not contain binary objects, but rather 
> deserialized objects.
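A sketch of the reported behavior (assumptions: a value object holding a singleton collection and a keepBinary read; the ticket does not give exact reproduction steps):

{code}
import java.util.Collection;
import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class SingletonMarshallingDemo {
    static class Person {
        String name;
        Person(String name) { this.name = name; }
    }

    static class Holder {
        Collection<Person> people = Collections.singleton(new Person("a"));
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Holder> cache = ignite.getOrCreateCache("c");

            cache.put(1, new Holder());

            BinaryObject bo = cache.<Integer, BinaryObject>withKeepBinary().get(1);
            Collection<?> people = bo.field("people");

            // Expected under keepBinary: elements stay BinaryObject.
            // Reported: for singleton collections they come back fully deserialized.
            System.out.println(people.iterator().next().getClass().getName());
        }
    }
}
{code}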



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8110) GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from milliseconds to nanoseconds.

2018-04-12 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436058#comment-16436058
 ] 

Dmitriy Pavlov commented on IGNITE-8110:


Cherry-picked fix to AI v2.5

> GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from 
> milliseconds to nanoseconds.
> 
>
> Key: IGNITE-8110
> URL: https://issues.apache.org/jira/browse/IGNITE-8110
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Vyacheslav Koptilin
>Assignee: Anton Kurbanov
>Priority: Minor
> Fix For: 2.5, 2.6
>
>
> The initial value of a cache flushing frequency is defined as follows:
> {code}
> /** Cache flushing frequence in nanos. */
> protected long cacheFlushFreqNanos = cacheFlushFreq * 1000;
> {code}
> where {{cacheFlushFreq}} is equal to
> {code}
> /** Default flush frequency for write-behind cache store in milliseconds. 
> */
> public static final long DFLT_WRITE_BEHIND_FLUSH_FREQUENCY = 5000;
> {code}
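For context: multiplying milliseconds by 1000 yields microseconds; nanoseconds need a factor of 1,000,000. A sketch of the intended conversion (not the actual patch):

{code}
import java.util.concurrent.TimeUnit;

class FlushFreqExample {
    /** Default flush frequency for write-behind cache store in milliseconds. */
    static final long DFLT_WRITE_BEHIND_FLUSH_FREQUENCY = 5000;

    long cacheFlushFreq = DFLT_WRITE_BEHIND_FLUSH_FREQUENCY;

    /** Cache flushing frequency in nanos: 5000 ms -> 5_000_000_000 ns. */
    long cacheFlushFreqNanos = TimeUnit.MILLISECONDS.toNanos(cacheFlushFreq);
}
{code}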



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8110) GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from milliseconds to nanoseconds.

2018-04-12 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8110:
---
Fix Version/s: 2.5

> GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from 
> milliseconds to nanoseconds.
> 
>
> Key: IGNITE-8110
> URL: https://issues.apache.org/jira/browse/IGNITE-8110
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Vyacheslav Koptilin
>Assignee: Anton Kurbanov
>Priority: Minor
> Fix For: 2.5, 2.6
>
>
> The initial value of a cache flushing frequency is defined as follows:
> {code}
> /** Cache flushing frequence in nanos. */
> protected long cacheFlushFreqNanos = cacheFlushFreq * 1000;
> {code}
> where {{cacheFlushFreq}} is equal to
> {code}
> /** Default flush frequency for write-behind cache store in milliseconds. 
> */
> public static final long DFLT_WRITE_BEHIND_FLUSH_FREQUENCY = 5000;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5874) Store TTL expire times in B+ tree on per-partition basis

2018-04-12 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436027#comment-16436027
 ] 

Dmitriy Pavlov commented on IGNITE-5874:


[~amashenkov], please do not forget to check test failures. 

> Store TTL expire times in B+ tree on per-partition basis
> 
>
> Key: IGNITE-5874
> URL: https://issues.apache.org/jira/browse/IGNITE-5874
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Affects Versions: 2.1
>Reporter: Ivan Rakov
>Assignee: Andrew Mashenkov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgnitePdsWithTtlTest.java
>
>
> TTL expire times for entries are stored in PendingEntriesTree, which is a 
> singleton per cache. When expiration occurs, all system threads iterate 
> through the tree in order to remove expired entries. Iterating through a single 
> tree causes contention and performance loss. 
> Related performance issue: https://issues.apache.org/jira/browse/IGNITE-5793
> We should keep an instance of PendingEntriesTree for each partition, like we do 
> for CacheDataTree.
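A schematic of the proposed layout change, with ordinary on-heap concurrent maps standing in for the off-heap B+ trees (illustrative only):

{code}
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class PerPartitionExpiryDemo {
    static final int PARTS = 1024;

    // Before: a single expiry structure per cache (stand-in for the
    // PendingEntriesTree singleton) -- every cleanup thread contends on it.
    final NavigableMap<Long, Object> sharedPending = new ConcurrentSkipListMap<>();

    // After: one expiry structure per partition, mirroring CacheDataTree,
    // so cleanup threads mostly work on disjoint trees.
    final NavigableMap<Long, Object>[] pendingPerPart;

    @SuppressWarnings("unchecked")
    PerPartitionExpiryDemo() {
        pendingPerPart = new NavigableMap[PARTS];

        for (int p = 0; p < PARTS; p++)
            pendingPerPart[p] = new ConcurrentSkipListMap<>();
    }
}
{code}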



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8110) GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from milliseconds to nanoseconds.

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436015#comment-16436015
 ] 

ASF GitHub Bot commented on IGNITE-8110:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3742


> GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from 
> milliseconds to nanoseconds.
> 
>
> Key: IGNITE-8110
> URL: https://issues.apache.org/jira/browse/IGNITE-8110
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Vyacheslav Koptilin
>Assignee: Anton Kurbanov
>Priority: Minor
> Fix For: 2.6
>
>
> The initial value of a cache flushing frequency is defined as follows:
> {code}
> /** Cache flushing frequence in nanos. */
> protected long cacheFlushFreqNanos = cacheFlushFreq * 1000;
> {code}
> where {{cacheFlushFreq}} is equal to
> {code}
> /** Default flush frequency for write-behind cache store in milliseconds. 
> */
> public static final long DFLT_WRITE_BEHIND_FLUSH_FREQUENCY = 5000;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8141) Improve OS config suggestions: SWAPPINESS

2018-04-12 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435997#comment-16435997
 ] 

Dmitriy Pavlov commented on IGNITE-8141:


[~ascherbakov], thank you for your advice.

> Improve OS config suggestions: SWAPPINESS
> -
>
> Key: IGNITE-8141
> URL: https://issues.apache.org/jira/browse/IGNITE-8141
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.4
>Reporter: Reed Sandberg
>Assignee: Reed Sandberg
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.6
>
>
> Acknowledge suggested SWAPPINESS OS param adjustment using a range (<= 10).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8141) Improve OS config suggestions: SWAPPINESS

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435985#comment-16435985
 ] 

ASF GitHub Bot commented on IGNITE-8141:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3727


> Improve OS config suggestions: SWAPPINESS
> -
>
> Key: IGNITE-8141
> URL: https://issues.apache.org/jira/browse/IGNITE-8141
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.4
>Reporter: Reed Sandberg
>Assignee: Reed Sandberg
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.6
>
>
> Acknowledge suggested SWAPPINESS OS param adjustment using a range (<= 10).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8138) Incorrect uptime in Ignite metrics for long running server node (1+ days)

2018-04-12 Thread Sergey Skudnov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435937#comment-16435937
 ] 

Sergey Skudnov commented on IGNITE-8138:


TC run 
[https://ci.ignite.apache.org/viewLog.html?buildId=1196636=buildResultsDiv=IgniteTests24Java8_RunAll]

> Incorrect uptime in Ignite metrics for long running server node (1+ days)
> -
>
> Key: IGNITE-8138
> URL: https://issues.apache.org/jira/browse/IGNITE-8138
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.1, 2.4
>Reporter: Max Shonichev
>Assignee: Sergey Skudnov
>Priority: Major
> Fix For: 2.5
>
> Attachments: Screenshot from 2018-04-08 23-37-39.png
>
>
> Ignite prints metrics to the log with uptime formatted as 'XX:YY:ZZ:TTT'.
> It looks like XX corresponds to hours, YY to minutes, and ZZ to seconds; however, 
> if we filter the uptime metric from a long-running server (a few days), we 
> see that:
> {noformat}
>  ^-- Node [id=684d2761, name=null, uptime=00:01:00:009]
>  ^-- Node [id=684d2761, name=null, uptime=00:02:00:009]
>  ^-- Node [id=684d2761, name=null, uptime=00:03:00:009]
>  ^-- Node [id=684d2761, name=null, uptime=00:04:00:021]
> ...
>  ^-- Node [id=684d2761, name=null, uptime=23:58:08:391]
>  ^-- Node [id=684d2761, name=null, uptime=23:59:08:393]
>  ^-- Node [id=684d2761, name=null, uptime=24:00:08:395]
>  ^-- Node [id=684d2761, name=null, uptime=24:01:08:406]
> ...
>  ^-- Node [id=684d2761, name=null, uptime=59:59:23:542]
>  ^-- Node [id=684d2761, name=null, uptime=00:00:23:554]
> ...
> {noformat}
> BUG:
> 1. hours do not roll over at 23:59:59
> 2. there's no simple means for the user to get uptime in days, because hours 
> actually roll over only after 59:59:59
> What is expected: 
>  1. add a day counter, initialized to 0
>  2. make hours correctly roll over after a full day (24 hrs) of run time
> {noformat}
>  ^-- Node [id=684d2761, name=null, uptime=0:00:01:00:009]
> ...
>  ^-- Node [id=684d2761, name=null, uptime=0:23:59:00:009]
>  ^-- Node [id=684d2761, name=null, uptime=1:00:00:00:009]
> ...
> {noformat}
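A self-contained sketch of a formatter with the requested day counter, matching the 'D:HH:MM:SS:mmm' shape shown above (not the actual patch):

{code}
public class UptimeFormatDemo {
    /** Formats uptime millis as D:HH:MM:SS:mmm with a day counter. */
    static String formatUptime(long ms) {
        long days = ms / 86_400_000L;
        long hours = (ms / 3_600_000L) % 24; // rolls over after a full day
        long mins = (ms / 60_000L) % 60;
        long secs = (ms / 1_000L) % 60;
        long millis = ms % 1_000L;

        return String.format("%d:%02d:%02d:%02d:%03d", days, hours, mins, secs, millis);
    }

    public static void main(String[] args) {
        System.out.println(formatUptime(60_009L));     // 0:00:01:00:009
        System.out.println(formatUptime(86_400_009L)); // 1:00:00:00:009
    }
}
{code}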



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5874) Store TTL expire times in B+ tree on per-partition basis

2018-04-12 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435938#comment-16435938
 ] 

Dmitriy Pavlov commented on IGNITE-5874:


I left several minor proposals in the CR.

The change looks good to me, provided that we cover crash recovery from the new 
PageSnapshot records with unit or integration tests.

[~agoncharuk], could you also take a look at the change?

> Store TTL expire times in B+ tree on per-partition basis
> 
>
> Key: IGNITE-5874
> URL: https://issues.apache.org/jira/browse/IGNITE-5874
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Affects Versions: 2.1
>Reporter: Ivan Rakov
>Assignee: Andrew Mashenkov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgnitePdsWithTtlTest.java
>
>
> TTL expire times for entries are stored in PendingEntriesTree, which is a 
> singleton per cache. When expiration occurs, all system threads iterate 
> through the tree in order to remove expired entries. Iterating through a single 
> tree causes contention and performance loss. 
> Related performance issue: https://issues.apache.org/jira/browse/IGNITE-5793
> We should keep an instance of PendingEntriesTree for each partition, like we do 
> for CacheDataTree.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8237) Ignite blocks on SecurityException in exchange-worker due to unauthorised on-heap cache configuration

2018-04-12 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8237:
---
Labels: MakeTeamcityGreenAgain  (was: )

> Ignite blocks on SecurityException in exchange-worker due to unauthorised 
> on-heap cache configuration 
> --
>
> Key: IGNITE-8237
> URL: https://issues.apache.org/jira/browse/IGNITE-8237
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Alexey Kukushkin
>Assignee: Alexey Kukushkin
>Priority: Blocker
>  Labels: MakeTeamcityGreenAgain
>
> Ignite blocks on a SecurityException in the exchange-worker due to an unauthorised 
> on-heap cache configuration. Consider moving the IGNITE_DISABLE_ONHEAP_CACHE 
> system property check to a more appropriate place to avoid blocking Ignite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8220) Discovery worker termination in PDS test

2018-04-12 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8220:
---
Labels: MakeTeamcityGreenAgain Muted_test  (was: MakeTeamcityGreenAgain)

> Discovery worker termination in PDS test
> 
>
> Key: IGNITE-8220
> URL: https://issues.apache.org/jira/browse/IGNITE-8220
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrey Gura
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.6
>
>
> 3 suites failed 
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgnitePds1_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_PdsDirectIo1_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ActivateDeactivateCluster_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> Example of tests failed:
> - IgniteClusterActivateDeactivateTestWithPersistence.testActivateFailover3
> - IgniteClusterActivateDeactivateTestWithPersistence.testDeactivateFailover3  
> {noformat}
> [2018-04-11 
> 02:43:09,769][ERROR][tcp-disco-srvr-#2298%cache.IgniteClusterActivateDeactivateTestWithPersistence0%][IgniteTestResources]
>  Critical failure. Will be handled accordingly to configured handler 
> [hnd=class o.a.i.failure.NoOpFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2298%cache.IgniteClusterActivateDeactivateTestWithPersistence0%
>  is terminated unexpectedly.]] 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-172) [Test] [Rare] GridTcpCommunicationSpiRecoveryAckSelfTest and IgniteTcpCommunicationRecoveryAckClosureSelfTest

2018-04-12 Thread Amelchev Nikita (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita reassigned IGNITE-172:
--

Assignee: Amelchev Nikita

> [Test] [Rare] GridTcpCommunicationSpiRecoveryAckSelfTest and 
> IgniteTcpCommunicationRecoveryAckClosureSelfTest
> -
>
> Key: IGNITE-172
> URL: https://issues.apache.org/jira/browse/IGNITE-172
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.5.0.final
>Reporter: Irina Vasilinets
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: Muted_test
>
> GridTcpCommunicationSpiRecoveryAckSelfTest.testQueueOverflow and 
> GridTcpCommunicationSpiTcpNoDelayOffSelfTest.testSendToManyNodes 
>  fail sometimes.
> IgniteTcpCommunicationRecoveryAckClosureSelfTest.testQueueOverflow - 1 from 10



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8238) Operation can fail with unexpected RuntimeException when node is stopping.

2018-04-12 Thread Andrew Mashenkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8238:
-
Fix Version/s: 2.6

> Operation can fail with unexpected RuntimeException when node is stopping.
> ---
>
> Key: IGNITE-8238
> URL: https://issues.apache.org/jira/browse/IGNITE-8238
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Andrew Mashenkov
>Priority: Minor
> Fix For: 2.6
>
>
> Operation can fail with a RuntimeException when the node is stopped in another 
> thread. 
> It is not clear from the javadoc that an operation can throw a RuntimeException.
>  We should add it to the javadoc or, e.g., throw an IllegalStateException, which is 
> already present in the Java cache API javadoc.
> Failure in thread: Thread [id=3484, name=updater-2]
> java.lang.RuntimeException: Failed to perform cache update: node is stopping.
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.checkpointReadLock(GridCacheDatabaseSharedManager.java:1350)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpired(GridCacheOffheapManager.java:1685)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.expire(GridCacheOffheapManager.java:796)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:834)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheGateway.leaveNoLock(GridCacheGateway.java:240)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheGateway.leave(GridCacheGateway.java:225)
>  at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.onLeave(GatewayProtectedCacheProxy.java:1708)
>  at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:945)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.IgnitePdsContinuousRestartTest$1.call(IgnitePdsContinuousRestartTest.java:261)
>  at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
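A minimal sketch of the proposed contract (illustrative; the real check lives in GridCacheDatabaseSharedManager#checkpointReadLock):

{code}
public class StoppingGuard {
    private volatile boolean stopping;

    /**
     * Sketch: fail with the JCache-standard IllegalStateException, which the
     * javax.cache.Cache javadoc already documents, instead of a bare
     * RuntimeException that callers cannot anticipate.
     */
    public void checkpointReadLock() {
        if (stopping)
            throw new IllegalStateException("Failed to perform cache update: node is stopping.");

        // ... acquire the checkpoint read lock ...
    }
}
{code}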



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8238) Operation can fail with unexpected RuntimeException when node is stopping.

2018-04-12 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-8238:


 Summary: Operation can fail with unexpected RuntimeException when 
node is stopping.
 Key: IGNITE-8238
 URL: https://issues.apache.org/jira/browse/IGNITE-8238
 Project: Ignite
  Issue Type: Bug
  Components: general
Reporter: Andrew Mashenkov


Operation can fail with a RuntimeException when the node is stopped in another thread. 
PFA stacktrace.



It is not clear from the javadoc that an operation can throw a RuntimeException.
We should add it to the javadoc or, e.g., throw an IllegalStateException, which is already 
present in the Java cache API javadoc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8238) Operation can fail with unexpected RuntimeException when node is stopping.

2018-04-12 Thread Andrew Mashenkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8238:
-
Priority: Minor  (was: Major)

> Operation can fail with unexpected RuntimeException when node is stopping.
> ---
>
> Key: IGNITE-8238
> URL: https://issues.apache.org/jira/browse/IGNITE-8238
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Andrew Mashenkov
>Priority: Minor
>
> Operation can fail with a RuntimeException when the node is stopped in another 
> thread. 
> It is not clear from the javadoc that an operation can throw a RuntimeException.
>  We should add it to the javadoc or, e.g., throw an IllegalStateException, which is 
> already present in the Java cache API javadoc.
> Failure in thread: Thread [id=3484, name=updater-2]
> java.lang.RuntimeException: Failed to perform cache update: node is stopping.
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.checkpointReadLock(GridCacheDatabaseSharedManager.java:1350)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpired(GridCacheOffheapManager.java:1685)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.expire(GridCacheOffheapManager.java:796)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:834)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheGateway.leaveNoLock(GridCacheGateway.java:240)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheGateway.leave(GridCacheGateway.java:225)
>  at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.onLeave(GatewayProtectedCacheProxy.java:1708)
>  at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:945)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.IgnitePdsContinuousRestartTest$1.call(IgnitePdsContinuousRestartTest.java:261)
>  at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8238) Operation can fail with unexpected RuntimeException when node is stopping.

2018-04-12 Thread Andrew Mashenkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8238:
-
Description: 
Operation can fail with a RuntimeException when the node is stopped in another thread. 

It is not clear from the javadoc that an operation can throw a RuntimeException.
 We should add it to the javadoc or, e.g., throw an IllegalStateException, which is already 
present in the Java cache API javadoc.

Failure in thread: Thread [id=3484, name=updater-2]
java.lang.RuntimeException: Failed to perform cache update: node is stopping.
 at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.checkpointReadLock(GridCacheDatabaseSharedManager.java:1350)
 at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpired(GridCacheOffheapManager.java:1685)
 at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.expire(GridCacheOffheapManager.java:796)
 at 
org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
 at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:834)
 at 
org.apache.ignite.internal.processors.cache.GridCacheGateway.leaveNoLock(GridCacheGateway.java:240)
 at 
org.apache.ignite.internal.processors.cache.GridCacheGateway.leave(GridCacheGateway.java:225)
 at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.onLeave(GatewayProtectedCacheProxy.java:1708)
 at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:945)
 at 
org.apache.ignite.internal.processors.cache.persistence.IgnitePdsContinuousRestartTest$1.call(IgnitePdsContinuousRestartTest.java:261)
 at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)

  was:
Operation can fail with a RuntimeException when the node is stopped in another thread. 
PFA stacktrace.



It is not clear from the javadoc that an operation can throw a RuntimeException.
We should add it to the javadoc or, e.g., throw an IllegalStateException, which is already 
present in the Java cache API javadoc.


> Operation can fail with unexpected RuntimeException when node is stopping.
> ---
>
> Key: IGNITE-8238
> URL: https://issues.apache.org/jira/browse/IGNITE-8238
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Andrew Mashenkov
>Priority: Major
>
> Operation can fail with a RuntimeException when the node is stopped in another 
> thread. 
> It is not clear from the javadoc that an operation can throw a RuntimeException.
>  We should add it to the javadoc or, e.g., throw an IllegalStateException, which is 
> already present in the Java cache API javadoc.
> Failure in thread: Thread [id=3484, name=updater-2]
> java.lang.RuntimeException: Failed to perform cache update: node is stopping.
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.checkpointReadLock(GridCacheDatabaseSharedManager.java:1350)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpired(GridCacheOffheapManager.java:1685)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.expire(GridCacheOffheapManager.java:796)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:834)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheGateway.leaveNoLock(GridCacheGateway.java:240)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheGateway.leave(GridCacheGateway.java:225)
>  at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.onLeave(GatewayProtectedCacheProxy.java:1708)
>  at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:945)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.IgnitePdsContinuousRestartTest$1.call(IgnitePdsContinuousRestartTest.java:261)
>  at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8166) stopGrid() hangs in some cases when node is invalidated and PDS is enabled

2018-04-12 Thread Aleksey Plekhanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov reassigned IGNITE-8166:
-

Assignee: Aleksey Plekhanov

> stopGrid() hangs in some cases when node is invalidated and PDS is enabled
> --
>
> Key: IGNITE-8166
> URL: https://issues.apache.org/jira/browse/IGNITE-8166
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: iep-14
>
> Node invalidation via FailureProcessor can hang {{exchange-worker}} and 
> {{stopGrid()}} when PDS is enabled.
> Reproducer (the reproducer is racy and sometimes finishes without hanging):
> {code:java}
> public class StopNodeHangsTest extends GridCommonAbstractTest {
> /** Offheap size for memory policy. */
> private static final int SIZE = 10 * 1024 * 1024;
> /** Page size. */
> static final int PAGE_SIZE = 2048;
> /** Number of entries. */
> static final int ENTRIES = 2_000;
> /** {@inheritDoc} */
> @Override protected IgniteConfiguration getConfiguration(String 
> igniteInstanceName) throws Exception {
> IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
> DataStorageConfiguration dsCfg = new DataStorageConfiguration();
> DataRegionConfiguration dfltPlcCfg = new DataRegionConfiguration();
> dfltPlcCfg.setName("dfltPlc");
> dfltPlcCfg.setInitialSize(SIZE);
> dfltPlcCfg.setMaxSize(SIZE);
> dfltPlcCfg.setPersistenceEnabled(true);
> dsCfg.setDefaultDataRegionConfiguration(dfltPlcCfg);
> dsCfg.setPageSize(PAGE_SIZE);
> cfg.setDataStorageConfiguration(dsCfg);
> cfg.setFailureHandler(new FailureHandler() {
> @Override public boolean onFailure(Ignite ignite, FailureContext 
> failureCtx) {
> return true;
> }
> });
> return cfg;
> }
> public void testStopNodeHangs() throws Exception {
> cleanPersistenceDir();
> IgniteEx ignite0 = startGrid(0);
> IgniteEx ignite1 = startGrid(1);
> ignite1.cluster().active(true);
> awaitPartitionMapExchange();
> IgniteCache<Integer, Object> cache = ignite1.getOrCreateCache("TEST");
> Map<Integer, Object> entries = new HashMap<>();
> for (int i = 0; i < ENTRIES; i++)
> entries.put(i, new byte[PAGE_SIZE * 2 / 3]);
> cache.putAll(entries);
> ignite1.context().failure().process(new 
> FailureContext(FailureType.CRITICAL_ERROR, null));
> stopGrid(0);
> stopGrid(1);
> }
> }
> {code}
> {{stopGrid(1)}} waits until the exchange is finished; {{exchange-worker}} waits in 
> method {{GridCacheDatabaseSharedManager#checkpointReadLock}} for 
> {{CheckpointProgressSnapshot#cpBeginFut}}, but this future is never completed 
> because {{db-checkpoint-thread}} got an exception at 
> {{GridCacheDatabaseSharedManager.Checkpointer#markCheckpointBegin}}, thrown by 
> {{FileWriteAheadLogManager#checkNode}}, and left the method 
> {{markCheckpointBegin}} before the future was completed ({{curr.cpBeginFut.onDone();}})
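The hang reduces to a future that is completed only on the happy path. A generic sketch of the usual fix, with CompletableFuture standing in for Ignite's internal future (not the actual patch):

{code}
import java.util.concurrent.CompletableFuture;

public class CheckpointBeginDemo {
    static final CompletableFuture<Void> cpBeginFut = new CompletableFuture<>();

    static void markCheckpointBegin() {
        try {
            checkNode();               // may throw when the node is invalidated
            cpBeginFut.complete(null); // happy path: release waiters
        }
        catch (RuntimeException e) {
            // Without this, every thread waiting on cpBeginFut hangs forever.
            cpBeginFut.completeExceptionally(e);
            throw e;
        }
    }

    static void checkNode() {
        throw new IllegalStateException("node is invalidated");
    }

    public static void main(String[] args) {
        try { markCheckpointBegin(); }
        catch (RuntimeException ignored) { /* expected in this demo */ }

        System.out.println("future done: " + cpBeginFut.isDone()); // true -- waiters released
    }
}
{code}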



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8166) stopGrid() hangs in some cases when node is invalidated and PDS is enabled

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435840#comment-16435840
 ] 

ASF GitHub Bot commented on IGNITE-8166:


GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/3811

IGNITE-8166 PME hangs when error occurs during checkpoint



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-8166

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3811.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3811


commit 38f976609f9039cbd31ff47a87773bcbe7ce8e30
Author: Aleksey Plekhanov 
Date:   2018-04-10T19:19:03Z

IGNITE-8166 PME hangs when error occurs during checkpoint




> stopGrid() hangs in some cases when node is invalidated and PDS is enabled
> --
>
> Key: IGNITE-8166
> URL: https://issues.apache.org/jira/browse/IGNITE-8166
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Aleksey Plekhanov
>Priority: Major
>  Labels: iep-14
>
> Node invalidation via FailureProcessor can hang {{exchange-worker}} and 
> {{stopGrid()}} when PDS is enabled.
> Reproducer (the reproducer is racy and sometimes finishes without hanging):
> {code:java}
> public class StopNodeHangsTest extends GridCommonAbstractTest {
> /** Offheap size for memory policy. */
> private static final int SIZE = 10 * 1024 * 1024;
> /** Page size. */
> static final int PAGE_SIZE = 2048;
> /** Number of entries. */
> static final int ENTRIES = 2_000;
> /** {@inheritDoc} */
> @Override protected IgniteConfiguration getConfiguration(String 
> igniteInstanceName) throws Exception {
> IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);
> DataStorageConfiguration dsCfg = new DataStorageConfiguration();
> DataRegionConfiguration dfltPlcCfg = new DataRegionConfiguration();
> dfltPlcCfg.setName("dfltPlc");
> dfltPlcCfg.setInitialSize(SIZE);
> dfltPlcCfg.setMaxSize(SIZE);
> dfltPlcCfg.setPersistenceEnabled(true);
> dsCfg.setDefaultDataRegionConfiguration(dfltPlcCfg);
> dsCfg.setPageSize(PAGE_SIZE);
> cfg.setDataStorageConfiguration(dsCfg);
> cfg.setFailureHandler(new FailureHandler() {
> @Override public boolean onFailure(Ignite ignite, FailureContext 
> failureCtx) {
> return true;
> }
> });
> return cfg;
> }
> public void testStopNodeHangs() throws Exception {
> cleanPersistenceDir();
> IgniteEx ignite0 = startGrid(0);
> IgniteEx ignite1 = startGrid(1);
> ignite1.cluster().active(true);
> awaitPartitionMapExchange();
> IgniteCache<Integer, Object> cache = ignite1.getOrCreateCache("TEST");
> Map<Integer, Object> entries = new HashMap<>();
> for (int i = 0; i < ENTRIES; i++)
> entries.put(i, new byte[PAGE_SIZE * 2 / 3]);
> cache.putAll(entries);
> ignite1.context().failure().process(new 
> FailureContext(FailureType.CRITICAL_ERROR, null));
> stopGrid(0);
> stopGrid(1);
> }
> }
> {code}
> {{stopGrid(1)}} waits until the exchange is finished; {{exchange-worker}} waits in 
> method {{GridCacheDatabaseSharedManager#checkpointReadLock}} for 
> {{CheckpointProgressSnapshot#cpBeginFut}}, but this future is never completed 
> because {{db-checkpoint-thread}} got an exception at 
> {{GridCacheDatabaseSharedManager.Checkpointer#markCheckpointBegin}}, thrown by 
> {{FileWriteAheadLogManager#checkNode}}, and left the method 
> {{markCheckpointBegin}} before the future was completed ({{curr.cpBeginFut.onDone();}})



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8048) Dynamic indexes are not stored to cache data on node join

2018-04-12 Thread Eduard Shangareev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435777#comment-16435777
 ] 

Eduard Shangareev commented on IGNITE-8048:
---

Also, I don't see any test which covers that, after a cache descriptor merge, we 
restore exactly this merged descriptor on restart.

> Dynamic indexes are not stored to cache data on node join
> -
>
> Key: IGNITE-8048
> URL: https://issues.apache.org/jira/browse/IGNITE-8048
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Alexey Goncharuk
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgniteDynamicIndexRestoreTest.java
>
>
> Consider the following scenario:
> 1) Start nodes, add some data
> 2) Shut down a node, create a dynamic index
> 3) Shut down the whole cluster, start it up including the absent node, activate from 
> the absent node
> 4) Since the absent node did not 'see' the CREATE INDEX, the index will not be 
> active after cluster activation
> 5) Update some data in the cluster
> 6) Restart the cluster, but activate from the node which did 'see' the CREATE 
> INDEX
> 7) Attempt to update data. Depending on the updates in (5), this will either 
> hang or result in an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8048) Dynamic indexes are not stored to cache data on node join

2018-04-12 Thread Eduard Shangareev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435749#comment-16435749
 ] 

Eduard Shangareev edited comment on IGNITE-8048 at 4/12/18 3:25 PM:


I have checked your changes. Please, check upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node 
reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| |--INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
|--INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
|--INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| |--INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
|--INDEX / COLUMN|--INDEX / COLUMN|Node join|No test|
|CONFLICT| |Node reject|testFailJoiningNodeBecauseDifferentSql|


was (Author: edshanggg):
I have checked your changes. Please, check upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node 
reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| |--INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
|--INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
|--INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| |--INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
|--INDEX / COLUMN|--INDEX / COLUMN|Node join|No test|
|CONFLICT|Node reject|testFailJoiningNodeBecauseDifferentSql|

> Dynamic indexes are not stored to cache data on node join
> -
>
> Key: IGNITE-8048
> URL: https://issues.apache.org/jira/browse/IGNITE-8048
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Alexey Goncharuk
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgniteDynamicIndexRestoreTest.java
>
>
> Consider the following scenario:
> 1) Start nodes, add some data
> 2) Shut down a node, create a dynamic index
> 3) Shut down the whole cluster, start it up including the absent node, activate from 
> the absent node
> 4) Since the absent node did not 'see' the CREATE INDEX, the index will not be 
> active after cluster activation
> 5) Update some data in the cluster
> 6) Restart the cluster, but activate from the node which did 'see' the CREATE 
> INDEX
> 7) Attempt to update data. Depending on the updates in (5), this will either 
> hang or result in an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8048) Dynamic indexes are not stored to cache data on node join

2018-04-12 Thread Eduard Shangareev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435749#comment-16435749
 ] 

Eduard Shangareev edited comment on IGNITE-8048 at 4/12/18 3:25 PM:


I have checked your changes. Please, check upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node 
reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| |--INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
|--INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
|--INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| |--INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
|--INDEX / COLUMN|--INDEX / COLUMN|Node join|No test|
|CONFLICT| |Node reject|testFailJoiningNodeBecauseDifferentSql|


was (Author: edshanggg):
I have checked your changes. Please check Upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| | - INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
|- INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
|- INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| |- INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
|- INDEX / COLUMN|- INDEX / COLUMN|Node join|No test|
|CONFLICT| |Node reject|testFailJoiningNodeBecauseDifferentSql|

> Dynamic indexes are not stored to cache data on node join
> -
>
> Key: IGNITE-8048
> URL: https://issues.apache.org/jira/browse/IGNITE-8048
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Alexey Goncharuk
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgniteDynamicIndexRestoreTest.java
>
>
> Consider the following scenario:
> 1) Start nodes, add some data
> 2) Shutdown a node, create a dynamic index
> 3) Shutdown the whole cluster, startup with the absent node, activate from 
> the absent node
> 4) Since the absent node did not 'see' the create index, the index will not be 
> active after cluster activation
> 5) Update some data in the cluster
> 6) Restart the cluster, but activate from the node which did 'see' the create 
> index
> 7) Attempt to update data. Depending on the updates in (5), this will either 
> hang or result in an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8048) Dynamic indexes are not stored to cache data on node join

2018-04-12 Thread Eduard Shangareev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435749#comment-16435749
 ] 

Eduard Shangareev edited comment on IGNITE-8048 at 4/12/18 3:24 PM:


I have checked your changes. Please check Upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| | - INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
| - INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
| - INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| | - INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
| - INDEX / COLUMN| - INDEX / COLUMN|Node join|No test|
|CONFLICT| |Node reject|testFailJoiningNodeBecauseDifferentSql|


was (Author: edshanggg):
I have checked your changes. Please check Upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| |- INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
|- INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
|- INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| |- INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
|- INDEX / COLUMN|- INDEX / COLUMN|Node join|No test|
|CONFLICT| |Node reject|testFailJoiningNodeBecauseDifferentSql|

> Dynamic indexes are not stored to cache data on node join
> -
>
> Key: IGNITE-8048
> URL: https://issues.apache.org/jira/browse/IGNITE-8048
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Alexey Goncharuk
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgniteDynamicIndexRestoreTest.java
>
>
> Consider the following scenario:
> 1) Start nodes, add some data
> 2) Shutdown a node, create a dynamic index
> 3) Shutdown the whole cluster, startup with the absent node, activate from 
> the absent node
> 4) Since the absent node did not 'see' the create index, the index will not be 
> active after cluster activation
> 5) Update some data in the cluster
> 6) Restart the cluster, but activate from the node which did 'see' the create 
> index
> 7) Attempt to update data. Depending on the updates in (5), this will either 
> hang or result in an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8048) Dynamic indexes are not stored to cache data on node join

2018-04-12 Thread Eduard Shangareev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435749#comment-16435749
 ] 

Eduard Shangareev edited comment on IGNITE-8048 at 4/12/18 3:24 PM:


I have checked your changes. Please check Upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| | - INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
|- INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
|- INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| |- INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
|- INDEX / COLUMN|- INDEX / COLUMN|Node join|No test|
|CONFLICT| |Node reject|testFailJoiningNodeBecauseDifferentSql|


was (Author: edshanggg):
I have checked your changes. Please check Upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| | - INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
| - INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
| - INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| | - INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
| - INDEX / COLUMN| - INDEX / COLUMN|Node join|No test|
|CONFLICT| |Node reject|testFailJoiningNodeBecauseDifferentSql|

> Dynamic indexes are not stored to cache data on node join
> -
>
> Key: IGNITE-8048
> URL: https://issues.apache.org/jira/browse/IGNITE-8048
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Alexey Goncharuk
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgniteDynamicIndexRestoreTest.java
>
>
> Consider the following scenario:
> 1) Start nodes, add some data
> 2) Shutdown a node, create a dynamic index
> 3) Shutdown the whole cluster, startup with the absent node, activate from 
> the absent node
> 4) Since the absent node did not 'see' the create index, the index will not be 
> active after cluster activation
> 5) Update some data in the cluster
> 6) Restart the cluster, but activate from the node which did 'see' the create 
> index
> 7) Attempt to update data. Depending on the updates in (5), this will either 
> hang or result in an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8048) Dynamic indexes are not stored to cache data on node join

2018-04-12 Thread Anton Kalashnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435750#comment-16435750
 ] 

Anton Kalashnikov commented on IGNITE-8048:
---

UP - [https://reviews.ignite.apache.org/ignite/review/IGNT-CR-546]

> Dynamic indexes are not stored to cache data on node join
> -
>
> Key: IGNITE-8048
> URL: https://issues.apache.org/jira/browse/IGNITE-8048
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Alexey Goncharuk
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgniteDynamicIndexRestoreTest.java
>
>
> Consider the following scenario:
> 1) Start nodes, add some data
> 2) Shutdown a node, create a dynamic index
> 3) Shutdown the whole cluster, startup with the absent node, activate from 
> the absent node
> 4) Since the absent node did not 'see' the create index, the index will not be 
> active after cluster activation
> 5) Update some data in the cluster
> 6) Restart the cluster, but activate from the node which did 'see' the create 
> index
> 7) Attempt to update data. Depending on the updates in (5), this will either 
> hang or result in an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8048) Dynamic indexes are not stored to cache data on node join

2018-04-12 Thread Eduard Shangareev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435749#comment-16435749
 ] 

Eduard Shangareev commented on IGNITE-8048:
---

I have checked your changes. Please check Upsource.

Also, I should say that test coverage is not complete.

 
|Active cluster|Joining node|Result|Test|
| |+ INDEX / COLUMN|Node reject|testFailJoiningNodeBecauseNeedConfigUpdateOnActiveGrid|
| |- INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN| |Node join|testTakeChangedConfigOnActiveGrid|
|- INDEX / COLUMN| |Node join|No test|
|CONFLICT| |Node reject|No test|

 
|Inactive cluster|Joining node|Result|Test|
|+ INDEX / COLUMN| |Node join|testMergeChangedConfigOnCoordinator|
| |+ INDEX / COLUMN|Node join|No test|
|- INDEX / COLUMN| |Node join|testMergeChangedConfigOnInactiveGrid|
| |- INDEX / COLUMN|Node join|No test|
|+ INDEX / COLUMN|+ INDEX / COLUMN|Node join|No test|
|- INDEX / COLUMN|- INDEX / COLUMN|Node join|No test|
|CONFLICT| |Node reject|testFailJoiningNodeBecauseDifferentSql|

> Dynamic indexes are not stored to cache data on node join
> -
>
> Key: IGNITE-8048
> URL: https://issues.apache.org/jira/browse/IGNITE-8048
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Alexey Goncharuk
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.5
>
> Attachments: IgniteDynamicIndexRestoreTest.java
>
>
> Consider the following scenario:
> 1) Start nodes, add some data
> 2) Shutdown a node, create a dynamic index
> 3) Shutdown the whole cluster, startup with the absent node, activate from 
> the absent node
> 4) Since the absent node did not 'see' the create index, the index will not be 
> active after cluster activation
> 5) Update some data in the cluster
> 6) Restart the cluster, but activate from the node which did 'see' the create 
> index
> 7) Attempt to update data. Depending on the updates in (5), this will either 
> hang or result in an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7319) Memory leak during creating/destroying local cache

2018-04-12 Thread Andrey Aleksandrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Aleksandrov reassigned IGNITE-7319:
--

Assignee: Andrey Aleksandrov  (was: Roman Guseinov)

> Memory leak during creating/destroying local cache
> --
>
> Key: IGNITE-7319
> URL: https://issues.apache.org/jira/browse/IGNITE-7319
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Andrey Aleksandrov
>Priority: Major
> Fix For: 2.5
>
> Attachments: Demo.java
>
>
> The following code creates local caches:
> {code}
> private IgniteCache<String, BinaryObject> createLocalCache(String name) {
>     // Note: the generic type parameters and the setQueryEntities() argument
>     // were stripped by the mail rendering; restored here with assumed values.
>     CacheConfiguration<String, BinaryObject> cCfg = new CacheConfiguration<>();
>     cCfg.setName(name);
>     cCfg.setGroupName("localCaches"); // without group the leak is much bigger!
>     cCfg.setStoreKeepBinary(true);
>     cCfg.setCacheMode(CacheMode.LOCAL);
>     cCfg.setOnheapCacheEnabled(false);
>     cCfg.setCopyOnRead(false);
>     cCfg.setBackups(0);
>     cCfg.setWriteBehindEnabled(false);
>     cCfg.setReadThrough(false);
>     cCfg.setReadFromBackup(false);
>     cCfg.setQueryEntities(queryEntities()); // assumed helper; argument elided in the original
>     return ignite.createCache(cCfg).withKeepBinary();
> }
> {code}
> The caches are placed in the queue and are picked up by the worker thread 
> which just destroys them after removing from the queue. 
> This setup seems to generate a memory leak of about 1GB per day. 
> When looking at heap dump, I see all space is occupied by instances of 
> java.util.concurrent.ConcurrentSkipListMap$Node.
> User list: 
> http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html
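The description above implies a simple produce/consume loop around the cache lifecycle; a minimal sketch of that shape (assumed from the description, not taken from the attached Demo.java):

{code:java}
// Worker side of the assumed setup: caches are queued after creation and
// destroyed here; each create/destroy round contributes to the reported leak.
BlockingQueue<IgniteCache<String, BinaryObject>> queue = new LinkedBlockingQueue<>();

try {
    while (true) {
        IgniteCache<String, BinaryObject> cache = queue.take();

        cache.destroy();
    }
}
catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
{code}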



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8061) GridCachePartitionedDataStructuresFailoverSelfTest.testCountDownLatchConstantMultipleTopologyChange may hang on TeamCity

2018-04-12 Thread Andrey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kuznetsov updated IGNITE-8061:
-
Fix Version/s: (was: 2.5)
   2.6

> GridCachePartitionedDataStructuresFailoverSelfTest.testCountDownLatchConstantMultipleTopologyChange
>  may hang on TeamCity
> 
>
> Key: IGNITE-8061
> URL: https://issues.apache.org/jira/browse/IGNITE-8061
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.4
>Reporter: Andrey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: log.txt
>
>
> The attached log contains the 'Test has been timed out and will be interrupted' 
> message, but does not contain the subsequent 'Test has been timed out [test=...'.
> Known facts:
> * There is a pending GridDhtColocatedLockFuture in the log.
> * On timeout, an InterruptedException comes to doTestCountDownLatch, but the 
> finally-block contains code leading to distributed locking.
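A sketch of the problematic pattern from the last point (illustrative names; the actual test code may differ):

{code:java}
// The test thread is interrupted on timeout inside the try block, but the
// finally block performs a distributed operation that can itself block on
// cluster-wide locks, so the test never reports the timeout.
try {
    doTestCountDownLatch(); // interrupted here on test timeout
}
finally {
    latch.countDown(); // distributed call inside finally - may hang
}
{code}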



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6980) Automatic cancelling of hanging Ignite operations

2018-04-12 Thread Andrey Gura (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Gura updated IGNITE-6980:

Fix Version/s: (was: 2.5)
   2.6

> Automatic cancelling of hanging Ignite operations
> -
>
> Key: IGNITE-6980
> URL: https://issues.apache.org/jira/browse/IGNITE-6980
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Denis Magda
>Assignee: Aleksey Plekhanov
>Priority: Blocker
>  Labels: iep-7, important
> Fix For: 2.6
>
>
> If an Ignite operation hangs for some reason, due to an internal problem or 
> buggy application code, it needs to eventually fail after a timeout fires.
> An application must not freeze waiting for human intervention if an 
> atomic update fails internally.
> Moreover, I would let all possible operations fail after a timeout fires:
>  - Ignite compute computations (covered by IGNITE-6940).
>  - Ignite services calls.
>  - Atomic cache updates (see devlist discussion - 
> [http://apache-ignite-developers.2346864.n4.nabble.com/Timeouts-in-atomic-cache-td19839.html]).
>  - Transactional cache updates (covered by IGNITE-6894 and IGNITE-6895).
>  - SQL queries.
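For compute, a per-operation timeout is already available through the public API; a minimal example (the {{process()}} closure is a placeholder), with analogous knobs still needed for the other operation types listed above:

{code:java}
// Fails the closure with a timeout instead of hanging forever.
ignite.compute().withTimeout(10_000).run(() -> process());
{code}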



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6324) transactional cache data partially available after crash.

2018-04-12 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6324:
-
Fix Version/s: (was: 2.5)
   2.6

> transactional cache data partially available after crash.
> -
>
> Key: IGNITE-6324
> URL: https://issues.apache.org/jira/browse/IGNITE-6324
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 1.9, 2.1
>Reporter: Stanilovsky Evgeny
>Assignee: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.6
>
> Attachments: InterruptCommitedThreadTest.java
>
>
> If an InterruptedException is raised in client code during PDS store operations, we 
> can obtain an inconsistent cache after restart. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6324) transactional cache data partially available after crash.

2018-04-12 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435706#comment-16435706
 ] 

Alexey Goncharuk commented on IGNITE-6324:
--

The change still needs some improvements and refactoring, moving to 2.6.

> transactional cache data partially available after crash.
> -
>
> Key: IGNITE-6324
> URL: https://issues.apache.org/jira/browse/IGNITE-6324
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 1.9, 2.1
>Reporter: Stanilovsky Evgeny
>Assignee: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.6
>
> Attachments: InterruptCommitedThreadTest.java
>
>
> If an InterruptedException is raised in client code during PDS store operations, we 
> can obtain an inconsistent cache after restart. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6766) RC check automation

2018-04-12 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-6766:
-
Fix Version/s: (was: 2.5)
   2.6

> RC check automation
> ---
>
> Key: IGNITE-6766
> URL: https://issues.apache.org/jira/browse/IGNITE-6766
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Anton Vinogradov
>Assignee: Oleg Ostanin
>Priority: Major
>  Labels: teamcity
> Fix For: 2.6
>
>
> Need to add a task which downloads the RC from 
> https://dist.apache.org/repos/dist/dev/ignite/X.Y.Z-rcK
> and checks that sha1, md5, gpg, and src (license, build) are ok.
> It should also check that all Jira issues are resolved or closed for this 
> version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7004) Ability to disable WAL (Cross-cache tx should be restricted while WAL disabled)

2018-04-12 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-7004:
-
Fix Version/s: (was: 2.5)
   2.6

> Ability to disable WAL (Cross-cache tx should be restricted while WAL 
> disabled)
> ---
>
> Key: IGNITE-7004
> URL: https://issues.apache.org/jira/browse/IGNITE-7004
> Project: Ignite
>  Issue Type: Task
>  Components: persistence
>Reporter: Anton Vinogradov
>Priority: Major
> Fix For: 2.6
>
>
> Cross-cache transactions affecting caches with different WAL modes (e.g. one 
> enabled, another disabled) are not allowed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6894) Hanged Tx monitoring

2018-04-12 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-6894:
-
Fix Version/s: (was: 2.5)
   2.6

> Hanged Tx monitoring
> 
>
> Key: IGNITE-6894
> URL: https://issues.apache.org/jira/browse/IGNITE-6894
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Anton Vinogradov
>Assignee: Dmitriy Sorokin
>Priority: Major
>  Labels: iep-7
> Fix For: 2.6
>
>
> Hanging Transactions not Related to Deadlock
> Description
>  This situation can occur if the user explicitly marks up the transaction (esp. 
> PESSIMISTIC REPEATABLE_READ) and, for example, calls a remote service (which 
> may be unresponsive) after acquiring some locks. All other transactions 
> depending on the same keys will hang.
> Detection and Solution
>  This most likely cannot be resolved automatically other than rolling back the TX on 
> timeout and releasing all the locks acquired so far. Such TXs can also be 
> rolled back from Web Console as described above.
>  If a transaction has been rolled back on timeout or via the UI, then any further 
> action in the transaction, e.g. lock acquisition or a commit attempt, should 
> throw an exception.
> Report
> Management tools (e.g. Web Console) should provide the ability to roll back any 
> transaction via the UI.
>  Long-running transactions should be reported to the logs. The log record should 
> contain: near nodes, transaction IDs, cache names, keys (limited to several 
> tens), etc. (?)
> Also there should be a screen in Web Console that lists all ongoing 
> transactions in the cluster, including the info as above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6300) BinaryObject's set size estimator

2018-04-12 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-6300:
-
Fix Version/s: (was: 2.5)
   2.6

> BinaryObject's set size estimator
> -
>
> Key: IGNITE-6300
> URL: https://issues.apache.org/jira/browse/IGNITE-6300
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Anton Vinogradov
>Assignee: Dmitriy Sorokin
>Priority: Major
> Fix For: 2.6
>
>
> Need to provide an API to estimate memory requirements for any data model.
> For example:
> 1) You have classes A, B and C with known fields and data distribution over 
> these fields.
> 2) You know that you have to keep 1M of A, 2M of B and 45K of C.
> 3) BinarySizeEstimator should return the expected memory consumption for the 
> actual Ignite version without starting a node.
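A hypothetical shape of such an estimator API (no such class exists; the names simply follow the description above):

{code:java}
// Sketch only: estimate memory for 1M of A, 2M of B and 45K of C
// without starting a node.
BinarySizeEstimator estimator = new BinarySizeEstimator();

estimator.addType(A.class, 1_000_000);
estimator.addType(B.class, 2_000_000);
estimator.addType(C.class, 45_000);

long expectedBytes = estimator.estimate();
{code}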



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6903) Implement new JMX metrics for Indexing

2018-04-12 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-6903:
-
Fix Version/s: (was: 2.5)
   2.6

> Implement new JMX metrics for Indexing
> --
>
> Key: IGNITE-6903
> URL: https://issues.apache.org/jira/browse/IGNITE-6903
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql
>Reporter: Anton Vinogradov
>Priority: Critical
>  Labels: iep-6, important
> Fix For: 2.6
>
>
> These additional metrics and methods should be implemented:
> - Space occupied by indexes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6895) TX deadlock monitoring

2018-04-12 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-6895:
-
Fix Version/s: (was: 2.5)
   2.6

> TX deadlock monitoring
> --
>
> Key: IGNITE-6895
> URL: https://issues.apache.org/jira/browse/IGNITE-6895
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Anton Vinogradov
>Assignee: Dmitriy Sorokin
>Priority: Major
>  Labels: iep-7
> Fix For: 2.6
>
>
> Deadlocks with Cache Transactions
> Description
> Deadlocks of this type are possible if the user locks 2 or more keys within 2 or 
> more transactions in different orders (this does not apply to OPTIMISTIC 
> SERIALIZABLE transactions, as they are capable of detecting a deadlock and choosing 
> the winning tx). Currently, Ignite can detect deadlocked transactions, but this 
> procedure is started only for transactions that have a timeout set explicitly 
> or the default timeout in the configuration set to a value greater than 0.
> Detection and Solution
> Each NEAR node should periodically (need a new config property?) scan the list 
> of local transactions and initiate the same procedure as we have now for 
> timed-out transactions. If a deadlock is found, it should be reported to the logs. The log 
> record should contain: near nodes, transaction IDs, cache names, and keys 
> (limited to several tens) involved in the deadlock. The user should have the ability 
> to configure the default behavior - REPORT_ONLY, ROLLBACK (any more?) - or manually 
> roll back a selected transaction through Web Console or Visor.
> Report
> If a deadlock is found, it should be reported to the logs. The log record should contain: 
> near nodes, transaction IDs, cache names, and keys (limited to several tens) 
> involved in the deadlock.
> Also there should be a screen in Web Console that lists all ongoing 
> transactions in the cluster, including the following info:
> - Near node
> - Start time
> - DHT nodes
> - Pending Locks (by request)
> Web Console should provide the ability to roll back any transaction via the UI.
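As the description notes, deadlock detection today starts only for transactions with a timeout; a minimal way to opt in through the public API (sketch, assuming an {{ignite}} instance and a transactional {{cache}}):

{code:java}
// Deadlock detection kicks in once this timeout expires
// (PESSIMISTIC / REPEATABLE_READ, 5 s timeout, unknown tx size).
try (Transaction tx = ignite.transactions().txStart(
    TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ, 5_000, 0)) {
    cache.put(key1, val1);
    cache.put(key2, val2);

    tx.commit();
}
{code}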



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6727) Ignite assembly should not require ignite-tools installation

2018-04-12 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-6727:
-
Fix Version/s: (was: 2.5)
   2.6

> Ignite assembly should not require ignite-tools installation
> 
>
> Key: IGNITE-6727
> URL: https://issues.apache.org/jira/browse/IGNITE-6727
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Anton Vinogradov
>Assignee: Oleg Ostanin
>Priority: Major
> Fix For: 2.6
>
>
> Need to research how to solve this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7005) Ability to disable WAL (Recoverable case)

2018-04-12 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-7005:
-
Fix Version/s: (was: 2.5)
   2.6

> Ability to disable WAL (Recoverable case)
> -
>
> Key: IGNITE-7005
> URL: https://issues.apache.org/jira/browse/IGNITE-7005
> Project: Ignite
>  Issue Type: Task
>  Components: persistence
>Reporter: Anton Vinogradov
>Priority: Major
> Fix For: 2.6
>
>
> In addition to the non-recoverable case (IGNITE-7003):
> On WAL disabling we should (on each node)
> - trigger exchange to guarantee a consistent state
> - schedule a new checkpoint. This checkpoint should be recorded to a special 
> place (temporary checkpoint location), to prevent damage to the latest one.
> All new checkpoints should update the temporary checkpoint.
> On WAL enabling we should (on each node), after all nodes have reported that 
> checkpoints finished, and before enabling proxies:
> - merge the temp checkpoint with the stable one (scheduled before WAL disabling)
> - clean the WAL
> On any failure during loading (while WAL is disabled or being enabled) we should be 
> able to reactivate the cluster with
> - data from the original checkpoints & WAL for affected caches
> - the latest state for non-affected caches
> Failover:
> Any topology change should be covered (while WAL is disabled or being enabled)
> - Node(s) Left (inc. coordinator)
> - Node(s) Join



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-5798) Logging Ignite configuration at startup

2018-04-12 Thread Vyacheslav Daradur (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Daradur resolved IGNITE-5798.

Resolution: Won't Fix  (was: Fixed)

Resolution is changed to "Won't Fix" because the feature was committed as part of 
the implementation of "IEP-4 Baseline topology for persistent caches (Phase 
1)" ([commit|https://github.com/apache/ignite/commit/6f7aba8526ff01a159b289aff932324a3604c1d8]).

> Logging Ignite configuration at startup
> ---
>
> Key: IGNITE-5798
> URL: https://issues.apache.org/jira/browse/IGNITE-5798
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexandr Kuramshin
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: easyfix, newbie
> Fix For: 2.5
>
>
> I've found that IgniteConfiguration is not logged even with 
> -DIGNITE_QUIET=false.
> When we start Ignite with a path to the XML, or an InputStream, we have to 
> ensure that all configuration options were properly read. We would also 
> like to know the actual values of uninitialized configuration properties (default 
> values), which will be set only after Ignite has started.
> Monitoring tools, like Visor or Web Console, do not show all configuration 
> options. And even if they are updated to show all properties, another tools 
> update will be needed whenever new configuration options appear.
> Logging the IgniteConfiguration at startup makes it possible to ensure that the 
> right grid configuration has been applied and leads to better user support 
> based on log analysis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7809) Ignite PDS 2 & PDS 2 Direct IO: stable failures of IgniteWalFlushDefaultSelfTest

2018-04-12 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-7809:
---
Fix Version/s: (was: 2.5)
   2.6

> Ignite PDS 2 & PDS 2 Direct IO: stable failures of 
> IgniteWalFlushDefaultSelfTest
> 
>
> Key: IGNITE-7809
> URL: https://issues.apache.org/jira/browse/IGNITE-7809
> Project: Ignite
>  Issue Type: Task
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Dmitriy Pavlov
>Assignee: Ilya Lantukh
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Probably after the last WAL default changes ('IGNITE-7594 Fixed performance drop 
> after WAL optimization for FSYNC'), 2 tests in 2 build configs began to fail:
>Ignite PDS 2 (Direct IO) [ tests 2 ]  
>  IgnitePdsNativeIoTestSuite2: 
> IgniteWalFlushDefaultSelfTest.testFailAfterStart (fail rate 13,0%) 
>  IgnitePdsNativeIoTestSuite2: 
> IgniteWalFlushDefaultSelfTest.testFailWhileStart (fail rate 13,0%) 
>Ignite PDS 2 [ tests 2 ]  
>  IgnitePdsTestSuite2: IgniteWalFlushDefaultSelfTest.testFailAfterStart 
> (fail rate 8,4%) 
>  IgnitePdsTestSuite2: IgniteWalFlushDefaultSelfTest.testFailWhileStart 
> (fail rate 8,4%) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-7972) NPE in TTL manager.

2018-04-12 Thread Andrew Mashenkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435676#comment-16435676
 ] 

Andrew Mashenkov edited comment on IGNITE-7972 at 4/12/18 2:35 PM:
---

GridCacheUtils.unwindEvicts() takes cache contexts from the SharedContext and tries 
to evict entries from known caches.

A possible issue here is that the cache context was published to the SharedContext before 
its managers (incl. TtlManager) had been started.
 There is a guarantee that if cacheContext.started() == true, then all its 
managers are started as well.

It seems GridCacheUtils.unwindEvicts() should just check whether the cache context is 
started.


was (Author: amashenkov):
GridCacheUtils.unwindEvicts() takes cache contexts from the SharedContext and tries 
to evict entries from known caches.

A possible issue here is that the cache context is published to the SharedContext before 
its managers (incl. TtlManager) have started.
There is a guarantee that if cacheContext.started() == true, then all its 
managers are started as well.

It seems GridCacheUtils.unwindEvicts() should just check whether the cache context is 
started.

> NPE in TTL manager.
> ---
>
> Key: IGNITE-7972
> URL: https://issues.apache.org/jira/browse/IGNITE-7972
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
> Fix For: 2.5
>
> Attachments: npe.log
>
>
> TTL manager can try to evict expired entries on a cache that wasn't initialized 
> yet due to a race.
> This leads to an NPE in the unwindEvicts method.
> PFA stacktrace.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7972) NPE in TTL manager.

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435689#comment-16435689
 ] 

ASF GitHub Bot commented on IGNITE-7972:


GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/3810

IGNITE-7972: Fixed NPE in TTL manager on unwindEvicts.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7972

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3810.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3810






> NPE in TTL manager.
> ---
>
> Key: IGNITE-7972
> URL: https://issues.apache.org/jira/browse/IGNITE-7972
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
> Fix For: 2.5
>
> Attachments: npe.log
>
>
> TTL manager can try to evict expired entries on a cache that wasn't initialized 
> yet due to a race.
> This leads to an NPE in the unwindEvicts method.
> PFA stacktrace.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7972) NPE in TTL manager.

2018-04-12 Thread Andrew Mashenkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-7972:
-
Fix Version/s: 2.5

> NPE in TTL manager.
> ---
>
> Key: IGNITE-7972
> URL: https://issues.apache.org/jira/browse/IGNITE-7972
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
> Fix For: 2.5
>
> Attachments: npe.log
>
>
> TTL manager can try to evict expired entries on a cache that wasn't initialized 
> yet due to a race.
> This leads to an NPE in the unwindEvicts method.
> PFA stacktrace.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5179) Inconsistent return type in javadoc of GridSecurityProcessor

2018-04-12 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435678#comment-16435678
 ] 

Dmitriy Pavlov commented on IGNITE-5179:


Hi [~vinx13], I see your PR is closed. If you don't mind, I'll change the ticket 
status to Open again.

> Inconsistent return type in javadoc of GridSecurityProcessor
> 
>
> Key: IGNITE-5179
> URL: https://issues.apache.org/jira/browse/IGNITE-5179
> Project: Ignite
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0
>Reporter: Wuwei Lin
>Assignee: Wuwei Lin
>Priority: Trivial
>
> In {{GridSecurityProcessor}}, the return types of methods {{authenticate}} 
> and {{authenticateNode}} are {{SecurityContext}}, but documented as returning 
> {{boolean}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8134) Services can't be deployed on servers outside of baseline topology

2018-04-12 Thread Denis Mekhanikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Mekhanikov reassigned IGNITE-8134:


Assignee: Denis Mekhanikov  (was: Stanislav Lukyanov)

> Services can't be deployed on servers outside of baseline topology
> --
>
> Key: IGNITE-8134
> URL: https://issues.apache.org/jira/browse/IGNITE-8134
> Project: Ignite
>  Issue Type: Bug
>  Components: managed services, persistence
>Reporter: Stanislav Lukyanov
>Assignee: Denis Mekhanikov
>Priority: Major
> Fix For: 2.5
>
>
> If a node is not a part of the baseline topology, the services will never be 
> deployed on it. In particular, if that node calls a synchronous deploy* 
> method, the method will hang.
>  After the node is added to the baseline, all previously initiated 
> deployments succeed (and deploy* methods return).
> It seems that the issue is with the continuous query started by the 
> GridServiceProcessor on the ignite-sys-cache.
> Example:
>  =
> {code}
> public class BltServicesBug {
>     public static void main(String[] args) {
>         // start one node
>         IgniteConfiguration cfg1 = new IgniteConfiguration()
>             .setIgniteInstanceName("node1")
>             .setDataStorageConfiguration(
>                 new DataStorageConfiguration()
>                     .setDefaultDataRegionConfiguration(
>                         new DataRegionConfiguration()
>                             .setPersistenceEnabled(true)
>                     )
>             );
>         try (Ignite ignite1 = Ignition.start(cfg1)) {
>             // activate and set baseline topology
>             ignite1.cluster().active(true);
>             // start another node
>             IgniteConfiguration cfg2 = new IgniteConfiguration(cfg1)
>                 .setIgniteInstanceName("node2");
>             try (Ignite ignite2 = Ignition.start(cfg2)) {
>                 // try to deploy a service;
>                 // this call hangs until the second node is added to the BLT
>                 // (e.g. externally via control.sh)
>                 ignite2.services().deployNodeSingleton("myService", new MyServiceImpl());
>                 System.out.println("> Deployed");
>             }
>         }
>     }
>     private static class MyServiceImpl implements Service {
>         @Override public void cancel(ServiceContext ctx) { }
>         @Override public void init(ServiceContext ctx) { }
>         @Override public void execute(ServiceContext ctx) { }
>     }
> }
> {code}
>  =



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7972) NPE in TTL manager.

2018-04-12 Thread Andrew Mashenkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435676#comment-16435676
 ] 

Andrew Mashenkov commented on IGNITE-7972:
--

GridCacheUtils.unwindEvicts() takes cache contexts from the SharedContext and tries 
to evict entries from known caches.

A possible issue here is that the cache context is published to the SharedContext before 
its managers (incl. TtlManager) have started.
There is a guarantee that if cacheContext.started() == true, then all its 
managers are started as well.

It seems GridCacheUtils.unwindEvicts() should just check whether the cache context is 
started.
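A sketch of that check (simplified, partly hypothetical names for the internal API; illustration only):

{code:java}
// Proposed guard: skip caches whose context is not fully started yet,
// so TtlManager is never touched before its initialization.
for (GridCacheContext<?, ?> cctx : sharedCtx.cacheContexts()) {
    if (!cctx.started())
        continue; // managers, incl. TtlManager, may not be ready yet

    cctx.ttl().expire(); // safe: a started context has all managers started
}
{code}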

> NPE in TTL manager.
> ---
>
> Key: IGNITE-7972
> URL: https://issues.apache.org/jira/browse/IGNITE-7972
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
> Attachments: npe.log
>
>
> TTL manager can try to evict expired entries on a cache that wasn't initialized 
> yet due to a race.
> This leads to an NPE in the unwindEvicts method.
> PFA stacktrace.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8017) Disable WAL during initial preloading

2018-04-12 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435672#comment-16435672
 ] 

Alexey Goncharuk commented on IGNITE-8017:
--

Ilya,

1. Please check the case when WAL was disabled for rebalancing, then the topology 
changes and the node is not going to rebalance anymore. You listen on the rebalance 
future and enable WAL only if the future succeeds, however I am not sure whether another 
rebalancing session is triggered in this case.
2. We need to persist the locally disabled WAL state because checkpoints are still 
running and the local storage for a WAL-disabled cache group may be corrupted. If 
such a situation happens, we need to clean up the corresponding cache group 
storages the same way as a global WAL disable does. Please add a corresponding 
test. 
3. When WAL is re-enabled, we need to enable WAL first and then trigger a 
checkpoint, not in reverse order (see the sketch after this list).
4. There may be a race in the {{onGroupRebalanceFinished}} method - we can own a 
partition that we did not rebalance. I think a topology read lock is a proper way 
to synchronize here.
5. Please check that {{changeLocalStatesOnExchangeDone}} is not called while 
holding the checkpoint read lock, otherwise it may deadlock. 
6. Add a specific test checking that WAL will not be disabled on cluster 
nodes when the BLT size is reduced.
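For point 3, a public-API analogue of that ordering (a sketch assuming a persistent cache named "myCache" and a bulk-loading helper {{loadData()}}; the ticket itself concerns the internal per-group rebalancing path):

{code:java}
// Disable WAL for the duration of bulk loading, then re-enable it.
// Re-enabling must come first; the durability checkpoint follows it.
ignite.cluster().disableWal("myCache");

loadData(ignite); // assumed bulk-loading helper

ignite.cluster().enableWal("myCache");
{code}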

> Disable WAL during initial preloading
> -
>
> Key: IGNITE-8017
> URL: https://issues.apache.org/jira/browse/IGNITE-8017
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ilya Lantukh
>Assignee: Ilya Lantukh
>Priority: Major
>  Labels: iep-16
> Fix For: 2.5
>
>
> While handling SupplyMessage, node handles each supplied data entry 
> separately, which causes a WAL record for each entry to be written. It 
> significantly limits preloading speed.
> We can improve rebalancing speed and reduce pressure on disk by disabling WAL 
> until all data is loaded. The disadvantage of this approach is that data 
> might get corrupted if the node crashes - but a node that crashed during preloading 
> has to clear all its data anyway. However, it is important to distinguish 
> situations when new node joined cluster or added to baseline topology (and 
> doesn't hold any data) and when additional partitions got assigned to node 
> after baseline topology changed (in this case node has to keep all data in 
> consistent state).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7659) Reduce multiple Trainer interfaces to one

2018-04-12 Thread Anton Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Dmitriev updated IGNITE-7659:
---
Issue Type: Sub-task  (was: Improvement)
Parent: IGNITE-8232

> Reduce multiple Trainer interfaces to one
> -
>
> Key: IGNITE-7659
> URL: https://issues.apache.org/jira/browse/IGNITE-7659
> Project: Ignite
>  Issue Type: Sub-task
>  Components: ml
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Minor
> Fix For: 2.5
>
>
> Currently there are two `Trainer` interfaces: in package 
> `org.apache.ignite.ml` and `org.apache.ignite.ml.trainers`. We need to use 
> only one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6827) Configurable rollback for long running transactions before partition exchange

2018-04-12 Thread Anton Vinogradov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435592#comment-16435592
 ] 

Anton Vinogradov commented on IGNITE-6827:
--

[~ascherbakov],
I found the correct description at https://issues.apache.org/jira/browse/IGNITE-7910
Could you please update the description of this issue and close the duplicates?

One more thing: could you please make sure that in case of a manual (by an 
administrator) rollback, {{TransactionRollbackException}} will be extended with 
{{caused by TransactionInterruptedException}}?


> Configurable rollback for long running transactions before partition exchange
> -
>
> Key: IGNITE-6827
> URL: https://issues.apache.org/jira/browse/IGNITE-6827
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.0
>Reporter: Alexei Scherbakov
>Assignee: Alexei Scherbakov
>Priority: Major
> Fix For: 2.5
>
>
> Currently long-running / buggy user transactions force partition exchange to 
> block on waiting for 
> org.apache.ignite.internal.processors.cache.GridCacheSharedContext#partitionReleaseFuture,
>  preventing all grid progress.
> I suggest introducing a new global flag in TransactionConfiguration, like 
> {{txRollbackTimeoutOnTopologyChange}},
> which will roll back an exchange-blocking transaction after the timeout.
> Still need to think about what to do with other topology locking activities.
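A sketch of how the proposed flag could look in configuration (the property name comes from this ticket and does not exist yet, hence it is commented out):

{code:java}
TransactionConfiguration txCfg = new TransactionConfiguration();

txCfg.setDefaultTxTimeout(60_000);

// Proposed (hypothetical) knob: roll back transactions that block
// partition exchange once the timeout elapses.
// txCfg.setTxRollbackTimeoutOnTopologyChange(10_000);

IgniteConfiguration cfg = new IgniteConfiguration().setTransactionConfiguration(txCfg);
{code}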



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7547) Failing to deploy service created by Proxy.newProxyInstance() on multiple nodes

2018-04-12 Thread Ilya Kasnacheev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-7547:
---

Assignee: (was: Ilya Kasnacheev)

> Failing to deploy service created by Proxy.newProxyInstance() on multiple 
> nodes
> ---
>
> Key: IGNITE-7547
> URL: https://issues.apache.org/jira/browse/IGNITE-7547
> Project: Ignite
>  Issue Type: Bug
>  Components: compute
>Affects Versions: 2.4
>Reporter: Ilya Kasnacheev
>Priority: Major
>
> When a new node comes with a service which is already defined in the cluster 
> (by name), the following check is made:
> deployed.configuration().equalsIgnoreNodeFilter(newCfg)
> It checks for several parameters, including Service's class.equals().
> If a normal class is used, it will work. However, sometimes the Service 
> implementation is created with java.lang.reflect.Proxy.newProxyInstance().
> This method creates new classes on demand. They will have names like 
> $ProxyNN, where NN is an ordinal which cannot be relied on. On different nodes 
> the ordering of proxies will be different. This means that equality for these 
> classes cannot be relied on.
> And indeed it causes problems, as follows
> {code:java}
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to deploy 
> service (service already exists with different configuration) 
> [deployed=LazyServiceConfiguration [srvcClsName=com.sun.proxy.$Proxy0, 
> svcCls=, nodeFilterCls=], new=LazyServiceConfiguration 
> [srvcClsName=com.sun.proxy.$Proxy1, svcCls=$Proxy1, nodeFilterCls=]]
>     at 
> org.apache.ignite.internal.processors.service.GridServiceProcessor.writeServiceToCache(GridServiceProcessor.java:689)
>     at 
> org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:590){code}
> My proposal follows: we should check whether both classes are proxy classes 
> (Proxy.isProxyClass()) before comparing them. If both are, consider them 
> equal. I don't think we can check more.
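A minimal sketch of the proposed comparison rule (pure JDK; the surrounding configuration-equality code is omitted):

{code:java}
import java.lang.reflect.Proxy;

// $ProxyNN names depend on creation order and differ across nodes, so two
// dynamic proxy classes are treated as equal regardless of their names.
static boolean serviceClassesMatch(Class<?> c1, Class<?> c2) {
    if (Proxy.isProxyClass(c1) && Proxy.isProxyClass(c2))
        return true;

    return c1.equals(c2);
}
{code}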



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-04-12 Thread Ilya Kasnacheev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-7165:
---

Assignee: (was: Ilya Kasnacheev)

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Priority: Critical
>
> Re-balancing is canceled if a client node joins. Re-balancing can take hours, 
> and each time a client node joins it starts again:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, partitionsCount=11, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=7c286481-7638-49e4-8c68-fa6aa65d8b76, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> so in clusters with a large amount of data and frequent client leave/join 
> events this means that a new server will never receive its partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8233) KNN and SVM algorithms don't work when partition doesn't contain data

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435574#comment-16435574
 ] 

ASF GitHub Bot commented on IGNITE-8233:


GitHub user dmitrievanthony opened a pull request:

https://github.com/apache/ignite/pull/3807

IGNITE-8233 KNN and SVM algorithms don't work when partition doesn't 
contain data.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8233

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3807.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3807


commit 14a7357afc0f33b07f5d4f56d6081222ec5bb437
Author: Anton Dmitriev 
Date:   2018-04-12T13:35:59Z

IGNITE-8233 Add protection of dataset compute method from empty data.

commit c63105c2cd4d4d8829ff300eaa7d11ed6620ebdb
Author: Anton Dmitriev 
Date:   2018-04-12T13:44:55Z

IGNITE-8233 Fix tests after adding protection of dataset compute method
from empty data.




> KNN and SVM algorithms don't work when partition doesn't contain data
> -
>
> Key: IGNITE-8233
> URL: https://issues.apache.org/jira/browse/IGNITE-8233
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Affects Versions: 2.5
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Major
> Fix For: 2.5
>
>
> KNN and SVM algorithms are implemented with the assumption that partition data 
> won't be null:
> {code:java}
> public LabeledDataset(double[][] mtx, double[] lbs, String[] featureNames, 
> boolean isDistributed) {
> super();
> assert mtx != null;
> assert lbs != null;
> {code}
> Currently this is a wrong assumption, so we need to either update the dataset to 
> uphold this assumption or update these algorithms.
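A hypothetical per-partition step illustrating the fix direction (names simplified; the actual dataset API differs):

{code:java}
// Tolerate empty partitions instead of failing on the non-null assertion.
static double[] computeOnPartition(LabeledDataset data) {
    if (data == null || data.rowSize() == 0)
        return null; // this partition holds no data - skip it

    return trainOn(data); // assumed training routine
}
{code}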



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7165) Re-balancing is cancelled if client node joins

2018-04-12 Thread Ilya Kasnacheev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev updated IGNITE-7165:

Fix Version/s: (was: 2.5)

> Re-balancing is cancelled if client node joins
> --
>
> Key: IGNITE-7165
> URL: https://issues.apache.org/jira/browse/IGNITE-7165
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Cherkasov
>Assignee: Ilya Kasnacheev
>Priority: Critical
>
> Re-balancing is canceled if a client node joins. Re-balancing can take hours, 
> and each time a client node joins it starts again:
> [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Added new node to topology: TcpDiscoveryNode 
> [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, 
> /172.31.16.213:0], discPort=0, order=36, intOrder=24, 
> lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, 
> isClient=true]
> [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager]
>  Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, 
> customEvt=null, allowMerge=true]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture]
>  Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> err=null]
> [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished 
> exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> crd=false]
> [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
> [topVer=36, minorTopVer=0], evt=NODE_JOINED, 
> node=979cf868-1c37-424a-9ad1-12db501f32ef]
> [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion 
> [topVer=35, minorTopVer=0]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing scheduled [order=[statementp]]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager]
>  Rebalancing started [top=null, evt=NODE_JOINED, 
> node=a8be3c14-9add-48c3-b099-3fd304cfdbf4]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, partitionsCount=11, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander]
>  Starting rebalancing [mode=ASYNC, 
> fromNode=7c286481-7638-49e4-8c68-fa6aa65d8b76, partitionsCount=18, 
> topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], 
> updateSeq=-1754630006]
> so in clusters with a large amount of data and frequent client leave/join
> events, this means that a new server will never receive its partitions.
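> A minimal sketch of the triggering event seen in the logs above (isClient=true);
> the configuration details are illustrative, not taken from the reporter's setup:
> {code:java}
> // Starting any client node bumps the topology version and runs a new
> // partition map exchange on the servers, which cancelled the rebalance.
> IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
> 
> try (Ignite client = Ignition.start(cfg)) {
>     // server logs: "Added new node to topology ... isClient=true"
> }
> {code}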



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7592) Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity assignment even after explicit rebalance is called on every node

2018-04-12 Thread Maxim Muzafarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-7592:

Fix Version/s: 2.6

> Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity 
> assignment even after explicit rebalance is called on every node
> --
>
> Key: IGNITE-7592
> URL: https://issues.apache.org/jira/browse/IGNITE-7592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Ilya Lantukh
>Assignee: Maxim Muzafarov
>Priority: Major
> Fix For: 2.6
>
>
> Reproducer:
> {noformat}
> startGrids(NODE_COUNT);
> IgniteEx ig = grid(0);
> ig.cluster().active(true);
> awaitPartitionMapExchange();
> IgniteCache cache =
> ig.createCache(
> new CacheConfiguration()
> .setName(CACHE_NAME)
> .setCacheMode(PARTITIONED)
> .setBackups(1)
> .setPartitionLossPolicy(READ_ONLY_SAFE)
> .setReadFromBackup(true)
> .setWriteSynchronizationMode(FULL_SYNC)
> .setRebalanceDelay(-1)
> );
> for (int i = 0; i < NODE_COUNT; i++)
> grid(i).cache(CACHE_NAME).rebalance().get();
> awaitPartitionMapExchange();
> {noformat}
> Sometimes this code will hang on the last awaitPartitionMapExchange(), though
> the probability that it will happen is rather low (<10%).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8129) JDBC: JdbcThinConnectionSSLTest.testDefaultContext

2018-04-12 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8129:

Fix Version/s: 2.5

> JDBC: JdbcThinConnectionSSLTest.testDefaultContext
> --
>
> Key: IGNITE-8129
> URL: https://issues.apache.org/jira/browse/IGNITE-8129
> Project: Ignite
>  Issue Type: Bug
>Reporter: Peter Ivanov
>Assignee: Taras Ledkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> The test fails under strange conditions: it runs successfully if executed by
> the {{mvn test}} command and fails if executed by {{mvn surefire:test}}. It
> seems to be a Maven reactor dependency and/or race condition problem.
> {code}
> [2018-04-04 05:52:26,389][ERROR][main][root] Test failed.
> java.sql.SQLException: Failed to SSL connect to server 
> [url=jdbc:ignite:thin://127.0.0.1:10800]
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinSSLUtil.createSSLSocket(JdbcThinSSLUtil.java:93)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.connect(JdbcThinTcpIo.java:214)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.start(JdbcThinTcpIo.java:156)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.start(JdbcThinTcpIo.java:131)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.ensureConnected(JdbcThinConnection.java:156)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.(JdbcThinConnection.java:145)
>   at 
> org.apache.ignite.IgniteJdbcThinDriver.connect(IgniteJdbcThinDriver.java:157)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
>   at 
> org.apache.ignite.jdbc.thin.JdbcThinConnectionSSLTest.testDefaultContext(JdbcThinConnectionSSLTest.java:187)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2001)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:133)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1916)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection 
> during handshake
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:992)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinSSLUtil.createSSLSocket(JdbcThinSSLUtil.java:88)
>   ... 18 more
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
>   at sun.security.ssl.InputRecord.read(InputRecord.java:505)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
>   ... 22 more
> [08:54:52][org.apache.ignite.jdbc.thin.JdbcThinConnectionSSLTest.testDefaultContext]
>  java.sql.SQLException: Failed to SSL connect to server 
> [url=jdbc:ignite:thin://127.0.0.1:10800]
> [08:54:52]
> [org.apache.ignite.jdbc.thin.JdbcThinConnectionSSLTest.testDefaultContext] 
> java.sql.SQLException: Failed to SSL connect to server 
> [url=jdbc:ignite:thin://127.0.0.1:10800]
>   at 
> org.apache.ignite.jdbc.thin.JdbcThinConnectionSSLTest.testDefaultContext(JdbcThinConnectionSSLTest.java:187)
> Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection 
> during handshake
>   at 
> org.apache.ignite.jdbc.thin.JdbcThinConnectionSSLTest.testDefaultContext(JdbcThinConnectionSSLTest.java:187)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
>   at 
> org.apache.ignite.jdbc.thin.JdbcThinConnectionSSLTest.testDefaultContext(JdbcThinConnectionSSLTest.java:187)
> {code}
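> For reference, this is a sketch of how the thin driver is usually asked to do
> an SSL connect (keystore paths are examples; exact property names per the thin
> JDBC driver docs), while testDefaultContext instead relies on the JVM-wide
> default SSLContext:
> {code:java}
> String url = "jdbc:ignite:thin://127.0.0.1:10800?sslMode=require"
>     + "&sslClientCertificateKeyStoreUrl=client.jks"
>     + "&sslTrustCertificateKeyStoreUrl=trust.jks";
> 
> // Fails with the handshake exception above when the server side
> // closes the connection during the SSL handshake.
> try (Connection conn = DriverManager.getConnection(url)) {
>     // connection is established over SSL
> }
> {code}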



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7659) Reduce multiple Trainer interfaces to one

2018-04-12 Thread Anton Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Dmitriev updated IGNITE-7659:
---
Fix Version/s: 2.5

> Reduce multiple Trainer interfaces to one
> -
>
> Key: IGNITE-7659
> URL: https://issues.apache.org/jira/browse/IGNITE-7659
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Minor
> Fix For: 2.5
>
>
> Currently there are two `Trainer` interfaces: in package 
> `org.apache.ignite.ml` and `org.apache.ignite.ml.trainers`. We need to use 
> only one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-4750) SQL: Support GROUP_CONCAT function

2018-04-12 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435539#comment-16435539
 ] 

Taras Ledkov commented on IGNITE-4750:
--

[~xionghao], thanks for the comment. The described case has been fixed.
[~vozerov], [~al.psc] please review the changes. Tests are OK.

> SQL: Support GROUP_CONCAT function
> --
>
> Key: IGNITE-4750
> URL: https://issues.apache.org/jira/browse/IGNITE-4750
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Denis Magda
>Assignee: Taras Ledkov
>Priority: Major
>  Labels: sql-engine
> Fix For: 2.6
>
>
> The GROUP_CONCAT function is not supported at the moment. It makes sense to
> fill this gap:
> http://apache-ignite-users.70518.x6.nabble.com/GROUP-CONCAT-function-is-unsupported-td10757.html
> Presently the function doc is hidden:
> https://apacheignite-sql.readme.io/docs/group_concat
> Open it up once the ticket is released.
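> Once supported, user code would look roughly like this (a sketch; the table
> and cache are hypothetical, syntax follows the H2 GROUP_CONCAT function):
> {code:java}
> SqlFieldsQuery qry = new SqlFieldsQuery(
>     "SELECT city, GROUP_CONCAT(name SEPARATOR ', ') FROM Person GROUP BY city");
> 
> // Each row: [city, comma-separated list of names in that city].
> List<List<?>> rows = cache.query(qry).getAll();
> {code}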



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8173) ignite.getOrCreateCache(cacheConfig).iterator() method works incorrectly for replicated cache when some data node isn't in baseline

2018-04-12 Thread Andrey Aleksandrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Aleksandrov updated IGNITE-8173:
---
Fix Version/s: 2.5

> ignite.getOrCreateCache(cacheConfig).iterator() method works incorrectly for 
> replicated cache when some data node isn't in baseline
> ---
>
> Key: IGNITE-8173
> URL: https://issues.apache.org/jira/browse/IGNITE-8173
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Andrey Aleksandrov
>Priority: Major
>  Labels: persistence
> Fix For: 2.5
>
> Attachments: StartClientNode.java, StartOneServerAndActivate.java, 
> StartSecondServerNode.java
>
>
> How to reproduce:
> 1) Create a new server node and activate the cluster (run the
> StartOneServerAndActivate.java example).
> 2) Create another server node (run the StartSecondServerNode.java example
> after step 1).
> 3) Create a client node that will create a new replicated cache, put some
> values and try to get the cache iterator (run the StartClientNode.java
> example).
> In this case you should see a log like this:
> Elements in cache: 0
>  **
>  **
>  **
>  **
> Elements in cache: 1000
>  **
>  **
>  **
>  **
> Elements in cache: 1000
>  **
>  **
>  **
>  **
> Elements in cache: 0
>  **
>  **
>  **
>  **
> Elements in cache: 0
>  **
>  **
> It means that sometimes we see the cache values and sometimes we do not. This
> happens because the second node isn't in the baseline topology. In this case
> all the data was stored on the first server node, which is expected.
> The problem is that when we ask for the iterator, we choose a node that
> contains the required cache and send it a scan query. At the moment we can
> choose either a node in the baseline topology (with data) or an empty node.
> This behavior should be changed.
> The incorrect logic is located in GridCacheQueryAdapter.java, in the "private
> Collection<ClusterNode> nodes() throws IgniteCheckedException" method:
> {noformat}
> case REPLICATED:
>     if (prj != null || part != null)
>         return nodes(cctx, prj, part);
> 
>     if (cctx.affinityNode())
>         return Collections.singletonList(cctx.localNode());
> 
>     Collection<ClusterNode> affNodes = nodes(cctx, null, null); // HERE WE HAVE BOTH NODES AT affNodes
> 
>     return affNodes.isEmpty() ? affNodes : Collections.singletonList(F.rand(affNodes));
> {noformat}
>  
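> One possible direction (a sketch only, not the actual fix; isInBaseline() is a
> hypothetical helper) is to pick a random node only among candidates that are
> part of the baseline topology and therefore own data:
> {code:java}
> Collection<ClusterNode> affNodes = nodes(cctx, null, null);
> 
> List<ClusterNode> owners = new ArrayList<>();
> 
> for (ClusterNode n : affNodes) {
>     if (isInBaseline(n)) // hypothetical: node belongs to the baseline topology
>         owners.add(n);
> }
> 
> // Fall back to the old behavior only if no baseline candidate was found.
> return owners.isEmpty() ? affNodes : Collections.singletonList(F.rand(owners));
> {code}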



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8173) ignite.getOrCreateCache(cacheConfig).iterator() method works incorrectly for replicated cache when some data node isn't in baseline

2018-04-12 Thread Andrey Aleksandrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Aleksandrov updated IGNITE-8173:
---
Affects Version/s: (was: 2.5)
   2.4

> ignite.getOrCreateCache(cacheConfig).iterator() method works incorrectly for 
> replicated cache when some data node isn't in baseline
> ---
>
> Key: IGNITE-8173
> URL: https://issues.apache.org/jira/browse/IGNITE-8173
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Andrey Aleksandrov
>Priority: Major
>  Labels: persistence
> Fix For: 2.5
>
> Attachments: StartClientNode.java, StartOneServerAndActivate.java, 
> StartSecondServerNode.java
>
>
> How to reproduce:
> 1) Create a new server node and activate the cluster (run the
> StartOneServerAndActivate.java example).
> 2) Create another server node (run the StartSecondServerNode.java example
> after step 1).
> 3) Create a client node that will create a new replicated cache, put some
> values and try to get the cache iterator (run the StartClientNode.java
> example).
> In this case you should see a log like this:
> Elements in cache: 0
>  **
>  **
>  **
>  **
> Elements in cache: 1000
>  **
>  **
>  **
>  **
> Elements in cache: 1000
>  **
>  **
>  **
>  **
> Elements in cache: 0
>  **
>  **
>  **
>  **
> Elements in cache: 0
>  **
>  **
> It means that sometimes we see the cache values and sometimes we do not. This
> happens because the second node isn't in the baseline topology. In this case
> all the data was stored on the first server node, which is expected.
> The problem is that when we ask for the iterator, we choose a node that
> contains the required cache and send it a scan query. At the moment we can
> choose either a node in the baseline topology (with data) or an empty node.
> This behavior should be changed.
> The incorrect logic is located in GridCacheQueryAdapter.java, in the "private
> Collection<ClusterNode> nodes() throws IgniteCheckedException" method:
> {noformat}
> case REPLICATED:
>     if (prj != null || part != null)
>         return nodes(cctx, prj, part);
> 
>     if (cctx.affinityNode())
>         return Collections.singletonList(cctx.localNode());
> 
>     Collection<ClusterNode> affNodes = nodes(cctx, null, null); // HERE WE HAVE BOTH NODES AT affNodes
> 
>     return affNodes.isEmpty() ? affNodes : Collections.singletonList(F.rand(affNodes));
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7918) Huge memory leak when data streamer used together with local cache

2018-04-12 Thread Andrey Aleksandrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Aleksandrov updated IGNITE-7918:
---
Priority: Major  (was: Blocker)

> Huge memory leak when data streamer used together with local cache
> --
>
> Key: IGNITE-7918
> URL: https://issues.apache.org/jira/browse/IGNITE-7918
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Zbyszek B
>Assignee: Andrey Aleksandrov
>Priority: Major
> Fix For: 2.5
>
> Attachments: Demo.java, MemLeak-Ignite.png, MemLeak-Ignite.txt
>
>
> Dear Igniters,
> We observe a huge memory leak when the data streamer is used together with a
> local cache.
> In the attached demo, the producer creates a local cache with a single binary
> object and passes it to the queue. The consumer picks up the cache from the
> queue, constructs a different binary object from it, adds it to a global
> partitioned cache and destroys the local cache.
> This design causes a significant leak - the whole heap is used within minutes
> (no matter if this is 4G or 24G).
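> A condensed sketch of the pattern described above (all names are illustrative,
> not taken from the attached Demo.java):
> {code:java}
> // Producer: a throwaway LOCAL cache holding one binary object.
> IgniteCache<Integer, BinaryObject> loc = ignite.createCache(
>     new CacheConfiguration<Integer, BinaryObject>("loc-" + id)
>         .setCacheMode(CacheMode.LOCAL));
> 
> loc.put(0, binObj);
> queue.add(loc.getName());
> 
> // Consumer: transform and stream into the global partitioned cache.
> try (IgniteDataStreamer<Integer, BinaryObject> streamer = ignite.dataStreamer("global")) {
>     streamer.addData(key, transform(loc.get(0)));
> }
> 
> ignite.destroyCache(loc.getName()); // cache destroyed, yet heap keeps growing
> {code}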
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8236) Sporadic NPE in IgniteAtomicSequenceExample

2018-04-12 Thread Ivan Artukhov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artukhov updated IGNITE-8236:
--
Priority: Major  (was: Critical)

> Sporadic NPE in IgniteAtomicSequenceExample 
> 
>
> Key: IGNITE-8236
> URL: https://issues.apache.org/jira/browse/IGNITE-8236
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 1.9
>Reporter: Ivan Artukhov
>Priority: Major
> Attachments: IgniteAtomicSequenceExample.1.zip
>
>
> Platforms: Linux, Windows
> Reproducibility: ~30%
> Sometimes the _datastructures.IgniteAtomicSequenceExample_ throws the 
> following exception during run:
> {code}
> [06:26:26,810][SEVERE][utility-#62%null%][GridCacheIoManager] Failed 
> processing message [senderId=71fea304-3e08-4706-880a-7b00bebb280f, 
> msg=GridNearTxPrepareRequest 
> [futId=d889888b261-bc99dff6-1c40-4ab1-90d2-5a1d12832afc, 
> miniId=e889888b261-bc99dff6-1c40-4ab1-90d2-5a1d12832afc, near=false, 
> topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], last=true, 
> lastBackups=null, retVal=false, implicitSingle=false, explicitLock=false, 
> subjId=71fea304-3e08-4706-880a-7b00bebb280f, taskNameHash=0, 
> firstClientReq=false, super=GridDistributedTxPrepareRequest [threadId=134, 
> concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, writeVer=GridCacheVersion 
> [topVer=134994380, time=1523514386721, order=1523514383656, nodeOrder=2], 
> timeout=0, invalidate=false, reads=[], writes=[IgniteTxEntry 
> [key=KeyCacheObjectImpl [val=CacheDataStructuresConfigurationKey [], 
> hasValBytes=true], cacheId=-2100569601, partId=-1, txKey=IgniteTxKey 
> [key=KeyCacheObjectImpl [val=CacheDataStructuresConfigurationKey [], 
> hasValBytes=true], cacheId=-2100569601], val=[op=TRANSFORM, val=null], 
> prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null], 
> entryProcessorsCol=[IgniteBiTuple [val1=AddAtomicProcessor 
> [info=DataStructureInfo [name=example-sequence, type=ATOMIC_SEQ]], 
> val2=[Ljava.lang.Object;@1540f725]], ttl=-1, conflictExpireTime=-1, 
> conflictVer=null, explicitVer=null, dhtVer=null, filters=[], 
> filtersPassed=false, filtersSet=false, entry=GridDhtColocatedCacheEntry 
> [super=GridDhtCacheEntry [rdrs=[], locPart=GridDhtLocalPartition 
> [rmvQueueMaxSize=128, rmvdEntryTtl=1, id=21, 
> map=o.a.i.i.processors.cache.GridCacheConcurrentMapImpl@1ea42fb7, cntr=1, 
> shouldBeRenting=false, state=OWNING, reservations=0, empty=false, 
> createTime=04/12/2018 06:26:18], super=GridDistributedCacheEntry 
> [super=GridCacheMapEntry [key=KeyCacheObjectImpl 
> [val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
> val=CacheObjectImpl [val={example-sequence=DataStructureInfo 
> [name=example-sequence, type=ATOMIC_SEQ]}, hasValBytes=true], 
> startVer=1523514383659, ver=GridCacheVersion [topVer=134994380, 
> time=1523514386783, order=1523514383665, nodeOrder=1], hash=-1424345221, 
> extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc 
> [locs=[GridCacheMvccCandidate [nodeId=15305184-e5a2-4872-a549-d98e0df8b745, 
> ver=GridCacheVersion [topVer=134994380, time=1523514386744, 
> order=1523514383660, nodeOrder=1], threadId=113, id=6, 
> topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], reentry=null, 
> otherNodeId=15305184-e5a2-4872-a549-d98e0df8b745, otherVer=GridCacheVersion 
> [topVer=134994380, time=1523514386744, order=1523514383660, nodeOrder=1], 
> mappedDhtNodes=null, mappedNearNodes=null, ownerVer=GridCacheVersion 
> [topVer=134994380, time=1523514386732, order=1523514383657, nodeOrder=1], 
> serOrder=null, key=KeyCacheObjectImpl 
> [val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
> masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=1|near_local=0|removed=0|read=0,
>  prevVer=null, nextVer=GridCacheVersion [topVer=134994380, 
> time=1523514386789, order=1523514383667, nodeOrder=1]]], rmts=null]], 
> flags=0, prepared=0, locked=false, nodeId=null, locMapped=false, 
> expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0, 
> serReadVer=null, xidVer=null]], dhtVers={IgniteTxKey [key=KeyCacheObjectImpl 
> [val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
> cacheId=-2100569601]=null}, txSize=0, onePhaseCommit=true, sys=true, plc=5, 
> txState=IgniteTxStateImpl [activeCacheIds=GridLongList [idx=1, 
> arr=[-2100569601]], txMap={IgniteTxKey [key=KeyCacheObjectImpl 
> [val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
> cacheId=-2100569601]=IgniteTxEntry [key=KeyCacheObjectImpl 
> [val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
> cacheId=-2100569601, partId=21, txKey=IgniteTxKey [key=KeyCacheObjectImpl 
> [val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
> cacheId=-2100569601], val=[op=TRANSFORM, val=null], 

[jira] [Commented] (IGNITE-7918) Huge memory leak when data streamer used together with local cache

2018-04-12 Thread Andrey Aleksandrov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435481#comment-16435481
 ] 

Andrey Aleksandrov commented on IGNITE-7918:


[~dpavlov] Could you please take a look and commit if everything is ok?

> Huge memory leak when data streamer used together with local cache
> --
>
> Key: IGNITE-7918
> URL: https://issues.apache.org/jira/browse/IGNITE-7918
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Zbyszek B
>Assignee: Andrey Aleksandrov
>Priority: Blocker
> Fix For: 2.5
>
> Attachments: Demo.java, MemLeak-Ignite.png, MemLeak-Ignite.txt
>
>
> Dear Igniters,
> We observe a huge memory leak when the data streamer is used together with a
> local cache.
> In the attached demo, the producer creates a local cache with a single binary
> object and passes it to the queue. The consumer picks up the cache from the
> queue, constructs a different binary object from it, adds it to a global
> partitioned cache and destroys the local cache.
> This design causes a significant leak - the whole heap is used within minutes
> (no matter if this is 4G or 24G).
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8232) ML package cleanup for 2.5 release

2018-04-12 Thread Anton Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Dmitriev updated IGNITE-8232:
---
Description: 
To prepare the 2.5 release we need to clean up the existing code.

# Cleanup {{org.apache.ignite.ml.Trainer}} and the corresponding 
{{LinearRegressionQRTrainer}}.
# Cleanup {{org.apache.ignite.ml.trainers.Trainer}} and the corresponding 
{{GroupTrainer}}/{{Metaoptimizer}}.
# Cleanup {{Estimators}} class.
# Use {{SimpleLabeledDatasetData}} instead of {{LinSysPartitionDataOnHeap}}.
# Cleanup {{GradientFunction}}, {{GradientDescent}}, 
{{LeastSquaresGradientFunction}}, etc. in {{optimization}} package.

  was:
To prepare the 2.5 release we need to clean up the existing code.

# Cleanup {{org.apache.ignite.ml.Trainer}} and the corresponding 
{{LinearRegressionQRTrainer}}.
# Cleanup {{org.apache.ignite.ml.trainers.Trainer}} and the corresponding 
{{GroupTrainer}}/{{Metaoptimizer}}.
# Use {{EmptyContext}} class instead of {{KNNPartitionContext}}, 
{{LabelPartitionContext}}, and {{SVMPartitionContext}}.
# Cleanup {{Estimators}} class.
# Add {{serialVersionUID}} to {{KNNClassificationModel}}, 
{{KNNRegressionModel}}, {{SVMLinearMultiClassClassificationModel}} classes.
# Use {{SimpleLabeledDatasetData}} instead of {{LinSysPartitionDataOnHeap}}.
# Cleanup {{GradientFunction}}, {{GradientDescent}}, 
{{LeastSquaresGradientFunction}}, etc. in {{optimization}} package.
# Update {{KNNRegressionTest}}, {{KNNRegressionTest}}, {{KNNRegressionTest}} to 
run without Ignite.


> ML package cleanup for 2.5 release
> --
>
> Key: IGNITE-8232
> URL: https://issues.apache.org/jira/browse/IGNITE-8232
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.5
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Major
>
> To prepare the 2.5 release we need to clean up the existing code.
> # Cleanup {{org.apache.ignite.ml.Trainer}} and the corresponding 
> {{LinearRegressionQRTrainer}}.
> # Cleanup {{org.apache.ignite.ml.trainers.Trainer}} and the corresponding 
> {{GroupTrainer}}/{{Metaoptimizer}}.
> # Cleanup {{Estimators}} class.
> # Use {{SimpleLabeledDatasetData}} instead of {{LinSysPartitionDataOnHeap}}.
> # Cleanup {{GradientFunction}}, {{GradientDescent}}, 
> {{LeastSquaresGradientFunction}}, etc. in {{optimization}} package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8232) ML package cleanup for 2.5 release

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435476#comment-16435476
 ] 

ASF GitHub Bot commented on IGNITE-8232:


GitHub user dmitrievanthony opened a pull request:

https://github.com/apache/ignite/pull/3806

IGNITE-8232 ML package cleanup for 2.5 release.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8232

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3806.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3806


commit 3ea8e3df0f7b72ea6f58e2a47d1677dd6863d077
Author: Anton Dmitriev 
Date:   2018-04-12T12:30:08Z

IGNITE-8232 Remove all trainers except DatasetTrainer.




> ML package cleanup for 2.5 release
> --
>
> Key: IGNITE-8232
> URL: https://issues.apache.org/jira/browse/IGNITE-8232
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.5
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Major
>
> To prepare the 2.5 release we need to clean up the existing code.
> # Cleanup {{org.apache.ignite.ml.Trainer}} and the corresponding 
> {{LinearRegressionQRTrainer}}.
> # Cleanup {{org.apache.ignite.ml.trainers.Trainer}} and the corresponding 
> {{GroupTrainer}}/{{Metaoptimizer}}.
> # Use {{EmptyContext}} class instead of {{KNNPartitionContext}}, 
> {{LabelPartitionContext}}, and {{SVMPartitionContext}}.
> # Cleanup {{Estimators}} class.
> # Add {{serialVersionUID}} to {{KNNClassificationModel}}, 
> {{KNNRegressionModel}}, {{SVMLinearMultiClassClassificationModel}} classes.
> # Use {{SimpleLabeledDatasetData}} instead of {{LinSysPartitionDataOnHeap}}.
> # Cleanup {{GradientFunction}}, {{GradientDescent}}, 
> {{LeastSquaresGradientFunction}}, etc. in {{optimization}} package.
> # Update {{KNNRegressionTest}}, {{KNNRegressionTest}}, {{KNNRegressionTest}} 
> to run without Ignite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8237) Ignite blocks on SecurityException in exchange-worker due to unauthorised on-heap cache configuration

2018-04-12 Thread Alexey Kukushkin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kukushkin updated IGNITE-8237:
-
Summary: Ignite blocks on SecurityException in exchange-worker due to 
unauthorised on-heap cache configuration   (was: Ignite blocks on 
SecurityException in exchange-worker due to unauthorised off-heap cache 
configuration )

> Ignite blocks on SecurityException in exchange-worker due to unauthorised 
> on-heap cache configuration 
> --
>
> Key: IGNITE-8237
> URL: https://issues.apache.org/jira/browse/IGNITE-8237
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Alexey Kukushkin
>Assignee: Alexey Kukushkin
>Priority: Blocker
>
> Ignite blocks on SecurityException in exchange-worker due to unauthorised 
> off-heap cache configuration. Consider moving IGNITE_DISABLE_OFFHEAP_CACHE 
> system property check to a more appropriate place to avoid blocking Ignite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8237) Ignite blocks on SecurityException in exchange-worker due to unauthorised on-heap cache configuration

2018-04-12 Thread Alexey Kukushkin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kukushkin updated IGNITE-8237:
-
Description: Ignite blocks on SecurityException in exchange-worker due to 
unauthorised on-heap cache configuration. Consider moving 
IGNITE_DISABLE_ONHEAP_CACHE system property check to a more appropriate place 
to avoid blocking Ignite.  (was: Ignite blocks on SecurityException in 
exchange-worker due to unauthorised off-heap cache configuration. Consider 
moving IGNITE_DISABLE_OFFHEAP_CACHE system property check to a more appropriate 
place to avoid blocking Ignite.)

> Ignite blocks on SecurityException in exchange-worker due to unauthorised 
> on-heap cache configuration 
> --
>
> Key: IGNITE-8237
> URL: https://issues.apache.org/jira/browse/IGNITE-8237
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Alexey Kukushkin
>Assignee: Alexey Kukushkin
>Priority: Blocker
>
> Ignite blocks on SecurityException in exchange-worker due to unauthorised 
> on-heap cache configuration. Consider moving IGNITE_DISABLE_ONHEAP_CACHE 
> system property check to a more appropriate place to avoid blocking Ignite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7972) NPE in TTL manager.

2018-04-12 Thread Andrew Mashenkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov reassigned IGNITE-7972:


Assignee: Andrew Mashenkov

> NPE in TTL manager.
> ---
>
> Key: IGNITE-7972
> URL: https://issues.apache.org/jira/browse/IGNITE-7972
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
> Attachments: npe.log
>
>
> The TTL manager can try to evict expired entries on a cache that wasn't yet
> initialized, due to a race.
> This leads to an NPE in the unwindEvicts method.
> Please find the attached stacktrace.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8237) Ignite blocks on SecurityException in exchange-worker due to unauthorised off-heap cache configuration

2018-04-12 Thread Alexey Kukushkin (JIRA)
Alexey Kukushkin created IGNITE-8237:


 Summary: Ignite blocks on SecurityException in exchange-worker due 
to unauthorised off-heap cache configuration 
 Key: IGNITE-8237
 URL: https://issues.apache.org/jira/browse/IGNITE-8237
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Alexey Kukushkin
Assignee: Alexey Kukushkin


Ignite blocks on SecurityException in exchange-worker due to unauthorised 
off-heap cache configuration. Consider moving IGNITE_DISABLE_OFFHEAP_CACHE 
system property check to a more appropriate place to avoid blocking Ignite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8122) Partition state restored from WAL may be lost if no checkpoints are done

2018-04-12 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8122:
-
Priority: Major  (was: Minor)

> Partition state restored from WAL may be lost if no checkpoints are done
> 
>
> Key: IGNITE-8122
> URL: https://issues.apache.org/jira/browse/IGNITE-8122
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Alexey Goncharuk
>Priority: Major
> Fix For: 2.5
>
>
> Problem:
> 1) Start several nodes with enabled persistence.
> 2) Make sure that all partitions for 'ignite-sys-cache' have status OWN on 
> all nodes and appropriate PartitionMetaStateRecord record is logged to WAL
> 3) Stop all nodes and start them again, then activate the cluster. The
> checkpoint for 'ignite-sys-cache' is empty, because there was no data in the
> cache.
> 4) The state of all partitions will be restored to OWN
> (GridCacheDatabaseSharedManager#restoreState) from WAL, but not recorded to
> page memory, because there were no checkpoints and no data in the cache. The
> store manager doesn't have any allocated pages (including meta) for such
> partitions.
> 5) On exchange completion we try to restore partition states
> (initPartitionsWhenAffinityReady) on all nodes. Because page memory is empty,
> the states of all partitions will be restored to MOVING by default.
> 6) All nodes start to rebalance partitions from each other, and this process
> becomes unpredictable because we're trying to rebalance from MOVING partitions.
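> A conceptual sketch of the gap (all names are hypothetical, not the actual
> Ignite internals): the state recovered from WAL never reaches page memory, so
> initialization falls back to MOVING whenever no partition meta page exists:
> {code:java}
> GridDhtPartitionState fromWal = walRestoredStates.get(partId);  // OWNING
> GridDhtPartitionState fromPageMem = readPartitionMeta(partId);  // null: no pages
> 
> // The WAL-restored state is lost because only page memory is consulted here.
> part.init(fromPageMem != null ? fromPageMem : GridDhtPartitionState.MOVING);
> {code}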



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8233) KNN and SVM algorithms don't work when partition doesn't contain data

2018-04-12 Thread Anton Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Dmitriev updated IGNITE-8233:
---
Affects Version/s: (was: 2.4)
   2.5

> KNN and SVM algorithms don't work when partition doesn't contain data
> -
>
> Key: IGNITE-8233
> URL: https://issues.apache.org/jira/browse/IGNITE-8233
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Affects Versions: 2.5
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Major
> Fix For: 2.5
>
>
> KNN and SVM algorithms are implemented with the assumption that the partition
> data won't be null:
> {code:java}
> public LabeledDataset(double[][] mtx, double[] lbs, String[] featureNames, 
> boolean isDistributed) {
> super();
> assert mtx != null;
> assert lbs != null;
> {code}
> Currently this is a wrong assumption, so we need to either update the dataset
> to support it or update these algorithms.
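> A minimal sketch of one option on the dataset side (the helper name is
> hypothetical): treat a missing partition as an empty dataset instead of
> failing the assertions:
> {code:java}
> static double[][] nonNullMtx(double[][] mtx) {
>     // An empty partition yields an empty matrix rather than a null one.
>     return mtx != null ? mtx : new double[0][0];
> }
> {code}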



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8236) Sporadic NPE in IgniteAtomicSequenceExample

2018-04-12 Thread Ivan Artukhov (JIRA)
Ivan Artukhov created IGNITE-8236:
-

 Summary: Sporadic NPE in IgniteAtomicSequenceExample 
 Key: IGNITE-8236
 URL: https://issues.apache.org/jira/browse/IGNITE-8236
 Project: Ignite
  Issue Type: Bug
  Components: data structures
Affects Versions: 1.9
Reporter: Ivan Artukhov
 Attachments: IgniteAtomicSequenceExample.1.zip

Platforms: Linux, Windows
Reproducibility: ~30%

Sometimes the _datastructures.IgniteAtomicSequenceExample_ throws the following 
exception during run:

{code}
[06:26:26,810][SEVERE][utility-#62%null%][GridCacheIoManager] Failed processing 
message [senderId=71fea304-3e08-4706-880a-7b00bebb280f, 
msg=GridNearTxPrepareRequest 
[futId=d889888b261-bc99dff6-1c40-4ab1-90d2-5a1d12832afc, 
miniId=e889888b261-bc99dff6-1c40-4ab1-90d2-5a1d12832afc, near=false, 
topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], last=true, 
lastBackups=null, retVal=false, implicitSingle=false, explicitLock=false, 
subjId=71fea304-3e08-4706-880a-7b00bebb280f, taskNameHash=0, 
firstClientReq=false, super=GridDistributedTxPrepareRequest [threadId=134, 
concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, writeVer=GridCacheVersion 
[topVer=134994380, time=1523514386721, order=1523514383656, nodeOrder=2], 
timeout=0, invalidate=false, reads=[], writes=[IgniteTxEntry 
[key=KeyCacheObjectImpl [val=CacheDataStructuresConfigurationKey [], 
hasValBytes=true], cacheId=-2100569601, partId=-1, txKey=IgniteTxKey 
[key=KeyCacheObjectImpl [val=CacheDataStructuresConfigurationKey [], 
hasValBytes=true], cacheId=-2100569601], val=[op=TRANSFORM, val=null], 
prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null], 
entryProcessorsCol=[IgniteBiTuple [val1=AddAtomicProcessor 
[info=DataStructureInfo [name=example-sequence, type=ATOMIC_SEQ]], 
val2=[Ljava.lang.Object;@1540f725]], ttl=-1, conflictExpireTime=-1, 
conflictVer=null, explicitVer=null, dhtVer=null, filters=[], 
filtersPassed=false, filtersSet=false, entry=GridDhtColocatedCacheEntry 
[super=GridDhtCacheEntry [rdrs=[], locPart=GridDhtLocalPartition 
[rmvQueueMaxSize=128, rmvdEntryTtl=1, id=21, 
map=o.a.i.i.processors.cache.GridCacheConcurrentMapImpl@1ea42fb7, cntr=1, 
shouldBeRenting=false, state=OWNING, reservations=0, empty=false, 
createTime=04/12/2018 06:26:18], super=GridDistributedCacheEntry 
[super=GridCacheMapEntry [key=KeyCacheObjectImpl 
[val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
val=CacheObjectImpl [val={example-sequence=DataStructureInfo 
[name=example-sequence, type=ATOMIC_SEQ]}, hasValBytes=true], 
startVer=1523514383659, ver=GridCacheVersion [topVer=134994380, 
time=1523514386783, order=1523514383665, nodeOrder=1], hash=-1424345221, 
extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc 
[locs=[GridCacheMvccCandidate [nodeId=15305184-e5a2-4872-a549-d98e0df8b745, 
ver=GridCacheVersion [topVer=134994380, time=1523514386744, 
order=1523514383660, nodeOrder=1], threadId=113, id=6, 
topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], reentry=null, 
otherNodeId=15305184-e5a2-4872-a549-d98e0df8b745, otherVer=GridCacheVersion 
[topVer=134994380, time=1523514386744, order=1523514383660, nodeOrder=1], 
mappedDhtNodes=null, mappedNearNodes=null, ownerVer=GridCacheVersion 
[topVer=134994380, time=1523514386732, order=1523514383657, nodeOrder=1], 
serOrder=null, key=KeyCacheObjectImpl [val=CacheDataStructuresConfigurationKey 
[], hasValBytes=true], 
masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=1|near_local=0|removed=0|read=0,
 prevVer=null, nextVer=GridCacheVersion [topVer=134994380, time=1523514386789, 
order=1523514383667, nodeOrder=1]]], rmts=null]], flags=0, prepared=0, 
locked=false, nodeId=null, locMapped=false, expiryPlc=null, 
transferExpiryPlc=false, flags=0, partUpdateCntr=0, serReadVer=null, 
xidVer=null]], dhtVers={IgniteTxKey [key=KeyCacheObjectImpl 
[val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
cacheId=-2100569601]=null}, txSize=0, onePhaseCommit=true, sys=true, plc=5, 
txState=IgniteTxStateImpl [activeCacheIds=GridLongList [idx=1, 
arr=[-2100569601]], txMap={IgniteTxKey [key=KeyCacheObjectImpl 
[val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
cacheId=-2100569601]=IgniteTxEntry [key=KeyCacheObjectImpl 
[val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
cacheId=-2100569601, partId=21, txKey=IgniteTxKey [key=KeyCacheObjectImpl 
[val=CacheDataStructuresConfigurationKey [], hasValBytes=true], 
cacheId=-2100569601], val=[op=TRANSFORM, val=null], prevVal=[op=NOOP, 
val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=[IgniteBiTuple 
[val1=AddAtomicProcessor [info=DataStructureInfo [name=example-sequence, 
type=ATOMIC_SEQ]], val2=[Ljava.lang.Object;@1540f725]], ttl=-1, 
conflictExpireTime=-1, conflictVer=null, explicitVer=null, dhtVer=null, 
filters=[], filtersPassed=false, filtersSet=false, 

[jira] [Commented] (IGNITE-8230) SQL: CREATE TABLE doesn't take backups from template

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435454#comment-16435454
 ] 

ASF GitHub Bot commented on IGNITE-8230:


Github user devozerov closed the pull request at:

https://github.com/apache/ignite/pull/3803


> SQL: CREATE TABLE doesn't take backups from template
> 
>
> Key: IGNITE-8230
> URL: https://issues.apache.org/jira/browse/IGNITE-8230
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.4
>Reporter: Evgenii Zhuravlev
>Assignee: Vladimir Ozerov
>Priority: Blocker
> Fix For: 2.5
>
>
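> The scenario from the title, as a sketch (the cache template and table names
> are examples):
> {code:java}
> // Register a cache template with 2 backups, then refer to it from DDL.
> ignite.addCacheConfiguration(new CacheConfiguration<>("myTemplate*").setBackups(2));
> 
> cache.query(new SqlFieldsQuery(
>     "CREATE TABLE Person (id INT PRIMARY KEY, name VARCHAR) " +
>     "WITH \"template=myTemplate\""));
> 
> // Expected: the table's underlying cache inherits backups=2 from the template.
> {code}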




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7871) Implement 2-phase waiting for partition release

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435435#comment-16435435
 ] 

ASF GitHub Bot commented on IGNITE-7871:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3804


> Implement 2-phase waiting for partition release
> ---
>
> Key: IGNITE-7871
> URL: https://issues.apache.org/jira/browse/IGNITE-7871
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Alexey Goncharuk
>Priority: Major
> Fix For: 2.5
>
>
> Using the validation implemented in IGNITE-7467 we can observe the following
> situation:
> Suppose we have some partition and the nodes owning it: N1 (primary) and N2
> (backup).
> 1) Exchange is started.
> 2) N2 finished waiting for partition release and started to create the Single
> message (with update counters).
> 3) N1 waits for partition release.
> 4) We have a pending cache update N1 -> N2. This update is done after step 2.
> 5) This update increments the update counters on both N1 and N2.
> 6) N1 finished waiting for partition release, while N2 already sent its Single
> message to the coordinator with an outdated update counter.
> 7) The coordinator sees different partition update counters for N1 and N2.
> Validation fails, while the data is equal.
> Solution:
> Every server node participating in PME should wait until all other server
> nodes finish their ongoing updates (finish the wait-for-partition-release
> method).
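> A conceptual sketch of the proposed two-phase wait (a plain-Java illustration,
> not Ignite internals; the latch and helper names are hypothetical):
> {code:java}
> // Phase 1: this node finishes its own ongoing updates.
> waitForLocalPartitionRelease();
> 
> // Phase 2: wait until every other server node has finished as well,
> // so no in-flight update can bump the counters after this point.
> phase1Latch.countDown();
> phase1Latch.await();
> 
> sendSingleMessage(localUpdateCounters());
> {code}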



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8135) Missing SQL-DDL Authorization

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435430#comment-16435430
 ] 

ASF GitHub Bot commented on IGNITE-8135:


Github user devozerov closed the pull request at:

https://github.com/apache/ignite/pull/3801


> Missing SQL-DDL Authorization
> -
>
> Key: IGNITE-8135
> URL: https://issues.apache.org/jira/browse/IGNITE-8135
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.5
>Reporter: Alexey Kukushkin
>Assignee: Vladimir Ozerov
>Priority: Major
> Fix For: 2.5
>
>
> Ignite has infrastructure to support 3rd-party security plugins. To support
> authorization, Ignite has security checks spread all over the code, delegating
> actual authorization to a 3rd-party security plugin if configured.
> In addition to the existing checks, Ignite 2.5 will authorize "create" and
> "destroy" cache operations.
> The problem is that authorization is not implemented for SQL at all - even if
> authorization is enabled, it is currently possible to run any SQL to
> create/drop/alter caches and read/modify/remove the cache data, thus bypassing
> security. The problem exists for both DDL (create/drop/alter table) and DML
> (select/merge/insert/delete).
> This ticket addresses DDL only; DML will be addressed by a different ticket.
> The problem must be fixed for all clients: Ignite client and server nodes,
> Java and .NET thin clients, ODBC and JDBC, REST.
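> A sketch of where such a check could live (placement and method names are
> hypothetical; SecurityPermission.CACHE_CREATE is an existing permission):
> {code:java}
> void authorizeCreateTable(String cacheName, SecurityContext secCtx) {
>     // Delegates to the configured 3rd-party security plugin and throws a
>     // SecurityException if the subject lacks the permission.
>     ctx.security().authorize(cacheName, SecurityPermission.CACHE_CREATE, secCtx);
> }
> {code}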



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7659) Reduce multiple Trainer interfaces to one

2018-04-12 Thread Anton Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Dmitriev reassigned IGNITE-7659:
--

Assignee: Anton Dmitriev

> Reduce multiple Trainer interfaces to one
> -
>
> Key: IGNITE-7659
> URL: https://issues.apache.org/jira/browse/IGNITE-7659
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Minor
>
> Currently there are two `Trainer` interfaces: in package 
> `org.apache.ignite.ml` and `org.apache.ignite.ml.trainers`. We need to use 
> only one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7918) Huge memory leak when data streamer used together with local cache

2018-04-12 Thread Ilya Lantukh (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435402#comment-16435402
 ] 

Ilya Lantukh commented on IGNITE-7918:
--

Looks good.

> Huge memory leak when data streamer used together with local cache
> --
>
> Key: IGNITE-7918
> URL: https://issues.apache.org/jira/browse/IGNITE-7918
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Zbyszek B
>Assignee: Andrey Aleksandrov
>Priority: Blocker
> Fix For: 2.5
>
> Attachments: Demo.java, MemLeak-Ignite.png, MemLeak-Ignite.txt
>
>
> Dear Igniters,
> We observe a huge memory leak when the data streamer is used together with a
> local cache.
> In the attached demo, the producer creates a local cache with a single binary
> object and passes it to the queue. The consumer picks up the cache from the
> queue, constructs a different binary object from it, adds it to a global
> partitioned cache and destroys the local cache.
> This design causes a significant leak - the whole heap is used within minutes
> (no matter if this is 4G or 24G).
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7691) Provide info about DECIMAL column scale and precision

2018-04-12 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435386#comment-16435386
 ] 

Dmitriy Pavlov commented on IGNITE-7691:


It seems there are no related test failures left; all the others are failing in master as well.

> Provide info about DECIMAL column scale and precision
> -
>
> Key: IGNITE-7691
> URL: https://issues.apache.org/jira/browse/IGNITE-7691
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.4
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Minor
> Fix For: 2.5
>
>
> Currently, it is impossible to obtain the scale and precision of a DECIMAL
> column from SQL table metadata.
> Ignite should provide this type of meta information.
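> Once exposed, a client would read the values via standard JDBC metadata (the
> connection URL, table and column names are examples):
> {code:java}
> try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
>      ResultSet cols = conn.getMetaData().getColumns(null, "PUBLIC", "PERSON", "SALARY")) {
>     while (cols.next()) {
>         int precision = cols.getInt("COLUMN_SIZE");  // DECIMAL precision
>         int scale = cols.getInt("DECIMAL_DIGITS");   // DECIMAL scale
>     }
> }
> {code}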



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7592) Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity assignment even after explicit rebalance is called on every node

2018-04-12 Thread Maxim Muzafarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov reassigned IGNITE-7592:
---

Assignee: Maxim Muzafarov

> Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity 
> assignment even after explicit rebalance is called on every node
> --
>
> Key: IGNITE-7592
> URL: https://issues.apache.org/jira/browse/IGNITE-7592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Ilya Lantukh
>Assignee: Maxim Muzafarov
>Priority: Major
>
> Reproducer:
> {noformat}
> startGrids(NODE_COUNT);
> IgniteEx ig = grid(0);
> ig.cluster().active(true);
> awaitPartitionMapExchange();
> IgniteCache cache =
> ig.createCache(
> new CacheConfiguration()
> .setName(CACHE_NAME)
> .setCacheMode(PARTITIONED)
> .setBackups(1)
> .setPartitionLossPolicy(READ_ONLY_SAFE)
> .setReadFromBackup(true)
> .setWriteSynchronizationMode(FULL_SYNC)
> .setRebalanceDelay(-1)
> );
> for (int i = 0; i < NODE_COUNT; i++)
> grid(i).cache(CACHE_NAME).rebalance().get();
> awaitPartitionMapExchange();
> {noformat}
> Sometimes this code will hang on the last awaitPartitionMapExchange(), though
> the probability that it will happen is rather low (<10%).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7871) Implement 2-phase waiting for partition release

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435372#comment-16435372
 ] 

ASF GitHub Bot commented on IGNITE-7871:


GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/3804

IGNITE-7871 Fixed condition for cache partitions validation.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite 
ignite-7871-validation-condition-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3804.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3804


commit 044ecd2cced4ccbe745cfc53442e35d7f7b5d300
Author: Pavel Kovalenko 
Date:   2018-04-12T11:16:47Z

IGNITE-7871 Fixed condition for cache partitions validation.




> Implement 2-phase waiting for partition release
> ---
>
> Key: IGNITE-7871
> URL: https://issues.apache.org/jira/browse/IGNITE-7871
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Alexey Goncharuk
>Priority: Major
> Fix For: 2.5
>
>
> Using the validation implemented in IGNITE-7467 we can observe the following
> situation:
> Suppose we have some partition and the nodes owning it: N1 (primary) and N2
> (backup).
> 1) Exchange is started.
> 2) N2 finished waiting for partition release and started to create the Single
> message (with update counters).
> 3) N1 waits for partition release.
> 4) We have a pending cache update N1 -> N2. This update is done after step 2.
> 5) This update increments the update counters on both N1 and N2.
> 6) N1 finished waiting for partition release, while N2 already sent its Single
> message to the coordinator with an outdated update counter.
> 7) The coordinator sees different partition update counters for N1 and N2.
> Validation fails, while the data is equal.
> Solution:
> Every server node participating in PME should wait until all other server
> nodes finish their ongoing updates (finish the wait-for-partition-release
> method).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8235) Implement execution of selected part of SQL query

2018-04-12 Thread Alexander Kalinin (JIRA)
Alexander Kalinin created IGNITE-8235:
-

 Summary: Implement execution of selected part of SQL query
 Key: IGNITE-8235
 URL: https://issues.apache.org/jira/browse/IGNITE-8235
 Project: Ignite
  Issue Type: Improvement
  Components: wizards
Reporter: Alexander Kalinin
Assignee: Alexander Kalinin


If we have 3 SQL rows in a notebook, select one, and click Execute, we should
only execute the highlighted row. If no row is highlighted, then all rows
should be executed. That's a standard feature of graphical SQL tools.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7691) Provide info about DECIMAL column scale and precision

2018-04-12 Thread Nikolay Izhikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435345#comment-16435345
 ] 

Nikolay Izhikov commented on IGNITE-7691:
-

Latest "Run All" for my PR - 
https://ci.ignite.apache.org/viewLog.html?buildId=1195498
I've checked all the failures locally, and it seems that none of the failed
tests are related to my changes.


> Provide info about DECIMAL column scale and precision
> -
>
> Key: IGNITE-7691
> URL: https://issues.apache.org/jira/browse/IGNITE-7691
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.4
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Minor
> Fix For: 2.5
>
>
> Currently, it is impossible to obtain the scale and precision of a DECIMAL
> column from SQL table metadata.
> Ignite should provide this type of meta information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7824) Rollback part of IGNITE-7170 changes

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435309#comment-16435309
 ] 

ASF GitHub Bot commented on IGNITE-7824:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3788


> Rollback part of IGNITE-7170 changes
> 
>
> Key: IGNITE-7824
> URL: https://issues.apache.org/jira/browse/IGNITE-7824
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Alexey Popov
>Assignee: Amelchev Nikita
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.6
>
>
> Please roll back the change made by mistake:
> U.quietAndWarn(log, "Nodes started on local machine require 
> more than 20% of physical RAM what can " +
> back to:
> U.quietAndWarn(log, "Nodes started on local machine require 
> more than 80% of physical RAM what can " +



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8227) Research possibility and implement JUnit test failure handler for TeamCity

2018-04-12 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435298#comment-16435298
 ] 

Andrey Gura edited comment on IGNITE-8227 at 4/12/18 10:23 AM:
---

It's an ad-hoc decision and should be made by the developer during test
development. I have doubts about a common failure handler.


was (Author: agura):
It's an ad-hoc decision and should be made by the developer during test
development. I have doubts that some common failure handler makes sense.

> Research possibility and implement JUnit test failure handler for TeamCity
> --
>
> Key: IGNITE-8227
> URL: https://issues.apache.org/jira/browse/IGNITE-8227
> Project: Ignite
>  Issue Type: Test
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Pavlov
>Priority: Major
> Fix For: 2.6
>
>
> After IEP-14 
> (https://cwiki.apache.org/confluence/display/IGNITE/IEP-14+Ignite+failures+handling)
> we found a lot of TC failures involving unexpected node stops.
> To avoid suite exit codes, tests have NoOpFailureHandler as the default.
> But instead of this, a better handler could be
> stopNode + failing the currently running test with a message.
> Such a default would allow identifying these failures without a log-message
> fail condition.
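> A sketch of that handler idea (the test-bookkeeping field is hypothetical;
> FailureHandler is the existing IEP-14 interface):
> {code:java}
> public class StopNodeAndFailTestHandler implements FailureHandler {
>     /** Picked up by the test framework to fail the running test (hypothetical hook). */
>     static final AtomicReference<Throwable> testFailure = new AtomicReference<>();
> 
>     @Override public boolean onFailure(Ignite ignite, FailureContext failureCtx) {
>         // Record the failure so the currently running test is failed with it.
>         testFailure.compareAndSet(null, new AssertionError("Critical failure: " + failureCtx));
> 
>         return true; // ask Ignite to invalidate (stop) the node
>     }
> }
> {code}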



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8227) Research possibility and implement JUnit test failure handler for TeamCity

2018-04-12 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435298#comment-16435298
 ] 

Andrey Gura commented on IGNITE-8227:
-

It's an ad-hoc decision and should be made by the developer during test
development. I have doubts that some common failure handler makes sense.

> Research possibility and implement JUnit test failure handler for TeamCity
> --
>
> Key: IGNITE-8227
> URL: https://issues.apache.org/jira/browse/IGNITE-8227
> Project: Ignite
>  Issue Type: Test
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Pavlov
>Priority: Major
> Fix For: 2.6
>
>
> After IEP-14 
> (https://cwiki.apache.org/confluence/display/IGNITE/IEP-14+Ignite+failures+handling)
> we found a lot of TC failures involving unexpected node stops.
> To avoid suite exit codes, tests have NoOpFailureHandler as the default.
> But instead of this, a better handler could be
> stopNode + failing the currently running test with a message.
> Such a default would allow identifying these failures without a log-message
> fail condition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8230) SQL: CREATE TABLE doesn't take backups from template

2018-04-12 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435286#comment-16435286
 ] 

Vladimir Ozerov commented on IGNITE-8230:
-

[https://ci.ignite.apache.org/viewQueued.html?itemId=1196528&tab=queuedBuildOverviewTab]

https://ci.ignite.apache.org/viewQueued.html?itemId=1196431&tab=queuedBuildOverviewTab

> SQL: CREATE TABLE doesn't take backups from template
> 
>
> Key: IGNITE-8230
> URL: https://issues.apache.org/jira/browse/IGNITE-8230
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.4
>Reporter: Evgenii Zhuravlev
>Assignee: Vladimir Ozerov
>Priority: Blocker
> Fix For: 2.5
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8230) SQL: CREATE TABLE doesn't take backups from template

2018-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435281#comment-16435281
 ] 

ASF GitHub Bot commented on IGNITE-8230:


GitHub user devozerov opened a pull request:

https://github.com/apache/ignite/pull/3803

IGNITE-8230



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8230

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3803.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3803


commit b30298fcccb7153d7bc66118bd0d1d61de223258
Author: devozerov 
Date:   2018-04-12T09:47:39Z

WIP

commit 2585132d5e5e12c9eb8c3dd0eb9a65616c3a19ec
Author: devozerov 
Date:   2018-04-12T10:14:08Z

Test.




> SQL: CREATE TABLE doesn't take backups from template
> 
>
> Key: IGNITE-8230
> URL: https://issues.apache.org/jira/browse/IGNITE-8230
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.4
>Reporter: Evgenii Zhuravlev
>Assignee: Vladimir Ozerov
>Priority: Blocker
> Fix For: 2.5
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7824) Rollback part of IGNITE-7170 changes

2018-04-12 Thread Amelchev Nikita (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435280#comment-16435280
 ] 

Amelchev Nikita commented on IGNITE-7824:
-

I have checked the log message and fixed it.

> Rollback part of IGNITE-7170 changes
> 
>
> Key: IGNITE-7824
> URL: https://issues.apache.org/jira/browse/IGNITE-7824
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Alexey Popov
>Assignee: Amelchev Nikita
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.6
>
>
> Please roll back the change made by mistake:
> U.quietAndWarn(log, "Nodes started on local machine require 
> more than 20% of physical RAM what can " +
> back to:
> U.quietAndWarn(log, "Nodes started on local machine require 
> more than 80% of physical RAM what can " +



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7824) Rollback part of IGNITE-7170 changes

2018-04-12 Thread Amelchev Nikita (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita updated IGNITE-7824:

Fix Version/s: 2.6

> Rollback part of IGNITE-7170 changes
> 
>
> Key: IGNITE-7824
> URL: https://issues.apache.org/jira/browse/IGNITE-7824
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Alexey Popov
>Assignee: Amelchev Nikita
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.6
>
>
> Please roll back the change made by mistake:
> U.quietAndWarn(log, "Nodes started on local machine require 
> more than 20% of physical RAM what can " +
> back to:
> U.quietAndWarn(log, "Nodes started on local machine require 
> more than 80% of physical RAM what can " +



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

