[jira] [Updated] (IGNITE-12654) Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge number of eviction callbacks

2020-04-17 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-12654:
-
Fix Version/s: 2.8.1

> Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge 
> number of eviction callbacks
> -
>
> Key: IGNITE-12654
> URL: https://issues.apache.org/jira/browse/IGNITE-12654
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example of heap dump:
> ||Class Name||Shallow Heap||Retained Heap||
> |top org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x809f03d0|88|1 381 118 968|
> |grp org.apache.ignite.internal.processors.cache.CacheGroupContext @ 0x809f04c8|96|1 381 121 912|
> |locParts java.util.concurrent.atomic.AtomicReferenceArray @ 0x81656c30|16|1 380 925 496|
> |array java.lang.Object[1024] @ 0x81656c40|4 112|1 380 925 480|
> |org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f2bcd8|24|318 622 384|
> |org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f28d90|96|318 618 624|
> |org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5ed4ac8|24|318 618 576|
> |org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f2e7f8|24|318 618 528|
> |grp org.apache.ignite.internal.processors.cache.CacheGroupContext @ 0x809f04c8|96|1 381 121 912|
> |state org.apache.ignite.internal.util.future.GridFutureAdapter$Node @ 0xe8ed4cd0|24|318 618 624|
> |val org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl$$Lambda$58 @ 0xe8ed4cb8|24|24|
> |arg$1 org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x809f03d0|88|1 381 118 968|
> The number of {{GridFutureAdapter$Node}} and {{GridDhtPartitionTopologyImpl$$Lambda$58}} instances is suspiciously large. The following code appears to be the root cause:
> {code:java|title=GridDhtPartitionTopologyImpl.java}
> /**
>  * Finds local partitions which don't belong to affinity and runs eviction process for such partitions.
>  *
>  * @param updateSeq Update sequence.
>  * @param aff Affinity assignments.
>  * @return {@code True} if there are local partitions need to be evicted.
>  */
> private boolean checkEvictions(long updateSeq, AffinityAssignment aff) {
>     ...
>
>     // After all rents are finished resend partitions.
>     if (!rentingFutures.isEmpty()) {
>         final AtomicInteger rentingPartitions = new AtomicInteger(rentingFutures.size());
>
>         for (IgniteInternalFuture rentingFuture : rentingFutures) {
>             rentingFuture.listen(f -> {
>                 int remaining = rentingPartitions.decrementAndGet();
>
>                 if (remaining == 0) {
>                     lock.writeLock().lock();
>
>                     try {
>                         this.updateSeq.incrementAndGet();
>
>                         if (log.isDebugEnabled())
>                             log.debug("Partitions have been scheduled to resend [reason=" +
>                                 "Evictions are done [grp=" + grp.cacheOrGroupName() + "]");
>
>                         ctx.exchange().scheduleResendPartitions();
>                     }
>                     finally {
>                         lock.writeLock().unlock();
>                     }
>                 }
>             });
>         }
>     }
>
>     return hasEvictedPartitions;
> }
> {code}
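> A minimal, self-contained sketch of the retention pattern (plain JDK {{CompletableFuture}}, not Ignite code; the class name and the loop bound are illustrative only): while eviction of a partition is still in progress, every new invocation of the check above attaches one more listener to the same long-lived renting future, and each listener stays reachable from that future until it completes.
> {code:java|title=ListenerAccumulationSketch.java}
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.atomic.AtomicInteger;
>
> public class ListenerAccumulationSketch {
>     public static void main(String[] args) {
>         // Stand-in for the rentingFuture of one partition whose eviction is still running.
>         CompletableFuture<Void> rentingFuture = new CompletableFuture<>();
>
>         AtomicInteger callbackRuns = new AtomicInteger();
>
>         // Each partition map exchange re-runs the eviction check while the rent is in progress...
>         for (int exchange = 0; exchange < 10_000; exchange++) {
>             // ...and each run registers one more callback on the same incomplete future.
>             rentingFuture.whenComplete((res, err) -> callbackRuns.incrementAndGet());
>         }
>
>         // All 10 000 callbacks remain reachable from the future until it completes -
>         // the analogue of the long chain of GridFutureAdapter$Node objects in the dump.
>         rentingFuture.complete(null);
>
>         System.out.println("Callbacks accumulated on a single future: " + callbackRuns.get());
>     }
> }
> {code}
> In the dump above, each such {{$$Lambda$58}} additionally captures the topology instance via {{arg$1}}, which is consistent with one callback being added per {{checkEvictions}} call.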



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12654) Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge number of eviction callbacks

2020-02-10 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-12654:
-
Description: 
Example of heap dump:
||Class Name||Shallow Heap||Retained Heap||
|top org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x809f03d0|88|1 381 118 968|
|grp org.apache.ignite.internal.processors.cache.CacheGroupContext @ 0x809f04c8|96|1 381 121 912|
|locParts java.util.concurrent.atomic.AtomicReferenceArray @ 0x81656c30|16|1 380 925 496|
|array java.lang.Object[1024] @ 0x81656c40|4 112|1 380 925 480|
|org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f2bcd8|24|318 622 384|
|org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f28d90|96|318 618 624|
|org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5ed4ac8|24|318 618 576|
|org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f2e7f8|24|318 618 528|
|grp org.apache.ignite.internal.processors.cache.CacheGroupContext @ 0x809f04c8|96|1 381 121 912|
|state org.apache.ignite.internal.util.future.GridFutureAdapter$Node @ 0xe8ed4cd0|24|318 618 624|
|val org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl$$Lambda$58 @ 0xe8ed4cb8|24|24|
|arg$1 org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x809f03d0|88|1 381 118 968|

The number of {{GridFutureAdapter$Node}} and {{GridDhtPartitionTopologyImpl$$Lambda$58}} instances is suspiciously large. The following code appears to be the root cause:
{code:java|title=GridDhtPartitionTopologyImpl.java}
/**
 * Finds local partitions which don't belong to affinity and runs eviction process for such partitions.
 *
 * @param updateSeq Update sequence.
 * @param aff Affinity assignments.
 * @return {@code True} if there are local partitions need to be evicted.
 */
private boolean checkEvictions(long updateSeq, AffinityAssignment aff) {
    ...

    // After all rents are finished resend partitions.
    if (!rentingFutures.isEmpty()) {
        final AtomicInteger rentingPartitions = new AtomicInteger(rentingFutures.size());

        for (IgniteInternalFuture rentingFuture : rentingFutures) {
            rentingFuture.listen(f -> {
                int remaining = rentingPartitions.decrementAndGet();

                if (remaining == 0) {
                    lock.writeLock().lock();

                    try {
                        this.updateSeq.incrementAndGet();

                        if (log.isDebugEnabled())
                            log.debug("Partitions have been scheduled to resend [reason=" +
                                "Evictions are done [grp=" + grp.cacheOrGroupName() + "]");

                        ctx.exchange().scheduleResendPartitions();
                    }
                    finally {
                        lock.writeLock().unlock();
                    }
                }
            });
        }
    }

    return hasEvictedPartitions;
}
{code}
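One generic way to keep the callback count bounded (a sketch with plain JDK types and hypothetical names; this is not the actual change made for this ticket) is to register the resend listener at most once per renting future rather than once per {{checkEvictions}} call:
{code:java|title=SingleListenerPerFuture.java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch only: de-duplicate listener registration per future instance. */
public class SingleListenerPerFuture {
    /** Futures that already carry the "resend partitions" callback. */
    private final Map<CompletableFuture<Void>, Boolean> registered = new ConcurrentHashMap<>();

    /** Called on every exchange; attaches the callback at most once per future. */
    void onCheckEvictions(CompletableFuture<Void> rentingFuture, Runnable resendPartitions) {
        if (registered.putIfAbsent(rentingFuture, Boolean.TRUE) == null) {
            rentingFuture.whenComplete((res, err) -> {
                registered.remove(rentingFuture);
                resendPartitions.run();
            });
        }
    }
}
{code}
The only point of the sketch is that the number of listeners per future stays constant across exchanges; where the bookkeeping lives is a separate design decision.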


[jira] [Updated] (IGNITE-12654) Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge number of eviction callbacks

2020-02-10 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-12654:
-
Description: 
Example of heap dump:
||Class Name||Shallow Heap||Retained Heap||
|top org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x809f03d0|88|1 381 118 968|
|grp org.apache.ignite.internal.processors.cache.CacheGroupContext @ 0x809f04c8|96|1 381 121 912|
|locParts java.util.concurrent.atomic.AtomicReferenceArray @ 0x81656c30|16|1 380 925 496|
|array java.lang.Object[1024] @ 0x81656c40|4 112|1 380 925 480|
|org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f2bcd8|24|318 622 384|
|org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f28d90|96|318 618 624|
|org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5ed4ac8|24|318 618 576|
|org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition @ 0xb5f2e7f8|24|318 618 528|
|grp org.apache.ignite.internal.processors.cache.CacheGroupContext @ 0x809f04c8|96|1 381 121 912|
|state org.apache.ignite.internal.util.future.GridFutureAdapter$Node @ 0xe8ed4cd0|24|318 618 624|
|val org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl$$Lambda$58 @ 0xe8ed4cb8|24|24|
|arg$1 org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x809f03d0|88|1 381 118 968|

The number of {{GridFutureAdapter$Node}} and {{GridDhtPartitionTopologyImpl$$Lambda$58}} instances is suspiciously large. The following code appears to be the root cause:
{code:java|GridDhtPartitionTopologyImpl.java}
/**
 * Finds local partitions which don't belong to affinity and runs eviction process for such partitions.
 *
 * @param updateSeq Update sequence.
 * @param aff Affinity assignments.
 * @return {@code True} if there are local partitions need to be evicted.
 */
private boolean checkEvictions(long updateSeq, AffinityAssignment aff) {
    ...

    // After all rents are finished resend partitions.
    if (!rentingFutures.isEmpty()) {
        final AtomicInteger rentingPartitions = new AtomicInteger(rentingFutures.size());

        for (IgniteInternalFuture rentingFuture : rentingFutures) {
            rentingFuture.listen(f -> {
                int remaining = rentingPartitions.decrementAndGet();

                if (remaining == 0) {
                    lock.writeLock().lock();

                    try {
                        this.updateSeq.incrementAndGet();

                        if (log.isDebugEnabled())
                            log.debug("Partitions have been scheduled to resend [reason=" +
                                "Evictions are done [grp=" + grp.cacheOrGroupName() + "]");

                        ctx.exchange().scheduleResendPartitions();
                    }
                    finally {
                        lock.writeLock().unlock();
                    }
                }
            });
        }
    }

    return hasEvictedPartitions;
}
{code}

  was:
||Heading 1||Heading 2||
|\|text\|test\|text|Col A2|


> Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge 
> number of eviction callbacks
> -
>
> Key: IGNITE-12654
> URL: https://issues.apache.org/jira/browse/IGNITE-12654
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.9
>
>

[jira] [Updated] (IGNITE-12654) Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge number of eviction callbacks

2020-02-10 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-12654:
-
Description: 
||Heading 1||Heading 2||
|\|text\|test\|text|Col A2|

> Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge 
> number of eviction callbacks
> -
>
> Key: IGNITE-12654
> URL: https://issues.apache.org/jira/browse/IGNITE-12654
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.9
>
>
> ||Heading 1||Heading 2||
> |\|text\|test\|text|Col A2|



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12654) Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge number of eviction callbacks

2020-02-10 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-12654:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Some of rentingFutures in GridDhtPartitionTopologyImpl may accumulate a huge 
> number of eviction callbacks
> -
>
> Key: IGNITE-12654
> URL: https://issues.apache.org/jira/browse/IGNITE-12654
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.9
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)