[ https://issues.apache.org/jira/browse/IGNITE-8892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526318#comment-16526318 ]

Andrew Mashenkov edited comment on IGNITE-8892 at 6/28/18 2:22 PM:
-------------------------------------------------------------------

So, the issue here is that ScanQuery.keepAll is set to true by default.
This bug affects all types of queries. Cache.iterator() works fine, as it 
explicitly disables the keepAll flag.

There is no workaround, as the keepAll flag is an internal feature and 
cannot be set from the user side.
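
For illustration, a minimal sketch of the two code paths contrasted above 
(assuming an already-started IgniteCache<Integer, byte[]>; class and method 
names are illustrative):

import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class IterationPaths {
    // Affected path: ScanQuery runs with the internal keepAll flag enabled
    // by default, which (per this issue) lets query futures pile up.
    static void viaScanQuery(IgniteCache<Integer, byte[]> cache) {
        try (QueryCursor<Cache.Entry<Integer, byte[]>> cur =
                 cache.query(new ScanQuery<Integer, byte[]>())) {
            for (Cache.Entry<Integer, byte[]> e : cur) {
                // Process the entry.
            }
        }
    }

    // Unaffected path: Cache.iterator() explicitly disables keepAll, so
    // the same traversal completes without accumulating futures.
    static void viaIterator(IgniteCache<Integer, byte[]> cache) {
        for (Cache.Entry<Integer, byte[]> e : cache) {
            // Process the entry.
        }
    }
}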


was (Author: amashenkov):
So, the issue here is that we set ScanQuery.keepAll to true by default.

> Iterating over a large dataset via ScanQuery can fail with OOME.
> ----------------------------------------------------------------
>
>                 Key: IGNITE-8892
>                 URL: https://issues.apache.org/jira/browse/IGNITE-8892
>             Project: Ignite
>          Issue Type: Bug
>          Components: cache
>            Reporter: Andrew Mashenkov
>            Priority: Critical
>              Labels: OutOfMemoryError
>             Fix For: 2.7
>
>         Attachments: ScanQueryOOM.java
>
>
> It seems that iterating over a query iterator (ScanQuery at least, but 
> others may be affected as well) on a client node causes a memory leak.
> The use case is quite simple.
>  Start a server and a client. Put a lot of data into the cache, then 
> iterate over all entries via ScanQuery.
>  It looks like the JVM crashes due to OOM because the 
> GridCacheDistributedQueryManager.futs map contains too many 
> GridCacheDistributedQueryFuture futures.
> I've put 15kk entries into the cache, and the client failed with OOM 
> after iterating over 10kk entries.
>  In the heap dump I observed 2*10^9 GridCacheDistributedQueryFuture 
> futures.
> We have to check:
>  # that these futures are removed from the map correctly;
>  # that we don't create unnecessary futures.
> PFA repro.
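> For reference, a minimal sketch of such a reproducer (this is a 
> hypothetical approximation, not the attached ScanQueryOOM.java; entry 
> count, value size, and instance names are illustrative):
>
> import javax.cache.Cache;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.query.QueryCursor;
> import org.apache.ignite.cache.query.ScanQuery;
> import org.apache.ignite.configuration.IgniteConfiguration;
>
> public class ScanQueryOomSketch {
>     public static void main(String[] args) {
>         // Server node that holds the data.
>         Ignition.start(new IgniteConfiguration().setIgniteInstanceName("srv"));
>
>         // Client node that runs the query.
>         Ignite client = Ignition.start(new IgniteConfiguration()
>             .setIgniteInstanceName("client")
>             .setClientMode(true));
>
>         IgniteCache<Integer, byte[]> cache = client.getOrCreateCache("data");
>
>         // Load many entries (15kk in the original report).
>         for (int i = 0; i < 15_000_000; i++)
>             cache.put(i, new byte[64]);
>
>         // Iterating the full result set on the client eventually fails
>         // with OOME as GridCacheDistributedQueryFuture instances
>         // accumulate in GridCacheDistributedQueryManager.futs.
>         try (QueryCursor<Cache.Entry<Integer, byte[]>> cur =
>                  cache.query(new ScanQuery<Integer, byte[]>())) {
>             for (Cache.Entry<Integer, byte[]> e : cur) {
>                 // Just drain the cursor.
>             }
>         }
>     }
> }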



