[ https://issues.apache.org/jira/browse/PHOENIX-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209003#comment-14209003 ]

Samarth Jain commented on PHOENIX-1435:
---------------------------------------

Works fine even with 100k! 

However, we might have a memory leak on our hands. It looks like we are not 
calling close() on all the iterators. In my test I am calling conn.close() but 
am still seeing GlobalMemoryManager.finalize() complain about orphaned 
chunks. I added debug messages and changed thread names, and this is what I 
have so far:
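To illustrate the suspected bug, here is a minimal, self-contained sketch (hypothetical names, not Phoenix's actual classes) of the pattern that appears to be missing: every iterator that holds a memory chunk must be closed, e.g. via try-with-resources, so the chunk is released deterministically rather than being reported as orphaned at finalize time.

```java
import java.util.Iterator;

public class CloseDemo {
    /** Hypothetical iterator that holds a "memory chunk" until closed. */
    static class TrackedIterator implements Iterator<Integer>, AutoCloseable {
        static int openChunks = 0;          // chunks not yet released
        private int remaining;

        TrackedIterator(int n) { remaining = n; openChunks++; }

        public boolean hasNext() { return remaining > 0; }
        public Integer next()    { return remaining--; }

        @Override
        public void close() { openChunks--; }  // releases the chunk
    }

    /** Drains the iterator; try-with-resources guarantees close()
     *  even on early return or exception. */
    static int drain(int n) {
        try (TrackedIterator it = new TrackedIterator(n)) {
            int count = 0;
            while (it.hasNext()) { it.next(); count++; }
            return count;
        }
    }
}
```

If any code path hands the iterator off without arranging for close() — which is what the stack traces below suggest happens somewhere between ParallelIterators and ConcatResultIterator — openChunks stays positive, which is exactly the orphaned-chunk symptom.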

Stack trace of the request:
Orphaned chunk of 75 bytes found during finalize. Acquisition stack: 
ThreadName: phoenix-2-thread-0-via-Thread: main
java.lang.Exception
        at org.apache.phoenix.iterate.ParallelIterators.submitWork(ParallelIterators.java:84)
        at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:511)
        at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
        at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
        at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
        at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:739)
        at org.apache.phoenix.end2end.StatsCollectorIT.testPhoenixPerformanceOnHighNumberofGuidePosts(StatsCollectorIT.java:439)



Stack trace of memory chunk acquisition:
java.lang.Exception
        at org.apache.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.<init>(GlobalMemoryManager.java:123)
        at org.apache.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.<init>(GlobalMemoryManager.java:118)
        at org.apache.phoenix.memory.GlobalMemoryManager.newMemoryChunk(GlobalMemoryManager.java:111)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:102)
        at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:90)
        at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:76)
        at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:67)
        at org.apache.phoenix.iterate.ChunkedResultIterator.<init>(ChunkedResultIterator.java:90)
        at org.apache.phoenix.iterate.ChunkedResultIterator$ChunkedResultIteratorFactory.newIterator(ChunkedResultIterator.java:70)
        at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:100)
        at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:1)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        at java.lang.Thread.run(Thread.java:695)
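For readers unfamiliar with how these acquisition stacks get attached to the warning: a minimal sketch (hypothetical names; not Phoenix's actual GlobalMemoryManager) of a tracker that records where each chunk was allocated, so unclosed chunks can be reported with their acquisition site, much like the orphaned-chunk message above. For determinism this sketch reports orphans on demand rather than from finalize().

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkTracker {
    /** A tracked allocation; must be closed to release its bytes. */
    public static final class Chunk implements AutoCloseable {
        final long bytes;
        // Capture the allocation site, analogous to the "Acquisition stack"
        // printed in the orphaned-chunk warning.
        final Throwable acquisitionStack = new Throwable("acquired here");
        private boolean closed;
        private final ChunkTracker owner;

        Chunk(ChunkTracker owner, long bytes) {
            this.owner = owner;
            this.bytes = bytes;
        }

        @Override
        public void close() {
            if (!closed) {
                closed = true;
                owner.live.remove(this);
            }
        }
    }

    private final List<Chunk> live = new ArrayList<>();

    public Chunk allocate(long bytes) {
        Chunk c = new Chunk(this, bytes);
        live.add(c);
        return c;
    }

    /** Sizes of chunks never closed — the "orphaned" ones. */
    public List<Long> orphans() {
        List<Long> sizes = new ArrayList<>();
        for (Chunk c : live) sizes.add(c.bytes);
        return sizes;
    }
}
```

With this in place, allocating a 75-byte chunk and never closing it leaves it in orphans(), and its acquisitionStack points at the allocate() call site — which is how the trace above was obtained.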


> Create unit test that uses 15K guideposts
> -----------------------------------------
>
>                 Key: PHOENIX-1435
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1435
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Samarth Jain
>
> We're seeing memory issues when 14K guideposts are used to execute a query. 
> Although other issues are contributing to the high number of guideposts 
> (PHOENIX-1434), and we don't need to execute some LIMIT queries in parallel 
> (PHOENIX-1432), we should still get to the bottom of why this is causing 
> memory and/or CPU issues.
> One question with this kind of scenario - why didn't the query get rejected, 
> as it seems like it would fill up the queue past the allowed 5000 threads? 
> Also, do the temp files get cleaned up in this scenario?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
