[ https://issues.apache.org/jira/browse/PHOENIX-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209027#comment-14209027 ]

James Taylor commented on PHOENIX-1435:
---------------------------------------

I took a look at SpoolingResultIterator, but I don't see anything obvious there. 
It might be related to the null check - the only way that value could be null is 
if we're not setting things up correctly in submitWork(). Can you try iterating 
through all the nestedFutures at the end of ParallelIterators.submitWork(), 
inside an isDebugEnabled() block, and make sure there are no null values (i.e. 
copy/paste the same loop from close())?
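
Something along these lines is what I have in mind - a rough sketch only, not 
tested; the nested List<List<Pair<Scan, Future<PeekingResultIterator>>>> shape 
and the slf4j logger are assumptions based on how close() walks the futures, so 
adjust to whatever the actual field types are:

{code:java}
import java.util.List;
import java.util.concurrent.Future;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.phoenix.iterate.PeekingResultIterator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical helper (not part of Phoenix): call at the very end of
// ParallelIterators.submitWork() to verify nothing in nestedFutures is null.
public class SubmitWorkDebugCheck {
    private static final Logger logger =
            LoggerFactory.getLogger(SubmitWorkDebugCheck.class);

    static void assertNoNullFutures(
            List<List<Pair<Scan, Future<PeekingResultIterator>>>> nestedFutures) {
        if (!logger.isDebugEnabled()) {
            return; // only pay this cost when debug logging is on
        }
        if (nestedFutures == null) {
            logger.debug("nestedFutures is null at end of submitWork()");
            return;
        }
        for (int i = 0; i < nestedFutures.size(); i++) {
            List<Pair<Scan, Future<PeekingResultIterator>>> inner = nestedFutures.get(i);
            if (inner == null) {
                logger.debug("Null inner future list at index {} after submitWork()", i);
                continue;
            }
            for (int j = 0; j < inner.size(); j++) {
                Pair<Scan, Future<PeekingResultIterator>> pair = inner.get(j);
                if (pair == null || pair.getSecond() == null) {
                    logger.debug("Null future at [{}][{}] after submitWork()", i, j);
                }
            }
        }
    }
}
{code}

If any of those debug lines fire, that points to submitWork() not populating the 
list correctly rather than to anything in SpoolingResultIterator.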

> Create unit test that uses 15K guideposts
> -----------------------------------------
>
>                 Key: PHOENIX-1435
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1435
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Samarth Jain
>
> We're seeing memory issues when 14K guideposts are used to execute a query. 
> Although other issues are contributing to the high number of guideposts 
> (PHOENIX-1434), and we don't need to execute some LIMIT queries in parallel 
> (PHOENIX-1432), we should still get to the bottom of why this is causing 
> memory and/or CPU issues.
> One question with this kind of scenario - why didn't the query get rejected, 
> as it seems like it would fill up the queue past the allowed 5000 threads? 
> Also, do the temp files get cleaned up in this scenario?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
