[ https://issues.apache.org/jira/browse/PHOENIX-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208779#comment-14208779 ]

James Taylor commented on PHOENIX-1435:
---------------------------------------

In tests, we delete rows instead of dropping the HBase metadata (because 
dropping it is too slow). Deleting rows uses guideposts. For your test, just 
set this config to true:
{code}
        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
{code}
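
For reference, a minimal sketch of how that property is usually wired up in a Phoenix integration test, assuming your test extends BaseTest and uses its setUpTestDriver helper with ReadOnlyProps (the class name GuidepostIT is hypothetical; adjust to whatever base class your test actually uses):
{code}
import java.util.Map;

import org.apache.phoenix.query.BaseTest;
import org.apache.phoenix.query.QueryServices;
import org.apache.phoenix.util.ReadOnlyProps;
import org.junit.BeforeClass;

import com.google.common.collect.Maps;

// Hypothetical test class name, for illustration only.
public class GuidepostIT extends BaseTest {

    @BeforeClass
    public static void doSetup() throws Exception {
        Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
        // Drop the HBase tables for real on DROP TABLE (slower, but avoids
        // the delete-rows cleanup path that depends on guideposts).
        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
    }
}
{code}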

> Create unit test that uses 15K guideposts
> -----------------------------------------
>
>                 Key: PHOENIX-1435
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1435
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Samarth Jain
>
> We're seeing memory issues when 14K guideposts are used to execute a query. 
> Other issues contribute to the high number of guideposts (PHOENIX-1434), and 
> some LIMIT queries don't need to execute in parallel (PHOENIX-1432), but we 
> should still get to the bottom of why this causes memory and/or CPU issues.
> One question with this kind of scenario - why didn't the query get rejected, 
> as it seems like it would fill up the queue past the allowed 5000 threads? 
> Also, do the temp files get cleaned up in this scenario?


