[ https://issues.apache.org/jira/browse/PHOENIX-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14061035#comment-14061035 ]

James Taylor commented on PHOENIX-539:
--------------------------------------

I'm not positive that there's an issue, but I suspect there is, [~maryannxue]. 
The ChunkedResultIterator actually closes the scanner and then opens a new one 
starting where the previous one left off. My suspicion is that after the first 
batch, closing the scanner would cause the hash cache to be cleared.

It's probably best to build a unit test around this with a very small batch 
size to force the issue.
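
A rough sketch of such a test, using a plain JDBC connection to Phoenix: the 
JDBC URL, the chunk-size property name, and the table and class names below 
are placeholders for illustration, not Phoenix's actual configuration keys or 
test infrastructure.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

import org.junit.Assert;
import org.junit.Test;

// Sketch of the suggested repro: force a tiny chunk size so the scanner is
// closed and re-opened after the first few rows of a hash-join query, then
// check that rows read after the first chunk still join correctly.
public class HashJoinChunkedScanTest {

    private static final String URL = "jdbc:phoenix:localhost";              // placeholder
    private static final String CHUNK_SIZE_PROP = "phoenix.query.chunkSize"; // placeholder

    @Test
    public void joinSurvivesScannerReopen() throws Exception {
        Properties props = new Properties();
        props.setProperty(CHUNK_SIZE_PROP, "2"); // far smaller than the row count below

        try (Connection conn = DriverManager.getConnection(URL, props)) {
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS FACT (ID INTEGER PRIMARY KEY, DIM_ID INTEGER)");
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS DIM (ID INTEGER PRIMARY KEY, NAME VARCHAR)");

            try (PreparedStatement upsertDim =
                     conn.prepareStatement("UPSERT INTO DIM VALUES (?, ?)");
                 PreparedStatement upsertFact =
                     conn.prepareStatement("UPSERT INTO FACT VALUES (?, ?)")) {
                for (int i = 0; i < 100; i++) {
                    upsertDim.setInt(1, i);
                    upsertDim.setString(2, "name" + i);
                    upsertDim.execute();
                    upsertFact.setInt(1, i);
                    upsertFact.setInt(2, i);
                    upsertFact.execute();
                }
            }
            conn.commit();

            // The DIM side is broadcast as a hash cache; rows read after the
            // first chunk should still be able to find it.
            int rows = 0;
            try (ResultSet rs = conn.createStatement().executeQuery(
                    "SELECT F.ID, D.NAME FROM FACT F JOIN DIM D ON F.DIM_ID = D.ID")) {
                while (rs.next()) {
                    Assert.assertEquals("name" + rs.getInt(1), rs.getString(2));
                    rows++;
                }
            }
            Assert.assertEquals(100, rows);
        }
    }
}
{code}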

> Implement parallel scanner that does not spool to disk
> ------------------------------------------------------
>
>                 Key: PHOENIX-539
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-539
>             Project: Phoenix
>          Issue Type: Task
>            Reporter: James Taylor
>            Assignee: Gabriel Reid
>             Fix For: 5.0.0, 3.1, 4.1
>
>         Attachments: PHOENIX-539.1.patch, PHOENIX-539.patch
>
>
> In scenarios where a LIMIT is not present on a non-aggregate query that will 
> return a lot of results, Phoenix spools the results to disk. This is less 
> than ideal in these situations. @larsh has created a very good and relatively 
> simple queue-based implementation to replace this.
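
For reference, the general queue-based idea described in the issue above could 
look roughly like the following sketch: parallel producers push rows into a 
bounded BlockingQueue and block when it is full, giving back-pressure instead 
of spooling to disk. This is only an illustration of the approach, not the 
attached patch; all class and variable names are made up.

{code:java}
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: parallel scans feed a bounded in-memory queue rather than a spool file.
public class QueueBackedScanSketch {

    private static final String DONE = "<<done>>"; // sentinel marking one producer finished

    public static void main(String[] args) throws InterruptedException {
        List<List<String>> parallelScans = List.of(
            List.of("a1", "a2", "a3"),
            List.of("b1", "b2"),
            List.of("c1", "c2", "c3", "c4"));

        // Bounded queue caps memory use instead of spilling results to disk.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        ExecutorService pool = Executors.newFixedThreadPool(parallelScans.size());

        for (List<String> scan : parallelScans) {
            pool.submit(() -> {
                try {
                    for (String row : scan) {
                        queue.put(row);   // blocks when the consumer falls behind
                    }
                    queue.put(DONE);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();

        // Consumer: drain until every producer has signalled completion.
        int finished = 0;
        while (finished < parallelScans.size()) {
            String row = queue.take();
            if (DONE.equals(row)) {
                finished++;
            } else {
                System.out.println(row);
            }
        }
    }
}
{code}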



--
This message was sent by Atlassian JIRA
(v6.2#6252)