[ 
https://issues.apache.org/jira/browse/PHOENIX-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272370#comment-17272370
 ] 

ASF GitHub Bot commented on PHOENIX-6340:
-----------------------------------------

stoty commented on a change in pull request #1112:
URL: https://github.com/apache/phoenix/pull/1112#discussion_r564791378



##########
File path: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/RoundRobinResultIterator.java
##########
@@ -114,12 +114,15 @@ public Tuple next() throws SQLException {
                 index = (index + 1) % size;
             }
         }
+        close();

Review comment:
       This is the fix
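
A minimal sketch of the fix's intent (not the actual Phoenix class; names are illustrative): once a full round over the delegate iterators yields nothing, the iterator closes itself so subsequent `next()` calls see a definitive end-of-results instead of attempting to re-read cached rows, which is what produced the infinite loop.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified round-robin iterator: cycles over delegates, and closes
// itself when a full pass finds every delegate exhausted.
class RoundRobinSketch<T> {
    private final List<Iterator<T>> delegates;
    private int index = 0;
    private boolean closed = false;

    RoundRobinSketch(List<Iterator<T>> delegates) {
        this.delegates = new ArrayList<>(delegates);
    }

    T next() {
        if (closed) return null;
        int size = delegates.size();
        for (int scanned = 0; scanned < size; scanned++) {
            Iterator<T> itr = delegates.get(index);
            index = (index + 1) % size;
            if (itr.hasNext()) return itr.next();
        }
        close();   // all delegates exhausted in one full pass: terminal state
        return null;
    }

    void close() { closed = true; }
    boolean isClosed() { return closed; }
}
```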

##########
File path: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/RoundRobinResultIterator.java
##########
@@ -300,7 +305,11 @@ public Tuple call() throws Exception {
         private RoundRobinIterator(PeekingResultIterator itr, Tuple tuple) {
             this.delegate = itr;
             this.tuple = tuple;
-            this.numRecordsRead = 0;
+            if (tuple != null) {

Review comment:
       This is meant to be a performance fix.
   The tuple was peek()-ed from the delegate, so that element is actually the 
first of the cached rows.
   Fixing this SHOULD enable processing the 2nd and later batches in parallel, 
but I haven't tested this.
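
A hedged sketch of the accounting change (a simplification, not the Phoenix source; `numRecordsRead` mirrors the patch, the rest is illustrative): because the constructor receives a tuple that was already peek()-ed out of the delegate's cached batch, that tuple must count toward the rows read from the batch, so the counter starts at 1 rather than 0.

```java
import java.util.Iterator;

// Simplified wrapper that counts rows consumed from the current cached
// batch, including the tuple that was peek()-ed before construction.
class CountingIterator<T> {
    private final Iterator<T> delegate;
    private T peeked;            // tuple already pulled from the cache
    private int numRecordsRead;

    CountingIterator(Iterator<T> delegate, T peeked) {
        this.delegate = delegate;
        this.peeked = peeked;
        // The peeked tuple is the first row of the cached batch; counting
        // it here keeps the batch-boundary detection accurate.
        this.numRecordsRead = (peeked != null) ? 1 : 0;
    }

    T next() {
        if (peeked != null) {
            T t = peeked;
            peeked = null;
            return t;            // already counted in the constructor
        }
        if (!delegate.hasNext()) return null;
        numRecordsRead++;
        return delegate.next();
    }

    int recordsRead() { return numRecordsRead; }
}
```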

##########
File path: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/RoundRobinResultIterator.java
##########
@@ -176,13 +179,15 @@ QueryPlan getQueryPlan() {
 
     private List<RoundRobinIterator> getIterators() throws SQLException {
         if (closed) { return Collections.emptyList(); }
-        if (openIterators.size() > 0 && openIterators.size() == 
numScannersCacheExhausted) {
+        if (openIterators.size() > 0 && openIterators.size() <= 
numScannersCacheExhausted) {

Review comment:
       This probably does nothing; it's just a belt-and-suspenders change.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


> Infinitely looping ResultSet.next()
> -----------------------------------
>
>                 Key: PHOENIX-6340
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6340
>             Project: Phoenix
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 5.1.0
>            Reporter: Istvan Toth
>            Assignee: Istvan Toth
>            Priority: Blocker
>         Attachments: create_table.sql, make_csv.py
>
>
> Under certain conditions, ResultSet.next() will loop over the results indefinitely.
> Unfortunately, I haven't been able to replicate this in a unit test.
> Steps for manual replication:
> 1. Download and run make_csv.py
> {noformat}
> python3 make_csv.py > data.csv
> {noformat}
> 2. Add the following to bin/hbase-site.xml, then issue *mvn clean package*
> {noformat}
>   <property>
>     <name>hbase.client.scanner.caching</name>
>     <value>100</value>
>   </property>
> {noformat}
> 3. run *bin/phoenix_sandbox.py*
> 4. connect to sandbox with sqlline, create test table (see create_table.sql)
> 5. load the test data (data.csv) into the table
> {noformat}
> bin/psql.py -s -a ";"  -t LARGE_TABLE localhost:<sandbox_port>  /data.csv
> {noformat}
> 6. connect to sandbox with sqlline, run the following command
> {noformat}
> select * from large_table where id>100 and id<200 limit 300;
> {noformat}
> Instead of returning 99 rows, it returns 297 rows. If you omit the limit 
> clause, the result set loops indefinitely.
> If you switch sqlline to incremental mode (*!set incremental true*), then the 
> bug won't trigger, and the query will return 99 rows correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
