[ 
https://issues.apache.org/jira/browse/DRILL-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604110#comment-15604110
 ] 

ASF GitHub Bot commented on DRILL-4884:
---------------------------------------

Github user zbdzzg commented on a diff in the pull request:

    https://github.com/apache/drill/pull/584#discussion_r84828174
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/IteratorValidatorBatchIterator.java ---
    @@ -301,7 +301,7 @@ public IterOutcome next() {
                       "Incoming batch [#%d, %s] has an empty schema. This is 
not allowed.",
                       instNum, batchTypeName));
             }
    -        if (incoming.getRecordCount() > MAX_BATCH_SIZE) {
    +        if (incoming.getRecordCount() >= MAX_BATCH_SIZE) {
    --- End diff ---
    
    Thanks for the reply.
    
    I tried turning the assertion off and rerunning; the IOB is produced again. 
The IllegalStateException is thrown when the assertion is on.
    
    The built-in storage plug-ins always enforce a smaller batch limit (below 
65536) in their reader implementations, so this problem is avoided by design. 
The scenario in which we ran into it is a customized storage plug-in with a 
large batch size set in the reader. My guess is that the assertion is intended 
to be enabled while testing / debugging a newly written storage plug-in, so 
the IllegalStateException should be thrown while the assertion is on; then we 
know that our batch size is too large. By comparison, a weird IOB exception 
just leaves us confused.
    
    If we consider the case where the assertion is disabled, another bounds 
check could be inserted into LimitRecordBatch.java (a rough sketch follows at 
the end of this comment), but what I see is that most of the check logic lives 
in the IteratorValidator. I suppose the author wants storage developers to get 
everything working during early debugging, and the assertion can be turned off 
once exceptions like this IOB are resolved. So I don't think changing 
LimitRecordBatch.java is more reasonable than doing the same thing in the 
validator. This change benefits plugin development. Do you agree?
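    For illustration only, the kind of check discussed above would look 
roughly like the sketch below. It is a hypothetical sketch, not an actual 
patch: the method name comes from the Drill sources quoted in this issue, but 
the guard and its message text are made up.
    
        // Hypothetical sketch: an explicit bounds check inside
        // LimitRecordBatch instead of relying on the IteratorValidator.
        private void limitWithNoSV(int recordCount) {
          // SelectionVector2 indexes records with a 2-byte char, so a batch
          // of more than 65535 records cannot be handled safely here.
          if (recordCount > Character.MAX_VALUE) {
            throw new IllegalStateException(String.format(
                "Batch of %d records exceeds the limit of %d",
                recordCount, Character.MAX_VALUE));
          }
          // ... existing offset / fetch logic would follow unchanged ...
        }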


> Drill produced an IOB exception while querying data with a LIMIT of 65536 
> using a non-batched reader
> ---------------------------------------------------------------------------------------------
>
>                 Key: DRILL-4884
>                 URL: https://issues.apache.org/jira/browse/DRILL-4884
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Query Planning & Optimization
>    Affects Versions: 1.8.0
>         Environment: CentOS 6.5 / JAVA 8
>            Reporter: Hongze Zhang
>            Assignee: Jinfeng Ni
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Drill produces an IOB while using a non-batched scanner and a SQL LIMIT of 
> 65536.
> SQL:
> {noformat}
> select id from xx limit 1 offset 65535
> {noformat}
> Result:
> {noformat}
>       at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:324)
>  [classes/:na]
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:184)
>  [classes/:na]
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:290)
>  [classes/:na]
>       at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [classes/:na]
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_101]
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_101]
>       at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> Caused by: java.lang.IndexOutOfBoundsException: index: 131072, length: 2 
> (expected: range(0, 131072))
>       at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:175) 
> ~[classes/:4.0.27.Final]
>       at io.netty.buffer.DrillBuf.chk(DrillBuf.java:197) 
> ~[classes/:4.0.27.Final]
>       at io.netty.buffer.DrillBuf.setChar(DrillBuf.java:517) 
> ~[classes/:4.0.27.Final]
>       at 
> org.apache.drill.exec.record.selection.SelectionVector2.setIndex(SelectionVector2.java:79)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.limitWithNoSV(LimitRecordBatch.java:167)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.doWork(LimitRecordBatch.java:145)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:94)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> ~[classes/:na]
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256)
>  ~[classes/:na]
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250)
>  ~[classes/:na]
>       at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.8.0_101]
>       at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_101]
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>  ~[hadoop-common-2.7.1.jar:na]
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250)
>  [classes/:na]
>       ... 4 common frames omitted
> {noformat}
> Code from IteratorValidatorBatchIterator.java shows that an incoming batch 
> of exactly 65536 records is allowed to pass:
> {noformat}
> if (incoming.getRecordCount() > MAX_BATCH_SIZE) { // MAX_BATCH_SIZE == 65536
>   throw new IllegalStateException(
>       String.format(
>           "Incoming batch [#%d, %s] has size %d, which is beyond the"
>           + " limit of %d",
>           instNum, batchTypeName, incoming.getRecordCount(), MAX_BATCH_SIZE));
> }
> {noformat}
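> As a quick sanity check of the off-by-one, plain Java using only the numbers 
> from this report (no Drill code involved):
> {noformat}
> public class OffByOneDemo {
>   public static void main(String[] args) {
>     final int MAX_BATCH_SIZE = 65536;
>     int recordCount = 65536;                          // batch size from this report
>     System.out.println(recordCount > MAX_BATCH_SIZE);  // false: current check passes
>     System.out.println(recordCount >= MAX_BATCH_SIZE); // true: patched check throws
>     // SelectionVector2 writes one 2-byte char per record, so writing entry
>     // #65536 targets byte offset 65536 * 2 = 131072, exactly the failing
>     // index in the stack trace above.
>     System.out.println(65536 * 2);                    // 131072
>   }
> }
> {noformat}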
> Code from LimitRecordBatch.java shows that the loop will not terminate as 
> expected when the incoming batch returns 65536 records:
> {noformat}
>   private void limitWithNoSV(int recordCount) {
>     final int offset = Math.max(0, Math.min(recordCount - 1, recordsToSkip));
>     recordsToSkip -= offset;
>     int fetch;
>     if(noEndLimit) {
>       fetch = recordCount;
>     } else {
>       fetch = Math.min(recordCount, offset + recordsLeft);
>       recordsLeft -= Math.max(0, fetch - offset);
>     }
>     int svIndex = 0;
>     for(char i = (char) offset; i < fetch; svIndex++, i++) { // since 
> fetch == recordCount == 65536, i wraps from 65535 back to 0 (char is 16 bits 
> wide), so i < fetch stays true and the loop abnormally continues.
>       outgoingSv.setIndex(svIndex, i);
>     }
>     outgoingSv.setRecordCount(svIndex);
>   }
> {noformat}
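> The wrap-around can be reproduced as a minimal standalone demo, assuming 
> nothing beyond plain Java:
> {noformat}
> public class CharWrapDemo {
>   public static void main(String[] args) {
>     int fetch = 65536;        // fetch == recordCount == 65536, as above
>     int iterations = 0;
>     for (char i = 0; i < fetch; i++) {
>       if (++iterations > fetch) {
>         // i wrapped from 65535 back to 0, so i < fetch never becomes false
>         System.out.println("loop did not terminate; i wrapped to " + (int) i);
>         return;
>       }
>     }
>     System.out.println("loop terminated normally");
>   }
> }
> {noformat}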
> The IllegalStateException should be thrown once the incoming batch reaches 
> 65536 records, i.e. the allowed maximum should be 65535 rather than 65536.


