Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/597#discussion_r80796011
  
    --- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetGroupScan.java
 ---
    @@ -115,6 +115,8 @@
       private List<RowGroupInfo> rowGroupInfos;
       private Metadata.ParquetTableMetadataBase parquetTableMetadata = null;
       private String cacheFileRoot = null;
    +  private int batchSize;
    +  private static final int DEFAULT_BATCH_LENGTH = 256 * 1024;
    --- End diff --
    
    Are you referring to code here:
    {code}
        // Pick the minimum of recordsPerBatch calculated above, batchSize we
        // got from rowGroupScan (based on limit), and the user-configured
        // batchSize value.
        recordsPerBatch = (int) Math.min(Math.min(recordsPerBatch, batchSize),
            fragmentContext.getOptions().getOption(ExecConstants.PARQUET_RECORD_BATCH_SIZE).num_val.intValue());
    {code}
    
    If I understand correctly, batchSize in ParquetRecordReader comes from 
ParquetRowGroupScan, which comes from ParquetGroupScan, which is set to 
DEFAULT_BATCH_LENGTH.  If I have a RG with 512K rows and I set 
"store.parquet.record_batch_size" to 512K, will your code honor this 512K 
batch size, or will it use DEFAULT_BATCH_LENGTH since that is the smallest 
of the three values?
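    
    To make the concern concrete, here is a quick sketch of that min() computation with hypothetical values (the variable names mirror the snippet above; the numbers are only illustrative):
    {code}
    // Hypothetical illustration of the min() logic, assuming batchSize coming
    // from ParquetRowGroupScan stays at DEFAULT_BATCH_LENGTH (256K).
    public class BatchSizeExample {
        public static void main(String[] args) {
            int recordsPerBatch = 512 * 1024; // rows in the row group
            int batchSize = 256 * 1024;       // DEFAULT_BATCH_LENGTH from ParquetGroupScan
            int userOption = 512 * 1024;      // store.parquet.record_batch_size set by the user
    
            int result = Math.min(Math.min(recordsPerBatch, batchSize), userOption);
            System.out.println(result); // prints 262144 (256K), not the user's 524288 (512K)
        }
    }
    {code}
    If that reading is right, the user's 512K setting is silently capped at 256K.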
    
    Also, if "store.parquet.record_batch_size" is set to a value different from 
DEFAULT_BATCH_LENGTH, why would we still use DEFAULT_BATCH_LENGTH in 
ParquetGroupScan / ParquetRowGroupScan?  People might be confused if they look 
at the serialized physical plan, which shows "batchSize = DEFAULT_BATCH_LENGTH".


