Ben-Zvi commented on a change in pull request #1420: DRILL-6664: Limit the
maximum parquet reader batch rows to 64K
URL: https://github.com/apache/drill/pull/1420#discussion_r207680848
##########
File path:
exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java
##########
@@ -322,7 +322,7 @@ private ExecConstants() {
// Controls the flat parquet reader batching constraints (number of records
and memory limit)
public static final String PARQUET_FLAT_BATCH_NUM_RECORDS =
"store.parquet.flat.batch.num_records";
- public static final OptionValidator PARQUET_FLAT_BATCH_NUM_RECORDS_VALIDATOR
= new RangeLongValidator(PARQUET_FLAT_BATCH_NUM_RECORDS, 1, Integer.MAX_VALUE);
+ public static final OptionValidator PARQUET_FLAT_BATCH_NUM_RECORDS_VALIDATOR
= new RangeLongValidator(PARQUET_FLAT_BATCH_NUM_RECORDS, 1, 65535);
Review comment:
This figure is used to set `maxRecordsPerBatch` in `RecordBatchSizerManager`;
hence every batch sent downstream from the Parquet reader would contain at
most 64K-1 records. What limit do other operators use?
Though it wastes a small amount of space, going "minus one" does have a
benefit for variable-length columns (see DRILL-5446): the offsets vector need
not double in size.
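To illustrate the "minus one" point, here is a small sketch (a hypothetical helper, not actual Drill code) of the arithmetic behind it: an offsets vector for N variable-length values holds N + 1 four-byte entries, and buffer allocators typically round a requested size up to the next power of two, so a 64K-record batch pushes the offsets vector just past 256 KiB while a 64K-1 batch fits exactly.

```java
// Hypothetical illustration of the offsets-vector sizing discussed in
// DRILL-5446; the helper names are assumptions, not Drill APIs.
public class OffsetsVectorSizing {
  // An offsets vector for N values holds N + 1 four-byte entries.
  static long offsetsVectorBytes(long numRecords) {
    return 4L * (numRecords + 1);
  }

  // Allocators commonly round a requested size up to the next power of two.
  static long roundUpToPowerOfTwo(long size) {
    long p = Long.highestOneBit(size);
    return (p == size) ? size : p << 1;
  }

  public static void main(String[] args) {
    // 65536 records: 4 * 65537 = 262148 bytes -> rounds up to 512 KiB.
    System.out.println(roundUpToPowerOfTwo(offsetsVectorBytes(65536))); // 524288
    // 65535 records: 4 * 65536 = 262144 bytes -> exactly 256 KiB, no doubling.
    System.out.println(roundUpToPowerOfTwo(offsetsVectorBytes(65535))); // 262144
  }
}
```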
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services