[
https://issues.apache.org/jira/browse/DRILL-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527049#comment-16527049
]
ASF GitHub Bot commented on DRILL-6147:
---------------------------------------
sachouche commented on a change in pull request #1330: DRILL-6147: Adding
Columnar Parquet Batch Sizing functionality
URL: https://github.com/apache/drill/pull/1330#discussion_r198930977
##########
File path:
exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java
##########
@@ -315,6 +315,13 @@ private ExecConstants() {
  public static final String PARQUET_FLAT_READER_BULK = "store.parquet.flat.reader.bulk";
  public static final OptionValidator PARQUET_FLAT_READER_BULK_VALIDATOR = new BooleanValidator(PARQUET_FLAT_READER_BULK);
+  // Controls the flat Parquet reader batching constraints (number of records and memory limit)
+  public static final String PARQUET_FLAT_BATCH_NUM_RECORDS = "store.parquet.flat.batch.num_records";
Review comment:
- First of all, these constraints are meant for internal use.
- Providing a constraint on the number of rows allows us a) to cap this number (e.g., below 64k-1 to avoid overflowing vectors with offsets or nullables) and b) to let the performance team tune the best number of rows per batch; for example, the memory constraint could be 32 or 16 MB, yet a batch of 8k rows may be more than enough for good performance. The higher memory limit is there to handle wide selections.
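To illustrate the reasoning above, here is a minimal, hypothetical sketch (not Drill's actual implementation; the class and method names are invented) of how a reader might combine the two constraints: the configured record count is clamped against the hard 64k-1 vector cap, and the memory limit further reduces it based on an estimated row width.

```java
// Hypothetical sketch of dual batch-size constraints (record count + memory).
public class BatchSizer {
  // Hard cap below 64k so 2-byte-indexed structures and offset vectors cannot overflow.
  static final int MAX_RECORDS = 65535;

  /**
   * Returns the effective records-per-batch given a configured target,
   * an estimated row width in bytes, and a batch memory limit in bytes.
   */
  static int effectiveBatchRecords(int configured, int bytesPerRow, long memoryLimit) {
    // Rows that fit in the memory budget, guarding against zero-width rows.
    int byMemory = (int) Math.min(Integer.MAX_VALUE, memoryLimit / Math.max(1, bytesPerRow));
    // Take the tightest of: configured target, vector cap, memory budget.
    return Math.max(1, Math.min(Math.min(configured, MAX_RECORDS), byMemory));
  }
}
```

For example, with a 16 MB memory limit and an estimated 4 KB per row (wide selection), memory governs and the batch shrinks to 4096 rows even if 8k rows are configured; with narrow rows, the configured count or the 64k-1 cap governs instead.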
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Limit batch size for Flat Parquet Reader
> ----------------------------------------
>
> Key: DRILL-6147
> URL: https://issues.apache.org/jira/browse/DRILL-6147
> Project: Apache Drill
> Issue Type: Improvement
> Components: Storage - Parquet
> Reporter: salim achouche
> Assignee: salim achouche
> Priority: Major
> Fix For: 1.14.0
>
>
> The Parquet reader currently uses a hard-coded batch size limit (32k rows)
> when creating scan batches; there is no parameter nor any logic for
> controlling the amount of memory used. This enhancement will allow Drill to
> take an extra input parameter to control direct memory usage.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)