[ https://issues.apache.org/jira/browse/DRILL-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525934#comment-16525934 ]
ASF GitHub Bot commented on DRILL-6310:
---------------------------------------
Ben-Zvi commented on a change in pull request #1324: DRILL-6310: limit batch
size for hash aggregate
URL: https://github.com/apache/drill/pull/1324#discussion_r198698608
##########
File path: exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatchMemoryManager.java
##########
@@ -201,6 +201,10 @@ public static int adjustOutputRowCount(int rowCount) {
     return (Math.min(MAX_NUM_ROWS, Math.max(Integer.highestOneBit(rowCount) - 1, MIN_NUM_ROWS)));
   }
+  public static int computeOutputRowCount(int batchSize, int rowWidth) {
+    return adjustOutputRowCount(RecordBatchSizer.safeDivide(batchSize, rowWidth));
Review comment:
BTW, `safeDivide` uses `Math.ceil()`, so it may return a result one larger than the floor of the actual division. E.g., safeDivide(15, 2) = 8, not 7. But the `- 1` in the adjustment above probably compensates for that.
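
To make the rounding interaction concrete, here is a minimal standalone sketch (not Drill's actual class): MAX_NUM_ROWS and MIN_NUM_ROWS are placeholder values chosen only for illustration, and safeDivide is re-derived from the Math.ceil() behavior noted above.

    public class RowCountRoundingSketch {
      // Placeholder bounds; the real constants live in RecordBatchMemoryManager.
      private static final int MAX_NUM_ROWS = 64 * 1024;
      private static final int MIN_NUM_ROWS = 1;

      // Ceiling division, mirroring the Math.ceil() behavior attributed to
      // RecordBatchSizer.safeDivide in the review comment.
      private static int safeDivide(int numerator, int denominator) {
        return (int) Math.ceil((double) numerator / denominator);
      }

      // Rounds down to (power of two - 1) and clamps to [MIN_NUM_ROWS, MAX_NUM_ROWS],
      // as in the hunk above.
      private static int adjustOutputRowCount(int rowCount) {
        return Math.min(MAX_NUM_ROWS, Math.max(Integer.highestOneBit(rowCount) - 1, MIN_NUM_ROWS));
      }

      private static int computeOutputRowCount(int batchSize, int rowWidth) {
        return adjustOutputRowCount(safeDivide(batchSize, rowWidth));
      }

      public static void main(String[] args) {
        System.out.println(safeDivide(15, 2));            // 8: ceil(7.5), one more than floor division
        System.out.println(adjustOutputRowCount(8));      // 7: highestOneBit(8) - 1 absorbs the round-up
        System.out.println(computeOutputRowCount(15, 2)); // 7: end to end, the extra row is rounded away
      }
    }

So a 15-byte budget with 2-byte rows ends up at 7 output rows: the ceiling division overshoots by one, and the power-of-two-minus-one adjustment rounds it back down.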
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> limit batch size for hash aggregate
> -----------------------------------
>
> Key: DRILL-6310
> URL: https://issues.apache.org/jira/browse/DRILL-6310
> Project: Apache Drill
> Issue Type: Improvement
> Components: Execution - Flow
> Affects Versions: 1.13.0
> Reporter: Padma Penumarthy
> Assignee: Padma Penumarthy
> Priority: Major
> Fix For: 1.14.0
>
>
> Limit the batch size for the hash aggregate operator based on available memory.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)