[ https://issues.apache.org/jira/browse/DRILL-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344296#comment-16344296 ]
ASF GitHub Bot commented on DRILL-6032:
---------------------------------------

Github user Ben-Zvi commented on a diff in the pull request:

    https://github.com/apache/drill/pull/1101#discussion_r164604859

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java ---
    @@ -397,11 +384,9 @@ private void delayedSetup() {
         }
         numPartitions = BaseAllocator.nextPowerOfTwo(numPartitions); // in case not a power of 2

    -    if ( schema == null ) { estValuesBatchSize = estOutgoingAllocSize = estMaxBatchSize = 0; } // incoming was an empty batch
    --- End diff --

    Why was the ( schema == null ) check removed? Without it, calling updateEstMaxBatchSize() would throw an NPE when accessing the batch's schema (when calling RecordBatchSizer(incoming)).

> Use RecordBatchSizer to estimate size of columns in HashAgg
> -----------------------------------------------------------
>
>                 Key: DRILL-6032
>                 URL: https://issues.apache.org/jira/browse/DRILL-6032
>             Project: Apache Drill
>          Issue Type: Improvement
>            Reporter: Timothy Farkas
>            Assignee: Timothy Farkas
>            Priority: Major
>             Fix For: 1.13.0
>
>
> We need to use the RecordBatchSizer to estimate the size of columns in the Partition batches created by HashAgg.
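
Editor's note: for readers following the thread, below is a minimal, self-contained Java sketch of the guard pattern the diff removed. Apart from the names quoted in the diff (delayedSetup(), updateEstMaxBatchSize(), the est* fields, and the schema null check), every class and method here is a hypothetical stand-in, not Drill's actual HashAggTemplate or RecordBatchSizer code; it only illustrates why the null check protects an empty incoming batch from an NPE.

// Hypothetical, self-contained illustration of the guard discussed above.
// None of this is the actual Drill source: Schema, IncomingBatch and
// BatchSizer are stand-ins for BatchSchema, RecordBatch and RecordBatchSizer.
public class NullSchemaGuardSketch {

  /** Stand-in for a record batch whose schema is null until data arrives. */
  static class Schema { }
  static class IncomingBatch {
    Schema schema;                              // stays null for an empty batch
    Schema getSchema() { return schema; }
  }

  /** Stand-in for RecordBatchSizer: its constructor dereferences the schema. */
  static class BatchSizer {
    BatchSizer(IncomingBatch incoming) {
      // Throws NullPointerException if the caller did not check for a null
      // schema first, which is the failure mode Ben-Zvi describes.
      incoming.getSchema().toString();
    }
  }

  int estValuesBatchSize, estOutgoingAllocSize, estMaxBatchSize;

  void delayedSetup(IncomingBatch incoming) {
    if (incoming.getSchema() == null) {
      // Incoming was an empty batch: fall back to zero estimates and skip sizing.
      estValuesBatchSize = estOutgoingAllocSize = estMaxBatchSize = 0;
      return;
    }
    updateEstMaxBatchSize(incoming);            // safe: schema is non-null here
  }

  void updateEstMaxBatchSize(IncomingBatch incoming) {
    new BatchSizer(incoming);                   // would NPE on an empty batch
    estMaxBatchSize = 1;                        // placeholder estimate
  }

  public static void main(String[] args) {
    NullSchemaGuardSketch agg = new NullSchemaGuardSketch();
    agg.delayedSetup(new IncomingBatch());      // empty batch: guard prevents the NPE
    System.out.println("estMaxBatchSize = " + agg.estMaxBatchSize);  // prints 0
  }
}

With the guard removed, as in the diff, the sizer constructor would be the first code to touch the schema of an empty batch, so the sketch's main() would fail with a NullPointerException instead of printing 0.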