jihoonson commented on a change in pull request #10685:
URL: https://github.com/apache/druid/pull/10685#discussion_r551659244
##########
File path:
processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/ByteBufferMinMaxOffsetHeap.java
##########
@@ -59,6 +60,35 @@ public ByteBufferMinMaxOffsetHeap(
this.heapIndexUpdater = heapIndexUpdater;
}
+  public ByteBufferMinMaxOffsetHeap copy()
+  {
+    LimitedBufferHashGrouper.BufferGrouperOffsetHeapIndexUpdater updater =
+        Optional
+            .ofNullable(heapIndexUpdater)
+            .map(LimitedBufferHashGrouper.BufferGrouperOffsetHeapIndexUpdater::copy)
+            .orElse(null);
+
+    // deep copy buf
+    ByteBuffer buffer = ByteBuffer.allocateDirect(buf.capacity());
Review comment:
GroupBy queries use both processing buffers and merge buffers. The
former are used when computing per-segment results, while the latter are used
for all other purposes (merging per-segment results, computing subtotals, etc.).
In particular, the broker uses only the merge buffers, to process subtotals and
subqueries.
The merge buffers are maintained in a `BlockingPool` to manage the memory
usage of brokers and historicals. This is important to avoid query failures due
to OOM errors. Here, you should not allocate memory directly, but instead get a
buffer from the merge buffer pool. Check out [how a merge buffer is currently
acquired](https://github.com/apache/druid/blob/master/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/GroupByRowProcessor.java#L117-L126).
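To illustrate the idea, here is a minimal, self-contained sketch of the borrow/return pattern behind a bounded buffer pool. This is not Druid's actual API (Druid's real pool is `org.apache.druid.collections.BlockingPool`, which hands out closeable resource holders); the class and method names below are hypothetical, assumed for illustration only.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a bounded merge-buffer pool. All direct memory is
// allocated up front; callers borrow a buffer and must return it when done.
public class MergeBufferPoolSketch
{
  private final BlockingQueue<ByteBuffer> pool;

  public MergeBufferPoolSketch(int numBuffers, int bufferSize)
  {
    pool = new ArrayBlockingQueue<>(numBuffers);
    for (int i = 0; i < numBuffers; i++) {
      pool.add(ByteBuffer.allocateDirect(bufferSize));
    }
  }

  // Wait (up to a timeout) for a free buffer instead of allocating a new
  // one. Bounding the total direct memory this way is what prevents
  // brokers/historicals from failing with OOM errors under load.
  public ByteBuffer take(long timeoutMs) throws InterruptedException
  {
    ByteBuffer buf = pool.poll(timeoutMs, TimeUnit.MILLISECONDS);
    if (buf == null) {
      throw new IllegalStateException("Timed out waiting for a merge buffer");
    }
    buf.clear();
    return buf;
  }

  // Return the buffer to the pool. Druid wraps this step in a closeable
  // resource holder so that try-with-resources releases it automatically.
  public void giveBack(ByteBuffer buf)
  {
    pool.add(buf);
  }
}
```

The point for this PR: `copy()` should borrow from such a pool rather than call `ByteBuffer.allocateDirect` itself, so the copy participates in the same memory accounting as every other merge-buffer user.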