[ https://issues.apache.org/jira/browse/DRILL-8484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17828184#comment-17828184 ]

ASF GitHub Bot commented on DRILL-8484:
---------------------------------------

shfshihuafeng commented on code in PR #2889:
URL: https://github.com/apache/drill/pull/2889#discussion_r1529743261


##########
exec/java-exec/src/main/java/org/apache/drill/exec/cache/VectorAccessibleSerializable.java:
##########
@@ -155,12 +157,18 @@ public void readFromStreamWithContainer(VectorContainer myContainer, InputStream
     for (SerializedField metaData : fieldList) {
       final int dataLength = metaData.getBufferLength();
       final MaterializedField field = MaterializedField.create(metaData);
-      final DrillBuf buf = allocator.buffer(dataLength);
-      final ValueVector vector;
+      DrillBuf buf = null;
+      ValueVector vector = null;
       try {
+        buf = allocator.buffer(dataLength);
         buf.writeBytes(input, dataLength);
         vector = TypeHelper.getNewVector(field, allocator);
         vector.load(metaData, buf);
+      } catch (OutOfMemoryException oom) {
+        for (ValueVector valueVector : vectorList) {
+          valueVector.clear();
+        }
+        throw UserException.memoryError(oom).message("Allocator memory failed").build(logger);

Review Comment:
     When we allocate memory with `allocator.buffer(dataLength)` against the HashJoinPOP allocator and the actual allocation exceeds maxAllocation (the per-operator limit computed by computeOperatorMemory), an OutOfMemoryException is thrown, as in my test below.
     The user can then increase direct memory (DRILL_MAX_DIRECT_MEMORY) or reduce concurrency based on actual conditions.
   
   ```
   Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 16384 (rounded from 14359) due to memory limit (41943040). Current allocation: 22583616
           at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:241)
           at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:216)
           at org.apache.drill.exec.cache.VectorAccessibleSerializable.readFromStreamWithContainer(VectorAccessibleSerializable.java:172)
   ```
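
To read the change outside the diff fragment, here is a condensed sketch of the deserialization loop with the added OutOfMemoryException handling. The identifiers (`allocator`, `vectorList`, `TypeHelper`, `UserException`, `logger`) are the ones visible in the hunk; the standalone helper class/method and the `buf.release()` in the `finally` block are assumptions made only so the sketch compiles on its own, not a claim about the final patch.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.List;

import org.apache.drill.common.exceptions.UserException;
import org.apache.drill.exec.exception.OutOfMemoryException;
import org.apache.drill.exec.expr.TypeHelper;
import org.apache.drill.exec.memory.BufferAllocator;
import org.apache.drill.exec.proto.UserBitShared.SerializedField;
import org.apache.drill.exec.record.MaterializedField;
import org.apache.drill.exec.vector.ValueVector;

import io.netty.buffer.DrillBuf;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical helper class, used only to illustrate the loop shown in the hunk above.
class ReadVectorsSketch {
  private static final Logger logger = LoggerFactory.getLogger(ReadVectorsSketch.class);

  static void loadVectors(List<SerializedField> fieldList, InputStream input,
                          BufferAllocator allocator, List<ValueVector> vectorList)
      throws IOException {
    for (SerializedField metaData : fieldList) {
      final int dataLength = metaData.getBufferLength();
      final MaterializedField field = MaterializedField.create(metaData);
      DrillBuf buf = null;
      try {
        // May throw OutOfMemoryException once the operator's maxAllocation is exceeded.
        buf = allocator.buffer(dataLength);
        buf.writeBytes(input, dataLength);
        ValueVector vector = TypeHelper.getNewVector(field, allocator);
        vector.load(metaData, buf);
        vectorList.add(vector);
      } catch (OutOfMemoryException oom) {
        // The fix: free the vectors deserialized in earlier iterations so the
        // HashJoinPOP allocator is not left holding their buffers.
        for (ValueVector valueVector : vectorList) {
          valueVector.clear();
        }
        throw UserException.memoryError(oom).message("Allocator memory failed").build(logger);
      } finally {
        if (buf != null) {
          buf.release(); // vector.load() takes its own reference to the buffer
        }
      }
    }
  }
}
```

The point of the ordering is that the cleanup runs before the exception escapes readFromStreamWithContainer, so the fragment can shut down without the HashJoinPOP child allocator reporting outstanding ledgers like the ones in the leak dump quoted below.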



> HashJoinPOP memory leak is caused by an OOM exception when reading data from
> a stream with a container
> -------------------------------------------------------------------------------------------------
>
>                 Key: DRILL-8484
>                 URL: https://issues.apache.org/jira/browse/DRILL-8484
>             Project: Apache Drill
>          Issue Type: Bug
>          Components:  Server
>    Affects Versions: 1.21.1
>            Reporter: shihuafeng
>            Priority: Major
>             Fix For: 1.22.0
>
>
> *Describe the bug*
> An OOM exception occurred when reading data from a stream with a container,
> resulting in a HashJoinPOP memory leak.
> *To Reproduce*
> Prepare data for TPC-H scale factor 1, then:
>  # Run TPC-H query 8 (sql8) with 30 concurrent queries.
>  # Set direct memory to 5 GB.
>  # When an OutOfMemoryException occurred, stop all queries.
>  # Observe the memory leak.
> *Leak info*
> {code:java}
>    Allocator(frag:5:0) 5000000/1000000/31067136/40041943040 
> (res/actual/peak/limit)
>       child allocators: 1
>         Allocator(op:5:0:1:HashJoinPOP) 1000000/16384/22822912/41943040 
> (res/actual/peak/limit)
>           child allocators: 0
>           ledgers: 2
>             ledger[1882757] allocator: op:5:0:1:HashJoinPOP), isOwning: true, 
> size: 8192, references: 2, life: 16936270178816167..0, allocatorManager: 
> [1703465, life: 16936270178813617..0] holds 4 buffers.
>                 DrillBuf[2041995], udle: [1703441 0..957]{code}
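
As context for the limit figure in the dump above (`op:5:0:1:HashJoinPOP ... /41943040 (res/actual/peak/limit)`), here is a small standalone sketch, not part of the patch, that creates a child allocator with the same name and 41943040-byte limit and allocates 16 KB buffers until it trips the limit check, producing the same OutOfMemoryException as the stack trace in the review comment. The allocator names and sizes come from the dump; the `RootAllocator` constructor and `newChildAllocator` signature are assumptions based on Drill's memory package and may need adjusting.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.drill.exec.exception.OutOfMemoryException;
import org.apache.drill.exec.memory.BufferAllocator;
import org.apache.drill.exec.memory.RootAllocator;

import io.netty.buffer.DrillBuf;

public class HashJoinAllocatorLimitSketch {
  public static void main(String[] args) {
    // Root allocator stands in for the fragment allocator (frag:5:0) with ~5 GB direct memory.
    try (BufferAllocator root = new RootAllocator(5L * 1024 * 1024 * 1024);
         // Child allocator mirrors op:5:0:1:HashJoinPOP with the 41943040-byte
         // maxAllocation reported in the leak dump.
         BufferAllocator hashJoin =
             root.newChildAllocator("op:5:0:1:HashJoinPOP", 1_000_000, 41_943_040)) {
      List<DrillBuf> held = new ArrayList<>();
      try {
        while (true) {
          // Same call the deserialization loop makes; once the child allocator's
          // outstanding allocation would exceed its limit, buffer() throws
          // OutOfMemoryException ("due to memory limit (41943040)").
          held.add(hashJoin.buffer(16_384));
        }
      } catch (OutOfMemoryException e) {
        System.out.println("Hit operator memory limit after "
            + held.size() + " buffers: " + e.getMessage());
      } finally {
        // Release everything so the allocators can close cleanly.
        held.forEach(DrillBuf::release);
      }
    }
  }
}
```

If the loop skipped the release step, closing the child allocator would report outstanding ledgers like the ones listed in the dump, which is the situation the patch's cleanup in readFromStreamWithContainer avoids.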



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
