[jira] [Commented] (DRILL-8483) SpilledRecordBatch memory leak when the program threw an exception during the process of building a hash table

2024-03-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DRILL-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17832322#comment-17832322
 ] 

ASF GitHub Bot commented on DRILL-8483:
---

cgivre merged PR #2888:
URL: https://github.com/apache/drill/pull/2888




> SpilledRecordBatch memory leak when the program threw an exception during the 
> process of building a hash table
> --
>
> Key: DRILL-8483
> URL: https://issues.apache.org/jira/browse/DRILL-8483
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.21.1
>Reporter: shihuafeng
>Priority: Major
> Fix For: 1.21.2
>
>
> During the process of reading data from disk to build hash tables in 
> memory, an exception thrown partway through leaves SpilledRecordBatch 
> instances unreleased, resulting in a memory leak.
> Exception log as follows:
> {code:java}
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 8192 due to memory limit (41943040). Current allocation: 3684352
>         at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:241)
>         at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:216)
>         at org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:411)
>         at org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:270)
>         at org.apache.drill.exec.physical.impl.common.HashPartition.allocateNewVectorContainer(HashPartition.java:215)
>         at org.apache.drill.exec.physical.impl.common.HashPartition.allocateNewCurrentBatchAndHV(HashPartition.java:238)
>         at org.apache.drill.exec.physical.impl.common.HashPartition.<init>(HashPartition.java:165){code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (DRILL-8483) SpilledRecordBatch memory leak when the program threw an exception during the process of building a hash table

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DRILL-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824832#comment-17824832
 ] 

ASF GitHub Bot commented on DRILL-8483:
---

shfshihuafeng opened a new pull request, #2888:
URL: https://github.com/apache/drill/pull/2888

   …exception during the process of building a hash table (#2887)
   
   # [DRILL-8483](https://issues.apache.org/jira/browse/DRILL-8483): 
SpilledRecordBatch memory leak when the program threw an exception during the 
process of building a hash table
   
   
   ## Description
   
   During the process of reading data from disk to build hash tables in 
   memory, an exception thrown partway through leaves SpilledRecordBatch 
   instances unreleased, resulting in a memory leak.
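
   The leak follows a common construction-time pattern: resources acquired before the exception are never released, because the half-built object is discarded and nothing ever calls its cleanup. A minimal, self-contained sketch of that pattern and the fix (hypothetical `Buffer`/`Partition` names for illustration, not Drill's actual classes):

```java
import java.util.ArrayList;
import java.util.List;

// A countable stand-in for an allocated direct-memory buffer.
class Buffer implements AutoCloseable {
    static int open = 0;                 // number of live allocations
    Buffer() { open++; }
    @Override public void close() { open--; }
}

// Stand-in for a partition that allocates several buffers in its constructor.
class Partition {
    private final List<Buffer> buffers = new ArrayList<>();

    Partition(int count, int failAt) {
        try {
            for (int i = 0; i < count; i++) {
                if (i == failAt) {
                    // Simulates the OutOfMemoryException hitting mid-allocation.
                    throw new IllegalStateException("simulated allocation failure");
                }
                buffers.add(new Buffer());
            }
        } catch (RuntimeException e) {
            // Without this cleanup, the buffers allocated before the failure
            // would never be closed -- the reported leak.
            close();
            throw e;
        }
    }

    void close() {
        for (Buffer b : buffers) {
            b.close();
        }
        buffers.clear();
    }
}

public class LeakSketch {
    public static void main(String[] args) {
        try {
            new Partition(10, 5);        // fails after 5 successful allocations
        } catch (RuntimeException ignored) {
        }
        System.out.println("open buffers: " + Buffer.open);
    }
}
```

   With the catch-and-release block in place the failed construction leaves no live buffers behind; removing it reproduces the leak this PR addresses.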
   
   ## Documentation
   
   ## Testing
   Prepared data for TPC-H scale factor 1, then:
   1. Ran 30 concurrent streams of TPC-H query 8.
   2. Set direct memory to 5 GB.
   3. When an OutOfMemoryException occurred, stopped all queries.
   4. Checked for the memory leak.
   
   test script
   
   ```shell
   random_sql(){
     # for i in `seq 1 3`
     while true
     do
       num=$((RANDOM%22+1))
       # $fileName acts as an external stop flag: create it to end the loop
       if [ -f "$fileName" ]; then
         echo "$fileName exists"
         exit 0
       else
         $drill_home/sqlline -u "jdbc:drill:zk=ip:2181/drillbits_shf" \
           -f tpch_sql8.sql >> sql8.log 2>&1
       fi
     done
   }

   main(){
     # sleep 2h

     # TPC-H power test: 30 concurrent query streams
     for i in `seq 1 30`
     do
       random_sql &
     done
   }

   main
   ```
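
   One way to perform the leak check in step 4 (a workflow assumption, not part of the original report): query Drill's `sys.memory` system table after all queries have stopped. Direct memory still held while no fragments are running points to leaked allocations. The sqlline path and ZooKeeper connection string below mirror the placeholders in the script above.

   ```shell
   # After all queries stop, inspect per-drillbit allocator state.
   # Connection string and paths are placeholders matching the script above.
   $drill_home/sqlline -u "jdbc:drill:zk=ip:2181/drillbits_shf" \
     -e "SELECT hostname, direct_current, direct_max FROM sys.memory;"
   ```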



