tomscut commented on issue #1359:
URL: 
https://github.com/apache/incubator-gluten/issues/1359#issuecomment-3690774094

   @zhztheplayer I'm hitting the same issue: the task fails with an off-heap OOM even though spark.gluten.memory.dynamic.offHeap.sizing.enabled=true. Environment and log below.
   
   ```
   GCC Version | GCC: (GNU) 11.2.1 20210728 (Red Hat 11.2.1-1)
   Gluten Branch | improve-docs
   Gluten Build Time | 2025-12-24T09:03:43Z
   Gluten Repo URL | https://github.com/apache/incubator-gluten.git
   Gluten Revision | e24f811b95ec71333c7ad25b5199a46c960daa8a
   Gluten Revision Time | 2025-12-24 02:52:10 +0000
   Gluten Version | 1.5.1-SNAPSHOT
   Hadoop Version | 2.7.4
   Java Version | 1.8
   Scala Version | 2.12.15
   Spark Version | 3.4.4
   ```
   
   ```
   25/12/25 10:44:35 WARN ManagedReservationListener: Error reserving memory from target
   org.apache.gluten.memory.memtarget.ThrowOnOomMemoryTarget$OutOfMemoryException: Not enough spark off-heap execution memory. Acquired: 1392.0 MiB, granted: 0.0 B. Try tweaking config option spark.memory.offHeap.size to get larger space to run this application (if spark.gluten.memory.dynamic.offHeap.sizing.enabled is not enabled).
   Current config settings:
        spark.gluten.memory.offHeap.size.in.bytes=7.0 GiB
        spark.gluten.memory.task.offHeap.size.in.bytes=1798.2 MiB
        spark.gluten.memory.conservative.task.offHeap.size.in.bytes=899.1 MiB
        spark.memory.offHeap.enabled=true
        spark.gluten.memory.dynamic.offHeap.sizing.enabled=true
   Dynamic off-heap sizing memory target stats:
        DynamicOffHeapSizing.26825: Current used bytes: 5.4 GiB, peak bytes: 5.4 GiB

        at org.apache.gluten.memory.memtarget.ThrowOnOomMemoryTarget.borrow(ThrowOnOomMemoryTarget.java:104)
        at org.apache.gluten.memory.listener.ManagedReservationListener.reserve(ManagedReservationListener.java:49)
        at org.apache.gluten.vectorized.ColumnarBatchOutIterator.nativeHasNext(Native Method)
        at org.apache.gluten.vectorized.ColumnarBatchOutIterator.hasNext0(ColumnarBatchOutIterator.java:57)
        at org.apache.gluten.iterator.ClosableIterator.hasNext(ClosableIterator.java:39)
        at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:45)
        at org.apache.gluten.iterator.IteratorsV1$InvocationFlowProtection.hasNext(IteratorsV1.scala:154)
        at org.apache.gluten.iterator.IteratorsV1$IteratorCompleter.hasNext(IteratorsV1.scala:66)
        at org.apache.gluten.iterator.IteratorsV1$PayloadCloser.hasNext(IteratorsV1.scala:38)
        at org.apache.gluten.iterator.IteratorsV1$LifeTimeAccumulator.hasNext(IteratorsV1.scala:95)
        at scala.collection.Iterator.isEmpty(Iterator.scala:387)
        at scala.collection.Iterator.isEmpty$(Iterator.scala:387)
        at org.apache.gluten.iterator.IteratorsV1$LifeTimeAccumulator.isEmpty(IteratorsV1.scala:85)
        at org.apache.gluten.execution.VeloxColumnarToRowExec$.toRowIterator(VeloxColumnarToRowExec.scala:121)
        at org.apache.gluten.execution.VeloxColumnarToRowExec.$anonfun$doExecuteInternal$1(VeloxColumnarToRowExec.scala:77)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:853)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:853)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
        at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
        at org.apache.spark.scheduler.Task.run(Task.scala:139)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1536)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   W20251225 10:44:35.079272 2359350 SortBuffer.cpp:258] Failed to reserve 1.36GB for memory pool op.1.0.0.OrderBy, usage: 5.42GB, reservation: 5.43GB
   25/12/25 10:44:35 INFO DynamicOffHeapSizingMemoryTarget: Updated VM flags: MaxHeapFreeRatio from 70 to 5.
   25/12/25 10:44:35 WARN DynamicOffHeapSizingMemoryTarget: Starting full gc to shrink JVM memory: Total On-heap: 4883218432, Free On-heap: 4606363408, Total Off-heap: 5839519744, Used On-Heap: 276855024, Executor memory: 11453595648.
   25/12/25 10:44:35 WARN DynamicOffHeapSizingMemoryTarget: Finished full gc to shrink JVM memory: Total On-heap: 4883218432, Free On-heap: 4620295920, Total Off-heap: 5839519744, Used On-Heap: 262922512, Executor memory: 11453595648, [GC Retry times: 3].
   25/12/25 10:44:35 INFO DynamicOffHeapSizingMemoryTarget: Reverted VM flags back.
   25/12/25 10:44:35 WARN DynamicOffHeapSizingMemoryTarget: Failing allocation as unified memory is OOM. Used Off-heap: 5839519744, Used On-Heap: 276822352, Free On-heap: 4606396080, Total On-heap: 4883218432, Max On-heap: 11453595648, Allocation: 1459617792.
   25/12/25 10:44:35 INFO TaskMemoryManager: Memory used in task 108599
   25/12/25 10:44:35 INFO TaskMemoryManager: 0 bytes of memory were used by task 108599 but are not associated with specific consumers
   25/12/25 10:44:35 INFO TaskMemoryManager: 0 bytes of memory are used for execution and 76767578 bytes of memory are used for storage
   ```
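
   For reference, the only workaround I have found so far follows the suggestion in the error message itself: grant a larger fixed off-heap pool and turn dynamic sizing off. This is just a sketch, not a verified fix for this bug; the config keys are the standard Spark/Gluten ones shown in the log above, while the 10g size and the application jar name are placeholders to tune for your executors:

   ```shell
   # Hedged workaround sketch: a fixed, larger off-heap pool instead of dynamic sizing.
   # The sizes here are placeholders, not recommendations; your-app.jar is hypothetical.
   spark-submit \
     --conf spark.memory.offHeap.enabled=true \
     --conf spark.memory.offHeap.size=10g \
     --conf spark.gluten.memory.dynamic.offHeap.sizing.enabled=false \
     your-app.jar
   ```

   With dynamic sizing disabled, spark.memory.offHeap.size acts as the hard cap the error message refers to, so it has to fit the peak native usage reported in the stats (5.4 GiB used plus the ~1.4 GiB failed reservation, in my run).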


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

