marin-ma commented on issue #11542:
URL: https://github.com/apache/incubator-gluten/issues/11542#issuecomment-3840507515
@wForget From the stack you posted, it appears that the initial memory acquisition comes from the native Velox pipeline, not from the shuffle writer.
```
at org.apache.gluten.memory.memtarget.OverAcquire.borrow(OverAcquire.java:63)
at org.apache.gluten.memory.memtarget.ThrowOnOomMemoryTarget.borrow(ThrowOnOomMemoryTarget.java:35)
at org.apache.gluten.memory.listener.ManagedReservationListener.reserve(ManagedReservationListener.java:49)
at org.apache.gluten.vectorized.ColumnarBatchOutIterator.nativeHasNext(Native Method)
at org.apache.gluten.vectorized.ColumnarBatchOutIterator.hasNext0(ColumnarBatchOutIterator.java:57)
```
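To make the call path concrete, here is a minimal sketch (hypothetical stand-in classes, not Gluten's actual implementation) of what the stack shows: the Java-side iterator's `hasNext()` drives the native pipeline, which reserves memory through a registered listener, which in turn borrows from the task's memory target. The batch size and granting policy below are made up for illustration.

```java
import java.util.Iterator;

public class ReservationPathSketch {
    // Stand-in for Gluten's memory target chain (OverAcquire / ThrowOnOomMemoryTarget).
    interface MemoryTarget {
        long borrow(long size); // returns bytes actually granted
    }

    // Stand-in for ManagedReservationListener: forwards native-side
    // reservations to the Java-side memory target.
    static final class ReservationListener {
        private final MemoryTarget target;
        private long reserved;
        ReservationListener(MemoryTarget target) { this.target = target; }
        void reserve(long size) { reserved += target.borrow(size); }
        long reservedBytes() { return reserved; }
    }

    // Stand-in for ColumnarBatchOutIterator: hasNext() is where the native
    // pipeline actually runs, so it is where memory is first acquired.
    static final class ColumnarBatchOutIteratorSketch implements Iterator<String> {
        private final ReservationListener listener;
        private int remaining;
        ColumnarBatchOutIteratorSketch(ReservationListener listener, int batches) {
            this.listener = listener;
            this.remaining = batches;
        }
        @Override public boolean hasNext() {
            if (remaining == 0) return false;
            listener.reserve(64L << 20); // pretend the pipeline needs 64 MiB per batch
            return true;
        }
        @Override public String next() { remaining--; return "batch"; }
    }

    public static void main(String[] args) {
        MemoryTarget target = size -> size; // grant everything for the sketch
        ReservationListener listener = new ReservationListener(target);
        Iterator<String> it = new ColumnarBatchOutIteratorSketch(listener, 2);
        while (it.hasNext()) it.next();
        System.out.println("reserved MiB: " + (listener.reservedBytes() >> 20));
    }
}
```

The point of the sketch is only that the reservation happens while driving the iterator, so the stack bottoming out in `nativeHasNext` indicates the pipeline, not the shuffle writer, initiated the acquisition.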
And from the memory usage dump, it seems the memory is held by VeloxShuffleWriter.partitionBufferPool, which is only used by the hash-based shuffle writer.
```
| \- Capacity[8.0 EiB].3: Current used bytes: 2045.0 MiB, peak bytes: 2.0 GiB
| +- UniffleShuffleWriter.3: Current used bytes: 2000.0 MiB, peak bytes: 2008.0 MiB
| | \- single: Current used bytes: 2000.0 MiB, peak bytes: 2008.0 MiB
| | +- gluten::MemoryAllocator: Current used bytes: 1996.5 MiB, peak bytes: 2003.6 MiB
| | | +- VeloxShuffleWriter.partitionBufferPool: Current used bytes: 1996.5 MiB, peak bytes: 2003.6 MiB
| | | +- default: Current used bytes: 0.0 B, peak bytes: 0.0 B
| | | \- PartitionWriter.cached_payload: Current used bytes: 0.0 B, peak bytes: 5.9 MiB
| | \- root: Current used bytes: 0.0 B, peak bytes: 1024.0 KiB
| | \- default_leaf: Current used bytes: 0.0 B, peak bytes: 896.0 B
```
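A toy model (assumed behavior, not Gluten's actual buffer management) of why this pool is specific to the hash-based writer: a hash-based shuffle writer keeps an open buffer per output partition, so its footprint scales roughly with partition count times buffer size, while a sort-based writer works from a single sort buffer regardless of partition count. All sizes below are illustrative.

```java
public class PartitionBufferPoolSketch {
    // Hash-based writer: one pre-allocated buffer per output partition.
    static long hashWriterFootprint(int partitions, long bufferBytes) {
        return (long) partitions * bufferBytes;
    }

    // Sort-based writer: a single sort buffer, independent of partition count.
    static long sortWriterFootprint(long sortBufferBytes) {
        return sortBufferBytes;
    }

    public static void main(String[] args) {
        long mib = 1L << 20;
        System.out.println("hash, 2000 partitions x 1 MiB buffers = "
            + hashWriterFootprint(2000, mib) / mib + " MiB");
        System.out.println("sort, single 256 MiB buffer = "
            + sortWriterFootprint(256 * mib) / mib + " MiB");
    }
}
```

Under this model, a wide shuffle inflates partitionBufferPool in the hash-based writer even before any data is spilled, which would match the ~2 GiB usage in the dump above.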
Could you also share the OOM stack trace from the run using the sort-based shuffle writer?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]