bryancall commented on issue #9625:
URL: https://github.com/apache/trafficserver/issues/9625#issuecomment-4013536227

   To answer the outstanding questions:
   
   **Why the OOM with a 92GB RAM cache on a 128GB system:** `proxy.config.ram_cache.size` only bounds the RAM cache itself; it is not a cap on total process memory. ATS also needs significant memory for IO buffers, connection state, header heaps, SSL sessions, and other internal structures. Your SIGUSR1 dump showed ~100GB in `ioBufAllocator` alone. Those are the buffers holding active request/response data as it flows through the proxy, and at 3-4 Gbps with large objects that pool can grow very large. A 92GB RAM cache on a 128GB system leaves only ~36GB for everything else, which is not enough under that load.
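   
   For reference, this is how such a dump can be captured at runtime. A minimal sketch, assuming a standard layout where traffic_server writes to traffic.out (the log path varies by install):
   
   ```sh
   # SIGUSR1 makes traffic_server dump per-allocator statistics,
   # including the ioBufAllocator pools, to its log output.
   kill -USR1 "$(pidof traffic_server)"
   
   # Inspect the IO buffer allocator lines in the dump.
   grep ioBufAllocator /var/log/trafficserver/traffic.out
   ```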
   
   **Why disabling the freelist (`-F`) didn't help:** The freelist controls how ATS recycles freed memory internally. Disabling it means ATS uses the system allocator directly, but that doesn't reduce the total memory ATS needs; it only changes how that memory is managed. The core issue is that ATS's total memory demand (RAM cache + IO buffers + overhead) exceeded 128GB.
   
   **Recommendation:** Reduce `proxy.config.ram_cache.size` to leave more 
headroom. A general guideline is to reserve at least 30-40% of system RAM for 
ATS overhead and the OS. On a 128GB system, a RAM cache of 60-70GB would be 
more appropriate.
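   
   As a concrete sketch in records.config (assuming an ATS 9.x layout; 64G is just an illustrative value in that range, and INT values accept K/M/G suffixes):
   
   ```
   # Cap the RAM cache well below total system memory to leave
   # headroom for IO buffers and other runtime allocations.
   CONFIG proxy.config.ram_cache.size INT 64G
   ```
   
   Restart traffic_server afterwards so the new cache sizing takes effect.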
   
   **Regarding jemalloc:** The `LD_PRELOAD` approach you described would work for using jemalloc with the prebuilt Debian binary. However, jemalloc alone won't solve the underlying problem of oversubscribed memory: it reduces fragmentation and improves allocation performance, but it does not lower total memory usage.
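   
   For completeness, a hedged sketch of that setup on Debian with systemd (the library path matches Debian's libjemalloc2 package, and the unit name is assumed to be trafficserver.service; both may differ on your system):
   
   ```sh
   sudo apt-get install libjemalloc2
   # Add the preload to the service environment via an override:
   sudo systemctl edit trafficserver
   #   [Service]
   #   Environment=LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
   sudo systemctl restart trafficserver
   ```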
   
   Closing this issue. If you are still seeing unexpected memory growth after reducing the RAM cache size, please open a new issue with the updated configuration and a fresh SIGUSR1 memory dump.


