rschmitt commented on PR #578:
URL: https://github.com/apache/httpcomponents-core/pull/578#issuecomment-3550296603

   I asked [Aleksey Shipilëv](https://shipilev.net/) for his thoughts:
   > Depends. In a pure allocation benchmark, allocation would likely be on par 
with reuse. But once you get far from that ideal, awkward things start to 
happen.
   > 1. When there is _any_ non-trivial live set in the heap, the GC would have to
   > at least visit it every so often; that "so often" is driven by GC frequency,
   > which is driven by allocation rate. Pure allocation speed and pure reclamation
   > cost become much less relevant in this scenario -- what _else_ is happening
   > dominates hard. Generational GCs win you some, but they really only prolong the
   > inevitable.
   > 2. When objects are allocated, they are nominally zeroed. Under a high
   > allocation rate, that is easily the slowest part -- think ~10 GB/sec per thread.
   > Reuse often comes with avoiding these cleanups, often at the cost of a weaker
   > security posture (leaking data between reused buffers).
   > 3. For smaller objects, the metadata management (headers, all that fluff)
   > dominates the allocation path performance, _and_ is often logically intermixed
   > with the _real_ work. E.g. you rarely allocate 10M objects just because; there
   > is likely some compute in between. But allocating `new byte[BUF_SIZE]`
   > (`BUF_SIZE=1M` defined in another file) is very easy. So hitting (1) and (2) is
   > much easier the larger the objects in question get.
   > 4. For smaller objects, the pooling overheads become on par with the size
   > of the objects themselves. The calculation for total memory footprint can tip
   > the scale in either direction.
   > 5. For some _awkward_ classes like DirectByteBuffers that have a separate
   > cleanup schedule, unbounded allocation is a recipe for a meltdown.
   >
   > So the answer is somewhat along the lines of: Pooling common (small) objects?
   > Nah, too much hassle for too little gain. Pooling large buffers? Yes, that is a
   > common perf optimization. Pooling large buffers with special lifecycle? YES, do
   > not even _think_ about _not_ doing the pooling. For everything in between the
   > answer is somewhere in between.
   
   Here, "special lifecycle" refers to things like finalizers, Cleaners, weak 
references, etc.; nothing that would apply to a simple byte buffer.
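   For a concrete picture of such a lifecycle, here is a minimal sketch (a
   hypothetical `NativeChunk` class, not part of HttpCore or the JDK) of a
   resource whose release runs through a `java.lang.ref.Cleaner` -- that is, on
   the GC's schedule unless the owner closes it explicitly. This is what makes
   unbounded allocation of such objects risky: the Java-side wrapper is tiny, so
   the heap feels little pressure to collect it, while the resource behind it
   piles up.

   ```java
   import java.lang.ref.Cleaner;
   import java.util.concurrent.atomic.AtomicBoolean;

   // Hypothetical sketch of a resource with a "special lifecycle": release
   // runs via a Cleaner (GC-driven) unless close() is called deterministically.
   final class NativeChunk implements AutoCloseable {
       private static final Cleaner CLEANER = Cleaner.create();

       // The cleanup state must not reference the NativeChunk itself,
       // or the object would never become phantom-reachable.
       private static final class State implements Runnable {
           final AtomicBoolean released = new AtomicBoolean();
           @Override public void run() {
               released.set(true); // stand-in for freeing the native memory
           }
       }

       private final State state = new State();
       private final Cleaner.Cleanable cleanable = CLEANER.register(this, state);

       @Override public void close() {
           cleanable.clean(); // deterministic release; the Cleaner is the fallback
       }

       boolean isReleased() { return state.released.get(); }
   }
   ```

   With explicit `close()` the release is deterministic; without it, the timing
   is up to the GC -- which is exactly why pooling such objects, rather than
   allocating them without bound, matters.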
   
   Another interesting point that came up is that if you use heap (non-direct) 
byte buffers, and if the pool doesn't hold on to byte buffer references while 
they are leased out, then there is no risk of a memory leak: returning the 
buffer to the pool is purely an optimization.
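   As a sketch of that leak-free shape (hypothetical class and method names, not
   the HttpCore API): the pool only holds references to buffers that are
   currently checked in, so a leased buffer that is never returned is simply
   collected like any other garbage.

   ```java
   import java.nio.ByteBuffer;
   import java.util.concurrent.ConcurrentLinkedQueue;

   // Hypothetical sketch: a pool of heap ByteBuffers that keeps no reference
   // to buffers while they are leased out, so forgetting to return one
   // cannot leak memory -- it only forgoes reuse.
   final class HeapBufferPool {
       private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<>();
       private final int bufferSize;

       HeapBufferPool(int bufferSize) {
           this.bufferSize = bufferSize;
       }

       // Reuse a pooled buffer if one is available; otherwise allocate a
       // fresh (JVM-zeroed) one. The pool does not track the leased buffer.
       ByteBuffer lease() {
           ByteBuffer buf = free.poll();
           return buf != null ? buf : ByteBuffer.allocate(bufferSize);
       }

       // Returning the buffer is purely an optimization.
       void release(ByteBuffer buf) {
           buf.clear(); // reset position/limit for the next lease
           // Note: clear() does not wipe contents; a pool could additionally
           // zero the array to avoid leaking data between users, re-paying
           // some of the zeroing cost that reuse saved.
           free.offer(buf);
       }
   }
   ```

   In single-threaded use, a released buffer is handed back on the next lease;
   a buffer that is dropped instead of released just becomes unreachable.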


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

