arturobernalg commented on PR #578: URL: https://github.com/apache/httpcomponents-core/pull/578#issuecomment-3836874109
Hi, I took a first pass at profiling the classic-over-async HTTP/1 facade using the `ProfileClassicOverAsyncHttp1Main` harness (IntelliJ allocation profiler). I may be interpreting this wrong (or the harness may still have artifacts), but the results suggest the dominant allocation hot spot is the buffering path of the classic bridge.

In the baseline run, the largest allocation volume appears on the response path:

`ClientHttp1StreamDuplexer.consumeData(ByteBuffer)` → `ClassicToAsyncResponseConsumer.consume(ByteBuffer)` → `SharedInputBuffer.fill(ByteBuffer)` → `ExpandableBuffer.ensureAdjustedCapacity(int)`

In a run with my allocator/buffer changes enabled, allocations attributed to that path drop from ~6.33 GB to ~4.68 MB (the diff view shows roughly -99.9% on that stack). In addition, overall allocation volume for the run dropped from ~8.49 GB to ~5.08 GB (~-40%), and `java.nio.ByteBuffer.allocate(int)` volume dropped substantially (in an earlier run: 8.66 GB → 716 MB, ~-92%).

I'm aware these numbers are very sensitive to the profiling setup (threads selected, test wrappers, etc.), so I don't want to over-claim anything. If you think this harness is not representative, or if there is a preferred way you'd like this benchmarked, I'm happy to adjust and rerun with whatever methodology you consider reliable. Thanks for the guidance.

<img width="1897" height="451" alt="image" src="https://github.com/user-attachments/assets/cb317193-3c44-4ca8-bfdc-7535120a4ae8" />
<img width="1875" height="429" alt="image" src="https://github.com/user-attachments/assets/902dc1bc-e6e9-4512-8550-7a0c47876024" />
<img width="1875" height="429" alt="image" src="https://github.com/user-attachments/assets/78cc4834-bb87-4d43-bce4-bed0963d583c" />
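
For context on why that stack can allocate so much, here is a minimal, self-contained sketch of a generic grow-on-demand buffer. The `GrowableBufferSketch` class is hypothetical: it is not the actual `ExpandableBuffer`/`SharedInputBuffer` code and not necessarily how this PR addresses it. It only illustrates that if every overflow allocates an exactly-sized `ByteBuffer`, allocation volume scales with the number of bytes streamed, whereas reusing a geometrically grown buffer keeps re-allocations rare.

```java
import java.nio.ByteBuffer;

/**
 * Illustrative sketch only: a hypothetical grow-on-demand buffer, NOT the
 * actual ExpandableBuffer / SharedInputBuffer code from HttpCore. It shows
 * why a fill path that re-allocates on every overflow can dominate an
 * allocation profile, and how amortized (geometric) growth reduces the churn.
 */
public class GrowableBufferSketch {

    private ByteBuffer buffer = ByteBuffer.allocate(4096);
    private int reallocations;

    private void grow(final int minCapacity) {
        // Geometric growth: at least double, so the number of re-allocations
        // stays logarithmic in the total bytes buffered instead of linear.
        final int newCapacity = Math.max(buffer.capacity() * 2, minCapacity);
        final ByteBuffer bigger = ByteBuffer.allocate(newCapacity);
        buffer.flip();
        bigger.put(buffer);
        buffer = bigger;
        reallocations++;
    }

    /** Buffers the incoming chunk, growing the backing buffer only when needed. */
    public void fill(final ByteBuffer src) {
        if (src.remaining() > buffer.remaining()) {
            grow(buffer.position() + src.remaining());
        }
        buffer.put(src);
    }

    public static void main(final String[] args) {
        final GrowableBufferSketch sketch = new GrowableBufferSketch();
        final byte[] chunk = new byte[8192];
        for (int i = 0; i < 1000; i++) {
            sketch.fill(ByteBuffer.wrap(chunk));
        }
        // With exact-size growth this loop would trigger close to 1000
        // re-allocations; with geometric growth it is only about a dozen.
        System.out.println("re-allocations: " + sketch.reallocations);
    }
}
```

The actual changes in this PR may tackle it differently (e.g. reusing or pooling buffers rather than changing the growth policy); the sketch is only meant to show where the allocation volume attributed to that stack comes from.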
