Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17625
    @jsoltren, from a quick look at your current implementation, it seems you
only track Netty memory usage in `NettyBlockTransferService`. There are several
other places in Spark that create a Netty client factory or server, and I think
it would be better to track memory usage in those places as well (a rough sketch
follows the list):
1. Netty RPC client factory and RPC Server.
2. Netty file download client factory.
3. Netty external shuffle client factory.
    4. Netty block transfer client factory and server - you already cover this
one; for the server, I think it is not used for external shuffle.
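
    To illustrate, here is a rough, hypothetical sketch (the class name
`NettyAllocatorSource` and the `module` parameter are made up, and it assumes a
Netty version with `PooledByteBufAllocator.metric()`, plus some way to get at
the allocator used by each factory/server): one Dropwizard `Source` per Netty
context, so each context reports its allocator memory separately.

    ```scala
    import com.codahale.metrics.{Gauge, MetricRegistry}
    import io.netty.buffer.PooledByteBufAllocator

    import org.apache.spark.metrics.source.Source

    // Hypothetical sketch: expose the used heap/direct memory of a Netty
    // PooledByteBufAllocator as Dropwizard gauges. `module` (e.g. "rpc",
    // "shuffle", "blockTransfer") namespaces the metrics so each Netty
    // context reports separately. Assumes Netty 4.1+, where
    // PooledByteBufAllocator.metric() is available.
    class NettyAllocatorSource(module: String, allocator: PooledByteBufAllocator)
      extends Source {

      override val sourceName: String = s"netty.$module"
      override val metricRegistry: MetricRegistry = new MetricRegistry

      metricRegistry.register("usedHeapMemory", new Gauge[Long] {
        override def getValue: Long = allocator.metric().usedHeapMemory()
      })

      metricRegistry.register("usedDirectMemory", new Gauge[Long] {
        override def getValue: Long = allocator.metric().usedDirectMemory()
      })
    }
    ```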
    Also, Spark has separate Netty contexts (rpc, shuffle); do we also need
separate Netty metrics for shuffle, rpc, and so on?
    I would suggest exposing the Netty metrics only internally in this PR and
leaving the UI work for a follow-up PR.
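
    For the internal-only exposure, the wiring could be as simple as registering
one such source per context with the existing `MetricsSystem` and making no UI
changes. Again just a sketch: `rpcAllocator` / `shuffleAllocator` are
placeholders for allocators the transport layer would have to expose.

    ```scala
    import org.apache.spark.SparkEnv

    // Sketch only: register per-context sources with the internal MetricsSystem;
    // nothing is surfaced in the UI. The allocator references are placeholders.
    SparkEnv.get.metricsSystem.registerSource(
      new NettyAllocatorSource("rpc", rpcAllocator))
    SparkEnv.get.metricsSystem.registerSource(
      new NettyAllocatorSource("shuffle", shuffleAllocator))
    ```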