[ https://issues.apache.org/jira/browse/SPARK-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250675#comment-16250675 ]
Saisai Shao commented on SPARK-9104:
------------------------------------

[~vsr] I think SPARK-21934 already exposed the Netty shuffle metrics to the metrics system; you can follow SPARK-21934 for the details. For other Netty contexts like RPC, I don't feel strongly about supporting them, because memory usage is usually not heavy for a context like NettyRpcEnv.

> expose network layer memory usage
> ---------------------------------
>
>                 Key: SPARK-9104
>                 URL: https://issues.apache.org/jira/browse/SPARK-9104
>             Project: Spark
>          Issue Type: Sub-task
>      Components: Spark Core
>            Reporter: Zhang, Liye
>            Assignee: Saisai Shao
>             Fix For: 2.3.0
>
>
> The default network transport is Netty, and when transferring blocks for
> shuffle, the network layer consumes a decent amount of memory. We should
> collect the memory usage of this part and expose it.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org