GitHub user jerryshao commented on the issue:

    https://github.com/apache/spark/pull/18935
  
    @squito , as the next step I will expose these metrics through the MetricsSystem; I'm thinking of exposing the shuffle-related Netty memory usage. For the RPC-related memory usage I'm not fully sure of the value of exposing it, though doing so is quite a cheap call.
    
    Furthermore, I think we can collect them on the driver side and display them either through the REST API or the web UI. But that part needs careful thought about how to deliver the information to end users.
    
    As for the content of the Netty memory metrics, by default only the two major memory usages will be exposed unless verbose mode is enabled. I was thinking of picking out some of the detailed metrics, but I found it hard to decide which ones are more important than others, so instead I chose to expose all of them and let users decide which ones matter. Also, as you mentioned, they're quite cheap and off by default, so I think it's good to optionally expose the detailed info as well.
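    
    Just to illustrate what the detailed side could look like, here is a sketch that walks Netty's per-arena stats via the public `PooledByteBufAllocatorMetric` API; the chosen fields and naming are illustrative, not the PR's actual set:
    
    ```scala
    import scala.collection.JavaConverters._
    import io.netty.buffer.PooledByteBufAllocator
    
    // Sketch: beyond the two headline numbers, walk the per-arena detail that a
    // verbose mode could expose. The field selection here is illustrative.
    def verboseDirectArenaStats(allocator: PooledByteBufAllocator): Seq[(String, Long)] = {
      allocator.metric().directArenas().asScala.zipWithIndex.flatMap { case (arena, i) =>
        Seq(
          s"directArena$i.numActiveAllocations" -> arena.numActiveAllocations(),
          s"directArena$i.numAllocations" -> arena.numAllocations(),
          s"directArena$i.numDeallocations" -> arena.numDeallocations()
        )
      }
    }
    ```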
    
    > are you at all surprised that the direct & heap memory are exactly the 
same? Are things getting mixed up somewhere? Or maybe those are just the values 
from the initialization netty always does. 
    
    I don't think anything is mixed up. AFAIK this is just what Netty's `PooledByteBufAllocator` does at initialization; this 16MB is the default chunk size, and I guess Netty creates one chunk per arena at initialization.
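    
    For the arithmetic: the default chunk size is `pageSize << maxOrder`, i.e. 8192 << 11 = 16MB, which you can check against the allocator's own defaults (a standalone sketch, not part of this PR):
    
    ```scala
    import io.netty.buffer.PooledByteBufAllocator
    
    object ChunkSizeCheck extends App {
      // Default chunk size = pageSize << maxOrder; with 8192 and 11 that is 16MB.
      val pageSize = PooledByteBufAllocator.defaultPageSize()
      val maxOrder = PooledByteBufAllocator.defaultMaxOrder()
      println(s"pageSize << maxOrder = ${pageSize << maxOrder} bytes")
      println(s"allocator chunkSize  = ${PooledByteBufAllocator.DEFAULT.metric().chunkSize()} bytes")
    }
    ```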
    
    
    


