gortiz commented on PR #17674: URL: https://github.com/apache/pinot/pull/17674#issuecomment-3883442699
> Q: Just to understand: the memory stats are per static instance of netty, as in per pool? I'm guessing 1 for SSE and 1 for MSE? any others?

We use Netty to implement the SSE communication between brokers and servers, while gRPC uses its own shaded version of Netty under the hood. We use gRPC in four different places:
- The client gRPC protocol, an alternative to the REST API. Here, the gRPC server side is our broker.
- The direct client-to-server gRPC protocol, used for example by Trino/Presto. Here the gRPC server side is our servers.
- The MSE plan protocol, where Pinot servers are the gRPC servers and brokers are the gRPC clients.
- The MSE mailbox protocol, where both Pinot servers and brokers play the role of gRPC servers (and only Pinot servers play the role of gRPC clients).

The SSE, client gRPC, direct client-to-server gRPC, and MSE mailbox protocols expose their own metrics to report memory usage (note that we don't have metrics for the plan protocol, but it is very simple and doesn't consume much). However, each of these gRPC servers is also constrained by the maximum memory Netty can use: SSE is bounded by the limits of the unshaded Netty instance, while all gRPC usages share the limits defined on gRPC's shaded Netty instance. So the used-memory metric for SSE (assuming the fix in https://github.com/apache/pinot/pull/17667 is merged) should be similar to the used memory shown by the new metric for unshaded Netty, whereas for gRPC it will be the sum of the memory used by the MSE mailbox, MSE plan, and client gRPC protocols. See the sketch below for how the two allocators can be inspected separately.
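As a minimal sketch (not the metric code from this PR), the split between the two Netty instances can be illustrated by sampling both allocators directly. This assumes both stacks use the default pooled allocator, which may not match Pinot's actual allocator wiring; the class names below come from `netty-buffer` and `grpc-netty-shaded`.

```java
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocatorMetric;

public class NettyMemorySample {
  public static void main(String[] args) {
    // Unshaded Netty: backs the SSE broker <-> server channels.
    PooledByteBufAllocatorMetric sse = PooledByteBufAllocator.DEFAULT.metric();
    System.out.printf("unshaded Netty: direct=%d bytes, heap=%d bytes%n",
        sse.usedDirectMemory(), sse.usedHeapMemory());

    // gRPC's shaded Netty: backs client gRPC, direct client-to-server gRPC,
    // MSE plan and MSE mailbox traffic, so its usage aggregates all of them.
    io.grpc.netty.shaded.io.netty.buffer.PooledByteBufAllocatorMetric grpc =
        io.grpc.netty.shaded.io.netty.buffer.PooledByteBufAllocator.DEFAULT.metric();
    System.out.printf("shaded (gRPC) Netty: direct=%d bytes, heap=%d bytes%n",
        grpc.usedDirectMemory(), grpc.usedHeapMemory());
  }
}
```

Because the shaded allocator is shared, its reading cannot be attributed to a single protocol; only the per-protocol metrics mentioned above give that breakdown.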
