ctubbsii commented on PR #4879: URL: https://github.com/apache/accumulo/pull/4879#issuecomment-2418123919
I found the discussion at https://stackoverflow.com/q/71129297/196405 helpful for explaining how HttpClient reuses (or doesn't reuse) connections. One answer suggested `jdk.httpclient.connectionPoolSize` to limit the number of keep-alive connections, which is further documented at https://docs.oracle.com/en/java/javase/11/core/java-networking.html. Another answer clarified that this won't work, because it only limits the number of idle keep-alive connections, not the number of concurrently active ones. So, the limit has to be imposed in the application. That answer also notes that you can't limit concurrency by controlling the size of the executor's thread pool, because the client is fully async. They suggested using a Semaphore in the application to impose a maximum.

If the proposed implementation in https://github.com/apache/accumulo/pull/4982#issuecomment-2417960681 works, then we don't really have to fix this, but we'd have to change the implementation to collect the logs for later retrieval, as that comment proposes. However, my concern with that approach is that the current design allows other components to use this Appender. An implementation that queues the logs for later retrieval by the monitor via Thrift could not be reused by other components, and that was one of my originally intended uses when creating this Appender.

So, I'm not entirely sure what the best approach is here. Ideally, I would prefer to eliminate log collection in the monitor entirely. Users who need to collect and monitor logs for alerting or later analysis should really be using a log collection and aggregation system appropriate for that purpose, and we shouldn't be baking this into the monitor at all. Removing it entirely would save us a lot of grief.
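For reference, the Semaphore approach from that Stack Overflow discussion could look something like the sketch below. This is not the Appender's actual code; the class name, permit count, and `send` wrapper are illustrative. The idea is that neither `jdk.httpclient.connectionPoolSize` (idle keep-alive connections only) nor the executor's thread pool (the client is fully async) bounds active connections, so the application acquires a permit before each request and releases it when the response completes.

```java
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

// Hypothetical wrapper that caps concurrent in-flight requests at the
// application level, as suggested in the Stack Overflow discussion.
public class ThrottledHttpClient {
    private final HttpClient client = HttpClient.newHttpClient();
    private final Semaphore permits;

    public ThrottledHttpClient(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Blocks the caller once maxConcurrent requests are in flight;
    // the permit is released when the async response completes
    // (successfully or exceptionally).
    public CompletableFuture<HttpResponse<String>> send(HttpRequest request)
            throws InterruptedException {
        permits.acquire();
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .whenComplete((response, error) -> permits.release());
    }

    // Exposed for observability in this sketch.
    public int availablePermits() {
        return permits.availablePermits();
    }
}
```

One design note: acquiring the permit before calling `sendAsync` means backpressure is applied to the submitting thread, which may or may not be desirable for a log appender that should never block the logging path; a `tryAcquire` with a drop-on-overflow policy would be the non-blocking alternative.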
