keith-turner opened a new issue, #4877:
URL: https://github.com/apache/accumulo/issues/4877

   **Is your feature request related to a problem? Please describe.**
   
   When accumulo server processes [send log messages](https://github.com/apache/accumulo/blob/233baddbe89850c2d303a25386ee828c0e21b23b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/logging/AccumuloMonitorAppender.java#L124) to the monitor process, it's possible the log messages will continually build up in monitor memory. One thing that leads to this is that Jetty has an [unbounded queue](https://jetty.org/docs/jetty/12/programming-guide/arch/threads.html#thread-pool-queue) on its thread pool. So when the threads processing log messages cannot keep up, the messages will start to build up.
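   For context, the linked Jetty docs also describe supplying a bounded queue to `QueuedThreadPool`. A rough sketch of that configuration (the thread and capacity numbers here are made up) might look like the following, though limiting what lands on the queue in the first place is what the QoSHandler idea below is about.

   ```java
   import java.util.concurrent.BlockingQueue;

   import org.eclipse.jetty.server.Server;
   import org.eclipse.jetty.util.BlockingArrayQueue;
   import org.eclipse.jetty.util.thread.QueuedThreadPool;

   public class BoundedQueueSketch {
     public static void main(String[] args) throws Exception {
       int maxThreads = 256;
       int minThreads = 8;
       int idleTimeoutMillis = 60_000;
       // Bounded queue: at most 1024 queued jobs, instead of the default
       // unbounded queue, so excess jobs are rejected rather than piling up.
       BlockingQueue<Runnable> queue = new BlockingArrayQueue<>(1024);
       QueuedThreadPool threadPool =
           new QueuedThreadPool(maxThreads, minThreads, idleTimeoutMillis, queue);
       Server server = new Server(threadPool);
       // connectors and handlers would be configured here
       server.start();
     }
   }
   ```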
   
   
   **Describe the solution you'd like**
   
   The jetty docs suggest using the [QoSHandler](https://jetty.org/docs/jetty/12/programming-guide/server/http.html#handler-use-qos) to avoid the problem of too much data building up on the thread pool queue. It would be useful to know if the monitor could leverage this to avoid putting too many entries on the jetty thread pool queue.
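   A rough sketch of what wiring a QoSHandler in front of the monitor's handlers could look like (the limits and the wrapped handler here are placeholders, not the monitor's actual setup):

   ```java
   import java.time.Duration;

   import org.eclipse.jetty.server.Server;
   import org.eclipse.jetty.server.handler.DefaultHandler;
   import org.eclipse.jetty.server.handler.QoSHandler;
   import org.eclipse.jetty.util.thread.QueuedThreadPool;

   public class QoSHandlerSketch {
     public static void main(String[] args) throws Exception {
       int maxThreads = 256;
       Server server = new Server(new QueuedThreadPool(maxThreads));

       QoSHandler qosHandler = new QoSHandler();
       // Only this many requests are handled concurrently; the rest are
       // suspended instead of piling up as jobs on the thread pool queue.
       qosHandler.setMaxRequestCount(maxThreads / 2);
       // A suspended request fails after waiting this long, which bounds how
       // much pending work the server holds on to.
       qosHandler.setMaxSuspend(Duration.ofSeconds(15));

       server.setHandler(qosHandler);
       // The monitor's real handler/servlet context would be wrapped here.
       qosHandler.setHandler(new DefaultHandler());
       server.start();
     }
   }
   ```

   Tying the request limit to the thread pool size, as the Jetty example does, seems like a reasonable starting point, but the right numbers for the monitor would need some experimentation.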
   
   **Describe alternatives you've considered**
   
   If the QoSHandler is not workable, we would need to see what else jetty has to offer.
   
   Another thing to consider is client side back pressure or dropping log messages. When something appends using AccumuloMonitorAppender.java, it creates a future which it ignores, so each log append has no concept of what happened with the previous append. It is not clear what happens when the previously ignored futures are still in progress because the monitor is not keeping up. Does the client just keep queuing up data to send and adding more HTTP requests?
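   For the back pressure / dropping idea, a very rough sketch (this is not the actual AccumuloMonitorAppender code; the endpoint, payload, and limit are made up) could bound the number of in-flight posts with a semaphore and drop log events once that limit is hit:

   ```java
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;
   import java.util.concurrent.Semaphore;
   import java.util.concurrent.atomic.AtomicLong;

   public class BoundedLogSender {
     private final HttpClient client = HttpClient.newHttpClient();
     private final Semaphore inFlight;
     private final AtomicLong dropped = new AtomicLong();
     private final URI monitorUri;

     public BoundedLogSender(URI monitorUri, int maxInFlight) {
       this.monitorUri = monitorUri;
       this.inFlight = new Semaphore(maxInFlight);
     }

     public void send(String jsonLogEvent) {
       // Refuse to start another request if too many are already outstanding;
       // the event is dropped and counted instead of queuing unbounded work.
       if (!inFlight.tryAcquire()) {
         dropped.incrementAndGet();
         return;
       }
       HttpRequest request = HttpRequest.newBuilder(monitorUri)
           .header("Content-Type", "application/json")
           .POST(HttpRequest.BodyPublishers.ofString(jsonLogEvent))
           .build();
       client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
           // Release the permit whether the post succeeded or failed.
           .whenComplete((response, error) -> inFlight.release());
     }

     public long droppedCount() {
       return dropped.get();
     }
   }
   ```

   Whether dropping or blocking is the right choice is part of the question; the sketch drops because blocking inside a log appender could stall server threads.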
   

