keith-turner commented on issue #4664:
URL: https://github.com/apache/accumulo/issues/4664#issuecomment-2176856667

   Currently in the manager the same thread pool services fate, compaction 
coordinator, etc. RPCs.  So this could cause a problem if it's a fixed-size 
thread pool and all of the threads are waiting on a compaction queue.  That 
would prevent fate RPCs from running, or it could even block a compaction queue 
that has work (e.g. all threads are waiting on queue A, which has no work, 
while compaction queue B has work but nothing can get to it).
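   Not Accumulo code, just a minimal sketch of the starvation scenario above: a two-thread fixed pool where both threads block on an empty queue A, so a task that would drain queue B (which has work) never gets a thread.

```java
import java.util.concurrent.*;

// Hypothetical illustration: a fixed-size "RPC" pool starved by threads
// blocked on one compaction queue, leaving another queue's work stranded.
public class PoolStarvation {
    public static void main(String[] args) throws Exception {
        ExecutorService rpcPool = Executors.newFixedThreadPool(2);
        BlockingQueue<String> queueA = new LinkedBlockingQueue<>(); // no work
        BlockingQueue<String> queueB = new LinkedBlockingQueue<>();
        queueB.add("compaction-job");                               // has work

        // Both pool threads block in take() on the empty queue A.
        for (int i = 0; i < 2; i++) {
            rpcPool.submit(() -> queueA.take());
        }

        // This task would service queue B, but no pool thread is free.
        Future<String> jobForB = rpcPool.submit(() -> queueB.take());
        try {
            jobForB.get(1, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            System.out.println("queue B has work, but the pool is starved");
        }
        rpcPool.shutdownNow();
    }
}
```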
   
   One simple way to solve this would be an unbounded thread pool to service 
manager RPCs.  However, there is some limit (not sure what it is) where too 
many threads in a JVM start to cause problems.  Another possible solution is 
to make the server-side processing for these messages async.  Thrift supports 
async server processing, but there is no documentation for it.  With async 
processing, when an RPC request is waiting for a compaction job there would be 
no thread associated with it, which would be ideal.  Since there are no docs 
for Thrift async servers, I have not been able to determine if it's possible 
to mix sync and async processing for RPCs in Thrift, because we would probably 
not want to make everything async in the manager for code-complexity reasons.  
If they cannot be mixed, maybe we could create a Thrift service with its own 
port that only services requests for compaction jobs.  Maybe that could even 
be switched to using gRPC as a trial run of gRPC, which also seems to support 
async.  If we wait long enough, we would probably not need async processing 
for this, as Java virtual threads could be used instead.  Not sure what the 
best course of action is here; just posting some notes from researching this 
a bit.
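   For the virtual-threads option, a minimal sketch (Java 21+, not a proposed implementation): each pending RPC can simply block on its compaction queue inside a virtual thread, which costs almost nothing while parked, so no async plumbing is needed.

```java
import java.util.concurrent.*;

// Sketch: a virtual-thread-per-task executor lets a waiting "RPC" block
// cheaply on a compaction queue until work arrives.
public class VirtualThreadRpc {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> compactionQueue = new LinkedBlockingQueue<>();
        try (ExecutorService rpcExecutor =
                 Executors.newVirtualThreadPerTaskExecutor()) {
            // Thousands of these can block concurrently without pinning
            // a platform (OS) thread each.
            Future<String> pendingRpc = rpcExecutor.submit(compactionQueue::take);
            compactionQueue.add("compaction-job"); // work arrives later
            System.out.println(pendingRpc.get());  // prints compaction-job
        }
    }
}
```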
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
