ctubbsii commented on PR #2609:
URL: https://github.com/apache/accumulo/pull/2609#issuecomment-1095537411

   > There isn't any counting of threads.
   
   The Java Semaphore class is literally "A counting semaphore". We're using it 
to keep a count of the number of threads currently permitted to perform the 
action. Technically, a sized thread pool would also have a counter, but my 
point wasn't really about counters so much as about using higher-level 
concepts, instead of lower-level primitives directly, to manage the resource.
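   As a minimal sketch of the semaphore approach under discussion (class name, 
method, and the permit count here are all hypothetical illustrations, not 
Accumulo's actual code):

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: limit concurrent writes with a counting Semaphore.
// The names and the limit are illustrative, not Accumulo's real code.
public class WriteLimiter {
    // In the real code this limit would come from a tserver property.
    private static final int MAX_CONCURRENT_WRITES = 16;

    private final Semaphore permits = new Semaphore(MAX_CONCURRENT_WRITES);

    public void write(Runnable writeAction) throws InterruptedException {
        permits.acquire();      // blocks once all permits are handed out
        try {
            writeAction.run();  // the write still runs in the caller's (RPC) thread
        } finally {
            permits.release();
        }
    }
}
```

   Note the write itself still executes in whichever thread called `write`; 
the semaphore only gates how many may do so at once.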
   
   >  I am not sure how we would do this with the Semaphore() and the 
ThreadPools class.
   
   If I understand this code correctly (and I very well may not), the threads 
we're executing these writes in come from libthrift, created to handle the 
network request. I was thinking along the lines of having a thread pool / 
executor that handles the actual writes, where these RPC threads just delegate 
the write work to that pool and wait on its Future. We could have a different 
pool for metadata writes and user table writes, so only user table writes are 
constrained. My understanding of the Semaphore solution is that we are just 
doing the write in the RPC thread from libthrift. I guess that's fine, assuming 
libthrift doesn't put a limit on the number of threads it creates and prevent 
us from writing metadata because all its threads are in use.
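   The pool-per-write-type idea could look something like the sketch below 
(class names, pool sizes, and the `isMetadata` flag are all hypothetical, not 
what Accumulo actually does):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: RPC threads delegate writes to dedicated pools and
// block on the Future. Names and sizes are illustrative only.
public class WriteDispatcher {
    // Only the user-table pool is size-constrained; metadata writes get their
    // own pool so they can't be starved by user-table writes.
    private final ExecutorService userWritePool = Executors.newFixedThreadPool(16);
    private final ExecutorService metadataWritePool = Executors.newCachedThreadPool();

    // Called from the libthrift RPC thread, which just waits on the Future.
    public <T> T write(boolean isMetadata, Callable<T> writeTask) throws Exception {
        ExecutorService pool = isMetadata ? metadataWritePool : userWritePool;
        Future<T> result = pool.submit(writeTask);
        return result.get();
    }

    public void shutdown() {
        userWritePool.shutdown();
        metadataWritePool.shutdown();
    }
}
```

   The design point is that the constrained resource (user-table write 
concurrency) is owned by a dedicated executor, so metadata writes can proceed 
even when every user-write slot is busy.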
   
   > We could rename the property to `tserver.write.thread.permits` or 
something.
   
   No, that's just exposing implementation details. The current name is fine, 
regardless of implementation. The only thing that might help is if the name 
indicated that it applies only to user tables, not to metadata tables. But I 
don't care about the name that much, if the description is good enough.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
