bbeaudreault commented on PR #5141:
URL: https://github.com/apache/hbase/pull/5141#issuecomment-1485207184

   Thanks for looking, Nick! Good question. 
   
   I'm far from an expert on the topic of CPU cache lines and such, but I've 
been curious about volatile for a while and have googled it a few times in the 
past (and again just now :)). There are numerous articles and SO answers out 
there, and the consensus seems to be that on modern architectures an 
uncontended volatile read performs approximately the same as a normal read, so 
on the order of nanoseconds. On x86 it doesn't introduce any additional CPU 
opcodes; the JVM leans on the hardware's memory ordering, so the barriers are 
close to free. It gets trickier when the value is updated often, because 
frequent writes introduce L1 cache-line invalidations on the reader cores. But 
that's not relevant here, since onConfigurationChange should be rare. I realize 
this is vague, but I've also never seen any indication in profiling of overhead 
from any of our existing volatiles (there are a bunch). 
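   To make the pattern concrete, here's a minimal sketch (hypothetical class 
and field names, not the actual PR code) of the rare-write / hot-read shape 
being discussed: the volatile is written only on the configuration-change path 
and read on every request.

```java
// Hypothetical sketch of the volatile-config pattern: the field is written
// rarely (onConfigurationChange) and read on every request. The uncontended
// volatile read on the hot path compiles to a plain load on x86; only the
// rare write pays for publication.
public class ConfigHolder {
    // Rarely written, frequently read.
    private volatile int maxQueueLength = 100;

    // Called on the (rare) configuration-change path.
    public void onConfigurationChange(int newMax) {
        maxQueueLength = newMax; // volatile write: safely published to all readers
    }

    // Called on the hot path by every handler thread.
    public boolean overLimit(int queued) {
        return queued > maxQueueLength; // uncontended volatile read
    }
}
```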
   
   To that point, this PR follows the common pattern for how we've been dealing 
with onConfigurationChange. There are already a bunch of other volatiles in 
RpcServer, RpcExecutor, etc., all related to this. You bring up a good point 
that it might be nice to unify all of these with per-handler caching, but that 
would likely require some tricky refactoring given the classes involved. If we 
wanted to do that, I think it would make sense to do it in a separate jira, and 
only if we can find evidence that it would provide a performance speedup, since 
it would definitely increase complexity. I'm not sure it's worth it here for 
just these 3 volatiles, when there are a bunch more in other handler/rpc 
related classes. 
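   For reference, the per-handler caching idea could look something like the 
sketch below (all names hypothetical): each handler thread keeps a plain, 
thread-confined copy of the shared value and only re-reads the volatile every 
N calls, trading a bounded amount of staleness for skipping the volatile read 
on most iterations. This is the kind of added complexity I mean.

```java
// Hypothetical sketch of per-handler caching: a plain (non-volatile) copy is
// refreshed from the shared volatile only every REFRESH_INTERVAL calls.
public class HandlerCache {
    private static final int REFRESH_INTERVAL = 1024;

    private final ConfigSource source;
    private int cached;            // plain field, confined to this handler's thread
    private int callsSinceRefresh; // counts reads since the last refresh

    public HandlerCache(ConfigSource source) {
        this.source = source;
        this.cached = source.get();
    }

    public int get() {
        if (++callsSinceRefresh >= REFRESH_INTERVAL) {
            cached = source.get(); // single volatile read, amortized over many calls
            callsSinceRefresh = 0;
        }
        return cached; // usually just a plain field read
    }

    // Shared holder with the volatile the handlers cache from.
    public static class ConfigSource {
        private volatile int value;
        public ConfigSource(int value) { this.value = value; }
        public int get() { return value; }
        public void set(int v) { this.value = v; }
    }
}
```

The obvious downside is that each handler can observe a value up to 
REFRESH_INTERVAL reads stale, which is why I'd want profiling evidence before 
taking it on.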
   
   What do you think?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
