Apache9 commented on pull request #4125: URL: https://github.com/apache/hbase/pull/4125#issuecomment-1074552386
> > adding sync call in the code path will cause serious performance impact
>
> I'll take another look at `BufferCallBeforeInitHandler.java` and try to leverage that instead of using the `sync()` call, but would you please clarify why you say it will have a serious performance impact? As mentioned, the client doesn't handle many connections, and it must wait for the connection to be fully usable before doing anything. I don't see a big difference in _where_ we actually pile up the incoming calls: in a handler, or by forcing the client to stop until we're ready. The client has to wait either way.

See here: http://normanmaurer.me/presentations/2014-facebook-eng-netty/slides.html#23.0

What we provide to users on the client side is a fully asynchronous network library. If a user is building an HBase proxy, it will call the methods which return a `CompletableFuture` directly in the Netty event loop thread. If we use `sync()` here, the event loop may be blocked, and if the remote side is slow and does not respond in time, it could even block for several seconds. Never do blocking I/O operations in an async library, please.

The RPC server is another story. We control the start-up process of a region server, so we can make sure that we only call `sync()` in the main thread, or in other threads which are not part of the non-blocking thread pool. Using `sync()` can reduce the complexity of the code, that's true, but you must make sure that you are not in a non-blocking thread before calling it.

Does the ZooKeeper client use Netty, and does it use `sync()` when connecting to the ZooKeeper server? When building the async client in HBase we use the async operations of the ZooKeeper client; if that is the case, then I think we may have a performance issue there...
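To make the hazard concrete, here is a minimal, self-contained sketch (plain JDK, no Netty or HBase code; the class name and the single-threaded executor standing in for an event loop are illustrative assumptions). The first part chains a callback, so the event-loop thread is never parked; the second part blocks inside a task running on that same thread, so the completion queued behind it can never run and the wait times out. In real Netty, that same blocking call would stall every channel served by the event loop.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class EventLoopBlockDemo {
    public static void main(String[] args) throws Exception {
        // Single-threaded executor standing in for a Netty event loop (assumption
        // for illustration: one thread serving all "channels").
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();

        // Non-blocking style: register a continuation and return immediately.
        // The event-loop thread only runs the callback once the future completes.
        CompletableFuture<String> connected = new CompletableFuture<>();
        CompletableFuture<String> result =
                connected.thenApplyAsync(s -> s + " -> request sent", eventLoop);
        connected.complete("connected");
        System.out.println(result.get(1, TimeUnit.SECONDS));

        // Blocking style: a task running ON the event loop waits for a future
        // whose completion is queued on the same single thread. The completion
        // cannot run until the current task finishes, so the wait times out.
        CompletableFuture<String> reply = new CompletableFuture<>();
        Future<String> blocked = eventLoop.submit(
                () -> reply.get(500, TimeUnit.MILLISECONDS));
        eventLoop.execute(() -> reply.complete("too late"));
        try {
            blocked.get();
            System.out.println("unexpected: blocking call succeeded");
        } catch (ExecutionException e) {
            System.out.println("event loop stalled: "
                    + e.getCause().getClass().getSimpleName());
        }
        eventLoop.shutdownNow();
    }
}
```

This is the same reason it is fine for a region server to call `sync()` from its start-up (main) thread: that thread is not part of the non-blocking pool, so parking it does not starve any queued I/O work.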
