Vishal K commented on ZOOKEEPER-900:

Hi Flavio,

Thanks for your feedback. I will do the code changes.

For point 2 above, I was referring to the code that deletes the SenderWorker 
and RecvWorker pair after receiving a connect request. I was concerned that 
a peer might send frequent connect requests to the remote peer before the 
remote peer can initiate a connection back. But I think the 
Notification n = recvqueue.poll(notTimeout, TimeUnit.MILLISECONDS); in 
lookForLeader will prevent this scenario. Also, this won't be a concern if we 
decide to remove the part that kills the pair on each connect.
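
To illustrate, here is a simplified, self-contained sketch of that poll/back-off
loop; the timeout values and the resendVotes() helper are my approximations, not
the actual lookForLeader() code:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class PollBackoffSketch {
        static final class Notification { long leader; long zxid; }

        private final BlockingQueue<Notification> recvqueue =
                new LinkedBlockingQueue<Notification>();
        private static final int MAX_NOTIFICATION_INTERVAL = 60000;

        // Stand-in for sendNotifications()/connectAll() in the real code.
        private void resendVotes() { }

        public void lookForLeader() throws InterruptedException {
            int notTimeout = 200;   // initial poll timeout in milliseconds
            boolean looking = true;
            while (looking) {
                Notification n = recvqueue.poll(notTimeout, TimeUnit.MILLISECONDS);
                if (n == null) {
                    // Nothing arrived in time: re-send votes (which may trigger
                    // connectOne()) and double the timeout, so connection retries
                    // toward an unreachable peer become less and less frequent.
                    resendVotes();
                    notTimeout = Math.min(notTimeout * 2, MAX_NOTIFICATION_INTERVAL);
                } else {
                    // ... tally the vote; leave the loop once a leader is chosen.
                    looking = false;
                }
            }
        }
    }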

I am also thinking of adding a sanity check that will accept connections only 
from peers that are listed in the zoo.cfg file or that identify themselves with 
OBSERVER_ID (a rough sketch follows the snippet below).
I have not used observers so far. Can you please explain why a node would use 
OBSERVER_ID instead of its sid? In particular, I am referring to the following 
code in QuorumCnxManager:
            // Read server id
            sid = Long.valueOf(msgBuffer.getLong());
            if (sid == QuorumPeer.OBSERVER_ID) {
                /*
                 * Choose identifier at random. We need a value to identify
                 * the connection.
                 */
                sid = observerCounter--;
                LOG.info("Setting arbitrary identifier to observer: " + sid);
            }

> FLE implementation should be improved to use non-blocking sockets
> -----------------------------------------------------------------
>                 Key: ZOOKEEPER-900
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-900
>             Project: Zookeeper
>          Issue Type: Bug
>            Reporter: Vishal K
>            Assignee: Flavio Junqueira
>            Priority: Critical
> From earlier email exchanges:
> 1. Blocking connects and accepts:
> a) The first problem is in manager.toSend(). This invokes connectOne(), which 
> does a blocking connect. While testing, I changed the code so that 
> connectOne() starts a new thread called AsyncConnect(). AsyncConnect.run() 
> does a socketChannel.connect(). After starting AsyncConnect, connectOne 
> starts a timer. connectOne continues with normal operations if the connection 
> is established before the timer expires, otherwise, when the timer expires it 
> interrupts AsyncConnect() thread and returns. In this way, I can have an 
> upper bound on the amount of time we need to wait for connect to succeed. Of 
> course, this was a quick fix for my testing. Ideally, we should use Selector 
> to do non-blocking connects/accepts. I am planning to do that later once we 
> at least have a quick fix for the problem and consensus from others for the 
> real fix (this problem is a big blocker for us). Note that it is OK to do 
> blocking IO in the SenderWorker and RecvWorker threads, since each of them 
> handles IO for its respective peer only.
> b) The blocking IO problem is not just restricted to connectOne(), but also 
> in receiveConnection(). The Listener thread calls receiveConnection() for 
> each incoming connection request. receiveConnection does blocking IO to get 
> peer's info (s.read(msgBuffer)). Worse, it invokes connectOne() back to the 
> peer that had sent the connection request. All of this is happening from the 
> Listener. In short, if a peer fails after initiating a connection, the 
> Listener thread won't be able to accept connections from other peers, because 
> it would be stuck in read() or connectOne(). Also, the code has an inherent 
> cycle: initiateConnection() and receiveConnection() will have to be very 
> carefully synchronized; otherwise, we could run into deadlocks. This code is 
> going to be difficult to maintain/modify.
> Also see: https://issues.apache.org/jira/browse/ZOOKEEPER-822
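
As a rough illustration of the Selector-based non-blocking connect mentioned in
(a) above (connectWithTimeout() and the class name are placeholders, not the
QuorumCnxManager API):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public final class NonBlockingConnectSketch {
        public static SocketChannel connectWithTimeout(InetSocketAddress addr,
                                                       long timeoutMs) throws IOException {
            SocketChannel channel = SocketChannel.open();
            channel.configureBlocking(false);
            if (channel.connect(addr)) {
                return channel;                 // connected immediately (e.g. loopback)
            }
            Selector selector = Selector.open();
            try {
                channel.register(selector, SelectionKey.OP_CONNECT);
                // Wait at most timeoutMs for the pending connection to complete.
                if (selector.select(timeoutMs) > 0 && channel.finishConnect()) {
                    return channel;             // caller may re-enable blocking mode
                }
                channel.close();                // bounded wait expired: give up
                return null;
            } catch (IOException e) {
                channel.close();                // e.g. connection refused
                throw e;
            } finally {
                selector.close();
            }
        }
    }

The Listener could follow the same pattern with OP_ACCEPT/OP_READ so that
receiveConnection() never blocks on a half-open peer.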

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
