fapifta edited a comment on pull request #3579:
URL: https://github.com/apache/hadoop/pull/3579#issuecomment-958358986


   @symious actually these are not two different connections, but the same connection to the same DataNode (it is a socket connection from the client node to the DataNode, which knows nothing about the object instance using it or the type of the connection). The problem is what we send over that connection. In a secure environment we do a SASL handshake, and for that we send a SASL auth header; but a DataNode expecting simple auth does not expect a SASL auth header, it expects a version number, so it reads the first bytes, which in the SASL auth case are 0xDEAD, and that, interpreted as a signed value, translates to the first cryptic -8531 in the DN log.
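   A minimal sketch of that byte-level mismatch (the 0xDEADBEEF SASL transfer magic and the 2-byte version read are from HDFS; the class and method names here are invented for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MagicVersionDemo {
    // What a DataNode expecting simple auth reads as the protocol
    // "version" when the client actually opened with the 4-byte SASL
    // transfer magic number 0xDEADBEEF.
    static short versionSeenByDataNode() {
        try {
            // Client side: SASL negotiation starts by writing the magic.
            ByteArrayOutputStream wire = new ByteArrayOutputStream();
            new DataOutputStream(wire).writeInt(0xDEADBEEF);

            // Server side: a plain transfer begins with a 2-byte version,
            // so the DN reads a short first and gets 0xDEAD.
            DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(wire.toByteArray()));
            return in.readShort();
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen on in-memory streams
        }
    }

    public static void main(String[] args) {
        System.out.println(versionSeenByDataNode()); // prints -8531
    }
}
```

0xDEAD is 57005 unsigned; as a signed 16-bit value that is -8531, which is exactly the bogus "version" in the DN log.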
   
   The setupIOStreams method works based on configuration and on server defaults coming in via the DFSClient (if I remember correctly), and the shared AtomicBoolean is set to control the behaviour of the SaslDataTransferClient inside the DFSClient, so that the SaslDataTransferClient does not attempt a SASL auth and does not send a SASL header, but sends the plain request instead.
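   The gating described above could be condensed to something like the following sketch (the `fallbackToSimpleAuth` flag mirrors the shared AtomicBoolean discussed here; the class and return strings are invented for illustration, not the real Hadoop code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical condensed model of how a shared flag steers the
// SaslDataTransferClient between a SASL handshake and a plain request.
class SaslGateSketch {
    private final AtomicBoolean fallbackToSimpleAuth;

    SaslGateSketch(AtomicBoolean fallbackToSimpleAuth) {
        this.fallbackToSimpleAuth = fallbackToSimpleAuth;
    }

    String newConnection() {
        if (fallbackToSimpleAuth != null && fallbackToSimpleAuth.get()) {
            // Flag already set: skip SASL, send the plain request.
            return "plain request";
        }
        // Flag unset: open with the SASL header, which a DN expecting
        // simple auth cannot parse.
        return "SASL header + handshake";
    }
}
```

Once the flag is flipped, every client that shares the same AtomicBoolean instance sends plain requests from then on.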
   
   The first client behaves correctly, but when the second client tries to connect to the DN, its own AtomicBoolean (instantiated in the DFSClient constructor) is not set before the patch, so it sends a SASL header and initiates a SASL handshake first, and then fails with the EOF it gets, because the DN closes the connection when the exception happens on its side.
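   The core of the pre-patch problem is just flag isolation: each DFSClient constructed its own AtomicBoolean, so the first client's fallback decision never reached the second. A trivial illustration (the "shared" arrangement at the end is one possible reading of the fix, labeled as an assumption):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SharedFlagDemo {
    public static void main(String[] args) {
        // Before the patch: two independent flag instances.
        AtomicBoolean firstClientFlag  = new AtomicBoolean(false);
        AtomicBoolean secondClientFlag = new AtomicBoolean(false);
        firstClientFlag.set(true);                  // first client learns to fall back
        System.out.println(secondClientFlag.get()); // false -> second client still tries SASL

        // Assumed shape of the fix: both clients consult one shared flag.
        AtomicBoolean shared = new AtomicBoolean(false);
        shared.set(true);
        System.out.println(shared.get()); // true -> second client sends the plain request
    }
}
```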


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


