ZanderXu commented on PR #4369:
URL: https://github.com/apache/hadoop/pull/4369#issuecomment-1151228050

   Thanks @Hexiaoqiao for your suggestion. Yes, you are right, the client needs more detailed failure information, such as whether the transfer source or the transfer target failed. With that information it could accurately and efficiently remove the abnormal node. But that would be a big feature.
   
   Fortunately, at present, as long as the failure exception is thrown to the client, the client assumes by default that the new DN is abnormal, excludes it, and retries the transfer. During the retry, the client chooses a new source DN and a new target DN, so both the source and the target from the previous failed round are replaced.
   If the target DN caused the failure, excluding it is enough.
   If the source DN caused the failure, it will be removed when the new pipeline is built.
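   To illustrate, here is a rough sketch of that exclude-and-retry behavior. All names here are hypothetical and simplified for illustration; this is not the actual `DFSOutputStream` code:

   ```java
   import java.util.*;

   // Hypothetical sketch: on each failed transfer round the client excludes the
   // target DN it just tried and picks a fresh (source, target) pair.
   public class TransferRetrySketch {

       // Picks a (source, target) pair from live nodes, skipping excluded ones.
       static String[] choosePair(List<String> liveNodes, Set<String> excluded) {
           List<String> candidates = new ArrayList<>();
           for (String dn : liveNodes) {
               if (!excluded.contains(dn)) {
                   candidates.add(dn);
               }
           }
           if (candidates.size() < 2) {
               return null;  // not enough healthy nodes left to transfer
           }
           return new String[] { candidates.get(0), candidates.get(1) };
       }

       // Retries the transfer until a round succeeds or nodes run out.
       // badNodes simulates which DNs would make a transfer fail.
       static String[] transferWithRetry(List<String> liveNodes,
                                         Set<String> badNodes,
                                         int maxRetries) {
           Set<String> excluded = new HashSet<>();
           for (int round = 0; round < maxRetries; round++) {
               String[] pair = choosePair(liveNodes, excluded);
               if (pair == null) {
                   return null;
               }
               boolean failed = badNodes.contains(pair[0]) || badNodes.contains(pair[1]);
               if (!failed) {
                   return pair;  // transfer succeeded with this pair
               }
               // The client only sees that the round failed; it assumes the new
               // (target) DN is abnormal and excludes it before retrying. A bad
               // source is handled separately when the pipeline is rebuilt.
               excluded.add(pair[1]);
           }
           return null;
       }
   }
   ```

   With live nodes `dn1..dn4` and a bad target `dn2`, the first round (`dn1`, `dn2`) fails, `dn2` is excluded, and the second round succeeds with (`dn1`, `dn3`).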
   
   So I think the simple approach is to just throw the failure exception to the client, and let the client find and remove the real abnormal datanode.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

