[
https://issues.apache.org/jira/browse/HDFS-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824928#comment-13824928
]
Uma Maheswara Rao G commented on HDFS-5014:
-------------------------------------------
Thanks for working on this. I think there is no problem in allowing registration
commands alone to be processed concurrently, right?
I am a bit worried about the above locking and the double checks added to handle
specific race conditions. Instead of making the code complex, how about
processing registration commands without the lock and letting all other commands
go under the lock? For example: if the command is a register command, don't take
the lock and simply process it; otherwise go with the current flow by taking the
lock on the BPOfferService itself. (It is a bit odd to process some commands
outside the switch, but this looks simpler to me as of now :-) ) Do you see any
concurrency issue in processing registration commands concurrently? [Correct me
if I did not follow you guys.]
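Something like the rough sketch below is what I have in mind. This is only a
simplified illustration with cut-down classes, not the actual BPOfferService
code:
{code}
// Simplified, self-contained sketch of the idea above; the class names mirror
// the HDFS ones but everything here is cut down for illustration only.
class DatanodeCommand {
  static final int DNA_REGISTER = 1;   // re-register with the namenode
  static final int DNA_OTHER = 2;      // stands in for all other command types

  private final int action;
  DatanodeCommand(int action) { this.action = action; }
  int getAction() { return action; }
}

class BPServiceActor {
  void reRegister() {
    // Would redo the handshake/registration with this actor's namenode.
    // May block and retry for a long time if that namenode is unstable.
    System.out.println("re-registering with namenode...");
  }
}

class BPOfferService {
  void processCommandFromActor(DatanodeCommand cmd, BPServiceActor actor) {
    if (cmd == null) {
      return;
    }
    if (cmd.getAction() == DatanodeCommand.DNA_REGISTER) {
      // Register commands only affect the actor that received them, so they
      // are handled without taking the BPOfferService lock. Even if this
      // actor keeps retrying an unstable standby NN, the other actor's
      // heartbeats and IBRs are not blocked.
      actor.reRegister();
      return;
    }
    synchronized (this) {
      // Every other command keeps the current behaviour and is processed
      // under the BPOfferService lock.
      processCommandLocked(cmd, actor);
    }
  }

  private void processCommandLocked(DatanodeCommand cmd, BPServiceActor actor) {
    System.out.println("processing command " + cmd.getAction() + " under lock");
  }

  public static void main(String[] args) {
    BPOfferService bpos = new BPOfferService();
    BPServiceActor standbyActor = new BPServiceActor();
    // A register command from the unstable standby does not take the lock...
    bpos.processCommandFromActor(
        new DatanodeCommand(DatanodeCommand.DNA_REGISTER), standbyActor);
    // ...while any other command still goes through the synchronized path.
    bpos.processCommandFromActor(
        new DatanodeCommand(DatanodeCommand.DNA_OTHER), standbyActor);
  }
}
{code}
The only change from the current flow is the early return for the register
command before the synchronized block; everything else stays as it is.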
{quote}
I'm starting to think that we need to work out a way for the re-register
polling loops to yield the lock in case of repeated failure, to give the other
BPServiceActor a chance. If a BPServiceActor yields like this, then it must also
have a way to trigger the other BPServiceActor to repeat its heartbeat before
executing any additional commands. It's vital to re-check the current state of
the other one before proceeding to handle its commands.
{quote}
I am not sure what your idea is here, but providing a solution without spreading
locks around would be great, I think.
Good effort.
> BPOfferService#processCommandFromActor() synchronization on namenode RPC call
> delays IBR to Active NN, if Standby NN is unstable
> -------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-5014
> URL: https://issues.apache.org/jira/browse/HDFS-5014
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, ha
> Affects Versions: 3.0.0, 2.0.4-alpha
> Reporter: Vinay
> Assignee: Vinay
> Attachments: HDFS-5014.patch, HDFS-5014.patch, HDFS-5014.patch,
> HDFS-5014.patch, HDFS-5014.patch, HDFS-5014.patch, HDFS-5014.patch
>
>
> In one of our clusters, the following happened and failed an HDFS write:
> 1. The Standby NN was unstable and continuously restarting due to some errors,
> but the Active NN was stable.
> 2. An MR job was writing files.
> 3. At some point the SNN went down again while the datanodes were processing
> the REGISTER command from the SNN.
> 4. The datanodes started retrying to connect to the SNN to register, at the
> following code in BPServiceActor#retrieveNamespaceInfo(), which is called
> under synchronization.
> {code}
> try {
>   nsInfo = bpNamenode.versionRequest();
>   LOG.debug(this + " received versionRequest response: " + nsInfo);
>   break;
> {code}
> Unfortunately, this happened at the same point in all the datanodes.
> 5. The standby was down for the next 7-8 minutes; during this time no blocks
> were reported to the active NN and writes failed.
> So the culprit is that {{BPOfferService#processCommandFromActor()}} is
> completely synchronized, which is not required.
--
This message was sent by Atlassian JIRA
(v6.1#6144)