[ https://issues.apache.org/jira/browse/HDFS-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445593#comment-16445593 ]
Rakesh R commented on HDFS-13165:
---------------------------------

{quote}I've asked a few times why a new command (ie. newest name in this patch is DNA_MOVEBLOCK) is required to move blocks across storages instead of using the existing {{DNA_TRANSFER}}? I've heard mention of delHint issues, but I'm unclear why it's an issue?{quote}

Thank you again [~daryn] for explaining the {{delHint}} design thoughts. I understand your point and completely agree with you: {{delHint}} is not explicitly required for the block move and is not a differentiating factor.

{quote}Sure, DNA_TRANSFER could be optimized to short-circuit for local moves,{quote}

Yes, this functionality can be easily supported.

{quote}What am I missing?{quote}

DNA_TRANSFER doesn't have ERROR_BLOCK_PINNED checks. Does it make sense to add this check? This is again an optimization to avoid/reduce block move retries, and it is already available in the Mover tool.

I agree to use {{DNA_TRANSFER}} for the *internal* SPS. I have tried prototyping the DNA_TRANSFER functionality for the *external* SPS (which sits outside the Namenode); there it requires a complex [Sender#writeBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java#L121] function call with many arguments. Do you agree to use the existing *Mover* protocol [Sender#replaceBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java#L241] for the *external* SPS? With this proposal, I understand that external and internal would use different function calls, but my aim is to simplify the code changes.

Secondly, how about moving the SPS refactoring for the DNA_TRANSFER protocol changes into a separate jira sub-task and continuing the implementation there? I will create a patch with only the IBR-related changes in this jira and then commit it; that would help me avoid carrying a bigger patch here.
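For illustration, the pinned-block short-circuit discussed above could look roughly like the sketch below. All names here ({{BlockMoveSketch}}, {{moveBlockAcrossStorage}}, {{MoveStatus}}) are invented for the example, not the actual DataNode API:

```java
import java.util.*;

public class BlockMoveSketch {
    // Illustrative status codes; the real protocol enums differ.
    enum MoveStatus { SUCCESS, ERROR_BLOCK_PINNED }

    // Reject moves of pinned blocks up front so the satisfier does not
    // keep retrying a move that can never succeed -- the same
    // optimization the Mover tool already performs.
    static MoveStatus moveBlockAcrossStorage(long blockId, Set<Long> pinnedBlocks) {
        if (pinnedBlocks.contains(blockId)) {
            return MoveStatus.ERROR_BLOCK_PINNED;
        }
        // ... perform the local storage-to-storage copy here ...
        return MoveStatus.SUCCESS;
    }

    public static void main(String[] args) {
        Set<Long> pinned = new HashSet<>(Arrays.asList(42L));
        System.out.println(moveBlockAcrossStorage(42L, pinned)); // ERROR_BLOCK_PINNED
        System.out.println(moveBlockAcrossStorage(7L, pinned));  // SUCCESS
    }
}
```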
> [SPS]: Collects successfully moved block details via IBR
> --------------------------------------------------------
>
>                 Key: HDFS-13165
>                 URL: https://issues.apache.org/jira/browse/HDFS-13165
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>            Priority: Major
>         Attachments: HDFS-13165-HDFS-10285-00.patch, HDFS-13165-HDFS-10285-01.patch, HDFS-13165-HDFS-10285-02.patch, HDFS-13165-HDFS-10285-03.patch, HDFS-13165-HDFS-10285-04.patch, HDFS-13165-HDFS-10285-05.patch, HDFS-13165-HDFS-10285-06.patch, HDFS-13165-HDFS-10285-07.patch, HDFS-13165-HDFS-10285-08.patch, HDFS-13166-HDFS-10285-07.patch
>
> This task is to make use of the existing IBR to get moved-block details and remove the unwanted future-tracking logic that exists in the BlockStorageMovementTracker code; it is no longer needed because file-level tracking is maintained at the NN itself.
> The following comments are taken from HDFS-10285, [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347472]
> Comment-3)
> {quote}BPServiceActor
> Is it actually sending back the moved blocks? Aren't IBRs sufficient?{quote}
> Comment-21)
> {quote}
> BlockStorageMovementTracker
> Many data structures are riddled with non-threadsafe race conditions and risk of CMEs.
> Ex. The moverTaskFutures map. Adding new blocks and/or adding to a block's list of futures is synchronized. However, the run loop does an unsynchronized block get, an unsynchronized future remove, an unsynchronized isEmpty, possibly another unsynchronized get, and only then does it do a synchronized remove of the block. The whole chunk of code should be synchronized.
> Is the problematic moverTaskFutures even needed? It's aggregating futures per-block for seemingly no reason. Why track all the futures at all instead of just relying on the completion service?
> As best I can tell:
> It's only used to determine whether a future from the completion service should be ignored during shutdown. Shutdown sets the running boolean to false and clears the entire data structure, so why not use the running boolean as a check just a little further down?
> As synchronization to sleep up to 2 seconds before performing a blocking moverCompletionService.take, but only when it thinks there are no active futures. I'll ignore the missed-notify race that the bounded wait masks, but the real question is why not just do the blocking take?
> Why all the complexity? Am I missing something?
> BlocksMovementsStatusHandler
> Suffers the same type of thread-safety issues as StoragePolicySatisfyWorker. Ex. blockIdVsMovementStatus is inconsistently synchronized. It does synchronize to return an unmodifiable list, which sadly does nothing to protect the caller from CME.
> handle is iterating over a non-thread-safe list.
> {quote}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
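As a side note on the moverTaskFutures comment in the quote above: the fix the reviewer suggests, holding one lock across the whole get/remove/isEmpty/remove sequence, can be sketched as below. The class and method names are illustrative only, not the actual BlockStorageMovementTracker API:

```java
import java.util.*;
import java.util.concurrent.*;

public class SafeTracker {
    // blockId -> pending futures for that block's storage moves
    private final Map<Long, List<Future<Void>>> moverTaskFutures = new HashMap<>();

    public synchronized void add(long blockId, Future<Void> future) {
        moverTaskFutures.computeIfAbsent(blockId, k -> new ArrayList<>()).add(future);
    }

    // The whole get/remove/isEmpty/remove sequence runs under one lock,
    // so no other thread can interleave between the steps.
    public synchronized boolean complete(long blockId, Future<Void> future) {
        List<Future<Void>> futures = moverTaskFutures.get(blockId);
        if (futures == null) {
            return false;
        }
        futures.remove(future);
        if (futures.isEmpty()) {
            moverTaskFutures.remove(blockId);
            return true; // all moves for this block finished
        }
        return false;
    }

    public synchronized int pendingBlocks() {
        return moverTaskFutures.size();
    }

    public static void main(String[] args) {
        SafeTracker tracker = new SafeTracker();
        Future<Void> done = CompletableFuture.completedFuture(null);
        tracker.add(1L, done);
        System.out.println("all moves done: " + tracker.complete(1L, done));
    }
}
```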
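And on the unmodifiable-list point in the quote: Collections.unmodifiableList only blocks writes made through the view; it does not shield an iterating caller from a ConcurrentModificationException if the backing list changes mid-iteration. A minimal, self-contained demonstration (not Hadoop code) of that failure mode and a defensive-copy alternative:

```java
import java.util.*;

public class UnmodifiableCme {
    // Returns true if iterating the unmodifiable view fails with a CME
    // when the backing list is mutated mid-iteration.
    static boolean viewThrowsCme() {
        List<Integer> backing = new ArrayList<>(Arrays.asList(1, 2, 3));
        List<Integer> view = Collections.unmodifiableList(backing);
        try {
            for (Integer ignored : view) {
                backing.add(99); // mutation of the backing list during iteration
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    // A defensive copy snapshots the elements, so later mutation
    // of the original list cannot disturb the iteration.
    static boolean snapshotIsSafe() {
        List<Integer> backing = new ArrayList<>(Arrays.asList(1, 2, 3));
        List<Integer> snapshot = new ArrayList<>(backing);
        for (Integer ignored : snapshot) {
            backing.add(99);
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("view throws CME: " + viewThrowsCme());       // true
        System.out.println("snapshot is safe: " + snapshotIsSafe());     // true
    }
}
```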