[jira] [Commented] (HDFS-13381) [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577578#comment-16577578 ] Hudson commented on HDFS-13381:

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14752 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14752/])
HDFS-13381 : [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare (umamahesh: rev 66e8f9b31529226309c924226a53dead3e6fcf11)

* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/DatanodeCacheManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/IntraSPSNameNodeFileIdCollector.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSFilePathCollector.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/Context.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocol.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/SPSService.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/sps/TestBlockStorageMovementAttemptedItems.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/sps/TestExternalStoragePolicySatisfier.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/NamenodeProtocol.proto
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSBlockMoveTaskHandler.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/sps/TestStoragePolicySatisfier.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/FileCollector.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalStoragePolicySatisfier.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/IntraSPSNameNodeContext.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSContext.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/sps/TestStoragePolicySatisfierWithStripedFile.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfyManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/ItemInfo.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementAttemptedItems.java

> [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
>
> Key: HDFS-13381
> URL: https://issues.apache.org/jira/browse/HDFS-13381
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Rakesh R
> Assignee: Rakesh R
> Priority: Major
> Fix For: HDFS-10285
>
> Attachments: HDFS-13381-HDFS-10285-00.patch, HDFS-13381-HDFS-10285-01.patch,
> HDFS-13381-HDFS-10285-02.patch, HDFS-13381-HDFS-10285-03.patch
>
> This Jira task will address the following comments:
> # Use DFSUtilClient::makePathFromFileId, instead of generics (one for string path and another for inodeId) like today.
> # Only the context impl differs for external/internal SPS. Here, it can simply move FileCollector and BlockMoveTaskHandler to the Context interface.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands,
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530111#comment-16530111 ] Uma Maheswara Rao G commented on HDFS-13381:

Thanks Rakesh for cleaning up the generics part. I agree that we can think about other optimizations after this phase. +1 on the latest patch to commit.
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529219#comment-16529219 ] genericqa commented on HDFS-13381:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 33s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || HDFS-10285 Compile Tests ||
| +1 | mvninstall | 17m 38s | HDFS-10285 passed |
| +1 | compile | 1m 1s | HDFS-10285 passed |
| +1 | checkstyle | 0m 46s | HDFS-10285 passed |
| +1 | mvnsite | 1m 5s | HDFS-10285 passed |
| +1 | shadedclient | 12m 12s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 6s | HDFS-10285 passed |
| +1 | javadoc | 0m 55s | HDFS-10285 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 3s | the patch passed |
| +1 | compile | 0m 56s | the patch passed |
| +1 | cc | 0m 56s | the patch passed |
| +1 | javac | 0m 56s | the patch passed |
| +1 | checkstyle | 0m 43s | the patch passed |
| +1 | mvnsite | 1m 1s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 19s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 11s | the patch passed |
| +1 | javadoc | 0m 56s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 121m 39s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 176m 21s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNestedEncryptionZones |
| | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
| | hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate |
| | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| | hadoop.hdfs.TestAclsEndToEnd |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestEncryptionZones |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
| | hadoop.hdfs.server.namenode.TestReencryption |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13381 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929876/HDFS-13381-HDFS-10285-03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 472a42c11641 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529178#comment-16529178 ] Rakesh R commented on HDFS-13381:

Attached another patch fixing the SPS-related test failures and the checkstyle warning.
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529115#comment-16529115 ] genericqa commented on HDFS-13381:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 16m 37s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || HDFS-10285 Compile Tests ||
| +1 | mvninstall | 21m 53s | HDFS-10285 passed |
| +1 | compile | 0m 57s | HDFS-10285 passed |
| +1 | checkstyle | 0m 48s | HDFS-10285 passed |
| +1 | mvnsite | 1m 6s | HDFS-10285 passed |
| +1 | shadedclient | 12m 7s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 5s | HDFS-10285 passed |
| +1 | javadoc | 0m 59s | HDFS-10285 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 2s | the patch passed |
| +1 | compile | 0m 56s | the patch passed |
| +1 | cc | 0m 56s | the patch passed |
| +1 | javac | 0m 56s | the patch passed |
| -0 | checkstyle | 0m 42s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 366 unchanged - 0 fixed = 367 total (was 366) |
| +1 | mvnsite | 1m 1s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 20s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 10s | the patch passed |
| +1 | javadoc | 0m 57s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 136m 7s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 36s | The patch does not generate ASF License warnings. |
| | | 211m 16s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile |
| | hadoop.hdfs.server.namenode.TestNestedEncryptionZones |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
| | hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate |
| | hadoop.hdfs.TestAclsEndToEnd |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.TestEncryptionZones |
| | hadoop.hdfs.server.namenode.TestReencryption |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13381 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929857/HDFS-13381-HDFS-10285-02.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 9dd85d90069c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529043#comment-16529043 ] Rakesh R commented on HDFS-13381:

Attaching another patch in line with the external SPS merge thoughts. This patch uses the {{Path filePath = DFSUtilClient.makePathFromFileId(inodeID);}} function and replaces the generics that were used to differentiate String vs. inode-id file paths.
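As background for readers, {{DFSUtilClient#makePathFromFileId()}} builds a path under HDFS's reserved {{/.reserved/.inodes}} namespace, which the NameNode resolves by inode id rather than by name. The sketch below is a simplified, standalone illustration of that mapping: it returns a {{String}} instead of the real {{org.apache.hadoop.fs.Path}}, and the inode id is hypothetical.

```java
// Illustrative sketch only: the real helper is
// org.apache.hadoop.hdfs.DFSUtilClient#makePathFromFileId, which returns a Path.
// This standalone version returns a String so it runs without Hadoop.
public class ReservedInodePath {

  /** Builds "/.reserved/.inodes/<fileId>", the reserved-path form resolved by inode id. */
  static String makePathFromFileId(long fileId) {
    return "/.reserved/.inodes/" + fileId;
  }

  public static void main(String[] args) {
    long inodeId = 16386L; // hypothetical inode id
    System.out.println(makePathFromFileId(inodeId));
  }
}
```

A path in this form stays valid even if the file is renamed after the id was recorded, which is why addressing satisfier work items by inode id is attractive.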
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459263#comment-16459263 ] Chris Douglas commented on HDFS-13381:

bq. We need one unified implementation that uses minimal external calls that can be abstracted in the context to either make rpc or fsn calls

Do you have a particular refactoring in mind? In this case, the separate implementations batch their requests differently, because the internal implementation uses {{FSTreeTraverser}}. Looking at the patch, one could make the two implementations more similar, but not significantly more modular or easier to maintain. IIRC, each context implements "queue up the next batch of work".

Your point on maintenance is well taken, but {{FileCollector}} looks like a vestigial abstraction. It's not actually pluggable; the internal/external contexts put their scan logic into a class to match the EC pattern. We are only talking about two implementations of an explicitly internal (i.e., {{Unstable}}) API...

Is the problem with the {{FSTreeTraverser}} pattern, generally? You've stated the goal clearly, but if the traversal is generalized then the internal context will do no batching under the lock. Did anyone, including the EC folks, publish any benchmarks/testing that justify this abstraction? If the impl benefits from it, then as a specialization it has merit, but it should be significant. How should we test that?
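The batching behavior discussed above (the {{FSTreeTraverser}} pattern) boils down to: acquire the lock, fill one batch of work, release the lock, hand the batch off, repeat. The following standalone sketch shows that shape with hypothetical names and an illustrative batch size; it is not the Hadoop implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of batched traversal under a lock, in the spirit of
// FSTreeTraverser: the (namesystem) lock is held only while one small batch
// is collected, and released between batches instead of for the whole scan.
public class BatchedTraverserSketch {
  static final int BATCH_SIZE = 2; // illustrative; the real traverser is tunable

  static List<List<Integer>> traverse(List<Integer> allItems, ReentrantLock lock) {
    List<List<Integer>> batches = new ArrayList<>();
    Deque<Integer> pending = new ArrayDeque<>(allItems);
    while (!pending.isEmpty()) {
      List<Integer> batch = new ArrayList<>();
      lock.lock();            // hold the lock only while filling one batch
      try {
        while (!pending.isEmpty() && batch.size() < BATCH_SIZE) {
          batch.add(pending.poll());
        }
      } finally {
        lock.unlock();        // released between batches
      }
      batches.add(batch);     // hand the batch off outside the lock
    }
    return batches;
  }

  public static void main(String[] args) {
    // Five items in batches of two -> [[1, 2], [3, 4], [5]]
    System.out.println(traverse(List.of(1, 2, 3, 4, 5), new ReentrantLock()));
  }
}
```

Generalizing the traversal, as discussed, would remove this per-batch lock scoping from the internal context.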
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459012#comment-16459012 ] Daryn Sharp commented on HDFS-13381:

{quote}
bq. More than one file collector implementation means the namesystem context abstraction is being violated.

Only two specific implementations, one should be for external context and other one for strictly internal context. Like I said in the above comment, we added FileCollector interface for better code readability.
{quote}

Here's the disconnect: the needed abstraction is not about "better code readability". The context was supposed to prove the feature is pluggable, since it arguably doesn't belong in the NN. It's not pluggable if the bulk of the feature requires multiple impls (whether via "if" conditionals or separate impl classes). Multiple impls impact ongoing stability, maintenance, and testing overhead for everyone. We need one unified implementation that uses minimal external calls that can be abstracted in the context to either make rpc or fsn calls.
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16458391#comment-16458391 ] Rakesh R commented on HDFS-13381:

Thanks [~chris.douglas] for summarizing the thoughts. [~daryn], I appreciate your continued help in shaping the SPS feature. It would be great to get your feedback: shall we move ahead with the proposed interface changes?
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445356#comment-16445356 ] Chris Douglas commented on HDFS-13381:

[~daryn]: to over-simplify [~rakeshr]'s description, the additional logic {{IntraSPSNameNodeFileIdCollector}} adds is for batching, i.e., implementing {{FSTreeTraverser}} hooks, borrowing the pattern from encryption zones. The external/internal separation seems intact, as both {{FileCollector}} implementations are only accessible through their respective {{Context}}. This might be emphasized by removing the {{FileCollector}} interface, since it serves no particular purpose. Would that be sufficient?
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437443#comment-16437443 ] Rakesh R commented on HDFS-13381:

Thanks a lot [~daryn] for the comments. Kindly go through my reply. Please feel free to correct me if I misunderstood any of your thoughts :)

{quote}The overall intent of using file id paths is completely decoupling the file id collector from the namesystem{quote}

Adding a few details about the SPS design change proposal in this jira. As we know, {{Context}} has two specific implementations:
# {{ExternalSPSContext}}, used by *external* mode (standalone mode), which uses the {{DFSClient#listPaths(childPath, lastReturnedName, false)}} call for traversing and collecting the file paths.
# {{IntraSPSNameNodeContext}}, used by *internal* mode, which resides within the Namenode, holds a reference to {{namesystem}}, and traverses {{FSDirectory}} by making use of the existing {{FSTreeTraverser.traverseDir}} functionality. So, internal SPS has a dependency on the namesystem, right?

With the above design, we have exposed a convenient private interface called {{FileCollector}} and provided only two specific implementations. The idea of this private interface is just to make code maintenance easy:
# {{ExternalSPSFilePathCollector}} for the *external* SPS, which initializes and holds a reference to {{DistributedFileSystem dfs}}.
# {{IntraSPSNameNodeFileIdCollector}} for the *internal* SPS, which holds a reference to {{namesystem.getFSDirectory()}}.

These implementations are completely abstracted behind the specific Context, and Context exposes an API for the traversal functionality like below:

{code:java}
// Context.java
/**
 * Do scan and collects the files under that directory and adds to the given
 * BlockStorageMovementNeeded.
 *
 * @param filePath
 *          file path
 */
void scanAndCollectFiles(long filePath)
    throws IOException, InterruptedException;
{code}

{quote}More than one file collector implementation means the namesystem context abstraction is being violated.{quote}

Only two specific implementations: one for the *external* context and the other strictly for the *internal* context. Like I said in the above comment, we added the {{FileCollector}} interface for better code readability. Let's imagine that I hadn't provided the {{FileCollector}} interface; then the {{ExternalSPSFilePathCollector}} implementation would be inlined into the ExternalSPSContext class and the {{IntraSPSNameNodeFileIdCollector}} implementation would be inlined into the IntraSPSNameNodeContext class.

{noformat}
          |- ExternalSPSContext#ExternalSPSFilePathCollector (Traverse using DistributedFileSystem APIs)
Context --|
          |- IntraSPSNameNodeContext#IntraSPSNameNodeFileIdCollector (Traverse using NameSystem APIs)
{noformat}

{quote}SPS is not a true standalone service if there's two implementations of various internal components like the file collector.{quote}

Like I described above, the file collection algorithm differs between *external* and *internal* in the way it fetches the details from the Namenode. So, I believe the existing {{Context}} interface can abstract the specific implementation. I welcome any better proposal/thoughts to unify both of these traversal logics into a single one with minimal changes.

{quote}The overhead to maintain and test a tightly and loosely coupled version will not be sustainable. The context shim must be the only pluggable piece. Please provide a single implementation of the collector that leverages inode id lookups via the context performing an inode path conversion.{quote}

IIUC, your point is: why is someone bothered about plugging in {{FileCollector}}? Actually, we added the {{FileCollector}} interface for better code readability; only the Context interface is the pluggable piece, and all the other specific implementations reside inside the Context implementations. If the {{FileCollector}} interface is not conveying the purpose clearly, then I can inline this code into the respective Context class; should I? With the [^HDFS-13381-HDFS-10285-01.patch] in this jira, you can see {{IntraSPSNameNodeContext#fileCollector}} and {{ExternalSPSContext#fileCollector}}. I hope you saw this patch.

{code:java}
// ExternalSPSContext.java
public ExternalSPSContext(SPSService service, NameNodeConnector nnc) {
  this.service = service;
  this.nnc = nnc;
  this.fileCollector = new ExternalSPSFilePathCollector(service);
  this.externalHandler = new ExternalSPSBlockMoveTaskHandler(
      service.getConf(), nnc, service);
  this.blkMovementListener = new ExternalBlockMovementListener();
}

// IntraSPSNameNodeContext.java
public IntraSPSNameNodeContext(Namesystem namesystem,
    BlockManager blockManager, SPSService service) {
  this.namesystem = namesystem;
  this.blockManager =
{code}
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436073#comment-16436073 ] Daryn Sharp commented on HDFS-13381:

The overall intent of using file id paths is completely decoupling the file id collector from the namesystem. One of the cited reasons for an internal/external file collector was the perceived inability to look up by id – which is already possible via inode paths. More than one file collector implementation means the namesystem context abstraction is being violated. SPS is not a true standalone service if there are two implementations of various internal components like the file collector. The overhead to maintain and test a tightly and a loosely coupled version will not be sustainable. The context shim must be the only pluggable piece. Please provide a single implementation of the collector that leverages inode id lookups via the context performing an inode path conversion.
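The "single collector, context-only pluggability" idea above can be sketched abstractly. Everything below is a hypothetical illustration, not the actual SPS classes: a toy in-memory context stands in for either the namesystem-backed (internal) or RPC-backed (external) implementation, and one collector works against both through the same narrow interface.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a single, context-backed file collector: the only
// pluggable piece is the context, whose real implementations would make
// namesystem calls (internal SPS) or RPC calls (external SPS).
public class UnifiedCollectorSketch {

  /** Narrow context shim; addresses everything by inode id, never by path. */
  interface SpsContext {
    boolean isDirectory(long inodeId);
    List<Long> listChildren(long inodeId);
  }

  /** One collector for both modes; its recursion depends only on the context. */
  static List<Long> collectFiles(SpsContext ctx, long rootId) {
    List<Long> files = new ArrayList<>();
    if (!ctx.isDirectory(rootId)) {
      files.add(rootId);          // a plain file: record its inode id
      return files;
    }
    for (long child : ctx.listChildren(rootId)) {
      files.addAll(collectFiles(ctx, child));
    }
    return files;
  }

  /** Toy in-memory context standing in for either backend. */
  static SpsContext toyContext(Map<Long, List<Long>> dirs) {
    return new SpsContext() {
      public boolean isDirectory(long id) { return dirs.containsKey(id); }
      public List<Long> listChildren(long id) { return dirs.get(id); }
    };
  }

  public static void main(String[] args) {
    // dir 1 -> { file 2, dir 3 -> { file 4 } }; collects files 2 and 4
    Map<Long, List<Long>> dirs = Map.of(1L, List.of(2L, 3L), 3L, List.of(4L));
    System.out.println(collectFiles(toyContext(dirs), 1L));
  }
}
```

In this shape there is exactly one traversal implementation to test; only the two thin context shims differ, which is the property the comment argues for.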
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422806#comment-16422806 ] genericqa commented on HDFS-13381:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 25s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| HDFS-10285 Compile Tests ||
| +1 | mvninstall | 20m 59s | HDFS-10285 passed |
| +1 | compile | 0m 55s | HDFS-10285 passed |
| +1 | checkstyle | 0m 46s | HDFS-10285 passed |
| +1 | mvnsite | 1m 4s | HDFS-10285 passed |
| +1 | shadedclient | 12m 20s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 0s | HDFS-10285 passed |
| +1 | javadoc | 0m 55s | HDFS-10285 passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 0s | the patch passed |
| +1 | compile | 0m 54s | the patch passed |
| +1 | cc | 0m 54s | the patch passed |
| +1 | javac | 0m 54s | the patch passed |
| +1 | checkstyle | 0m 41s | the patch passed |
| +1 | mvnsite | 0m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 24s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 4s | the patch passed |
| +1 | javadoc | 0m 53s | the patch passed |
|| Other Tests ||
| -1 | unit | 111m 30s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 169m 11s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13381 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917194/HDFS-13381-HDFS-10285-01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 8cc063490fd1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-10285 / e24bb54 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23750/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23750/testReport/ |
| Max. process+thread count | 3324 (vs. ulimit of 1) |
|
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422216#comment-16422216 ] genericqa commented on HDFS-13381:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 22s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| HDFS-10285 Compile Tests ||
| +1 | mvninstall | 19m 9s | HDFS-10285 passed |
| +1 | compile | 0m 48s | HDFS-10285 passed |
| +1 | checkstyle | 0m 37s | HDFS-10285 passed |
| +1 | mvnsite | 0m 54s | HDFS-10285 passed |
| +1 | shadedclient | 9m 58s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 52s | HDFS-10285 passed |
| +1 | javadoc | 0m 58s | HDFS-10285 passed |
|| Patch Compile Tests ||
| -1 | mvninstall | 0m 28s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 28s | hadoop-hdfs in the patch failed. |
| -1 | cc | 0m 28s | hadoop-hdfs in the patch failed. |
| -1 | javac | 0m 28s | hadoop-hdfs in the patch failed. |
| -0 | checkstyle | 0m 31s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 85 unchanged - 0 fixed = 86 total (was 85) |
| -1 | mvnsite | 0m 29s | hadoop-hdfs in the patch failed. |
| -1 | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | shadedclient | 3m 19s | patch has errors when building and testing our client artifacts. |
| -1 | findbugs | 0m 28s | hadoop-hdfs in the patch failed. |
| +1 | javadoc | 0m 51s | the patch passed |
|| Other Tests ||
| -1 | unit | 0m 31s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 41m 58s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13381 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917184/HDFS-13381-HDFS-10285-00.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 86662f15f517 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-10285 / e24bb54 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/23747/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
| compile |