[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552695#comment-16552695 ]

Ewan Higgs commented on HDFS-13310:
-----------------------------------

Merged into HDFS-12090 branch.

>                 Key: HDFS-13310
>                 URL: https://issues.apache.org/jira/browse/HDFS-13310
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Ewan Higgs
>            Assignee: Ewan Higgs
>            Priority: Major
>         Attachments: HDFS-13310-HDFS-12090.001.patch, HDFS-13310-HDFS-12090.002.patch, HDFS-13310-HDFS-12090.003.patch, HDFS-13310-HDFS-12090.004.patch, HDFS-13310-HDFS-12090.005.patch, HDFS-13310-HDFS-12090.006.patch, HDFS-13310-HDFS-12090.007.patch
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is <= 1 block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see HDFS-13186).

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
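The two sub-commands described in the issue can be pictured as a small command object carried in the heartbeat response. The sketch below is illustrative only: the class and field names (BlockSyncCommand, targetUri) are hypothetical stand-ins for the types in the HDFS-12090 branch patches, not the committed API.

```java
// Hypothetical sketch of a DNA_BACKUP command with its two sub-command types.
// Names are invented for illustration; see the attached patches for the
// actual classes (BlockSyncTask, BlockSyncTaskProto).
public class BlockSyncCommand {
  /** The two kinds of backup work a DataNode may be asked to perform. */
  public enum BlockSyncTaskType {
    PUT_FILE,           // file is <= 1 block: upload it in a single PUT
    MULTIPART_PUT_PART  // one part of a multipart upload (see HDFS-13186)
  }

  private final BlockSyncTaskType type;
  private final long blockId;
  private final String targetUri; // destination of the backed-up block

  public BlockSyncCommand(BlockSyncTaskType type, long blockId,
      String targetUri) {
    this.type = type;
    this.blockId = blockId;
    this.targetUri = targetUri;
  }

  public BlockSyncTaskType getType() { return type; }
  public long getBlockId() { return blockId; }
  public String getTargetUri() { return targetUri; }
}
```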
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538048#comment-16538048 ]

Virajith Jalaparti commented on HDFS-13310:
-------------------------------------------

[^HDFS-13310-HDFS-12090.007.patch] fixes the findbugs error in the last jenkins run.
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537737#comment-16537737 ]

genericqa commented on HDFS-13310:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 43s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
|| || HDFS-12090 Compile Tests || || ||
| 0 | mvndep | 0m 35s | Maven dependency ordering for branch |
| +1 | mvninstall | 30m 56s | HDFS-12090 passed |
| +1 | compile | 19m 0s | HDFS-12090 passed |
| +1 | checkstyle | 0m 15s | HDFS-12090 passed |
| +1 | mvnsite | 2m 24s | HDFS-12090 passed |
| +1 | shadedclient | 14m 42s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 34s | HDFS-12090 passed |
| +1 | javadoc | 1m 39s | HDFS-12090 passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 47s | the patch passed |
| +1 | compile | 19m 4s | the patch passed |
| +1 | cc | 19m 4s | the patch passed |
| +1 | javac | 19m 4s | the patch passed |
| +1 | checkstyle | 0m 13s | the patch passed |
| +1 | mvnsite | 2m 41s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 20s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 56s | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 1m 36s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 2m 8s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 123m 37s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 248m 19s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Null passed for non-null parameter of java.nio.ByteBuffer.wrap(byte[]) in org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(DatanodeProtocolProtos$SyncTaskExecutionResultProto). Method invoked at PBHelper.java:[line 1311] |
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
| | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
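The FindBugs -1 in this run flags a code path where a possibly-null byte[] reaches java.nio.ByteBuffer.wrap(byte[]), which requires a non-null argument. A common shape for the fix is sketched below; the helper class and method names are hypothetical, not the change actually committed in the later patches.

```java
import java.nio.ByteBuffer;

// Illustrative guard for "Null passed for non-null parameter of
// ByteBuffer.wrap(byte[])": map a missing result to an empty buffer
// instead of letting null reach wrap(). Names are hypothetical.
public class SyncResultBytes {
  public static ByteBuffer toByteBuffer(byte[] raw) {
    return (raw == null) ? ByteBuffer.allocate(0) : ByteBuffer.wrap(raw);
  }
}
```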
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537352#comment-16537352 ]

Virajith Jalaparti commented on HDFS-13310:
-------------------------------------------

Thanks for the update [~ehiggs]. I made the following changes to [^HDFS-13310-HDFS-12090.005.patch] to get [^HDFS-13310-HDFS-12090.006.patch]. If you are fine with this, I am +1 on the patch pending jenkins.
- Fixed whitespace and checkstyle errors.
- Modified SyncTaskExecutionResult to use and return a read-only ByteBuffer instead of byte[] (should fix the findbugs warnings).
- Replaced the use of {{Lists.newArrayListWithCapacity}} with {{ArrayList#ArrayList(int)}} in {{PBHelper}}, and {{Lists.newArrayList()}} with {{ArrayList#ArrayList()}} in {{NameNodeAdapter}}.
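The read-only ByteBuffer change mentioned above is the standard remedy for FindBugs' "may expose internal representation" warnings: copy the incoming byte[] once and only ever hand out read-only views. A minimal sketch of that pattern follows, assuming a simplified version of the class (the constructor signature comes from the FindBugs report; the internals are hypothetical, not the committed patch).

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Defensive-copy sketch: neither the caller's array nor the returned
// buffer can be used to mutate the stored result.
public class SyncTaskExecutionResult {
  private final ByteBuffer result;
  private final Long numberOfBytes;

  public SyncTaskExecutionResult(byte[] result, Long numberOfBytes) {
    // Copy on the way in, so later mutation of the caller's array
    // cannot change our state; also guards against null.
    byte[] copy = (result == null)
        ? new byte[0] : Arrays.copyOf(result, result.length);
    this.result = ByteBuffer.wrap(copy).asReadOnlyBuffer();
    this.numberOfBytes = numberOfBytes;
  }

  /** Returns a read-only view; callers cannot modify the stored bytes. */
  public ByteBuffer getResult() {
    return result.asReadOnlyBuffer();
  }

  public Long getNumberOfBytes() { return numberOfBytes; }
}
```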
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16534851#comment-16534851 ]

genericqa commented on HDFS-13310:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 42s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
|| || HDFS-12090 Compile Tests || || ||
| 0 | mvndep | 2m 11s | Maven dependency ordering for branch |
| +1 | mvninstall | 27m 36s | HDFS-12090 passed |
| +1 | compile | 16m 43s | HDFS-12090 passed |
| +1 | checkstyle | 1m 1s | HDFS-12090 passed |
| +1 | mvnsite | 1m 41s | HDFS-12090 passed |
| +1 | shadedclient | 11m 55s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 33s | HDFS-12090 passed |
| +1 | javadoc | 1m 10s | HDFS-12090 passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 52s | the patch passed |
| +1 | compile | 16m 54s | the patch passed |
| +1 | cc | 16m 54s | the patch passed |
| +1 | javac | 16m 54s | the patch passed |
| -0 | checkstyle | 1m 1s | hadoop-hdfs-project: The patch generated 16 new + 692 unchanged - 1 fixed = 708 total (was 693) |
| +1 | mvnsite | 1m 37s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 10m 8s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 43s | hadoop-hdfs-project/hadoop-hdfs-client generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | javadoc | 1m 8s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 1m 28s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 118m 52s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 36s | The patch does not generate ASF License warnings. |
| | | 223m 15s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
| | org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult.getResult() may expose internal representation by returning SyncTaskExecutionResult.result. At SyncTaskExecutionResult.java:[line 38] |
| | new org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult(byte[], Long) may expose internal repr |
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16534662#comment-16534662 ]

Ewan Higgs commented on HDFS-13310:
-----------------------------------

005:
- Removed PUT_FILE and flattened BlockSyncTask and BlockSyncTaskProto.
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532915#comment-16532915 ]

genericqa commented on HDFS-13310:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 30s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
|| || HDFS-12090 Compile Tests || || ||
| 0 | mvndep | 2m 11s | Maven dependency ordering for branch |
| +1 | mvninstall | 29m 28s | HDFS-12090 passed |
| +1 | compile | 17m 21s | HDFS-12090 passed |
| +1 | checkstyle | 1m 11s | HDFS-12090 passed |
| +1 | mvnsite | 1m 45s | HDFS-12090 passed |
| +1 | shadedclient | 13m 15s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 49s | HDFS-12090 passed |
| +1 | javadoc | 1m 14s | HDFS-12090 passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 45s | the patch passed |
| +1 | compile | 17m 14s | the patch passed |
| +1 | cc | 17m 14s | the patch passed |
| +1 | javac | 17m 14s | the patch passed |
| -0 | checkstyle | 1m 12s | hadoop-hdfs-project: The patch generated 53 new + 693 unchanged - 1 fixed = 746 total (was 694) |
| +1 | mvnsite | 1m 59s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 11m 56s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 59s | hadoop-hdfs-project/hadoop-hdfs-client generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | javadoc | 1m 15s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 1m 28s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 128m 3s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 1m 5s | The patch does not generate ASF License warnings. |
| | | 240m 16s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
| | org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult.getResult() may expose internal representation by returning SyncTaskExecutionResult.result. At SyncTaskExecutionResult.java:[line 38] |
| | new org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult(byte[], Long) may expose internal repr |
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532656#comment-16532656 ]

Ewan Higgs commented on HDFS-13310:
-----------------------------------

{quote}Can we add javadoc to all the new messages introduced in DatanodeProtocol.proto, and all newly added classes (SyncTask). Any particular reason for static imports in PBHelper.java? If not, I would prefer not declaring these as static imports.{quote}

004:
- Added Javadoc.
- Removed static imports.

[~virajith], shall I remove PUT_FILE in this ticket or in a follow-up?
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532227#comment-16532227 ]

genericqa commented on HDFS-13310:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 2s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
|| || HDFS-12090 Compile Tests || || ||
| 0 | mvndep | 2m 34s | Maven dependency ordering for branch |
| +1 | mvninstall | 31m 16s | HDFS-12090 passed |
| +1 | compile | 15m 10s | HDFS-12090 passed |
| +1 | checkstyle | 0m 54s | HDFS-12090 passed |
| +1 | mvnsite | 1m 35s | HDFS-12090 passed |
| +1 | shadedclient | 10m 46s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 2s | HDFS-12090 passed |
| +1 | javadoc | 1m 12s | HDFS-12090 passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 29s | the patch passed |
| +1 | compile | 15m 12s | the patch passed |
| +1 | cc | 15m 12s | the patch passed |
| +1 | javac | 15m 12s | the patch passed |
| -0 | checkstyle | 0m 53s | hadoop-hdfs-project: The patch generated 56 new + 692 unchanged - 1 fixed = 748 total (was 693) |
| +1 | mvnsite | 1m 25s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 38s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 39s | hadoop-hdfs-project/hadoop-hdfs-client generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | javadoc | 1m 6s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 1m 28s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 129m 55s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 231m 59s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
| | org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult.getResult() may expose internal representation by returning SyncTaskExecutionResult.result. At SyncTaskExecutionResult.java:[line 34] |
| | new org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult(byte[], Long) may expose internal representation by storing an externally mutable object into SyncTaskExecutionResult.r |
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532088#comment-16532088 ]

Virajith Jalaparti commented on HDFS-13310:
-------------------------------------------

Thanks for posting this [~ehiggs]. I made the following modifications in the patch and posted [^HDFS-13310-HDFS-12090.003.patch]:
- Formatted newly added code to fit into 80 characters.
- Reverted unnecessary changes to Datanode.java.
- Added javadoc for BulkSyncTaskExecutionFeedback in DatanodeProtocol#sendHeartbeat.
- I didn't see a reason to use {{Pair}} in the constructor of SyncTaskExecutionResult, so I removed it.

A couple of comments:
- Can we add javadoc to all the new messages introduced in {{DatanodeProtocol.proto}}, and all newly added classes (*SyncTask*)?
- Any particular reason for static imports in PBHelper.java? If not, I would prefer not declaring these as static imports.
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513324#comment-16513324 ]

genericqa commented on HDFS-13310:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
|| || HDFS-12090 Compile Tests || || ||
| 0 | mvndep | 2m 11s | Maven dependency ordering for branch |
| +1 | mvninstall | 31m 17s | HDFS-12090 passed |
| +1 | compile | 17m 38s | HDFS-12090 passed |
| +1 | checkstyle | 1m 12s | HDFS-12090 passed |
| +1 | mvnsite | 1m 46s | HDFS-12090 passed |
| +1 | shadedclient | 13m 46s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 37s | HDFS-12090 passed |
| +1 | javadoc | 1m 22s | HDFS-12090 passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 45s | the patch passed |
| +1 | compile | 16m 27s | the patch passed |
| +1 | cc | 16m 27s | the patch passed |
| +1 | javac | 16m 27s | the patch passed |
| -0 | checkstyle | 1m 10s | hadoop-hdfs-project: The patch generated 89 new + 843 unchanged - 1 fixed = 932 total (was 844) |
| +1 | mvnsite | 1m 39s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 1s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 44s | hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 1m 10s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 1m 27s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 99m 30s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 210m 50s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
| | org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult.getResult() may expose internal representation by returning SyncTaskExecutionResult.result. At SyncTaskExecutionResult.java:[line 36] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration |
| | hadoop.hdfs.server.namenode.TestNestedEncryptionZones |
| | hadoop.hdfs.server.blockmanagement.TestBlockSt |
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513176#comment-16513176 ] Ewan Higgs commented on HDFS-13310: --- The protocol should also have the target storage uuid so that the datanode knows which FsVolumeImpl (or ProvidedVolumeImpl, rather) should be updated with the new replica information.
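The suggestion above amounts to a lookup from the target storage UUID carried in the command to the volume that should record the new replica. A minimal sketch of that idea, assuming invented names throughout (VolumeResolver is not an HDFS class and this is not the FsVolumeImpl API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch: resolve the target storage UUID from a backup command
// to the volume that should be updated with the new replica information.
public class VolumeResolver {
    // Maps a storage UUID to the name of the volume backing it.
    private final Map<String, String> volumesByStorageUuid = new HashMap<>();

    public void register(String storageUuid, String volumeName) {
        volumesByStorageUuid.put(storageUuid, volumeName);
    }

    /** Returns the volume registered for the command's target storage, if any. */
    public Optional<String> resolve(String targetStorageUuid) {
        return Optional.ofNullable(volumesByStorageUuid.get(targetStorageUuid));
    }
}
```

An unknown UUID yields an empty Optional, so a command naming a storage the datanode does not host can be rejected explicitly rather than applied to the wrong volume.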
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513173#comment-16513173 ] Ewan Higgs commented on HDFS-13310: --- Feedback from [~chris.douglas]: PUT_FILE adds extra complication here. When writing a file, if a DN splits but is still writing to the remote storage then it could interfere with another DN that is tasked with writing the file. This should be solved by adding a `complete` phase to the PUT_FILE. At this point, there's very little difference between PUT_FILE and MULTIPART_PUT_PART. With this in mind, consider removing PUT_PART.
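The suggested `complete` phase can be modeled in miniature: parts are staged invisibly and only become readable once complete() commits them, and a one-block PUT_FILE is then just a one-part put, which is why the two sub-commands converge. All names here are invented for illustration and are not code from the patch:

```java
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Illustrative in-memory model of a two-phase put with an explicit complete step.
public class TwoPhasePut {
    // Parts staged per key, ordered by part number; not yet visible to readers.
    private final Map<String, TreeMap<Integer, byte[]>> staged = new HashMap<>();
    // Fully committed objects, visible to readers.
    private final Map<String, byte[]> committed = new HashMap<>();

    public void putPart(String key, int partNumber, byte[] data) {
        staged.computeIfAbsent(key, k -> new TreeMap<>()).put(partNumber, data);
    }

    // Nothing is visible until complete() concatenates the parts in order.
    public void complete(String key) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] part : staged.remove(key).values()) {
            out.write(part, 0, part.length);
        }
        committed.put(key, out.toByteArray());
    }

    /** Returns the committed bytes, or null if the upload was never completed. */
    public byte[] read(String key) {
        return committed.get(key);
    }
}
```

In this model, a DN that stages parts but never reaches complete() leaves nothing visible, so a second DN retasked with the same file cannot observe a half-written result.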
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16437367#comment-16437367 ] Daryn Sharp commented on HDFS-13310:
{quote}The current BlockCommand protocol treats all blocks independently *since DNs don't really have a concept of a file*; only blocks. This is the reason we want a new command.
{quote}
The bolded (my emphasis) statement accurately captures the crux of the issue: is it a deficiency of the DN to only understand the concept of a block? The DN currently has a simple and elegant design. It stores blocks. It moves blocks. It deletes blocks. That's the design abstraction I implied will become leaky. That simplicity, which I believe is an excellent design strength, is at odds with the design of this s3 upload feature. The DN must know the file id, the offset/length of the replica within the file, and block locations, for reasons that are not explained. Here are my general concerns:
* Should the DN effectively become "file aware"? Perhaps it might be ok if only for backup and only in the provided storage type.
* Will subsequent patches extend this file-awareness to more of the DN? If yes, I have serious reservations.
* How will this functionality be managed? Do you intend to add the control service directly into the NN?
* How will the feature interact with replication operations and the balancer?
Before debating the fine points, please help me understand the overall feature: is the intent that an admin must explicitly issue a "backup" operation? If yes, what are the pros/cons over using a (modified) distcp? 
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16421497#comment-16421497 ] Ewan Higgs commented on HDFS-13310: --- Hi Daryn,
{quote}I don't understand why the DN needs all kinds of new commands (here and other jiras) that are equivalent to "copy or move this block". If you want to do multi-part upload to s3 magic, that should be hidden behind the "provided" plugin when a block is copied/moved to it.
{quote}
I know you've been following this project, but to clear up any confusion let me restate a goal/constraint we are working with here: we would like the synchronization endpoint to work at the file level, not the block level. So any command to the DN to copy a block needs to be part of a collection of blocks that are all being copied together. The current BlockCommand protocol treats all blocks independently since DNs don't really have a concept of a file; only blocks. This is the reason we want a new command.
{quote}Is there any way to generalize this feature?
{quote}
Optimally we would love to reuse existing commands where it makes sense. Therefore, the first question is: can we reuse DNA_TRANSFER? 
DNA_TRANSFER looks like the following:
{code}
/**
 * Command to instruct datanodes to perform certain action
 * on the given set of blocks.
 */
message BlockCommandProto {
  enum Action {
    TRANSFER = 1;   // Transfer blocks to another datanode
    INVALIDATE = 2; // Invalidate blocks
    SHUTDOWN = 3;   // Shutdown the datanode
  }

  required Action action = 1;
  required string blockPoolId = 2;
  repeated BlockProto blocks = 3;
  repeated DatanodeInfosProto targets = 4;
  repeated StorageUuidsProto targetStorageUuids = 5;
  repeated StorageTypesProto targetStorageTypes = 6;
}
{code}
As we are not writing to another Datanode per se, DatanodeInfosProto is not an adequate field for defining a target, so this could only be used with some extension. We /could/ write to a Datanode with the Provided storage as the targetStorageUuid, but remember: we can't just move the block to the external storage endpoint without the context of the whole file, so this won't work (afaics). So we need a new command (a new DatanodeCommandProto.Type). You call this a leaky abstraction, but there is no abstraction going on: it's a command to do exactly what the command name says. One could take exception that we are tying it directly to the ability to synchronize files instead of offering something like DNA_GROUP_TRANSFER_PART to denote that we are transferring the block as part of a group. Such a command could be reused by anyone who needs to do such a thing (maybe HDFS-10419 (HDSL) could use it?). Another approach could be to reuse BlockCommandProto/DNA_TRANSFER and introduce a new sum type for the target, consisting of DataNodeAndStorage (the existing case), PutFile/PutFilePart (for our use case), and perhaps a DataNodeAndContainer in case HDFS-10419 calls for some new container-repacking command. Further, it would change the semantics of DNA_TRANSFER when writing a file part: 1. we don't want to delete successfully migrated blocks until the entire multipart write has been completed, and 2. we need to support multiple replicas of a block on the same node (to implement 1).
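The "sum type for the target" alternative floated above could look roughly like the following in Java; the variant and field names (TransferTarget, DataNodeAndStorage, PutFilePart) are hypothetical, not types from HDFS:

```java
// Hedged sketch of a target sum type: the existing datanode-and-storage
// target and a hypothetical file-level part target as variants of one interface.
interface TransferTarget { }

class DataNodeAndStorage implements TransferTarget {
    final String datanodeUuid;
    final String storageUuid;
    DataNodeAndStorage(String dn, String storage) {
        datanodeUuid = dn;
        storageUuid = storage;
    }
}

class PutFilePart implements TransferTarget {
    final String uploadId;
    final int partNumber;
    PutFilePart(String uploadId, int partNumber) {
        this.uploadId = uploadId;
        this.partNumber = partNumber;
    }
}

public class TargetDispatch {
    // Dispatch on the variant, as a DN command handler might.
    static String describe(TransferTarget t) {
        if (t instanceof DataNodeAndStorage) {
            DataNodeAndStorage d = (DataNodeAndStorage) t;
            return "transfer to " + d.datanodeUuid + "/" + d.storageUuid;
        } else if (t instanceof PutFilePart) {
            PutFilePart p = (PutFilePart) t;
            return "put part " + p.partNumber + " of upload " + p.uploadId;
        }
        return "unknown";
    }
}
```

The point of the sketch is only that reusing DNA_TRANSFER this way forces every handler of the command to branch on the target variant, which is the semantic change described above.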
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419882#comment-16419882 ] Daryn Sharp commented on HDFS-13310: Is there any way to generalize this feature? Scanning the patch, it looks like a leaky abstraction. I don't understand why the DN needs all kinds of new commands (here and other jiras) that are equivalent to "copy or move this block". If you want to do multi-part upload to s3 magic, that should be hidden behind the "provided" plugin when a block is copied/moved to it. Not leaked all throughout hdfs.