[jira] [Commented] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use
[ https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16396586#comment-16396586 ] maobaolong commented on HDFS-13241: --- [~elgoiri] Thank you for reviewing and committing this. > RBF: TestRouterSafemode failed if the port is in use > - > > Key: HDFS-13241 > URL: https://issues.apache.org/jira/browse/HDFS-13241 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs, test > Affects Versions: 3.2.0 > Reporter: maobaolong > Assignee: maobaolong > Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13241.001.patch, HDFS-13241.002.patch > > > TestRouterSafemode failed if the port is in use. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
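The failure mode here — a unit test binding a fixed port (8888) that may already be in use on the build machine — is usually avoided by asking the OS for an ephemeral port instead. A minimal sketch of that technique (an illustration, not the HDFS-13241 patch itself):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
    // Bind to port 0 so the OS assigns a free ephemeral port, then
    // close the probe socket so the test server can bind it immediately.
    public static int getFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            socket.setReuseAddress(true);
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("free port: " + getFreePort());
    }
}
```

There is still a small race between releasing the probe socket and the server rebinding the port, so test frameworks often pass `0` straight to the server and read back the bound port after startup when the server supports it.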
[jira] [Commented] (HDFS-13251) Avoid using hard coded datanode data dirs in unit tests
[ https://issues.apache.org/jira/browse/HDFS-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396582#comment-16396582 ] genericqa commented on HDFS-13251: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 11 new + 100 unchanged - 0 fixed = 111 total (was 100) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}115m 41s{color} | {color:green} hadoop-hdfs in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}171m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13251 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914203/HDFS-13251.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ddd62af6af49 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7fab787 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23441/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23441/testReport/ | | Max. process+thread count | 3052 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23441/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT
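The theme of HDFS-13251 — replacing hard-coded datanode data directories with per-test temporary directories — can be sketched as follows. This is a generic JDK-only illustration, not the committed patch (the real tests configure dirs via MiniDFSCluster):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempDataDirs {
    // Create an isolated, uniquely named data dir per test run instead of
    // a hard-coded path that can collide across concurrent test executions.
    public static Path createDataDir(String prefix) throws IOException {
        Path dir = Files.createTempDirectory(prefix);
        dir.toFile().deleteOnExit();
        return dir;
    }

    public static void main(String[] args) throws IOException {
        Path dn1 = createDataDir("dn1-");
        Path dn2 = createDataDir("dn2-");
        // Two calls never collide: each gets a unique directory.
        System.out.println(!dn1.equals(dn2)); // prints true
    }
}
```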
[jira] [Commented] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes
[ https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396573#comment-16396573 ] genericqa commented on HDFS-11600: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 26 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 394 unchanged - 3 fixed = 394 total (was 397) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 12 new + 0 unchanged - 0 fixed = 12 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 13s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSClientRetries | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-11600 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914209/HDFS-11600.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8322e630daa7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 45d1b0f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23442/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23442/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[jira] [Commented] (HDFS-5926) Documentation should clarify dfs.datanode.du.reserved impact from reserved disk capacity
[ https://issues.apache.org/jira/browse/HDFS-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396566#comment-16396566 ] genericqa commented on HDFS-5926: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 33m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}151m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-5926 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891730/HDFS-5926-1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux caebf63400b1 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7fab787 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23440/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23440/testReport/ | | Max. process+thread count | 3895 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23440/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Documentation should clarify dfs.datanode.du.reserved impact from reserved > disk capacity >
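For context, `dfs.datanode.du.reserved` is configured per datanode in `hdfs-site.xml`, and the clarification this issue asks for is that the value is reserved on each volume individually, not against the cluster total. An illustrative setting reserving 10 GB (in bytes) per volume:

```xml
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- Reserved space in bytes per volume; left free for non-DFS use. -->
  <value>10737418240</value>
</property>
```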
[jira] [Commented] (HDFS-12505) Extend TestFileStatusWithECPolicy with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396551#comment-16396551 ] Takanobu Asanuma commented on HDFS-12505: - Thanks for the review, [~xiaochen]! Uploaded a new patch which fixes the checkstyle issue. > Extend TestFileStatusWithECPolicy with a random EC policy > - > > Key: HDFS-12505 > URL: https://issues.apache.org/jira/browse/HDFS-12505 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12505.1.patch, HDFS-12505.2.patch > >
[jira] [Updated] (HDFS-12505) Extend TestFileStatusWithECPolicy with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-12505: Attachment: HDFS-12505.2.patch > Extend TestFileStatusWithECPolicy with a random EC policy > - > > Key: HDFS-12505 > URL: https://issues.apache.org/jira/browse/HDFS-12505 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12505.1.patch, HDFS-12505.2.patch > >
[jira] [Commented] (HDFS-13249) Document webhdfs support for getting snapshottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396549#comment-16396549 ] genericqa commented on HDFS-13249: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 29m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13249 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914213/HDFS-13249.002.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 7764d7473326 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0355ec2 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 342 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23443/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Document webhdfs support for getting snapshottable directory list > - > > Key: HDFS-13249 > URL: https://issues.apache.org/jira/browse/HDFS-13249 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDFS-13249.001.patch, HDFS-13249.002.patch > > > This ticket is opened to document the WebHDFS: Add support for getting > snasphottable directory list from HDFS-13141 in WebHDFS.md.
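The operation being documented here (added in HDFS-13141) is exposed as a WebHDFS REST call. The request shape is roughly the following sketch; the operation name and JSON field name follow WebHDFS conventions and should be verified against the committed WebHDFS.md:

```
GET http://<namenode>:<http-port>/webhdfs/v1/?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=<user>

HTTP/1.1 200 OK
Content-Type: application/json

{"SnapshottableDirectoryList": [ ...one entry per snapshottable directory... ]}
```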
[jira] [Commented] (HDFS-12677) Extend TestReconstructStripedFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396548#comment-16396548 ] Takanobu Asanuma commented on HDFS-12677: - Thanks for revising, reviewing and committing it, everyone! > Extend TestReconstructStripedFile with a random EC policy > - > > Key: HDFS-12677 > URL: https://issues.apache.org/jira/browse/HDFS-12677 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-12677.002.patch, HDFS-12677.1.patch > >
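Both this issue and HDFS-12505 extend existing tests to run under a randomly chosen erasure-coding policy. The selection pattern is roughly the following sketch; the hard-coded policy names are stand-ins, while the real tests draw from SystemErasureCodingPolicies in hadoop-hdfs-client:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class RandomEcPolicy {
    // Illustrative system EC policy names (assumed list, for the sketch only).
    static final List<String> POLICIES = Arrays.asList(
        "RS-6-3-1024k", "RS-3-2-1024k", "RS-10-4-1024k", "XOR-2-1-1024k");

    public static String pickRandom(Random rng) {
        return POLICIES.get(rng.nextInt(POLICIES.size()));
    }

    public static void main(String[] args) {
        // Log the chosen policy so a failing randomized run can be reproduced.
        System.out.println("Running with EC policy: " + pickRandom(new Random()));
    }
}
```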
[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode
[ https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396535#comment-16396535 ] genericqa commented on HDFS-336: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 45s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 45s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}216m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSClientRetries | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-336 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914191/HDFS-336.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml | | uname | Linux 63a1b7b0c9ec 3.13.0-137-generic
[jira] [Commented] (HDFS-13141) WebHDFS: Add support for getting snasphottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16396532#comment-16396532 ] Xiaoyu Yao commented on HDFS-13141: --- [~chris.douglas], thanks for the comments. Sorry, I just saw it after the commit.
{quote}Instead of extracting fields from one {{HdfsFileStatus}} to construct another, could {{SnapshottableDirectoryStatus}} just take the original struct, to avoid the {{convert}} method?{quote}
Good point! I will work with [~ljain] on a new SnapshottableDirectoryStatus constructor that takes an HdfsFileStatus directly, avoiding the unnecessary construct-and-convert:
{code}
public SnapshottableDirectoryStatus(HdfsFileStatus dirStatus,
    int snapshotNumber, int snapshotQuota, byte[] parentFullPath) {
  this.dirStatus = dirStatus;
  this.snapshotNumber = snapshotNumber;
  this.snapshotQuota = snapshotQuota;
  this.parentFullPath = parentFullPath;
}
{code}
> WebHDFS: Add support for getting snasphottable directory list > - > > Key: HDFS-13141 > URL: https://issues.apache.org/jira/browse/HDFS-13141 > Project: Hadoop HDFS > Issue Type: Task > Components: webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 3.1.0, 3.2.0 > > Attachments: HDFS-13141.001.patch, HDFS-13141.002.patch, > HDFS-13141.003.patch > > > This Jira aims to implement get snapshottable directory list operation for > webHdfs filesystem.
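The design point under discussion — wrap the original status object instead of copying it field by field through a converter — can be illustrated with simplified stand-in classes (hypothetical names; the real types are HdfsFileStatus and SnapshottableDirectoryStatus in hadoop-hdfs-client):

```java
// Stand-in for HdfsFileStatus (hypothetical, for illustration only).
class FileStatusLike {
    final String path;
    final long mtime;
    FileStatusLike(String path, long mtime) { this.path = path; this.mtime = mtime; }
}

public class DirStatusWrap {
    // Stand-in for SnapshottableDirectoryStatus: keeps the original
    // struct whole rather than extracting and re-copying its fields.
    static class SnapshottableDirStatusLike {
        final FileStatusLike dirStatus;
        final int snapshotNumber;
        final int snapshotQuota;
        SnapshottableDirStatusLike(FileStatusLike dirStatus, int number, int quota) {
            this.dirStatus = dirStatus; // no field-by-field convert() needed
            this.snapshotNumber = number;
            this.snapshotQuota = quota;
        }
    }

    public static void main(String[] args) {
        FileStatusLike fs = new FileStatusLike("/snapdir", 1520000000L);
        SnapshottableDirStatusLike s = new SnapshottableDirStatusLike(fs, 2, 10);
        // Same object is carried through, so fields added to the wrapped
        // type later propagate without a converter to keep in sync.
        System.out.println(s.dirStatus == fs); // prints true
    }
}
```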
[jira] [Commented] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396527#comment-16396527 ] genericqa commented on HDFS-12886: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 21s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}181m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12886 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12900580/HDFS-12886.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 249b4c51d5af 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 91c82c9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23439/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23439/testReport/ | | Max. process+thread count | 3039 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23439/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was
[jira] [Commented] (HDFS-13249) Document webhdfs support for getting snapshottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396521#comment-16396521 ] Lokesh Jain commented on HDFS-13249: [~xyao] Thanks for reviewing the patch! v2 patch addresses your comments. > Document webhdfs support for getting snapshottable directory list > - > > Key: HDFS-13249 > URL: https://issues.apache.org/jira/browse/HDFS-13249 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDFS-13249.001.patch, HDFS-13249.002.patch > > > This ticket is opened to document the WebHDFS: Add support for getting > snasphottable directory list from HDFS-13141 in WebHDFS.md. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10803) TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails intermittently due to no free space available
[ https://issues.apache.org/jira/browse/HDFS-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396522#comment-16396522 ] genericqa commented on HDFS-10803: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}132m 36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}194m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestBlockStoragePolicy | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-10803 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825596/HDFS-10803.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3dfa5d54a3e3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 91c82c9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23435/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23435/testReport/ | | Max. process+thread count | 3288 (vs. ulimit of 1) | | modules | C:
[jira] [Updated] (HDFS-13249) Document webhdfs support for getting snapshottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDFS-13249: --- Attachment: HDFS-13249.002.patch > Document webhdfs support for getting snapshottable directory list > - > > Key: HDFS-13249 > URL: https://issues.apache.org/jira/browse/HDFS-13249 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDFS-13249.001.patch, HDFS-13249.002.patch > > > This ticket is opened to document the WebHDFS: Add support for getting > snasphottable directory list from HDFS-13141 in WebHDFS.md. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13141) WebHDFS: Add support for getting snasphottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396515#comment-16396515 ] Hudson commented on HDFS-13141: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13822 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13822/]) HDFS-13141. WebHDFS: Add support for getting snasphottable directory (xyao: rev 0355ec20ebeb988679c7192c7024bef7a2a3bced) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java > WebHDFS: Add support for getting snasphottable directory list > - > > Key: HDFS-13141 > URL: https://issues.apache.org/jira/browse/HDFS-13141 > Project: Hadoop HDFS > Issue Type: Task > Components: webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 3.1.0, 3.2.0 > > Attachments: HDFS-13141.001.patch, HDFS-13141.002.patch, > HDFS-13141.003.patch > > > This Jira aims to implement get snapshottable directory list operation for > webHdfs filesystem. 
[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode
[ https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396512#comment-16396512 ] genericqa commented on HDFS-336: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 26s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 277 unchanged - 0 fixed = 279 total (was 277) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}144m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}202m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | |
[jira] [Commented] (HDFS-13249) Document webhdfs support for getting snapshottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396508#comment-16396508 ] Xiaoyu Yao commented on HDFS-13249: --- Thanks [~ljain] for working on this and posting the patch. It looks good to me overall. Just one question: Line 1322-1342 seems to be a duplicate of Line 1302-1321; can we keep just one of them to avoid confusion? > Document webhdfs support for getting snapshottable directory list > - > > Key: HDFS-13249 > URL: https://issues.apache.org/jira/browse/HDFS-13249 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDFS-13249.001.patch > > > This ticket is opened to document the WebHDFS: Add support for getting > snasphottable directory list from HDFS-13141 in WebHDFS.md.
[jira] [Commented] (HDFS-13141) WebHDFS: Add support for getting snasphottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396507#comment-16396507 ] Chris Douglas commented on HDFS-13141: -- bq. After check HdfsFileStatus, it seems we don't have a getter for flags for HdfsFileStatus. Is this something we miss from HDFS-12681? It was deliberate. The flags support existing methods on {{HdfsFileStatus}} and {{FileStatus}}. Conversion between the enums directly through the type implies a tighter binding than I hoped we'd maintain going forward. While {{FileStatus}} should expose only metadata common to most implementations, {{HdfsFileStatus}} can specialize on HDFS. Instead of extracting fields from one {{HdfsFileStatus}} to construct another, could {{SnapshottableDirectoryStatus}} just take the original struct, to avoid the {{convert}} method? > WebHDFS: Add support for getting snasphottable directory list > - > > Key: HDFS-13141 > URL: https://issues.apache.org/jira/browse/HDFS-13141 > Project: Hadoop HDFS > Issue Type: Task > Components: webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 3.1.0, 3.2.0 > > Attachments: HDFS-13141.001.patch, HDFS-13141.002.patch, > HDFS-13141.003.patch > > > This Jira aims to implement get snapshottable directory list operation for > webHdfs filesystem. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
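The suggestion above — have {{SnapshottableDirectoryStatus}} take the original {{HdfsFileStatus}} rather than extracting its fields through a {{convert}} method — can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration with stand-in classes, not the real HDFS types or their actual fields:

```java
// Simplified sketch of the review suggestion: wrap the original status
// object instead of copying fields out of it via a convert() helper.
// Both classes here are stand-ins, not the real org.apache.hadoop.hdfs types.
public class SnapshottableDirStatusSketch {

    // Stand-in for HdfsFileStatus with a couple of illustrative fields.
    static class HdfsFileStatus {
        private final String path;
        private final boolean isDir;
        HdfsFileStatus(String path, boolean isDir) {
            this.path = path;
            this.isDir = isDir;
        }
        String getPath() { return path; }
        boolean isDirectory() { return isDir; }
    }

    // Holds the original struct as-is; no field-by-field conversion,
    // so the binding between the two types stays loose.
    static class SnapshottableDirectoryStatus {
        private final HdfsFileStatus dirStatus;
        private final int snapshotQuota;
        SnapshottableDirectoryStatus(HdfsFileStatus dirStatus, int snapshotQuota) {
            this.dirStatus = dirStatus;
            this.snapshotQuota = snapshotQuota;
        }
        HdfsFileStatus getDirStatus() { return dirStatus; }
        int getSnapshotQuota() { return snapshotQuota; }
    }

    public static void main(String[] args) {
        HdfsFileStatus st = new HdfsFileStatus("/user/alice/snapdir", true);
        SnapshottableDirectoryStatus sds = new SnapshottableDirectoryStatus(st, 65536);
        // The wrapped status is the very same object; nothing was copied.
        assert sds.getDirStatus() == st;
        System.out.println("wrapped path = " + sds.getDirStatus().getPath());
    }
}
```

The design point is that callers still reach all the HDFS-specific metadata through the wrapped status, while avoiding a parallel set of duplicated fields that would have to be kept in sync.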
[jira] [Updated] (HDFS-13190) Document WebHDFS support for snapshot diff
[ https://issues.apache.org/jira/browse/HDFS-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-13190: -- Fix Version/s: 3.2.0 > Document WebHDFS support for snapshot diff > -- > > Key: HDFS-13190 > URL: https://issues.apache.org/jira/browse/HDFS-13190 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Reporter: Xiaoyu Yao >Assignee: Lokesh Jain >Priority: Major > Fix For: 3.1.0, 3.0.2, 3.2.0 > > Attachments: HDFS-13190.001.patch, HDFS-13190.002.patch > > > This ticket is opened to document the WebHDFS: Add support for snasphot diff > from HDFS-13052 in WebHDFS.md. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13141) WebHDFS: Add support for getting snasphottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-13141: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 3.1.0 Status: Resolved (was: Patch Available) Thanks [~ljain] for the contribution. I've committed the fix to trunk and branch-3.1. > WebHDFS: Add support for getting snasphottable directory list > - > > Key: HDFS-13141 > URL: https://issues.apache.org/jira/browse/HDFS-13141 > Project: Hadoop HDFS > Issue Type: Task > Components: webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 3.1.0, 3.2.0 > > Attachments: HDFS-13141.001.patch, HDFS-13141.002.patch, > HDFS-13141.003.patch > > > This Jira aims to implement the get snapshottable directory list operation > for the webHdfs filesystem.
[jira] [Created] (HDFS-13269) After too many open file exception occurred, the standby NN never do checkpoint
maobaolong created HDFS-13269:
---------------------------------

             Summary: After too many open file exception occurred, the standby NN never do checkpoint
                 Key: HDFS-13269
                 URL: https://issues.apache.org/jira/browse/HDFS-13269
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs
    Affects Versions: 3.2.0
            Reporter: maobaolong

Running saveNamespace from dfsadmin gives the following output:
{code:java}
saveNamespace: No image directories available!
{code}
The NameNode log shows:
{code:java}
[2018-01-13T10:32:19.903+08:00] [INFO] [Standby State Checkpointer] : Triggering checkpoint because there have been 10159265 txns since the last checkpoint, which exceeds the configured threshold 1000
[2018-01-13T10:32:19.903+08:00] [INFO] [Standby State Checkpointer] : Save namespace ...
...
[2018-01-13T10:37:10.539+08:00] [WARN] [1985938863@qtp-61073295-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@HOST_A:50070] : EXCEPTION
java.io.IOException: Too many open files
	at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
	at org.mortbay.jetty.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:75)
	at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:686)
	at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
	at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
	at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
[2018-01-13T10:37:15.421+08:00] [ERROR] [FSImageSaver for /data0/nn of type IMAGE_AND_EDITS] : Unable to save image for /data0/nn
java.io.FileNotFoundException: /data0/nn/current/fsimage_40247283317.md5.tmp (Too many open files)
	at java.io.FileOutputStream.open0(Native Method)
	at java.io.FileOutputStream.open(FileOutputStream.java:270)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
	at org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58)
	at org.apache.hadoop.hdfs.util.MD5FileUtils.saveMD5File(MD5FileUtils.java:157)
	at org.apache.hadoop.hdfs.util.MD5FileUtils.saveMD5File(MD5FileUtils.java:149)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:990)
	at org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:1039)
	at java.lang.Thread.run(Thread.java:745)
[2018-01-13T10:37:15.421+08:00] [ERROR] [Standby State Checkpointer] : Error reported on storage directory Storage Directory /data0/nn
[2018-01-13T10:37:15.421+08:00] [WARN] [Standby State Checkpointer] : About to remove corresponding storage: /data0/nn
[2018-01-13T10:37:15.429+08:00] [ERROR] [Standby State Checkpointer] : Exception in doCheckpoint
java.io.IOException: Failed to save in any storage directories while saving namespace.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1176)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.saveNamespace(FSImage.java:1107)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:185)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.access$1400(StandbyCheckpointer.java:62)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.doWork(StandbyCheckpointer.java:353)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.access$700(StandbyCheckpointer.java:260)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread$1.run(StandbyCheckpointer.java:280)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.run(StandbyCheckpointer.java:276)
...
[2018-01-13T15:52:33.783+08:00] [INFO] [Standby State Checkpointer] : Save namespace ...
[2018-01-13T15:52:33.783+08:00] [ERROR] [Standby State Checkpointer] : Exception in doCheckpoint
java.io.IOException: No image directories available!
	at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1152)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.saveNamespace(FSImage.java:1107)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:185)
	at
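The log above suggests a pattern where a storage directory that fails once is dropped from the active set and never restored, so once every directory has failed, all later checkpoints fail immediately with "No image directories available!". A minimal, hypothetical sketch of that failure mode (not the real FSImage/StandbyCheckpointer code) looks like this:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the failure mode reported above: on an
// IOException during a save, the failing image directory is removed
// from the active list, and nothing ever adds it back. After the
// "Too many open files" condition clears, the standby still cannot
// checkpoint because its directory list stays empty until restart.
public class CheckpointDirSketch {
    private final List<String> imageDirs = new ArrayList<>();

    CheckpointDirSketch(List<String> dirs) { imageDirs.addAll(dirs); }

    // Simulates one checkpoint attempt; fdExhausted models the
    // "Too many open files" condition hit by the reporter.
    String doCheckpoint(boolean fdExhausted) {
        if (imageDirs.isEmpty()) {
            return "No image directories available!";
        }
        if (fdExhausted) {
            // Mirrors "About to remove corresponding storage: /data0/nn".
            imageDirs.remove(0);
            return imageDirs.isEmpty()
                ? "Failed to save in any storage directories while saving namespace."
                : "saved to remaining dirs";
        }
        return "checkpoint ok";
    }

    public static void main(String[] args) {
        CheckpointDirSketch nn = new CheckpointDirSketch(List.of("/data0/nn"));
        // First attempt fails under fd exhaustion and drops the only dir.
        System.out.println(nn.doCheckpoint(true));
        // The fd pressure is gone, but the dir list is still empty,
        // so every later checkpoint fails the same way.
        System.out.println(nn.doCheckpoint(false));
    }
}
```

Under this reading, the fix direction would be to restore (or re-probe) previously failed storage directories once the transient resource exhaustion clears, instead of removing them permanently.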
[jira] [Commented] (HDFS-12773) RBF: Improve State Store FS implementation
[ https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396498#comment-16396498 ] genericqa commented on HDFS-12773: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 24s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 49s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}233m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12773 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914183/HDFS-12773.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux a344eb9a6017 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[jira] [Commented] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes
[ https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396493#comment-16396493 ] SammiChen commented on HDFS-11600: -- Thanks [~xiaochen] for the comments. bq. Do you know why this range was chosen? I don't know the initial reason. Going through the code, my guess is that at the time TestDFSStripedOutputStreamWithFailure was introduced, the only supported EC policy was RS-6-3-64K. The intent was to test files with lengths spanning [0, 1, 2] block groups, with each block group's cell count varying over [0, 6 (data block number) * 4 (cells per block) - 1], plus a [-1, 0, 1] delta on each length. So there are approximately 3 * ((6 * 4) - 1) * 3 = 207 length variants in total. Now that we support more EC policies, especially RS-10-4, the previous ~207 variants no longer suffice. The variant count should actually vary with the EC policy used in TestDFSStripedOutputStreamWithFailureWithRandomECPolicy. bq. This is from existing code, but now may be a good chance to change - could you do tearDown with a @After annotation? This way, each test doesn't have to try-finally. Agreed. However, there is a loop in testBlockTokenExpired that requires setting up and tearing down the cluster on every iteration, so it seems better to keep it as is. > Refactor TestDFSStripedOutputStreamWithFailure test classes > --- > > Key: HDFS-11600 > URL: https://issues.apache.org/jira/browse/HDFS-11600 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Priority: Minor > Attachments: HDFS-11600-1.patch, HDFS-11600.002.patch, > HDFS-11600.003.patch, HDFS-11600.004.patch, HDFS-11600.005.patch, > HDFS-11600.006.patch > > > TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The > tests are parameterized based on the name of these subclasses. > Seems like we could parameterize these tests with JUnit and then not need all > these separate test classes. 
> Another note, the tests will randomly return instead of running the test. > Using {{Assume}} instead would make it more clear in the test output that > these tests were skipped. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
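The variant arithmetic in SammiChen's comment can be checked with a short snippet. This mirrors the formula stated in the comment (3 block-group counts × (data blocks × cells per block − 1) cell positions × 3 length deltas); the helper name and the RS-10-4 figure are illustrative, not from the original thread:

```java
// Rough arithmetic check of the length-variant estimate quoted above,
// assuming the RS-6-3-64K layout with 4 cells per block.
public class LengthVariantCount {
    static int variants(int dataBlocks, int cellsPerBlock) {
        int blockGroupCounts = 3;  // files spanning 0, 1 or 2 block groups
        int deltas = 3;            // length - 1, length, length + 1
        return blockGroupCounts * (dataBlocks * cellsPerBlock - 1) * deltas;
    }

    public static void main(String[] args) {
        // RS-6-3 with 4 cells per block: 3 * 23 * 3 = 207, matching the
        // "approximately 207 length variants" estimate in the comment.
        System.out.println(variants(6, 4));   // 207
        // RS-10-4 under the same scheme would need 3 * 39 * 3 = 351,
        // which is why a fixed variant count no longer fits all policies.
        System.out.println(variants(10, 4));  // 351
    }
}
```

This supports the point that the variant set should be derived from the EC policy under test rather than hard-coded.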
[jira] [Updated] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes
[ https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-11600: - Attachment: HDFS-11600.006.patch > Refactor TestDFSStripedOutputStreamWithFailure test classes > --- > > Key: HDFS-11600 > URL: https://issues.apache.org/jira/browse/HDFS-11600 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Priority: Minor > Attachments: HDFS-11600-1.patch, HDFS-11600.002.patch, > HDFS-11600.003.patch, HDFS-11600.004.patch, HDFS-11600.005.patch, > HDFS-11600.006.patch > > > TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The > tests are parameterized based on the name of these subclasses. > Seems like we could parameterize these tests with JUnit and then not need all > these separate test classes. > Another note, the tests will randomly return instead of running the test. > Using {{Assume}} instead would make it more clear in the test output that > these tests were skipped. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396492#comment-16396492 ] genericqa commented on HDFS-12514: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 37 unchanged - 1 fixed = 37 total (was 38) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 15s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}137m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Dead store to ioem in org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(ByteBuffer, int, OutputStream, boolean, DataTransferThrottler) At BlockSender.java:org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(ByteBuffer, int, OutputStream, boolean, DataTransferThrottler) At BlockSender.java:[line 661] | | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12514 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914195/HDFS-12514.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c999e80fbfd2
[jira] [Updated] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12455: -- Fix Version/s: 3.2.0 > WebHDFS - Adding "snapshot enabled" status to ListStatus query result. > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots, webhdfs >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 3.1.0, 3.0.2, 3.2.0 > > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13226) RBF: Throw the exception if mount table entry validated failed
[ https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396482#comment-16396482 ] Hudson commented on HDFS-13226: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13820 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13820/]) HDFS-13226. RBF: Throw the exception if mount table entry validated (yqlin: rev 19292bc264cada5117ec76063d36cc88159afdf4) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/BaseRecord.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestMountTable.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/RouterState.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipState.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MountTable.java > RBF: Throw the exception if mount table entry validated failed > -- > > Key: HDFS-13226 > URL: https://issues.apache.org/jira/browse/HDFS-13226 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.0.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Labels: RBF > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, > HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, > HDFS-13226.006.patch, HDFS-13226.007.patch, HDFS-13226.008.patch, > HDFS-13226.009.patch > > > one of the mount entry source path rule is that the source path must start > with '\', somebody didn't follow the rule and execute the following command: > {code:bash} > $ hdfs dfsrouteradmin -add addnode/ ns1 
/addnode/ > {code} > But the console shows that this entry was added successfully. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
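The fix tracked in this issue amounts to failing fast on an invalid source path instead of silently accepting it. A minimal hypothetical sketch of that check (the class and method names here are made up; the real validation lives in the MountTable record):

```java
// Hypothetical sketch of validating a mount table source path; the actual
// check in HDFS-13226 is implemented in the MountTable record's validation.
public class MountPathCheck {

    /** Throws instead of silently accepting an invalid source path. */
    public static void validateSourcePath(String src) {
        if (src == null || !src.startsWith("/")) {
            throw new IllegalArgumentException(
                "Invalid entry, the source path must start with '/': " + src);
        }
    }

    public static void main(String[] args) {
        validateSourcePath("/addnode/");     // valid: starts with '/'
        try {
            validateSourcePath("addnode/");  // rejected, as in the report above
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```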
[jira] [Updated] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12455: -- Fix Version/s: 3.0.2 > WebHDFS - Adding "snapshot enabled" status to ListStatus query result. > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots, webhdfs >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 3.1.0, 3.0.2, 3.2.0 > > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13226) RBF: Throw the exception if mount table entry validated failed
[ https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13226: - Affects Version/s: (was: 3.2.0) 3.0.0 > RBF: Throw the exception if mount table entry validated failed > -- > > Key: HDFS-13226 > URL: https://issues.apache.org/jira/browse/HDFS-13226 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.0.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Labels: RBF > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, > HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, > HDFS-13226.006.patch, HDFS-13226.007.patch, HDFS-13226.008.patch, > HDFS-13226.009.patch > > > one of the mount entry source path rule is that the source path must start > with '\', somebody didn't follow the rule and execute the following command: > {code:bash} > $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/ > {code} > But, the console show we are successful add this entry. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13226) RBF: Throw the exception if mount table entry validated failed
[ https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13226: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.2 2.9.1 2.10.0 3.1.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-3.1, branch-3.0, branch-2 and branch-2.9. Thanks [~maobaolong] for the contribution and thanks [~elgoiri] for the review! > RBF: Throw the exception if mount table entry validated failed > -- > > Key: HDFS-13226 > URL: https://issues.apache.org/jira/browse/HDFS-13226 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.0.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Labels: RBF > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, > HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, > HDFS-13226.006.patch, HDFS-13226.007.patch, HDFS-13226.008.patch, > HDFS-13226.009.patch > > > one of the mount entry source path rule is that the source path must start > with '\', somebody didn't follow the rule and execute the following command: > {code:bash} > $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/ > {code} > But, the console show we are successful add this entry. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13179) TestLazyPersistReplicaRecovery#testDnRestartWithSavedReplicas fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396469#comment-16396469 ] Gabor Bota commented on HDFS-13179: --- I've uploaded the successful and the failed test logs with some additional logging included. > TestLazyPersistReplicaRecovery#testDnRestartWithSavedReplicas fails > intermittently > -- > > Key: HDFS-13179 > URL: https://issues.apache.org/jira/browse/HDFS-13179 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 3.0.0 >Reporter: Gabor Bota >Priority: Critical > Attachments: test runs.zip > > > The error is caused by a TimeoutException: the test waits to ensure that the > file is replicated to DISK storage, but the replication to DISK can't finish > within the 30s timeout in ensureFileReplicasOnStorageType(). The file is > still on RAM_DISK, so there is no data loss. > Adding the following to TestLazyPersistReplicaRecovery.java:56 essentially > fixes the flakiness. > {code:java} > try { > ensureFileReplicasOnStorageType(path1, DEFAULT); > } catch (TimeoutException t) { > LOG.warn("We got \"" + t.getMessage() + "\" so trying to find data on > RAM_DISK"); > ensureFileReplicasOnStorageType(path1, RAM_DISK); > } > } > {code} > Some thoughts: > * Successful and failed tests run similarly up to the point when the datanode > restarts. The restart line is the following in the log: LazyPersistTestCase - > Restarting the DataNode > * There is a line which only occurs in the failed test: *addStoredBlock: > Redundant addStoredBlock request received for blk_1073741825_1001 on node > 127.0.0.1:49455 size 5242880* > * This redundant request at BlockManager#addStoredBlock could be the main > reason for the test failure. Something wrong with the gen stamp? Corrupt > replicas? 
> = > Current fail ratio based on my test of TestLazyPersistReplicaRecovery: > 1000 runs, 34 failures (3.4% fail) > Failure rate analysis: > TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas: 3.4% > 33 failures caused by: {noformat} > java.util.concurrent.TimeoutException: Timed out waiting for condition. > Thread diagnostics: Timestamp: 2018-01-05 11:50:34,964 "IPC Server handler 6 > on 39589" > {noformat} > 1 failure caused by: {noformat} > java.net.BindException: Problem binding to [localhost:56729] > java.net.BindException: Address already in use; For more details see: > http://wiki.apache.org/hadoop/BindException at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:49) > Caused by: java.net.BindException: Address already in use at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:49) > {noformat} > = > Example stacktrace: > {noformat} > Timed out waiting for condition. 
Thread diagnostics: > Timestamp: 2017-11-01 10:36:49,499 > "Thread-1" prio=5 tid=13 runnable > java.lang.Thread.State: RUNNABLE > at java.lang.Thread.dumpThreads(Native Method) > at java.lang.Thread.getAllStackTraces(Thread.java:1610) > at > org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87) > at > org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73) > at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:369) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.ensureFileReplicasOnStorageType(LazyPersistTestCase.java:140) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:54) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > ... > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
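The TimeoutException in the stack trace above comes from the poll-until-deadline pattern behind GenericTestUtils.waitFor. A simplified sketch of that pattern (assumed shape for illustration, not the actual Hadoop implementation):

```java
// Simplified sketch of the poll-until-deadline pattern used by
// GenericTestUtils.waitFor(); not the actual Hadoop implementation.
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class WaitFor {

    /** Polls the condition every intervalMs until it holds or timeoutMs elapses. */
    public static void waitFor(BooleanSupplier check, long intervalMs, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("Timed out waiting for condition.");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50ms, well inside the 1s timeout.
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 1000);
        System.out.println("condition met");
    }
}
```

This is why the quoted fix catches TimeoutException around ensureFileReplicasOnStorageType: the replica may simply not have moved to DISK within the polling deadline, which is slowness rather than data loss.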
[jira] [Updated] (HDFS-13179) TestLazyPersistReplicaRecovery#testDnRestartWithSavedReplicas fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HDFS-13179: -- Attachment: test runs.zip > TestLazyPersistReplicaRecovery#testDnRestartWithSavedReplicas fails > intermittently > -- > > Key: HDFS-13179 > URL: https://issues.apache.org/jira/browse/HDFS-13179 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 3.0.0 >Reporter: Gabor Bota >Priority: Critical > Attachments: test runs.zip > > > The error caused by TimeoutException because the test is waiting to ensure > that the file is replicated to DISK storage but the replication can't be > finished to DISK during the 30s timeout in ensureFileReplicasOnStorageType(), > but the file is still on RAM_DISK - so there is no data loss. > Adding the following to TestLazyPersistReplicaRecovery.java:56 essentially > fixes the flakiness. > {code:java} > try { > ensureFileReplicasOnStorageType(path1, DEFAULT); > }catch (TimeoutException t){ > LOG.warn("We got \"" + t.getMessage() + "\" so trying to find data on > RAM_DISK"); > ensureFileReplicasOnStorageType(path1, RAM_DISK); > } > } > {code} > Some thoughts: > * Successful and failed tests run similar to the point when datanode > restarts. Restart line is the following in the log: LazyPersistTestCase - > Restarting the DataNode > * There is a line which only occurs in the failed test: *addStoredBlock: > Redundant addStoredBlock request received for blk_1073741825_1001 on node > 127.0.0.1:49455 size 5242880* > * This redundant request at BlockManager#addStoredBlock could be the main > reason for the test fail. Something wrong with the gen stamp? Corrupt > replicas? 
> = > Current fail ratio based on my test of TestLazyPersistReplicaRecovery: > 1000 runs, 34 failures (3.4% fail) > Failure rate analysis: > TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas: 3.4% > 33 failures caused by: {noformat} > java.util.concurrent.TimeoutException: Timed out waiting for condition. > Thread diagnostics: Timestamp: 2018-01-05 11:50:34,964 "IPC Server handler 6 > on 39589" > {noformat} > 1 failure caused by: {noformat} > java.net.BindException: Problem binding to [localhost:56729] > java.net.BindException: Address already in use; For more details see: > http://wiki.apache.org/hadoop/BindException at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:49) > Caused by: java.net.BindException: Address already in use at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:49) > {noformat} > = > Example stacktrace: > {noformat} > Timed out waiting for condition. 
Thread diagnostics: > Timestamp: 2017-11-01 10:36:49,499 > "Thread-1" prio=5 tid=13 runnable > java.lang.Thread.State: RUNNABLE > at java.lang.Thread.dumpThreads(Native Method) > at java.lang.Thread.getAllStackTraces(Thread.java:1610) > at > org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87) > at > org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73) > at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:369) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.ensureFileReplicasOnStorageType(LazyPersistTestCase.java:140) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:54) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > ... > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13226) RBF: Throw the exception if mount table entry validated failed
[ https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13226: - Summary: RBF: Throw the exception if mount table entry validated failed (was: RBF: We should throw the failure validate and refuse this mount entry) > RBF: Throw the exception if mount table entry validated failed > -- > > Key: HDFS-13226 > URL: https://issues.apache.org/jira/browse/HDFS-13226 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.2.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Labels: RBF > Fix For: 3.2.0 > > Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, > HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, > HDFS-13226.006.patch, HDFS-13226.007.patch, HDFS-13226.008.patch, > HDFS-13226.009.patch > > > one of the mount entry source path rule is that the source path must start > with '\', somebody didn't follow the rule and execute the following command: > {code:bash} > $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/ > {code} > But, the console show we are successful add this entry. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12156) TestFSImage fails without -Pnative
[ https://issues.apache.org/jira/browse/HDFS-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396454#comment-16396454 ] Hudson commented on HDFS-12156: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13819 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13819/]) HDFS-12156. TestFSImage fails without -Pnative (aajisaka: rev 319defafc105c0d0b69b83828b578d9c453036f5) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java > TestFSImage fails without -Pnative > -- > > Key: HDFS-12156 > URL: https://issues.apache.org/jira/browse/HDFS-12156 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.2 > > Attachments: HDFS-12156.01.patch, HDFS-12156.02.patch > > > TestFSImage#testCompression tests LZ4 codec and it fails when native library > is not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10803) TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails intermittently due to no free space available
[ https://issues.apache.org/jira/browse/HDFS-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396453#comment-16396453 ] Hudson commented on HDFS-10803: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13819 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13819/]) HDFS-10803. (yqlin: rev 4afd50b10650a72162c40cf86dea44676013f262) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithMultipleNameNodes.java > TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails > intermittently due to no free space available > > > Key: HDFS-10803 > URL: https://issues.apache.org/jira/browse/HDFS-10803 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Fix For: 2.10.0, 3.2.0 > > Attachments: HDFS-10803.001.patch > > > The test {{TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools}} > fails intermittently. The stack > trace (https://builds.apache.org/job/PreCommit-HDFS-Build/16534/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithMultipleNameNodes/testBalancing2OutOf3Blockpools/): > {code} > java.io.IOException: Creating block, no free space available > at > org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset$BInfo.(SimulatedFSDataset.java:151) > at > org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset.injectBlocks(SimulatedFSDataset.java:580) > at > org.apache.hadoop.hdfs.MiniDFSCluster.injectBlocks(MiniDFSCluster.java:2679) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.unevenDistribution(TestBalancerWithMultipleNameNodes.java:405) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancing2OutOf3Blockpools(TestBalancerWithMultipleNameNodes.java:516) > {code} > The error message means that the datanode's capacity has been used up and there is > no free space to create a new file block. 
> I looked into the code and found that the main reason seems to be that the > {{capacities}} for the cluster are not correctly constructed at the second > cluster startup, before preparing to redistribute blocks in the test. > The related code: > {code} > // Here we do redistribute blocks nNameNodes times for each node, > // we need to adjust the capacities. Otherwise it will cause the no > // free space errors sometimes. > final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf) > .nnTopology(MiniDFSNNTopology.simpleFederatedTopology(nNameNodes)) > .numDataNodes(nDataNodes) > .racks(racks) > .simulatedCapacities(newCapacities) > .format(false) > .build(); > LOG.info("UNEVEN 11"); > ... > for(int n = 0; n < nNameNodes; n++) { > // redistribute blocks > final Block[][] blocksDN = TestBalancer.distributeBlocks( > blocks[n], s.replication, distributionPerNN); > > for(int d = 0; d < blocksDN.length; d++) > cluster.injectBlocks(n, d, Arrays.asList(blocksDN[d])); > LOG.info("UNEVEN 13: n=" + n); > } > {code} > And that means the totalUsed value has been increased to > {{nNameNodes*usedSpacePerNN}} rather than {{usedSpacePerNN}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
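The capacity adjustment described above reduces to simple arithmetic: blocks are injected once per name node, so each datanode must be sized for nNameNodes times the per-name-node usage. A hypothetical helper illustrating the point (not Hadoop code; names are made up):

```java
// Hypothetical back-of-the-envelope helper (not Hadoop code) showing why the
// simulated capacities must scale with the number of name nodes: each of the
// nNameNodes block pools injects usedSpacePerNN bytes on every datanode.
public class BalancerCapacityMath {

    public static long requiredCapacityPerNode(int nNameNodes, long usedSpacePerNN, long headroom) {
        // Total used space per datanode grows with the number of name nodes.
        return (long) nNameNodes * usedSpacePerNN + headroom;
    }

    public static void main(String[] args) {
        // With 3 name nodes and 100MB injected per name node per datanode, a
        // capacity sized for a single block pool is no longer sufficient.
        System.out.println(requiredCapacityPerNode(3, 100L * 1024 * 1024, 10L * 1024 * 1024));
    }
}
```

A capacity computed from usedSpacePerNN alone produces exactly the "no free space available" IOException quoted above once the second and third block pools inject their blocks.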
[jira] [Commented] (HDFS-13253) RBF: Quota management incorrect parent-child relationship judgement
[ https://issues.apache.org/jira/browse/HDFS-13253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396455#comment-16396455 ] Hudson commented on HDFS-13253: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13819 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13819/]) HDFS-13253. RBF: Quota management incorrect parent-child relationship (yqlin: rev 7fab787de72756863a91c2358da5c611afdb80e9) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuotaManager.java > RBF: Quota management incorrect parent-child relationship judgement > --- > > Key: HDFS-13253 > URL: https://issues.apache.org/jira/browse/HDFS-13253 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13253.001.patch, HDFS-13253.002.patch, > HDFS-13253.003.patch > > > The Router quota management does not check for parent-child relation > properly. Similar to HDFS-13233. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry
[ https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396452#comment-16396452 ] Yiqun Lin commented on HDFS-13226: -- +1, committing this... > RBF: We should throw the failure validate and refuse this mount entry > - > > Key: HDFS-13226 > URL: https://issues.apache.org/jira/browse/HDFS-13226 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.2.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Labels: RBF > Fix For: 3.2.0 > > Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, > HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, > HDFS-13226.006.patch, HDFS-13226.007.patch, HDFS-13226.008.patch, > HDFS-13226.009.patch > > > one of the mount entry source path rule is that the source path must start > with '\', somebody didn't follow the rule and execute the following command: > {code:bash} > $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/ > {code} > But, the console show we are successful add this entry. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-5926) Documentation should clarify dfs.datanode.du.reserved impact from reserved disk capacity
[ https://issues.apache.org/jira/browse/HDFS-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396449#comment-16396449 ] Gabor Bota commented on HDFS-5926: -- Hi [~fahlke], Can this change be committed? Do you have any comment on it? Thanks, Gabor > Documentation should clarify dfs.datanode.du.reserved impact from reserved > disk capacity > > > Key: HDFS-5926 > URL: https://issues.apache.org/jira/browse/HDFS-5926 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 0.20.2 >Reporter: Alexander Fahlke >Assignee: Gabor Bota >Priority: Minor > Labels: newbie > Attachments: HDFS-5926-1.patch > > > I'm using hadoop-0.20.2 on Debian Squeeze and ran into the same confusion as > many others with the parameter dfs.datanode.du.reserved. One day some > datanodes hit out-of-disk errors although there was space left on the disks. > The following values are rounded to make the problem more clear: > - the disk for the DFS data has 1000GB and only one partition (ext3) for DFS > data > - you plan to set dfs.datanode.du.reserved to 20GB > - the reserved-blocks-percentage set by tune2fs is 5% (the default) > That gives all users except root 5% less usable capacity, although the > system reports the full 1000GB as usable via df. The hadoop daemons are not > running as root. > If I read it right, then hadoop gets the free capacity via df. > > Starting in > {{/src/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java}} on line > 350: {{return usage.getCapacity()-reserved;}} > going to {{/src/core/org/apache/hadoop/fs/DF.java}} which says: > {{"Filesystem disk space usage statistics. Uses the unix 'df' program"}} > When you have 5% reserved by tune2fs (in our case 50GB) and you give > dfs.datanode.du.reserved only 20GB, then you can run into out-of-disk > errors that hadoop can't handle. 
> In this case you must add the planned 20GB du reserved to the reserved > capacity by tune2fs. This results in (at least) 70GB for > dfs.datanode.du.reserved in my case. > Two ideas: > # The documentation must be clear at this point to avoid this problem. > # Hadoop could check for reserved space by tune2fs (or other tools) and add > this value to the dfs.datanode.du.reserved parameter. > This ticket is a follow up from the Mailinglist: > https://mail-archives.apache.org/mod_mbox/hadoop-common-user/201312.mbox/%3CCAHodO=Kbv=13T=2otz+s8nsodbs1icnzqyxt_0wdfxy5gks...@mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
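The capacity mismatch described in the report above can be sketched with some simple arithmetic. All names below are invented for illustration; the only piece taken from Hadoop is the {{usage.getCapacity()-reserved}} logic cited from FSDataset.

```java
// Hypothetical helper illustrating the capacity arithmetic from the report.
// Not part of Hadoop; class and method names are invented for illustration.
class DuReservedMath {

    // Capacity the DataNode believes it can use, mirroring the cited
    // FSDataset logic: usage.getCapacity() - reserved. Since df reports the
    // full partition size, the filesystem's own reserved blocks are NOT
    // subtracted here.
    public static long dataNodeUsableGb(long partitionGb, long duReservedGb) {
        return partitionGb - duReservedGb;
    }

    // Capacity a non-root process can actually write, once the ext3
    // reserved-blocks-percentage (tune2fs -m, default 5%) is taken out.
    public static long actuallyWritableGb(long partitionGb, int reservedBlocksPercent) {
        return partitionGb - (partitionGb * reservedBlocksPercent) / 100;
    }

    public static void main(String[] args) {
        long partition = 1000;   // 1000GB ext3 partition from the report
        long duReserved = 20;    // planned dfs.datanode.du.reserved
        int tune2fs = 5;         // ext3 default reserved-blocks-percentage

        long dnThinks = dataNodeUsableGb(partition, duReserved);       // 980GB
        long nonRootHas = actuallyWritableGb(partition, tune2fs);      // 950GB

        // The DataNode believes it may use 30GB more than a non-root process
        // can actually write: the out-of-disk gap the report describes.
        System.out.println("gap = " + (dnThinks - nonRootHas) + " GB");
    }
}
```

With these numbers the gap is 30GB, which matches the report's remedy of raising dfs.datanode.du.reserved to at least 70GB (20GB planned headroom plus the 50GB tune2fs reserve).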
[jira] [Updated] (HDFS-13253) RBF: Quota management incorrect parent-child relationship judgement
[ https://issues.apache.org/jira/browse/HDFS-13253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13253: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 3.0.2 2.9.1 2.10.0 3.1.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-3.1, branch-3.0, branch-2 and branch-2.9. Since global quota hasn't been merged into branch-3.0 and branch-2.9, only the MountTableResolver fix was applied to those two branches. Thanks [~elgoiri] for the review. > RBF: Quota management incorrect parent-child relationship judgement > --- > > Key: HDFS-13253 > URL: https://issues.apache.org/jira/browse/HDFS-13253 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13253.001.patch, HDFS-13253.002.patch, > HDFS-13253.003.patch > > > The Router quota management does not check the parent-child relationship > properly. Similar to HDFS-13233.
[jira] [Commented] (HDFS-13251) Avoid using hard coded datanode data dirs in unit tests
[ https://issues.apache.org/jira/browse/HDFS-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396437#comment-16396437 ] Ajay Kumar commented on HDFS-13251: --- [~xyao], thanks for the review. Attached patch v2 to address your suggestions. {quote}TestDataNodeHotSwapVolumes.java Line 696: Can you elaborate on the change of directory index to i + 2 from i{quote} The test is adding new dirs; we already have indices 0 and 1, so the test succeeds with i + 2 but not with i + 0 or i + 1. {quote}Line 1038: should this be cluster.getInstanceStorageDir(0,0)?{quote} Both (0,0) and (0,1) work, as we want to keep one of the existing dirs. Changed it to (0,0) in the new patch. > Avoid using hard coded datanode data dirs in unit tests > --- > > Key: HDFS-13251 > URL: https://issues.apache.org/jira/browse/HDFS-13251 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13251.000.patch, HDFS-13251.001.patch, > HDFS-13251.002.patch > > > There are a few unit tests that rely on hard-coded MiniDFSCluster data dir > names. > > * TestDataNodeVolumeFailureToleration > * TestDataNodeVolumeFailureReporting > * TestDiskBalancerCommand > * TestBlockStatsMXBean > * TestDataNodeVolumeMetrics > * TestDFSAdmin > * TestDataNodeHotSwapVolumes > * TestDataNodeVolumeFailure > This ticket is opened to use > {code:java} > MiniDFSCluster#getInstanceStorageDir(0, 1); > instead of like below > new File(cluster.getDataDirectory(), "data1");{code} >
[jira] [Updated] (HDFS-13251) Avoid using hard coded datanode data dirs in unit tests
[ https://issues.apache.org/jira/browse/HDFS-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13251: -- Attachment: HDFS-13251.002.patch > Avoid using hard coded datanode data dirs in unit tests > --- > > Key: HDFS-13251 > URL: https://issues.apache.org/jira/browse/HDFS-13251 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13251.000.patch, HDFS-13251.001.patch, > HDFS-13251.002.patch > > > There are a few unit tests that rely on hard-coded MiniDFSCluster data dir > names. > > * TestDataNodeVolumeFailureToleration > * TestDataNodeVolumeFailureReporting > * TestDiskBalancerCommand > * TestBlockStatsMXBean > * TestDataNodeVolumeMetrics > * TestDFSAdmin > * TestDataNodeHotSwapVolumes > * TestDataNodeVolumeFailure > This ticket is opened to use > {code:java} > MiniDFSCluster#getInstanceStorageDir(0, 1); > instead of like below > new File(cluster.getDataDirectory(), "data1");{code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13251) Avoid using hard coded datanode data dirs in unit tests
[ https://issues.apache.org/jira/browse/HDFS-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13251: -- Status: Patch Available (was: Open) > Avoid using hard coded datanode data dirs in unit tests > --- > > Key: HDFS-13251 > URL: https://issues.apache.org/jira/browse/HDFS-13251 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13251.000.patch, HDFS-13251.001.patch, > HDFS-13251.002.patch > > > There are a few unit tests that rely on hard-coded MiniDFSCluster data dir > names. > > * TestDataNodeVolumeFailureToleration > * TestDataNodeVolumeFailureReporting > * TestDiskBalancerCommand > * TestBlockStatsMXBean > * TestDataNodeVolumeMetrics > * TestDFSAdmin > * TestDataNodeHotSwapVolumes > * TestDataNodeVolumeFailure > This ticket is opened to use > {code:java} > MiniDFSCluster#getInstanceStorageDir(0, 1); > instead of like below > new File(cluster.getDataDirectory(), "data1");{code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
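To see why hard-coded "data1"-style names are brittle, the naming scheme behind MiniDFSCluster's storage dirs can be sketched in isolation. This is an illustrative stand-in, not the Hadoop class: it assumes the conventional layout of two data dirs per DataNode, numbered from 1. In real tests, MiniDFSCluster#getInstanceStorageDir(dnIndex, dirIndex) should be called instead of reproducing this formula.

```java
import java.io.File;

// Illustrative stand-in for MiniDFSCluster's storage-dir naming, assuming
// the conventional two-data-dirs-per-DataNode layout. This class only
// reproduces the naming scheme to show what getInstanceStorageDir abstracts
// away; it is not the real API.
class StorageDirNaming {

    private final File baseDir;

    StorageDirNaming(File baseDir) {
        this.baseDir = baseDir;
    }

    // Storage dir for DataNode dnIndex, volume dirIndex (0 or 1).
    // Data dirs are numbered from 1: dn0 -> data1, data2; dn1 -> data3, data4.
    public File instanceStorageDir(int dnIndex, int dirIndex) {
        int n = 2 * dnIndex + dirIndex + 1;
        return new File(baseDir, "data" + n);
    }
}
```

A test that hard-codes "data1" silently breaks whenever the cluster is built with a different number of DataNodes or volumes per node; asking the cluster for the dir keeps the test correct under any layout.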
[jira] [Updated] (HDFS-12156) TestFSImage fails without -Pnative
[ https://issues.apache.org/jira/browse/HDFS-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-12156: - Resolution: Fixed Fix Version/s: 3.0.2 2.8.4 2.9.1 2.10.0 3.1.0 Status: Resolved (was: Patch Available) Committed this to trunk, branch-3.1, branch-3.0, branch-2, branch-2.9, and branch-2.8. Thanks [~szetszwo] for reviewing this. > TestFSImage fails without -Pnative > -- > > Key: HDFS-12156 > URL: https://issues.apache.org/jira/browse/HDFS-12156 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.2 > > Attachments: HDFS-12156.01.patch, HDFS-12156.02.patch > > > TestFSImage#testCompression tests LZ4 codec and it fails when native library > is not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12156) TestFSImage fails without -Pnative
[ https://issues.apache.org/jira/browse/HDFS-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396418#comment-16396418 ] Akira Ajisaka commented on HDFS-12156: -- The rebase is only for imports, so I'm committing this based on Nicholas's +1. > TestFSImage fails without -Pnative > -- > > Key: HDFS-12156 > URL: https://issues.apache.org/jira/browse/HDFS-12156 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Attachments: HDFS-12156.01.patch, HDFS-12156.02.patch > > > TestFSImage#testCompression tests LZ4 codec and it fails when native library > is not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10803) TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails intermittently due to no free space available
[ https://issues.apache.org/jira/browse/HDFS-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-10803: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 2.10.0 Status: Resolved (was: Patch Available) Thanks [~hanishakoneru] for the review; committed to trunk and branch-2. > TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails > intermittently due to no free space available > > > Key: HDFS-10803 > URL: https://issues.apache.org/jira/browse/HDFS-10803 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Fix For: 2.10.0, 3.2.0 > > Attachments: HDFS-10803.001.patch > > > The test {{TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools}} > fails intermittently. The stack > trace (https://builds.apache.org/job/PreCommit-HDFS-Build/16534/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithMultipleNameNodes/testBalancing2OutOf3Blockpools/): > {code} > java.io.IOException: Creating block, no free space available > at > org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset$BInfo.(SimulatedFSDataset.java:151) > at > org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset.injectBlocks(SimulatedFSDataset.java:580) > at > org.apache.hadoop.hdfs.MiniDFSCluster.injectBlocks(MiniDFSCluster.java:2679) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.unevenDistribution(TestBalancerWithMultipleNameNodes.java:405) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancing2OutOf3Blockpools(TestBalancerWithMultipleNameNodes.java:516) > {code} > The error message means that the datanode's capacity has been used up and > there is no space left to create a new file block. > I looked into the code and found that the main reason seems to be that the > {{capacities}} for the cluster are not correctly constructed at the second > cluster startup, before blocks are redistributed in the test.
> The related code: > {code} > // Here we do redistribute blocks nNameNodes times for each node, > // we need to adjust the capacities. Otherwise it will cause the no > // free space errors sometimes. > final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf) > .nnTopology(MiniDFSNNTopology.simpleFederatedTopology(nNameNodes)) > .numDataNodes(nDataNodes) > .racks(racks) > .simulatedCapacities(newCapacities) > .format(false) > .build(); > LOG.info("UNEVEN 11"); > ... > for(int n = 0; n < nNameNodes; n++) { > // redistribute blocks > final Block[][] blocksDN = TestBalancer.distributeBlocks( > blocks[n], s.replication, distributionPerNN); > > for(int d = 0; d < blocksDN.length; d++) > cluster.injectBlocks(n, d, Arrays.asList(blocksDN[d])); > LOG.info("UNEVEN 13: n=" + n); > } > {code} > That means the total used space is increased to > {{nNameNodes*usedSpacePerNN}} rather than {{usedSpacePerNN}}.
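The capacity adjustment described above can be sketched as follows. This is a hedged illustration of the idea (blocks are injected once per name-node into the same simulated DataNodes, so total used space grows to nNameNodes * usedSpacePerNN and the simulated capacities must be scaled to match); the helper name is invented and this is not the actual patch.

```java
// Illustrative sketch: scale each DataNode's simulated capacity so that
// injecting blocks once per name-node does not exhaust the simulated space.
// CapacityScaling and adjustCapacities are hypothetical names.
class CapacityScaling {

    public static long[] adjustCapacities(long[] capacities, int nNameNodes) {
        long[] adjusted = new long[capacities.length];
        for (int i = 0; i < capacities.length; i++) {
            // Used space grows nNameNodes-fold, so capacity must too.
            adjusted[i] = capacities[i] * nNameNodes;
        }
        return adjusted;
    }
}
```

The scaled array would then be passed to the builder's {{simulatedCapacities(...)}} call shown in the quoted code.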
[jira] [Assigned] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption
[ https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen reassigned HDFS-13265: -- Assignee: Erik Krogen > MiniDFSCluster should set reasonable defaults to reduce resource consumption > > > Key: HDFS-13265 > URL: https://issues.apache.org/jira/browse/HDFS-13265 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, namenode, test >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Major > > MiniDFSCluster takes its defaults from {{DFSConfigKeys}} defaults, but many > of these are not suitable for a unit test environment. For example, the > default handler thread count of 10 is definitely more than necessary for > (almost?) any unit test. We should set reasonable, lower defaults unless a > test specifically requires more. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption
[ https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-13265 started by Erik Krogen. -- > MiniDFSCluster should set reasonable defaults to reduce resource consumption > > > Key: HDFS-13265 > URL: https://issues.apache.org/jira/browse/HDFS-13265 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, namenode, test >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Major > > MiniDFSCluster takes its defaults from {{DFSConfigKeys}} defaults, but many > of these are not suitable for a unit test environment. For example, the > default handler thread count of 10 is definitely more than necessary for > (almost?) any unit test. We should set reasonable, lower defaults unless a > test specifically requires more. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
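The idea in HDFS-13265, starting from production defaults and overriding the resource-heavy ones for a unit-test environment, might look like the sketch below. The configuration keys are real HDFS keys; the helper class and the chosen values are hypothetical, not the eventual patch.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of test-friendly overrides on top of DFSConfigKeys
// defaults. Key names are real HDFS configuration keys; TestDefaults and
// the values chosen here are illustrative.
class TestDefaults {

    public static Map<String, String> lowResourceOverrides() {
        Map<String, String> conf = new HashMap<>();
        // Production default is 10 handler threads; almost no unit test
        // needs that many.
        conf.put("dfs.namenode.handler.count", "2");
        conf.put("dfs.datanode.handler.count", "2");
        return conf;
    }
}
```

A test that genuinely needs more handlers would set the key explicitly, which overrides the low default, exactly the "unless a test specifically requires more" behavior the issue describes.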
[jira] [Commented] (HDFS-12975) Changes to the NameNode to support reads from standby
[ https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396408#comment-16396408 ] Konstantin Shvachko commented on HDFS-12975: Reviewed 003 patch. Looks good. +1. I think we should be able to transition standby to observer and back. But active should be able to transition only to standby. At least initially, we may "optimize" that later. Will update the design doc with this info. > Changes to the NameNode to support reads from standby > - > > Key: HDFS-12975 > URL: https://issues.apache.org/jira/browse/HDFS-12975 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Konstantin Shvachko >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-12975.000.patch, HDFS-12975.001.patch, > HDFS-12975.002.patch, HDFS-12975.003.patch > > > In order to support reads from standby NameNode needs changes to add Observer > role, turn off checkpointing and such. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12977) Add stateId to RPC headers.
[ https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396403#comment-16396403 ] Konstantin Shvachko commented on HDFS-12977: Was looking into this and realized the main problem here is that we still set {{transactionId}} explicitly in {{ipc.Server.setupResponse()}}, which defeats the purpose of abstracting the state context. Here is a suggestion: # Maybe a better name for the {{CallStateHandler}} interface would be {{ServerStateContext}} #- with abstract method {code}void updateResponseState(RpcResponseHeaderProto.Builder responseHeader);{code} #- Then we should have a class {{NNStateIdContext}} in {{NameNodeRPCServer}}, with the {{namesystem}} as its member, which sets the txId into the header. # {{ipc.Server}} gets an additional member {{serverStateContext}}. Then {{ipc.Server.setupResponse()}} should make the call: {code} if(serverStateContext != null) serverStateContext.updateResponseState(headerBuilder); {code} # This should eliminate other changes like adding {{Server.Call.transactionId}} or changing the {{ProtobufRPCEngine.call()}} method. # We should use {{stateId}} consistently instead of {{transactionId}}. LMK if it makes sense. > Add stateId to RPC headers. > --- > > Key: HDFS-12977 > URL: https://issues.apache.org/jira/browse/HDFS-12977 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ipc, namenode >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, > HDFS_12977.trunk.003.patch > > > stateId is a new field in the RPC headers of NameNode proto calls. > stateId is the journal transaction Id, which represents LastSeenId for the > clients and LastWrittenId for NameNodes. See more in [reads from Standby > design > doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf]. 
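The ServerStateContext proposal for HDFS-12977 can be sketched end to end. The real code would use {{RpcResponseHeaderProto.Builder}}; here a minimal HeaderBuilder stand-in replaces it so the null-guarded call pattern in {{ipc.Server.setupResponse()}} is runnable. Names are taken from the proposal or invented for illustration; this is not the committed design.

```java
// Self-contained sketch of the proposed ServerStateContext abstraction.
class StateContextSketch {

    // Stand-in for RpcResponseHeaderProto.Builder (illustrative only).
    static class HeaderBuilder {
        long stateId = -1;
        HeaderBuilder setStateId(long id) { this.stateId = id; return this; }
    }

    // The proposed server-side abstraction; the NameNode would implement it.
    interface ServerStateContext {
        void updateResponseState(HeaderBuilder responseHeader);
    }

    // What NNStateIdContext in NameNodeRpcServer might look like: it knows
    // the namesystem's last-written state id and stamps it into the header.
    static class NNStateIdContext implements ServerStateContext {
        private final long lastWrittenStateId;
        NNStateIdContext(long lastWrittenStateId) {
            this.lastWrittenStateId = lastWrittenStateId;
        }
        @Override
        public void updateResponseState(HeaderBuilder h) {
            h.setStateId(lastWrittenStateId);
        }
    }

    // Mirrors the null-guarded call suggested for ipc.Server.setupResponse():
    // the generic server never touches stateId directly.
    static HeaderBuilder setupResponse(ServerStateContext ctx) {
        HeaderBuilder header = new HeaderBuilder();
        if (ctx != null) {
            ctx.updateResponseState(header);
        }
        return header;
    }
}
```

The point of the design is visible in {{setupResponse}}: {{ipc.Server}} stays ignorant of NameNode transaction ids and only delegates to whatever context was installed, which is exactly what setting {{transactionId}} explicitly in the server defeated.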
[jira] [Commented] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router
[ https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396379#comment-16396379 ] genericqa commented on HDFS-13198: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 54s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}146m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSClientRetries | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13198 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914178/HDFS-13198.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c1b6cd33dcd6 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 39a5fba | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23432/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23432/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HDFS-12156) TestFSImage fails without -Pnative
[ https://issues.apache.org/jira/browse/HDFS-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396366#comment-16396366 ] genericqa commented on HDFS-12156: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestReconstructStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12156 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914177/HDFS-12156.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3602f4475ca4 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 39a5fba | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23431/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23431/testReport/ | | Max. process+thread count | 4233 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23431/console | | Powered by | Apache Yetus
[jira] [Commented] (HDFS-7527) TestDecommission.testIncludeByRegistrationName fails occassionally in trunk
[ https://issues.apache.org/jira/browse/HDFS-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396364#comment-16396364 ] genericqa commented on HDFS-7527: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 21 unchanged - 0 fixed = 22 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 17s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}176m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-7527 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914172/HDFS-7527.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux db9fd30c0c31 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 39a5fba | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23429/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23429/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23429/testReport/ | | Max. process+thread count | 3076 (vs. ulimit of 1) | |
[jira] [Commented] (HDFS-12288) Fix DataNode's xceiver count calculation
[ https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396360#comment-16396360 ] genericqa commented on HDFS-12288: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}140m 2s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}189m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.federation.router.TestRouterSafemode | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12288 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12881509/HDFS-12288.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8352c7d1f238 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 39a5fba | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23428/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396357#comment-16396357 ] Íñigo Goiri commented on HDFS-12514: [^HDFS-12514.001.patch] doesn't apply anymore; rebased and posted [^HDFS-12514.002.patch]. If it comes clean, I'd like to commit this. > Cancelled HedgedReads cause block to be marked as suspect on Windows > > > Key: HDFS-12514 > URL: https://issues.apache.org/jira/browse/HDFS-12514 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Attachments: HDFS-12514.001.patch, HDFS-12514.002.patch > > > DFSClient with hedged reads enabled will often close previous spawned > connections if it successfully reads from one of them. This can result in > DataNode's BlockSender getting a socket exception and wrongly marking the > block as suspect and to be rescanned for errors. > This patch is aimed at adding windows specific network related exception > messages to be ignored in BlockSender.sendPacket. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
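The HDFS-12514 description above proposes ignoring certain network exception messages in BlockSender.sendPacket so a cancelled hedged read does not mark the block as suspect. A minimal sketch of that ignore-list idea, assuming an illustrative class and message list (this is not the actual BlockSender code, and the Windows message strings are hypothetical examples of the kind the patch would add):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: a filter that treats known client-initiated
// disconnect messages as benign instead of marking the block as suspect.
public class BenignDisconnectFilter {
    // Hypothetical message fragments; the real patch would carry the exact
    // Linux and Windows variants observed in the field.
    private static final List<String> BENIGN_MESSAGES = Arrays.asList(
        "Connection reset by peer",                  // e.g. Linux ECONNRESET
        "Broken pipe",                               // e.g. Linux EPIPE
        "An established connection was aborted",     // e.g. Windows variant
        "An existing connection was forcibly closed" // e.g. Windows variant
    );

    /** Returns true if the exception looks like a client-side disconnect. */
    public static boolean isBenignDisconnect(IOException e) {
        String msg = e.getMessage();
        if (msg == null) {
            return false; // some native errors carry no message at all
        }
        for (String benign : BENIGN_MESSAGES) {
            if (msg.contains(benign)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isBenignDisconnect(new IOException("Connection reset by peer")));
        System.out.println(isBenignDisconnect(new IOException("disk read error")));
    }
}
```

Note the null-message branch: as Lukas points out later in this thread, some Windows socket errors do not surface a message at all, so a message-substring filter has to tolerate that case.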
[jira] [Updated] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12514: --- Attachment: HDFS-12514.002.patch > Cancelled HedgedReads cause block to be marked as suspect on Windows > > > Key: HDFS-12514 > URL: https://issues.apache.org/jira/browse/HDFS-12514 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Attachments: HDFS-12514.001.patch, HDFS-12514.002.patch > > > DFSClient with hedged reads enabled will often close previous spawned > connections if it successfully reads from one of them. This can result in > DataNode's BlockSender getting a socket exception and wrongly marking the > block as suspect and to be rescanned for errors. > This patch is aimed at adding windows specific network related exception > messages to be ignored in BlockSender.sendPacket. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396352#comment-16396352 ] Íñigo Goiri commented on HDFS-12886: [~daryn], did you have a chance to look at this? I'd like to commit this soon. > Ignore minReplication for block recovery > > > Key: HDFS-12886 > URL: https://issues.apache.org/jira/browse/HDFS-12886 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Attachments: HDFS-12886.001.patch, HDFS-12886.002.patch > > > Ignore minReplication for blocks that went through recovery, and allow NN to > complete them and replicate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396349#comment-16396349 ] genericqa commented on HDFS-12514: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-12514 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-12514 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888086/HDFS-12514.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23437/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Cancelled HedgedReads cause block to be marked as suspect on Windows > > > Key: HDFS-12514 > URL: https://issues.apache.org/jira/browse/HDFS-12514 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Attachments: HDFS-12514.001.patch > > > DFSClient with hedged reads enabled will often close previous spawned > connections if it successfully reads from one of them. This can result in > DataNode's BlockSender getting a socket exception and wrongly marking the > block as suspect and to be rescanned for errors. > This patch is aimed at adding windows specific network related exception > messages to be ignored in BlockSender.sendPacket. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode
[ https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396340#comment-16396340 ] Bharat Viswanadham commented on HDFS-336: - Thank you [~arpitagarwal] for the offline review comments. Attached patch v03 to address them. {quote}1. before creating the file you can call getDataNodeStats and assert that numBlocks is 0. {quote} Done {quote}2. Set block size to 512, and then write a 1KB file. {quote} Done > dfsadmin -report should report number of blocks from datanode > - > > Key: HDFS-336 > URL: https://issues.apache.org/jira/browse/HDFS-336 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lohit Vijayarenu >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie > Attachments: HDFS-336.00.patch, HDFS-336.01.patch, HDFS-336.02.patch > > > _hadoop dfsadmin -report_ seems to miss number of blocks from a datanode. > Number of blocks hosted by a datanode is a good info which should be included > in the report. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
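The review feedback above asks the HDFS-336 test to use a 512-byte block size and a 1 KB file, so that the reported per-datanode block count is nontrivial. The expected count is just ceiling division of file size by block size; a small sketch with a hypothetical helper (not code from the patch):

```java
// Illustrative helper for reasoning about the expected block count in the
// dfsadmin -report test; not part of the HDFS-336 patch itself.
public class BlockCountMath {
    /** Expected number of blocks for a file: ceil(fileSize / blockSize). */
    public static long expectedBlocks(long fileSizeBytes, long blockSizeBytes) {
        if (fileSizeBytes == 0) {
            return 0; // an empty file occupies no blocks
        }
        return (fileSizeBytes + blockSizeBytes - 1) / blockSizeBytes;
    }

    public static void main(String[] args) {
        // A 1 KB file with a 512-byte block size occupies 2 blocks.
        System.out.println(expectedBlocks(1024, 512)); // prints 2
    }
}
```

This is why the review also asks to assert numBlocks is 0 before creating the file: it establishes a clean baseline so the post-write assertion (2 blocks per replica-hosting datanode) is unambiguous.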
[jira] [Commented] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy
[ https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396338#comment-16396338 ] genericqa commented on HDFS-13239: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 28s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}135m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13239 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914175/HDFS-13239.04.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux d80cb814e505 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 39a5fba | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23430/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23430/testReport/ | | Max. process+thread count | 5827 (vs.
[jira] [Commented] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396336#comment-16396336 ] Lukas Majercak commented on HDFS-12514: --- [~shahrs87], "Software caused connection abort" will not be in the exception message, the same way as "Connection reset by peer" is not in the message for *WSAECONNRESET.* > Cancelled HedgedReads cause block to be marked as suspect on Windows > > > Key: HDFS-12514 > URL: https://issues.apache.org/jira/browse/HDFS-12514 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Attachments: HDFS-12514.001.patch > > > DFSClient with hedged reads enabled will often close previous spawned > connections if it successfully reads from one of them. This can result in > DataNode's BlockSender getting a socket exception and wrongly marking the > block as suspect and to be rescanned for errors. > This patch is aimed at adding windows specific network related exception > messages to be ignored in BlockSender.sendPacket. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-336) dfsadmin -report should report number of blocks from datanode
[ https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-336: Attachment: HDFS-336.02.patch > dfsadmin -report should report number of blocks from datanode > - > > Key: HDFS-336 > URL: https://issues.apache.org/jira/browse/HDFS-336 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lohit Vijayarenu >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie > Attachments: HDFS-336.00.patch, HDFS-336.01.patch, HDFS-336.02.patch > > > _hadoop dfsadmin -report_ seems to miss number of blocks from a datanode. > Number of blocks hosted by a datanode is a good info which should be included > in the report. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-6681) TestRBWBlockInvalidation#testBlockInvalidationWhenRBWReplicaMissedInDN is flaky and sometimes gets stuck in infinite loops
[ https://issues.apache.org/jira/browse/HDFS-6681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396331#comment-16396331 ] genericqa commented on HDFS-6681: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 51s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 7 unchanged - 0 fixed = 10 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 17s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}165m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-6681 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914164/HDFS-6681.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2d74448a0c35 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 39a5fba | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23427/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23427/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23427/testReport/ | | Max. process+thread count | 4033 (vs. ulimit of 1) | | modules | C:
[jira] [Commented] (HDFS-10803) TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails intermittently due to no free space available
[ https://issues.apache.org/jira/browse/HDFS-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396329#comment-16396329 ] Hanisha Koneru commented on HDFS-10803: --- Thanks for the fix [~linyiqun]. The patch LGTM. Tested with multiple runs with and without the patch. +1. > TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails > intermittently due to no free space available > > > Key: HDFS-10803 > URL: https://issues.apache.org/jira/browse/HDFS-10803 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Attachments: HDFS-10803.001.patch > > > The test {{TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools}} > fails intermittently. The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/16534/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithMultipleNameNodes/testBalancing2OutOf3Blockpools/): > {code} > java.io.IOException: Creating block, no free space available > at > org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset$BInfo.(SimulatedFSDataset.java:151) > at > org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset.injectBlocks(SimulatedFSDataset.java:580) > at > org.apache.hadoop.hdfs.MiniDFSCluster.injectBlocks(MiniDFSCluster.java:2679) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.unevenDistribution(TestBalancerWithMultipleNameNodes.java:405) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancing2OutOf3Blockpools(TestBalancerWithMultipleNameNodes.java:516) > {code} > The error message means that the datanode's capacity has used up and there is > no other space to create a new file block. > I looked into the code, I found the main reason seemed that the > {{capacities}} for cluster is not correctly constructed in the second > cluster startup before preparing to redistribute blocks in test. 
> The related code: > {code} > // Here we do redistribute blocks nNameNodes times for each node, > // we need to adjust the capacities. Otherwise it will cause the no > // free space errors sometimes. > final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf) > .nnTopology(MiniDFSNNTopology.simpleFederatedTopology(nNameNodes)) > .numDataNodes(nDataNodes) > .racks(racks) > .simulatedCapacities(newCapacities) > .format(false) > .build(); > LOG.info("UNEVEN 11"); > ... > for(int n = 0; n < nNameNodes; n++) { > // redistribute blocks > final Block[][] blocksDN = TestBalancer.distributeBlocks( > blocks[n], s.replication, distributionPerNN); > > for(int d = 0; d < blocksDN.length; d++) > cluster.injectBlocks(n, d, Arrays.asList(blocksDN[d])); > LOG.info("UNEVEN 13: n=" + n); > } > {code} > And that means the totalUsed value has been increased as > {{nNameNodes*usedSpacePerNN}} rather than {{usedSpacePerNN}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
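The HDFS-10803 analysis above says the second cluster startup injects blocks once per namenode, so each datanode ends up holding nNameNodes * usedSpacePerNN rather than usedSpacePerNN, and the original capacities overflow. The capacity adjustment it implies can be sketched with a hypothetical helper (illustrative arithmetic, not the patch's actual code):

```java
// Illustrative sketch of the capacity sizing implied by the HDFS-10803
// analysis; names and the utilization parameter are hypothetical.
public class BalancerCapacityMath {
    /**
     * Minimum per-datanode simulated capacity so that injecting blocks once
     * per namenode does not hit "no free space": total used space becomes
     * nNameNodes * usedSpacePerNN, which must stay under the target
     * utilization of the capacity.
     */
    public static long minCapacity(int nNameNodes, long usedSpacePerNN,
                                   double targetUtilization) {
        long totalUsed = (long) nNameNodes * usedSpacePerNN;
        return (long) Math.ceil(totalUsed / targetUtilization);
    }

    public static void main(String[] args) {
        // 3 namenodes, 100 MB used per namenode, utilization kept at 50%:
        // capacities must be sized for 300 MB of used space, i.e. 600 MB.
        System.out.println(minCapacity(3, 100L * 1024 * 1024, 0.5));
    }
}
```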
[jira] [Commented] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use
[ https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396322#comment-16396322 ] Hudson commented on HDFS-13241: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13818 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13818/]) HDFS-13241. RBF: TestRouterSafemode failed if the port is in use. (inigoiri: rev 91c82c90f05ea75fe59c6ffad3dc3fcac1429e9e) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterSafemode.java > RBF: TestRouterSafemode failed if the port is in use > - > > Key: HDFS-13241 > URL: https://issues.apache.org/jira/browse/HDFS-13241 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs, test >Affects Versions: 3.2.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13241.001.patch, HDFS-13241.002.patch > > > TestRouterSafemode failed if the port is in use. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy
[ https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396320#comment-16396320 ] Hanisha Koneru commented on HDFS-13239: --- +1 pending Jenkins. > Fix non-empty dir warning message when setting default EC policy > > > Key: HDFS-13239 > URL: https://issues.apache.org/jira/browse/HDFS-13239 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-13239.00.patch, HDFS-13239.01.patch, > HDFS-13239.02.patch, HDFS-13239.03.patch, HDFS-13239.04.patch > > > When EC policy is set on a non-empty directory, the following warning message > is given: > {code} > $hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1 > Warning: setting erasure coding policy on a non-empty directory will not > automatically convert existing files to RS-6-3-1024k > {code} > When we do not specify the -policy parameter when setting EC policy on a > directory, it takes the default EC policy. Setting default EC policy in this > way on a non-empty directory gives the following warning message: > {code} > $hdfs ec -setPolicy -path /ec2 > Warning: setting erasure coding policy on a non-empty directory will not > automatically convert existing files to null > {code} > Notice that the warning message in the 2nd case has the ecPolicy name shown > as null. We should instead give the default EC policy name in this message. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
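The HDFS-13239 fix above amounts to resolving the cluster's default EC policy name before formatting the warning, instead of interpolating a null policy reference. A minimal sketch of that resolution, assuming an illustrative class and a hard-coded default name (the real code looks the default up from cluster configuration):

```java
// Illustrative sketch of the HDFS-13239 fix; not the actual ECAdmin code.
public class EcWarningMessage {
    // Hypothetical stand-in for the cluster-wide default EC policy name.
    static final String DEFAULT_POLICY = "RS-6-3-1024k";

    /**
     * Builds the non-empty-directory warning. When no -policy argument was
     * given (requestedPolicy == null), fall back to the default policy name
     * rather than printing "null".
     */
    public static String nonEmptyDirWarning(String requestedPolicy) {
        String effective = (requestedPolicy != null) ? requestedPolicy : DEFAULT_POLICY;
        return "Warning: setting erasure coding policy on a non-empty directory "
            + "will not automatically convert existing files to " + effective;
    }

    public static void main(String[] args) {
        // With no explicit policy, the warning now names the default policy.
        System.out.println(nonEmptyDirWarning(null));
    }
}
```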
[jira] [Commented] (HDFS-12837) Intermittent failure TestReencryptionWithKMS#testReencryptionKMSDown
[ https://issues.apache.org/jira/browse/HDFS-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396319#comment-16396319 ] genericqa commented on HDFS-12837: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}172m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12837 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914163/HDFS-12837.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ce61f316b8d5 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 39a5fba | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23426/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23426/testReport/ | | Max. process+thread count | 2955 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23426/console | | Powered by |
[jira] [Commented] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use
[ https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396312#comment-16396312 ] Íñigo Goiri commented on HDFS-13241: Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. Thanks [~maobaolong] for the contribution. > RBF: TestRouterSafemode failed if the port is in use > - > > Key: HDFS-13241 > URL: https://issues.apache.org/jira/browse/HDFS-13241 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs, test >Affects Versions: 3.2.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13241.001.patch, HDFS-13241.002.patch > > > TestRouterSafemode failed if the port is in use.
[jira] [Updated] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use
[ https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13241: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 3.0.2 2.9.1 2.10.0 3.1.0 Status: Resolved (was: Patch Available) > RBF: TestRouterSafemode failed if the port is in use > - > > Key: HDFS-13241 > URL: https://issues.apache.org/jira/browse/HDFS-13241 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs, test >Affects Versions: 3.2.0 >Reporter: maobaolong >Assignee: maobaolong >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13241.001.patch, HDFS-13241.002.patch > > > TestRouterSafemode failed if the port is in use.
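The usual way to make a test like TestRouterSafemode robust against a hard-coded port (8888) already being bound is to ask the OS for a free ephemeral port by binding to port 0 and configuring the service with the result. This is the general pattern only, not necessarily the exact HDFS-13241 patch:

```java
import java.net.ServerSocket;

// Sketch: obtain an OS-assigned free port so the test never collides with
// a port another process already holds.
public class FreePortSketch {
    static int freePort() throws Exception {
        // Binding to port 0 lets the OS pick a currently unused port;
        // the try-with-resources releases it before the test binds it.
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort();
        }
    }
}
```

Note there is a small race between releasing the probe socket and the service rebinding the port; in practice it is rare enough for unit tests.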
[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode
[ https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396309#comment-16396309 ] Bharat Viswanadham commented on HDFS-336: - Thank you [~arpitagarwal] for the review. Added a test case covering the functionality. > dfsadmin -report should report number of blocks from datanode > - > > Key: HDFS-336 > URL: https://issues.apache.org/jira/browse/HDFS-336 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lohit Vijayarenu >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie > Attachments: HDFS-336.00.patch, HDFS-336.01.patch > > > _hadoop dfsadmin -report_ seems to miss number of blocks from a datanode. > Number of blocks hosted by a datanode is a good info which should be included > in the report.
[jira] [Updated] (HDFS-336) dfsadmin -report should report number of blocks from datanode
[ https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-336: Attachment: HDFS-336.01.patch > dfsadmin -report should report number of blocks from datanode > - > > Key: HDFS-336 > URL: https://issues.apache.org/jira/browse/HDFS-336 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lohit Vijayarenu >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie > Attachments: HDFS-336.00.patch, HDFS-336.01.patch > > > _hadoop dfsadmin -report_ seems to miss number of blocks from a datanode. > Number of blocks hosted by a datanode is a good info which should be included > in the report.
[jira] [Commented] (HDFS-12505) Extend TestFileStatusWithECPolicy with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396261#comment-16396261 ] genericqa commented on HDFS-12505: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 48s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 1 fixed = 1 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 12s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}146m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}196m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12505 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888028/HDFS-12505.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e1c10028551a 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cceb68f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23422/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation
[ https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12773: --- Attachment: HDFS-12773.006.patch > RBF: Improve State Store FS implementation > -- > > Key: HDFS-12773 > URL: https://issues.apache.org/jira/browse/HDFS-12773 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-12773.000.patch, HDFS-12773.001.patch, > HDFS-12773.002.patch, HDFS-12773.003.patch, HDFS-12773.004.patch, > HDFS-12773.005.patch, HDFS-12773.006.patch > > > HDFS-10630 introduced a filesystem implementation of the State Store for unit > tests. However, this implementation doesn't handle multiple writers > concurrently.
[jira] [Commented] (HDFS-12773) RBF: Improve State Store FS implementation
[ https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396250#comment-16396250 ] Íñigo Goiri commented on HDFS-12773: [~ywskycn], {{TestStateStoreFileBase}} now fails because of the monotonicNow() change. It could be fixed by checking monotonicNow() in {{testTempOld()}}, but that's not correct because multiple Routers need to compare that timestamp. I'm reverting that one back to {{now()}}. > RBF: Improve State Store FS implementation > -- > > Key: HDFS-12773 > URL: https://issues.apache.org/jira/browse/HDFS-12773 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-12773.000.patch, HDFS-12773.001.patch, > HDFS-12773.002.patch, HDFS-12773.003.patch, HDFS-12773.004.patch, > HDFS-12773.005.patch > > > HDFS-10630 introduced a filesystem implementation of the State Store for unit > tests. However, this implementation doesn't handle multiple writers > concurrently.
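The reasoning behind reverting to now() can be illustrated in isolation: a monotonic clock (built on System.nanoTime) has an arbitrary per-process origin, so its values are only meaningful for measuring elapsed time inside one JVM, while wall-clock time is shared across processes and machines. State Store timestamps read by multiple Routers therefore need wall-clock time. A minimal sketch (method names are illustrative stand-ins for Hadoop's Time.now()/Time.monotonicNow()):

```java
// Sketch of the two clock sources and why only one is cross-process safe.
public class ClockSketch {
    // Wall-clock millis since the Unix epoch: comparable across processes
    // and hosts (modulo clock skew), suitable for shared State Store records.
    static long wallClockNow() {
        return System.currentTimeMillis();
    }

    // Monotonic millis with an arbitrary origin: immune to clock adjustments,
    // but two JVMs' values cannot be compared to each other.
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000L;
    }
}
```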
[jira] [Commented] (HDFS-13141) WebHDFS: Add support for getting snapshottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396246#comment-16396246 ] Xiaoyu Yao commented on HDFS-13141: --- Thanks [~ljain] for the update. Patch v3 LGTM, +1. I will commit it shortly. The checkstyle issue on "Avoid nested blocks" is ignored to keep the coding convention. > WebHDFS: Add support for getting snapshottable directory list > - > > Key: HDFS-13141 > URL: https://issues.apache.org/jira/browse/HDFS-13141 > Project: Hadoop HDFS > Issue Type: Task > Components: webhdfs >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDFS-13141.001.patch, HDFS-13141.002.patch, > HDFS-13141.003.patch > > > This Jira aims to implement the get snapshottable directory list operation for > the webHdfs filesystem.
[jira] [Commented] (HDFS-12773) RBF: Improve State Store FS implementation
[ https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396242#comment-16396242 ] genericqa commented on HDFS-12773: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}180m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.federation.store.driver.TestStateStoreFileBase | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12773 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914115/HDFS-12773.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b15582aa71eb 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cceb68f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23421/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23421/testReport/ | | Max. process+thread count | 3060 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router
[ https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396238#comment-16396238 ] Íñigo Goiri commented on HDFS-13198: [^HDFS-13198.001.patch] looks good. If we add {{TestRouterHeartbeatService}}, we may also want to add a simple success case to it. > RBF: RouterHeartbeatService throws out CachedStateStore related exceptions > when starting router > --- > > Key: HDFS-13198 > URL: https://issues.apache.org/jira/browse/HDFS-13198 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > Attachments: HDFS-13198.000.patch, HDFS-13198.001.patch > > > Exception looks like: > {code:java} > 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get > version for class > org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State > Store not initialized, MembershipState records not valid > 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get > version for class > org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State > Store not initialized, MountTable records not valid > Exception in thread "Router Heartbeat Async" java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60) > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191) > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75) > at > org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88) > at > org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95) > at > 
org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43) > at > org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68) > at java.lang.Thread.run(Thread.java:748){code} > This is because, during starting the Router, the CachedStateStore hasn't been > initialized and cannot serve requests. Although the router will still be > started, it would be better to fix the exceptions.
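The NullPointerException in the stack trace above comes from serializing a heartbeat record while the cached State Store is still initializing. One way to avoid it is to skip the write (and retry on the next heartbeat cycle) whenever the store is not yet ready. A hedged sketch; the interface and method names are simplified stand-ins for the real RouterHeartbeatService code, not the HDFS-13198 patch itself:

```java
// Sketch: guard the heartbeat write against an uninitialized state store.
public class HeartbeatGuardSketch {
    interface RecordStore {
        boolean isInitialized();
        void put(String record);
    }

    // Returns true if the heartbeat was written, false if it was skipped
    // because the store is not ready yet (e.g. during Router startup).
    static boolean updateStateStore(RecordStore store, String heartbeat) {
        if (store == null || !store.isInitialized()) {
            // Still starting: skip quietly instead of throwing an NPE;
            // the periodic service will try again on the next cycle.
            return false;
        }
        store.put(heartbeat);
        return true;
    }
}
```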
[jira] [Updated] (HDFS-7304) TestFileCreation#testOverwriteOpenForWrite hangs
[ https://issues.apache.org/jira/browse/HDFS-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-7304: Resolution: Cannot Reproduce Assignee: (was: Akira Ajisaka) Target Version/s: (was: 3.1.0) Status: Resolved (was: Patch Available) > TestFileCreation#testOverwriteOpenForWrite hangs > > > Key: HDFS-7304 > URL: https://issues.apache.org/jira/browse/HDFS-7304 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Priority: Major > Attachments: HDFS-7304.patch, HDFS-7304.patch > > > The test case times out. It has been observed in multiple pre-commit builds.
[jira] [Commented] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router
[ https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396229#comment-16396229 ] Wei Yan commented on HDFS-13198: Posted a patch [^HDFS-13198.001.patch] to address the comments and add a test case. It is not easy to build a test case around this; the current test makes sure that updateStateStore() won't throw an exception when the state store is not available. > RBF: RouterHeartbeatService throws out CachedStateStore related exceptions > when starting router > --- > > Key: HDFS-13198 > URL: https://issues.apache.org/jira/browse/HDFS-13198 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > Attachments: HDFS-13198.000.patch, HDFS-13198.001.patch > > > Exception looks like: > {code:java} > 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get > version for class > org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State > Store not initialized, MembershipState records not valid > 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get > version for class > org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State > Store not initialized, MountTable records not valid > Exception in thread "Router Heartbeat Async" java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60) > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191) > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75) > at > org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88) > at > org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95) > at > 
org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43) > at > org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68) > at java.lang.Thread.run(Thread.java:748){code} > This is because, during starting the Router, the CachedStateStore hasn't been > initialized and cannot serve requests. Although the router will still be > started, it would be better to fix the exceptions.
[jira] [Updated] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router
[ https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Yan updated HDFS-13198: --- Attachment: HDFS-13198.001.patch > RBF: RouterHeartbeatService throws out CachedStateStore related exceptions > when starting router > --- > > Key: HDFS-13198 > URL: https://issues.apache.org/jira/browse/HDFS-13198 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > Attachments: HDFS-13198.000.patch, HDFS-13198.001.patch > > > Exception looks like: > {code:java} > 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get > version for class > org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State > Store not initialized, MembershipState records not valid > 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get > version for class > org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State > Store not initialized, MountTable records not valid > Exception in thread "Router Heartbeat Async" java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60) > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191) > at > org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75) > at > org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88) > at > org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95) > at > org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43) > at > org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68) > at 
java.lang.Thread.run(Thread.java:748){code} > This is because, during starting the Router, the CachedStateStore hasn't been > initialized and cannot serve requests. Although the router will still be > started, it would be better to fix the exceptions.
[jira] [Commented] (HDFS-7304) TestFileCreation#testOverwriteOpenForWrite hangs
[ https://issues.apache.org/jira/browse/HDFS-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396226#comment-16396226 ] Akira Ajisaka commented on HDFS-7304: - bq. Test not failing on trunk, 1000/1000 passed, seems like not flaky without applying the patch. I think we can close this. > TestFileCreation#testOverwriteOpenForWrite hangs > > > Key: HDFS-7304 > URL: https://issues.apache.org/jira/browse/HDFS-7304 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Akira Ajisaka >Priority: Major > Attachments: HDFS-7304.patch, HDFS-7304.patch > > > The test case times out. It has been observed in multiple pre-commit builds.
[jira] [Commented] (HDFS-13268) TestWebHdfsFileContextMainOperations fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396224#comment-16396224 ] Akira Ajisaka commented on HDFS-13268: -- bq. It would make sense to try to fix GenericTestUtils#getRandomizedTempPath() and GenericTestUtils#getTempPath() instead of adding a name for all tests. +1 > TestWebHdfsFileContextMainOperations fails on Windows > - > > Key: HDFS-13268 > URL: https://issues.apache.org/jira/browse/HDFS-13268 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Xiao Liang >Priority: Major > Attachments: HDFS-13268.000.patch > > > HDFS-10256 changed this to rely on the generic path, however Windows > generates a full address which does not work.
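The Windows failure above happens because the temp-path helpers return an absolute platform path such as "C:\hadoop\test\data", which is not a valid generic filesystem path for tests expecting "/test/data"-style paths. One possible shape of a fix inside the helpers, sketched here with an illustrative method name (this is an assumption about the approach, not the HDFS-13268 patch):

```java
// Hypothetical sketch: normalize a platform path into a forward-slash
// path usable in test filesystem URIs on both Windows and Linux.
public class TempPathSketch {
    static String toGenericPath(String p) {
        String s = p.replace('\\', '/');     // Windows separators -> '/'
        if (s.length() >= 2 && s.charAt(1) == ':') {
            s = s.substring(2);              // drop the "C:" drive prefix
        }
        return s;
    }
}
```

Doing this once in GenericTestUtils#getTempPath() would fix all callers, which matches the quoted suggestion above.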
[jira] [Commented] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396221#comment-16396221 ] genericqa commented on HDFS-11142: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}149m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-11142 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12839119/HDFS-11142.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux dbcd93153e62 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cceb68f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23423/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23423/testReport/ | | Max. process+thread count | 4013 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-12156) TestFSImage fails without -Pnative
[ https://issues.apache.org/jira/browse/HDFS-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396219#comment-16396219 ] Akira Ajisaka commented on HDFS-12156: -- 02: rebased. > TestFSImage fails without -Pnative > -- > > Key: HDFS-12156 > URL: https://issues.apache.org/jira/browse/HDFS-12156 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Attachments: HDFS-12156.01.patch, HDFS-12156.02.patch > > > TestFSImage#testCompression tests LZ4 codec and it fails when native library > is not available.
[jira] [Updated] (HDFS-12156) TestFSImage fails without -Pnative
[ https://issues.apache.org/jira/browse/HDFS-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-12156: - Attachment: HDFS-12156.02.patch > TestFSImage fails without -Pnative > -- > > Key: HDFS-12156 > URL: https://issues.apache.org/jira/browse/HDFS-12156 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Attachments: HDFS-12156.01.patch, HDFS-12156.02.patch > > > TestFSImage#testCompression tests LZ4 codec and it fails when native library > is not available.
[jira] [Commented] (HDFS-13268) TestWebHdfsFileContextMainOperations fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396216#comment-16396216 ] Íñigo Goiri commented on HDFS-13268: [~vinayrpet], [~ajisakaa], I'd like to get your feedback as you guys were involved in HDFS-10256. The main issue is that now we are using full windows paths for HDFS addresses. > TestWebHdfsFileContextMainOperations fails on Windows > - > > Key: HDFS-13268 > URL: https://issues.apache.org/jira/browse/HDFS-13268 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Xiao Liang >Priority: Major > Attachments: HDFS-13268.000.patch > > > HDFS-10256 changed this to rely on the generic path, however Windows > generates a full address which does not work.
[jira] [Commented] (HDFS-11398) TestDataNodeVolumeFailure#testUnderReplicationAfterVolFailure still fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396206#comment-16396206 ] genericqa commented on HDFS-11398: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}153m 46s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}201m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-11398 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12851609/HDFS-11398.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5131ce0ccac5 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cceb68f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23418/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23418/testReport/ | | Max. process+thread count | 4521 (vs. ulimit of 1) | | modules | C:
[jira] [Commented] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy
[ https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396200#comment-16396200 ] Bharat Viswanadham commented on HDFS-13239: --- Attached patch v04 to fix checkstyle issues. > Fix non-empty dir warning message when setting default EC policy > > > Key: HDFS-13239 > URL: https://issues.apache.org/jira/browse/HDFS-13239 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-13239.00.patch, HDFS-13239.01.patch, > HDFS-13239.02.patch, HDFS-13239.03.patch, HDFS-13239.04.patch > > > When EC policy is set on a non-empty directory, the following warning message > is given: > {code} > $hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1 > Warning: setting erasure coding policy on a non-empty directory will not > automatically convert existing files to RS-6-3-1024k > {code} > When we do not specify the -policy parameter when setting EC policy on a > directory, it takes the default EC policy. Setting default EC policy in this > way on a non-empty directory gives the following warning message: > {code} > $hdfs ec -setPolicy -path /ec2 > Warning: setting erasure coding policy on a non-empty directory will not > automatically convert existing files to null > {code} > Notice that the warning message in the 2nd case has the ecPolicy name shown > as null. We should instead give the default EC policy name in this message.
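[Editor's note] The "null" in the second warning comes from printing the requested policy name directly when -policy is omitted. A minimal standalone sketch of the kind of messaging fix described, falling back to the default policy's name; the class, method, and default-policy constant here are illustrative assumptions, not the actual ECAdmin code:

```java
// Sketch: substitute the default EC policy name when none was requested.
// DEFAULT_EC_POLICY is an assumed value for illustration only.
final class EcWarning {
    static final String DEFAULT_EC_POLICY = "RS-6-3-1024k"; // assumed default

    static String warn(String requestedPolicy) {
        // Fall back to the default policy name instead of printing "null".
        String name = (requestedPolicy != null)
            ? requestedPolicy
            : DEFAULT_EC_POLICY;
        return "Warning: setting erasure coding policy on a non-empty "
            + "directory will not automatically convert existing files to "
            + name;
    }
}
```

With this shape, `hdfs ec -setPolicy -path /ec2` would report the real default policy name rather than "null".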
[jira] [Updated] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy
[ https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13239: -- Attachment: HDFS-13239.04.patch > Fix non-empty dir warning message when setting default EC policy > > > Key: HDFS-13239 > URL: https://issues.apache.org/jira/browse/HDFS-13239 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-13239.00.patch, HDFS-13239.01.patch, > HDFS-13239.02.patch, HDFS-13239.03.patch, HDFS-13239.04.patch > > > When EC policy is set on a non-empty directory, the following warning message > is given: > {code} > $hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1 > Warning: setting erasure coding policy on a non-empty directory will not > automatically convert existing files to RS-6-3-1024k > {code} > When we do not specify the -policy parameter when setting EC policy on a > directory, it takes the default EC policy. Setting default EC policy in this > way on a non-empty directory gives the following warning message: > {code} > $hdfs ec -setPolicy -path /ec2 > Warning: setting erasure coding policy on a non-empty directory will not > automatically convert existing files to null > {code} > Notice that the warning message in the 2nd case has the ecPolicy name shown > as null. We should instead give the default EC policy name in this message.
[jira] [Commented] (HDFS-12587) Use Parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to apply multiple EC policies
[ https://issues.apache.org/jira/browse/HDFS-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396198#comment-16396198 ] genericqa commented on HDFS-12587: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 6 unchanged - 2 fixed = 6 total (was 8) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}188m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDatanodeRegistration | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12587 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890327/HDFS-12587.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6f6c99924db5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cceb68f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23419/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23419/testReport/ | | Max. process+thread count | 3095 (vs. ulimit of
[jira] [Commented] (HDFS-7527) TestDecommission.testIncludeByRegistrationName fails occassionally in trunk
[ https://issues.apache.org/jira/browse/HDFS-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396197#comment-16396197 ] Ajay Kumar commented on HDFS-7527: -- Patch v3 rebased with current trunk. Removed {{HostFileManager}} change as it is already included and reduced datanode heartbeat time to 250ms. > TestDecommission.testIncludeByRegistrationName fails occassionally in trunk > --- > > Key: HDFS-7527 > URL: https://issues.apache.org/jira/browse/HDFS-7527 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, test >Reporter: Yongjun Zhang >Assignee: Binglin Chang >Priority: Major > Labels: flaky-test > Attachments: HDFS-7527.001.patch, HDFS-7527.002.patch, > HDFS-7527.003.patch > > > https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/ > {quote} > Error Message > test timed out after 36 milliseconds > Stacktrace > java.lang.Exception: test timed out after 36 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957) > 2014-12-15 12:00:19,958 ERROR datanode.DataNode > (BPServiceActor.java:run(836)) - Initialization failed for Block pool > BP-887397778-67.195.81.153-1418644469024 (Datanode Uuid null) service to > localhost/127.0.0.1:40565 Datanode denied communication with namenode because > the host is not in the include-list: DatanodeRegistration(127.0.0.1, > datanodeUuid=55d8cbff-d8a3-4d6d-ab64-317fff0ee279, infoPort=54318, > infoSecurePort=0, ipcPort=43726, > storageInfo=lv=-56;cid=testClusterID;nsid=903754315;c=0) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:915) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4402) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1196) > at > 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92) > at > org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26296) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:966) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2127) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2123) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2121) > 2014-12-15 12:00:29,087 FATAL datanode.DataNode > (BPServiceActor.java:run(841)) - Initialization failed for Block pool > BP-887397778-67.195.81.153-1418644469024 (Datanode Uuid null) service to > localhost/127.0.0.1:40565. Exiting. 
> java.io.IOException: DN shut down before block pool connected > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.retrieveNamespaceInfo(BPServiceActor.java:186) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:216) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:829) > at java.lang.Thread.run(Thread.java:745) > {quote} > Found by tool proposed in HADOOP-11045: > {quote} > [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j > Hadoop-Hdfs-trunk -n 5 | tee bt.log > Recently FAILED builds in url: > https://builds.apache.org//job/Hadoop-Hdfs-trunk > THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, > as listed below: > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport > (2014-12-15 03:30:01) > Failed test: > org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName > Failed test: > org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport > (2014-12-13 10:32:27) > Failed test: > org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport > (2014-12-13 03:30:01) > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline >
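[Editor's note] Timeout flakiness of this kind (a fixed test timeout racing a slow datanode registration) is usually addressed by polling for the expected state at a short interval rather than sleeping and hoping, which is also why the patch reduces the datanode heartbeat to 250ms. A standalone sketch of such a wait helper; Hadoop's own GenericTestUtils.waitFor plays this role, and this simplified version is illustrative only:

```java
import java.util.function.BooleanSupplier;

// Sketch: poll a condition until it holds or a deadline passes,
// instead of a single fixed sleep that races the event under test.
final class WaitUtil {
    static boolean waitFor(BooleanSupplier check,
                           long intervalMs, long timeoutMs) {
        long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
        while (System.nanoTime() < deadline) {
            if (check.getAsBoolean()) {
                return true; // condition met before the deadline
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return check.getAsBoolean(); // one last look after the deadline
    }
}
```

A test would then wait for, e.g., the datanode count to reach the expected value, failing fast with a clear message when the deadline expires instead of hitting the JUnit-level timeout.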
[jira] [Commented] (HDFS-13268) TestWebHdfsFileContextMainOperations fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396196#comment-16396196 ] Íñigo Goiri commented on HDFS-13268: I have the feeling that most unit tests on Windows are failing because of this. It would make sense to try to fix {{GenericTestUtils#getRandomizedTempPath()}} and {{GenericTestUtils#getTempPath()}} instead of adding a name for all tests. > TestWebHdfsFileContextMainOperations fails on Windows > - > > Key: HDFS-13268 > URL: https://issues.apache.org/jira/browse/HDFS-13268 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Xiao Liang >Priority: Major > Attachments: HDFS-13268.000.patch > > > HDFS-10256 changed this to rely on the generic path, however Windows > generates a full address which does not work.
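[Editor's note] The Windows failure reported above ("Invalid file name: /D:/...") stems from the local absolute path carrying a drive letter that HDFS path validation rejects. A standalone sketch of the kind of normalization a fixed getTempPath()/getRandomizedTempPath() could apply; the helper name and the exact rules are assumptions for illustration, not the actual patch:

```java
// Sketch: turn a Windows-style local temp path into a slash-separated,
// drive-free path suitable for use inside an HDFS URI.
final class TestPathUtil {
    static String toHdfsTestPath(String localPath) {
        String p = localPath.replace('\\', '/');
        // Drop a leading slash before a drive letter: "/D:/x" -> "D:/x"
        if (p.matches("^/[A-Za-z]:/.*")) {
            p = p.substring(1);
        }
        // Strip the drive prefix entirely: "D:/x" -> "/x"
        if (p.matches("^[A-Za-z]:/.*")) {
            p = p.substring(2);
        }
        return p;
    }
}
```

On Linux the input is already a plain absolute path and passes through unchanged, so a fix along these lines would be a no-op outside Windows.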
[jira] [Updated] (HDFS-13268) TestWebHdfsFileContextMainOperations fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13268: --- Attachment: HDFS-13268.000.patch > TestWebHdfsFileContextMainOperations fails on Windows > - > > Key: HDFS-13268 > URL: https://issues.apache.org/jira/browse/HDFS-13268 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Xiao Liang >Priority: Major > Attachments: HDFS-13268.000.patch > > > HDFS-10256 changed this to rely on the generic path, however Windows > generates a full address which does not work.
[jira] [Commented] (HDFS-13268) TestWebHdfsFileContextMainOperations fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396192#comment-16396192 ] Íñigo Goiri commented on HDFS-13268: We get the following error: {code} Invalid path name Invalid file name: /D:/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/9wKyAQf3hc/test {code} > TestWebHdfsFileContextMainOperations fails on Windows > - > > Key: HDFS-13268 > URL: https://issues.apache.org/jira/browse/HDFS-13268 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Xiao Liang >Priority: Major > > HDFS-10256 changed this to rely on the generic path, however Windows > generates a full address which does not work.
[jira] [Assigned] (HDFS-13268) TestWebHdfsFileContextMainOperations fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri reassigned HDFS-13268: -- Assignee: Xiao Liang > TestWebHdfsFileContextMainOperations fails on Windows > - > > Key: HDFS-13268 > URL: https://issues.apache.org/jira/browse/HDFS-13268 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Xiao Liang >Priority: Major > > HDFS-10256 changed this to rely on the generic path, however Windows > generates a full address which does not work.
[jira] [Created] (HDFS-13268) TestWebHdfsFileContextMainOperations fails on Windows
Íñigo Goiri created HDFS-13268: -- Summary: TestWebHdfsFileContextMainOperations fails on Windows Key: HDFS-13268 URL: https://issues.apache.org/jira/browse/HDFS-13268 Project: Hadoop HDFS Issue Type: Bug Reporter: Íñigo Goiri HDFS-10256 changed this to rely on the generic path, however Windows generates a full address which does not work.
[jira] [Updated] (HDFS-7527) TestDecommission.testIncludeByRegistrationName fails occassionally in trunk
[ https://issues.apache.org/jira/browse/HDFS-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HDFS-7527:
-----------------------------
    Attachment: HDFS-7527.003.patch

> TestDecommission.testIncludeByRegistrationName fails occasionally in trunk
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-7527
>                 URL: https://issues.apache.org/jira/browse/HDFS-7527
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode, test
>            Reporter: Yongjun Zhang
>            Assignee: Binglin Chang
>            Priority: Major
>              Labels: flaky-test
>         Attachments: HDFS-7527.001.patch, HDFS-7527.002.patch, HDFS-7527.003.patch
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/
> {quote}
> Error Message
> test timed out after 36 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 36 milliseconds
> 	at java.lang.Thread.sleep(Native Method)
> 	at org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957)
> 2014-12-15 12:00:19,958 ERROR datanode.DataNode (BPServiceActor.java:run(836)) - Initialization failed for Block pool BP-887397778-67.195.81.153-1418644469024 (Datanode Uuid null) service to localhost/127.0.0.1:40565 Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(127.0.0.1, datanodeUuid=55d8cbff-d8a3-4d6d-ab64-317fff0ee279, infoPort=54318, infoSecurePort=0, ipcPort=43726, storageInfo=lv=-56;cid=testClusterID;nsid=903754315;c=0)
> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:915)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4402)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1196)
> 	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
> 	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26296)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:966)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2127)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2123)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2121)
> 2014-12-15 12:00:29,087 FATAL datanode.DataNode (BPServiceActor.java:run(841)) - Initialization failed for Block pool BP-887397778-67.195.81.153-1418644469024 (Datanode Uuid null) service to localhost/127.0.0.1:40565. Exiting.
> java.io.IOException: DN shut down before block pool connected
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.retrieveNamespaceInfo(BPServiceActor.java:186)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:216)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:829)
> 	at java.lang.Thread.run(Thread.java:745)
> {quote}
> Found by tool proposed in HADOOP-11045:
> {quote}
> [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j Hadoop-Hdfs-trunk -n 5 | tee bt.log
> Recently FAILED builds in url: https://builds.apache.org//job/Hadoop-Hdfs-trunk
> THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, as listed below:
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport (2014-12-15 03:30:01)
> Failed test: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> Failed test: org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> Failed test: org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport (2014-12-13 10:32:27)
> Failed test: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport (2014-12-13 03:30:01)
> Failed test: org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport (2014-12-11 03:30:01)
> Failed test:
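The tool quoted above (determine-flaky-tests-hadoop.py from HADOOP-11045) works by walking recent Jenkins testReport pages and counting how often each test fails. The sketch below is not that script; it is a minimal, self-contained stand-in for its core step (tallying failures per test across a window of builds), with illustrative sample data rather than real Jenkins output:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of the flaky-test tally: given each build's list of failed
// test names (in the real tool, scraped from Jenkins testReport pages),
// count failures per fully-qualified test across all builds. Build numbers
// and results below are sample data, not taken from a real build.
public class FlakyTally {

    // Count failures per test name across all builds.
    static Map<String, Integer> tally(Map<String, List<String>> failedByBuild) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (List<String> tests : failedByBuild.values()) {
            for (String t : tests) {
                counts.merge(t, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, List<String>> failedByBuild = new LinkedHashMap<>();
        failedByBuild.put("1974", List.of(
            "org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName",
            "org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline"));
        failedByBuild.put("1972", List.of(
            "org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName"));
        failedByBuild.put("1971", List.of(
            "org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline"));

        // Tests failing in several distinct builds are flaky-test candidates.
        for (Map.Entry<String, Integer> e : tally(failedByBuild).entrySet()) {
            System.out.println(e.getValue() + " failures: " + e.getKey());
        }
    }
}
```

A test that fails in multiple independent builds (as TestDecommission.testIncludeByRegistrationName does in the quoted output) is a stronger flakiness signal than a single failure, which may just be a bad patch under test.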
[jira] [Commented] (HDFS-12288) Fix DataNode's xceiver count calculation
[ https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396147#comment-16396147 ] genericqa commented on HDFS-12288: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}155m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}204m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNode | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.datanode.TestDataNodeMXBean | | | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12288 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12881509/HDFS-12288.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5f054aae79e8 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ac627f5 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23415/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23415/testReport/ | | Max. process+thread
[jira] [Commented] (HDFS-12677) Extend TestReconstructStripedFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396146#comment-16396146 ] genericqa commented on HDFS-12677: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 7s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 58s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 50s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 394 unchanged - 3 fixed = 397 total (was 397) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}149m 53s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}211m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12677 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914101/HDFS-12677.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 023d8701f494 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ac627f5 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | javac |
[jira] [Commented] (HDFS-12677) Extend TestReconstructStripedFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396136#comment-16396136 ] Hudson commented on HDFS-12677: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13816 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13816/]) HDFS-12677. Extend TestReconstructStripedFile with a random EC policy. (cdouglas: rev 39a5fbae479ecee3a563e2f4eb937471fbf666f8) * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReconstructStripedFileWithRandomECPolicy.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReconstructStripedFile.java > Extend TestReconstructStripedFile with a random EC policy > - > > Key: HDFS-12677 > URL: https://issues.apache.org/jira/browse/HDFS-12677 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-12677.002.patch, HDFS-12677.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
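The committed change above adds TestReconstructStripedFileWithRandomECPolicy as a subclass of TestReconstructStripedFile, so the same test body re-runs under a randomly chosen erasure-coding policy. The sketch below illustrates that pattern only; the class and method names are hypothetical, not Hadoop's actual test API:

```java
import java.util.List;
import java.util.Random;

// Hypothetical base fixture: the test body consults an overridable method
// for the EC policy, so subclasses can vary the policy without duplicating
// the test logic. Policy names mirror HDFS's built-in policies.
class StripedFileTestBase {
    String getEcPolicy() {
        return "RS-6-3-1024k"; // default policy used by the base test
    }

    // Stand-in for the shared test body; it only depends on getEcPolicy().
    String runReconstruction() {
        return "reconstructed with " + getEcPolicy();
    }
}

// Hypothetical subclass in the spirit of the committed test: pick one
// policy at random so repeated CI runs cover different policies over time.
class RandomEcPolicyTest extends StripedFileTestBase {
    private static final List<String> POLICIES =
            List.of("RS-3-2-1024k", "RS-10-4-1024k", "XOR-2-1-1024k");
    private final String chosen =
            POLICIES.get(new Random().nextInt(POLICIES.size()));

    @Override
    String getEcPolicy() {
        return chosen;
    }
}

public class EcPolicyPattern {
    public static void main(String[] args) {
        System.out.println(new StripedFileTestBase().runReconstruction());
        System.out.println(new RandomEcPolicyTest().runReconstruction());
    }
}
```

The design choice is that randomization lives entirely in the subclass override, so the base test stays deterministic while the derived test broadens coverage across policies from run to run.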
[jira] [Commented] (HDFS-12505) Extend TestFileStatusWithECPolicy with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396130#comment-16396130 ] genericqa commented on HDFS-12505: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 1 fixed = 1 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}153m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestDFSClientRetries | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12505 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888028/HDFS-12505.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a60839d691e1 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cceb68f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23417/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit |