[jira] [Created] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
Takanobu Asanuma created HDFS-11124:
---------------------------------------

             Summary: Report blockIds of internal blocks for EC files in Fsck
                 Key: HDFS-11124
                 URL: https://issues.apache.org/jira/browse/HDFS-11124
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: erasure-coding
    Affects Versions: 3.0.0-alpha1
            Reporter: Takanobu Asanuma
            Assignee: Takanobu Asanuma
             Fix For: 3.0.0-alpha2

At the moment, when we run fsck on an EC file that has corrupt and missing blocks, the fsck result looks like this:
{quote}
/data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 block(s):
/data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 block blk_-9223372036854775792
CORRUPT 1 blocks of total size 393216 B
0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 len=393216 Live_repl=4 [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT), DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT), DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT), DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE), DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE), DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE), DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
{quote}
It would be useful for admins if fsck also reported the blockIds of the internal blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
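As an illustrative sketch of the request (not the actual fsck patch): in HDFS, a striped block group ID reserves its low bits for the block index, so the ID of internal block i can be derived from the group ID. The helper name and the assumption that internal IDs are simply groupId + i are illustrative:

```python
# Illustrative sketch (assumption): HDFS striped block group IDs reserve the
# low 4 bits, so internal block i of a group has blockId == group_id + i.
# For RS-6-3 there are 6 data units + 3 parity units = 9 internal blocks.

def internal_block_ids(group_id, data_units=6, parity_units=3):
    """Return the blockIds of the internal blocks of one EC block group."""
    return [group_id + i for i in range(data_units + parity_units)]

# The block group ID from the fsck output above:
group = -9223372036854775792
ids = internal_block_ids(group)
```

With IDs like these available, fsck could label each datanode entry with the internal blockId it actually stores.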
[jira] [Commented] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator
[ https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653233#comment-15653233 ] Rakesh R commented on HDFS-11068: - Thank you [~umamaheswararao] for the reviews and useful suggestions. Attached new patch addressing the same. > [SPS]: Provide unique trackID to track the block movement sends to coordinator > -- > > Key: HDFS-11068 > URL: https://issues.apache.org/jira/browse/HDFS-11068 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-11068-HDFS-10285-01.patch, > HDFS-11068-HDFS-10285-02.patch, HDFS-11068-HDFS-10285.patch > > > Presently DatanodeManager uses constant value -1 as > [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607], > which is a temporary value. As per discussion with [~umamaheswararao], one > proposal is to use {{BlockCollectionId/InodeFileId}}.
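A minimal sketch of the proposal above (hypothetical names, not the patch itself): using the file's inode ID as the trackID lets the NameNode correlate the coordinator's movement results back to the file being satisfied, which the constant -1 cannot do.

```python
# Hypothetical sketch: track block-movement batches by a unique trackID
# (the proposal is to use the file's inode ID instead of the constant -1).

pending = {}  # trackID -> set of blockIds whose movement is still outstanding

def send_movements(inode_id, block_ids):
    """Register a batch and return the trackID handed to the DN coordinator."""
    pending[inode_id] = set(block_ids)
    return inode_id

def on_movement_done(track_id, block_id):
    """Record one finished movement; True when the whole file is satisfied."""
    pending[track_id].discard(block_id)
    if not pending[track_id]:
        del pending[track_id]
        return True
    return False
```

Because inode IDs are unique per file, results for two concurrently tracked files can never be confused, which is the point of the change.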
[jira] [Updated] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator
[ https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-11068: Attachment: HDFS-11068-HDFS-10285-02.patch > [SPS]: Provide unique trackID to track the block movement sends to coordinator > -- > > Key: HDFS-11068 > URL: https://issues.apache.org/jira/browse/HDFS-11068 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-11068-HDFS-10285-01.patch, > HDFS-11068-HDFS-10285-02.patch, HDFS-11068-HDFS-10285.patch > > > Presently DatanodeManager uses constant value -1 as > [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607], > which is a temporary value. As per discussion with [~umamaheswararao], one > proposal is to use {{BlockCollectionId/InodeFileId}}.
[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653191#comment-15653191 ] Hadoop QA commented on HDFS-10996: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 18 new + 987 unchanged - 17 fixed = 1005 total (was 1004) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}112m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.TestDFSClientRetries | | | hadoop.hdfs.TestEncryptionZonesWithKMS | | | hadoop.hdfs.TestLease | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-10996 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838290/HDFS-10996-v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux f8ccd2b51e6e 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 71adf44 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle |
[jira] [Commented] (HDFS-11090) Leave safemode immediately if all blocks have reported in
[ https://issues.apache.org/jira/browse/HDFS-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653078#comment-15653078 ] Konstantin Shvachko commented on HDFS-11090: Hey [~andrew.wang], [~jingzhao] is right - leaving SafeMode immediately after reaching 100% of minimally (1) replicated blocks will trigger unnecessary massive block replication. So this would be a big concern for big clusters. It looks like you are trying to optimize for the very first empty cluster startup. One possibility is to add to your management script a startup {{-D}} option, which sets safemode extension to 0 for the first NN startup. Then you don't need to update {{hdfs-site.xml}} for subsequent restarts of NN. > Leave safemode immediately if all blocks have reported in > - > > Key: HDFS-11090 > URL: https://issues.apache.org/jira/browse/HDFS-11090 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.3 >Reporter: Andrew Wang >Assignee: Yiqun Lin > Attachments: HDFS-11090.001.patch > > > Startup safemode is triggered by two thresholds: % blocks reported in, and > min # datanodes. It's extended by an interval (default 30s) until these two > thresholds are met. > Safemode extension is helpful when the cluster has data, and the default % > blocks threshold (0.99) is used. It gives DNs a little extra time to report > in and thus avoid unnecessary replication work. > However, we can leave startup safemode early if 100% of blocks have reported > in. > Note that operators sometimes change the % blocks threshold to > 1 to never > automatically leave safemode. We should maintain this behavior. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
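The proposed exit rule from the issue description can be sketched as a toy decision function (simplified assumptions, not the NameNode code): with the document's default threshold of 0.99 the extension buys time for straggling DNs, but at 100% reported there is nothing left to wait for, unless the operator pinned the threshold above 1 to never auto-leave.

```python
# Toy sketch of the proposed startup-safemode exit rule (not the NN code).
# Returns: 0 to leave immediately, extension_ms to wait the extension,
# None to stay in safemode.

def can_leave_safemode(reported, total, threshold_pct=0.99, extension_ms=30000):
    if threshold_pct > 1.0:
        return None                  # operator opted to never auto-leave
    if total > 0 and reported >= total:
        return 0                     # 100% of blocks reported: leave now
    if reported >= total * threshold_pct:
        return extension_ms          # threshold met: wait out the extension
    return None                      # below threshold: keep waiting
```

This also encodes the concern raised in the comment: the 100% shortcut only skips the extension; it does not change the threshold, so the "> 1 means never leave" behavior is preserved.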
[jira] [Updated] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs
[ https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-11029: Resolution: Fixed Fix Version/s: HDFS-10285 Status: Resolved (was: Patch Available) +1, Failures are unrelated to this patch. Thanks [~umamaheswararao]. Committed to HDFS-10285 branch. > [SPS]:Provide retry mechanism for the blocks which were failed while moving > its storage at DNs > -- > > Key: HDFS-11029 > URL: https://issues.apache.org/jira/browse/HDFS-11029 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > Fix For: HDFS-10285 > > Attachments: HDFS-11029-HDFS-10285-00.patch, > HDFS-11029-HDFS-10285-01.patch, HDFS-11029-HDFS-10285-02.patch > > > When the DN coordinator finds that some of the blocks associated with a > trackedID could not be moved to their target storages due to errors, a retry > may succeed in some cases; for example, if the target node has no space, > retrying with a different target can work. > So, based on the movement result flag (SUCCESS/FAILURE) from the DN > coordinator, the NN would retry by scanning the blocks again.
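The retry rule described above can be sketched as a small loop (hypothetical names and a hypothetical retry cap; the real logic lives in the NN/coordinator code): a FAILURE result re-queues the trackID so its blocks are scanned again, possibly against a different target.

```python
# Hypothetical sketch of the SPS retry rule described above (not the patch).

def process_movement_result(track_id, result, retry_queue, attempts,
                            max_retries=3):
    """Handle one movement result flag reported by the DN coordinator."""
    if result == "SUCCESS":
        return "done"
    attempts[track_id] = attempts.get(track_id, 0) + 1
    if attempts[track_id] <= max_retries:
        retry_queue.append(track_id)   # NN rescans blocks, may pick new target
        return "retry"
    return "gave-up"
```

A cap on retries (an assumption here) would prevent a permanently unsatisfiable file from being rescanned forever.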
[jira] [Updated] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-10996: - Attachment: HDFS-10996-v2.patch 1. Fix checkstyle issues 2. Improve code based on Rakesh's suggestions > Ability to specify per-file EC policy at create time > > > Key: HDFS-10996 > URL: https://issues.apache.org/jira/browse/HDFS-10996 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: SammiChen > Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch > > > Based on discussion in HDFS-10971, it would be useful to specify the EC > policy when the file is created. This is useful for situations where app > requirements do not map nicely to the current directory-level policies.
[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653005#comment-15653005 ] SammiChen commented on HDFS-10996: -- Hi Rakesh, thank you very much for reviewing the patch and giving such valuable comments! All items will be taken care of in the next patch. For the last question, my original thought was to improve {{setErasureCodingPolicy}} and leave the {{create}} function unchanged. But after some investigation, I found this approach doesn't work: a {{FSDataOutputStream}} is returned by the {{create}} function and is used immediately to flush data out. So in order to apply the erasure coding policy to the new file from the very beginning, {{erasureCodingPolicy}} must be passed to {{create}} as a parameter. So far {{setErasureCodingPolicy}} only applies to directories, not files. Setting an erasure coding policy on an existing file would require transforming the file content from one redundancy policy to another; I have created a new JIRA, HDFS-11075, to track that. Several checkstyle issues are about "More than 7 parameters" in some existing functions, for example {{ClientProtocol#create}}; I'm inclined to keep the current behaviour. > Ability to specify per-file EC policy at create time > > > Key: HDFS-10996 > URL: https://issues.apache.org/jira/browse/HDFS-10996 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: SammiChen > Attachments: HDFS-10996-v1.patch > > > Based on discussion in HDFS-10971, it would be useful to specify the EC > policy when the file is created. This is useful for situations where app > requirements do not map nicely to the current directory-level policies.
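The reasoning in the comment above - the policy must be fixed before the returned output stream writes its first byte - can be sketched as a policy-resolution helper (hypothetical API; the real change threads the policy through {{ClientProtocol#create}}):

```python
# Hypothetical sketch: resolving the effective EC policy at create time.
# An explicit per-file policy wins; otherwise the nearest ancestor
# directory's policy applies; None means plain replication.

def resolve_ec_policy(path, per_file_policy, dir_policies):
    if per_file_policy is not None:
        return per_file_policy   # must be known before any data is written
    # Walk up the path looking for an inherited directory-level policy.
    parts = path.rstrip("/").split("/")
    for i in range(len(parts), 0, -1):
        prefix = "/".join(parts[:i]) or "/"
        if prefix in dir_policies:
            return dir_policies[prefix]
    return None
```

Resolving this once, inside create, is what makes a per-file policy possible: a later setErasureCodingPolicy call would arrive after data has already been laid out under the old policy.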
[jira] [Commented] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs
[ https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652996#comment-15652996 ] Hadoop QA commented on HDFS-11029: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 1s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
0m 57s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.TestErasureCodeBenchmarkThroughput | | | hadoop.hdfs.TestFileChecksum | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11029 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838283/HDFS-11029-HDFS-10285-02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f1bac994e435 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / 3adef4f | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17497/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17497/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17497/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17497/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [SPS]:Provide retry
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652949#comment-15652949 ] Hadoop QA commented on HDFS-11122: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 45s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 81m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11122 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838281/HDFS-11122.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7d5f9f5192f4 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 71adf44 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17496/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17496/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to a timeout. > The stack >
[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652905#comment-15652905 ] Hadoop QA commented on HDFS-10872: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 38s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 49s{color} | {color:orange} root: The patch generated 2 new + 862 unchanged - 0 fixed = 864 total (was 862) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 7s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}131m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | | | hadoop.hdfs.server.namenode.TestMetaSave | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-10872 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838271/HDFS-10872.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux ec833f598ec1 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 71adf44 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17494/artifact/patchprocess/diff-checkstyle-root.txt | | unit |
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652903#comment-15652903 ] Takanobu Asanuma commented on HDFS-11122: - The failed tests don't seem to be related. +1 (non-binding). Thanks! > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to a timeout. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the > logic that waits for the corrupt blocks.
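The flaky step is the polling loop around the corrupt-block check. As a point of reference, a waitFor-style helper like the one in the stack trace can be re-implemented in a few lines (a simplified sketch under the assumption that GenericTestUtils.waitFor is a check/sleep loop with a deadline, which is what the timeout in the stack trace suggests):

```python
import time

# Simplified sketch of a GenericTestUtils.waitFor-style polling helper.

def wait_for(check, check_every_ms, timeout_ms, clock=time.monotonic):
    """Poll check() until it returns True; raise if timeout_ms elapses."""
    deadline = clock() + timeout_ms / 1000.0
    while True:
        if check():
            return
        if clock() >= deadline:
            raise TimeoutError("condition not met within %d ms" % timeout_ms)
        time.sleep(check_every_ms / 1000.0)
```

With such a helper, "improving the logic" usually means loosening the awaited condition (e.g. "at least one corrupt block reported" instead of an exact count) or raising the deadline, rather than changing the loop itself.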
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652868#comment-15652868 ] Hadoop QA commented on HDFS-11122: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 19s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 97m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tracing.TestTracing | | | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11122 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838274/HDFS-11122.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux edf8d9eacf2f 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 71adf44 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17495/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17495/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17495/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >
[jira] [Comment Edited] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator
[ https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652862#comment-15652862 ] Uma Maheswara Rao G edited comment on HDFS-11068 at 11/10/16 3:13 AM: -- # - {code}
+  public Map getBlocksToMoveStorages() {
+    Map trackIdVsBlocks = new LinkedHashMap<>();
+    synchronized (storageMovementBlocks) {
+      if (storageMovementBlocks.isEmpty()) {
+        return trackIdVsBlocks;
+      }
+      trackIdVsBlocks.putAll(storageMovementBlocks);
+      storageMovementBlocks.keySet().removeAll(trackIdVsBlocks.keySet());
+    }
+    return trackIdVsBlocks;
+  }
{code} Here, what if one trackId/blockcollection contains many blocks to move? So, how about just keeping one trackID per heartbeat? (Later we may need to sub-divide them into small batches within the trackID itself if there are many blocks (ex: a file contains many blocks).) # - {quote} + // TODO: Temporarily using the results from StoragePolicySatisfier + // class. This has to be revisited as part of HDFS-11029. {quote} HDFS-11029 is almost ready. We can incorporate the required changes and remove this TODO. Thanks for adding the TODO. Other than these comments, the patch looks great. Thanks was (Author: umamaheswararao): {code} + public Map getBlocksToMoveStorages() { +Map trackIdVsBlocks = new LinkedHashMap<>(); +synchronized (storageMovementBlocks) { + if (storageMovementBlocks.isEmpty()) { +return trackIdVsBlocks; + } + trackIdVsBlocks.putAll(storageMovementBlocks); + storageMovementBlocks.keySet().removeAll(trackIdVsBlocks.keySet()); +} +return trackIdVsBlocks; } {code} Here what if one trackId/blockcollection contains many blocks to move? So, how about just keep once trackID per heartbeat? (Later we may need to sub decid them into small batches with in tracked itself if blocks are many(ex: a file contains many blocks)) {quote} + // TODO: Temporarily using the results from StoragePolicySatisfier + // class. This has to be revisited as part of HDFS-11029. {quote} HDFS-11029 almost ready.
we can incorporate required changes and remove this TODO. Thanks for adding TODO. Other than this comments, patch looks great. Thanks > [SPS]: Provide unique trackID to track the block movement sends to coordinator > -- > > Key: HDFS-11068 > URL: https://issues.apache.org/jira/browse/HDFS-11068 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-11068-HDFS-10285-01.patch, > HDFS-11068-HDFS-10285.patch > > > Presently DatanodeManager uses constant value -1 as > [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607], > which is a temporary value. As per discussion with [~umamaheswararao], one > proposal is to use {{BlockCollectionId/InodeFileId}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator
[ https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652862#comment-15652862 ] Uma Maheswara Rao G commented on HDFS-11068: {code}
+  public Map getBlocksToMoveStorages() {
+    Map trackIdVsBlocks = new LinkedHashMap<>();
+    synchronized (storageMovementBlocks) {
+      if (storageMovementBlocks.isEmpty()) {
+        return trackIdVsBlocks;
+      }
+      trackIdVsBlocks.putAll(storageMovementBlocks);
+      storageMovementBlocks.keySet().removeAll(trackIdVsBlocks.keySet());
+    }
+    return trackIdVsBlocks;
+  }
{code} Here, what if one trackId/blockcollection contains many blocks to move? So, how about just keeping one trackID per heartbeat? (Later we may need to sub-divide them into small batches within the trackID itself if there are many blocks (ex: a file contains many blocks).) {quote} + // TODO: Temporarily using the results from StoragePolicySatisfier + // class. This has to be revisited as part of HDFS-11029. {quote} HDFS-11029 is almost ready. We can incorporate the required changes and remove this TODO. Thanks for adding the TODO. Other than these comments, the patch looks great. Thanks > [SPS]: Provide unique trackID to track the block movement sends to coordinator > -- > > Key: HDFS-11068 > URL: https://issues.apache.org/jira/browse/HDFS-11068 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-11068-HDFS-10285-01.patch, > HDFS-11068-HDFS-10285.patch > > > Presently DatanodeManager uses constant value -1 as > [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607], > which is a temporary value. As per discussion with [~umamaheswararao], one > proposal is to use {{BlockCollectionId/InodeFileId}}. 
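Uma's suggestion above, handing out at most one trackID per heartbeat instead of draining the whole {{storageMovementBlocks}} map at once, can be sketched roughly as follows. This is an illustrative, self-contained sketch, not the actual HDFS-11068 patch: the class name {{StorageMovementQueue}}, the use of plain strings for block names, and the method names are hypothetical stand-ins.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: drain only ONE trackID per heartbeat, oldest first,
// instead of handing the coordinator the entire pending map.
class StorageMovementQueue {
  // Insertion-ordered, so the oldest trackID is served first.
  private final Map<Long, List<String>> storageMovementBlocks =
      new LinkedHashMap<>();

  synchronized void add(long trackId, List<String> blocks) {
    storageMovementBlocks
        .computeIfAbsent(trackId, k -> new ArrayList<>())
        .addAll(blocks);
  }

  /**
   * Returns the blocks of a single trackID and removes it from the queue,
   * or null when nothing is pending. Called once per heartbeat.
   */
  synchronized Map.Entry<Long, List<String>> pollOneTrackId() {
    Iterator<Map.Entry<Long, List<String>>> it =
        storageMovementBlocks.entrySet().iterator();
    if (!it.hasNext()) {
      return null;
    }
    Map.Entry<Long, List<String>> first = it.next();
    it.remove(); // remove so the same trackID is not dispatched twice
    return first;
  }
}
```

A trackID with very many blocks would still go out in one piece here; that is the "sub-divide into small batches within the trackID" follow-up the comment mentions.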
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs
[ https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652851#comment-15652851 ] Rakesh R commented on HDFS-11029: - Thanks [~umamaheswararao]. +1 on the latest patch. Pending Jenkins. > [SPS]:Provide retry mechanism for the blocks which were failed while moving > its storage at DNs > -- > > Key: HDFS-11029 > URL: https://issues.apache.org/jira/browse/HDFS-11029 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > Attachments: HDFS-11029-HDFS-10285-00.patch, > HDFS-11029-HDFS-10285-01.patch, HDFS-11029-HDFS-10285-02.patch > > > When DN co-ordinator finds some of blocks associated to trackedID could not > be moved its storages, due to some errors.Here retry may work in some cases, > example if target node has no space. Then retry by finding another target can > work. > So, based on the movement result flag(SUCCESS/FAILURE) from DN Co-ordinator, > NN would retry by scanning the blocks again. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs
[ https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-11029: --- Attachment: HDFS-11029-HDFS-10285-02.patch Fixed minor checkstyle and javadoc > [SPS]:Provide retry mechanism for the blocks which were failed while moving > its storage at DNs > -- > > Key: HDFS-11029 > URL: https://issues.apache.org/jira/browse/HDFS-11029 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > Attachments: HDFS-11029-HDFS-10285-00.patch, > HDFS-11029-HDFS-10285-01.patch, HDFS-11029-HDFS-10285-02.patch > > > When DN co-ordinator finds some of blocks associated to trackedID could not > be moved its storages, due to some errors.Here retry may work in some cases, > example if target node has no space. Then retry by finding another target can > work. > So, based on the movement result flag(SUCCESS/FAILURE) from DN Co-ordinator, > NN would retry by scanning the blocks again. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
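The retry flow described in the issue above — the NameNode re-scans the blocks of a trackID when the DN co-ordinator reports a FAILURE movement result — can be illustrated with a minimal sketch. This is not the HDFS-11029 patch; the class, enum, and method names below are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the NN-side retry decision; names are illustrative.
class MovementRetryTracker {
  enum Result { SUCCESS, FAILURE }

  // trackIDs whose blocks still need to be scanned and moved
  private final Deque<Long> pendingTrackIds = new ArrayDeque<>();

  void onMovementResult(long trackId, Result result) {
    if (result == Result.FAILURE) {
      // Retry: re-queue the trackID so its blocks are scanned again,
      // possibly picking a different target (e.g. when the original
      // target node had no space).
      pendingTrackIds.addLast(trackId);
    }
    // On SUCCESS the trackID is simply dropped.
  }

  /** Next trackID the NN should re-scan, or null when none is pending. */
  Long nextTrackIdToScan() {
    return pendingTrackIds.pollFirst();
  }
}
```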
[jira] [Comment Edited] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652738#comment-15652738 ] Yiqun Lin edited comment on HDFS-11122 at 11/10/16 2:18 AM: I think the line {{miniCluster.corruptBlockOnDataNodes(block)}} can assure the data to be corrupted, but it can't trigger the bad-block reporting operation at once. As [~tasanuma0829] suggested, we can read this file to make sure {{ClientProtocol#reportBadBlocks}} is called. Posted the v003 patch to make an improvement. was (Author: linyiqun): I think the line {{miniCluster.corruptBlockOnDataNodes(block)}} can assure the data to be corrupted, but it can't trigger the reported the bad blocks as onec. As [~tasanuma0829] suggested, we can read this file and assure the {{ClientProtocol#reportBadBlocks}} to be called. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch > > > After HDFS-11083, the test {{TestDFSAdmi}} fails sometimes dueto timed out. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timed out is happened in {{GenericTestUtils.waitFor}}. We can make a > improvement in the logic of waiting the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Attachment: HDFS-11122.003.patch I think the line {{miniCluster.corruptBlockOnDataNodes(block)}} can assure the data to be corrupted, but it can't trigger the bad-block reporting at once. As [~tasanuma0829] suggested, we can read this file to make sure {{ClientProtocol#reportBadBlocks}} is called. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch > > > After HDFS-11083, the test {{TestDFSAdmi}} fails sometimes dueto timed out. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timed out is happened in {{GenericTestUtils.waitFor}}. We can make a > improvement in the logic of waiting the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652738#comment-15652738 ] Yiqun Lin edited comment on HDFS-11122 at 11/10/16 2:16 AM: I think the line {{miniCluster.corruptBlockOnDataNodes(block)}} can assure the data to be corrupted, but it can't trigger the bad-block reporting at once. As [~tasanuma0829] suggested, we can read this file to make sure {{ClientProtocol#reportBadBlocks}} is called. was (Author: linyiqun): I think the line {{miniCluster.corruptBlockOnDataNodes(block)} can assure the data to be corrupted, but it can't trigger the reported the bad blocks as onec. As [~tasanuma0829] suggested, we can read this file and assure the {{ClientProtocol#reportBadBlocks}} to be called. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, > HDFS-11122.003.patch > > > After HDFS-11083, the test {{TestDFSAdmi}} fails sometimes dueto timed out. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timed out is happened in {{GenericTestUtils.waitFor}}. We can make a > improvement in the logic of waiting the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652719#comment-15652719 ] Takanobu Asanuma commented on HDFS-11122: - I think that's right. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch > > > After HDFS-11083, the test {{TestDFSAdmi}} fails sometimes dueto timed out. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timed out is happened in {{GenericTestUtils.waitFor}}. We can make a > improvement in the logic of waiting the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11120) TestEncryptionZones should waitActive
[ https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652676#comment-15652676 ] John Zhuge commented on HDFS-11120: --- Thanks [~xiaochen] for the report, review, and commit! > TestEncryptionZones should waitActive > - > > Key: HDFS-11120 > URL: https://issues.apache.org/jira/browse/HDFS-11120 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 2.8.0 >Reporter: Xiao Chen >Assignee: John Zhuge >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11120.001.patch, HDFS-11120.002.patch > > > Happened to notice this. > {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. > There's also a test case that does a unnecessary waitActive: > {code} > cluster.restartNameNode(true); > cluster.waitActive(); > {code} > We should fix this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652671#comment-15652671 ] Yiqun Lin edited comment on HDFS-11122 at 11/10/16 1:49 AM: Thanks [~tasanuma0829] for sharing the thought. {quote} MiniDFSCluster#corruptBlockOnDataNodes does not assure that the datanodes recognize the corrupt block. {quote} This line {{assertEquals("Fail to corrupt all replicas for block " + block, replFactor, blockFilesCorrupted);}} can't assure the corrupt block has been recognized in the test (correct me if I am wrong). In addition, your proposal looks good. was (Author: linyiqun): Thanks [~tasanuma0829] for sharing the thought. {quote} MiniDFSCluster#corruptBlockOnDataNodes does not assure that the datanodes recognize the corrupt block. {quote} This line {{assertEquals("Fail to corrupt all replicas for block " + block, replFactor, blockFilesCorrupted);}} can't assure the corrupt block has been recognized in test, Correct me if I am wrong. Thanks! > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch > > > After HDFS-11083, the test {{TestDFSAdmi}} fails sometimes dueto timed out. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timed out is happened in {{GenericTestUtils.waitFor}}. We can make a > improvement in the logic of waiting the corrupt blocks. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9868) Add ability to read remote cluster configuration for DistCp
[ https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652674#comment-15652674 ] Hadoop QA commented on HDFS-9868: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 49s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 48s{color} | {color:orange} root: The patch generated 3 new + 343 unchanged - 0 fixed = 346 total (was 343) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 4s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 25s{color} | {color:red} hadoop-distcp in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestDistCpWithSourceClusterConf | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-9868 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838264/HDFS-9868.06.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 62f832afd1fc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / de3a5f8 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17493/artifact/patchprocess/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17493/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-distcp.txt | | Test Results |
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652671#comment-15652671 ] Yiqun Lin commented on HDFS-11122: -- Thanks [~tasanuma0829] for sharing the thought. {quote} MiniDFSCluster#corruptBlockOnDataNodes does not assure that the datanodes recognize the corrupt block. {quote} This line {{assertEquals("Fail to corrupt all replicas for block " + block, replFactor, blockFilesCorrupted);}} can't assure the corrupt block has been recognized in the test. Correct me if I am wrong. Thanks! > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch > > > After HDFS-11083, the test {{TestDFSAdmi}} fails sometimes dueto timed out. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timed out is happened in {{GenericTestUtils.waitFor}}. We can make a > improvement in the logic of waiting the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11120) TestEncryptionZones should waitActive
[ https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652659#comment-15652659 ] Hudson commented on HDFS-11120: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10808 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10808/]) HDFS-11120. TestEncryptionZones should waitActive. Contributed by John (xiao: rev 71adf44c3fc5655700cdc904e61366d438c938eb) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java > TestEncryptionZones should waitActive > - > > Key: HDFS-11120 > URL: https://issues.apache.org/jira/browse/HDFS-11120 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 2.8.0 >Reporter: Xiao Chen >Assignee: John Zhuge >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11120.001.patch, HDFS-11120.002.patch > > > Happened to notice this. > {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. > There's also a test case that does a unnecessary waitActive: > {code} > cluster.restartNameNode(true); > cluster.waitActive(); > {code} > We should fix this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652650#comment-15652650 ] Takanobu Asanuma commented on HDFS-11122: - Thanks for working on this, [~linyiqun]. {{MiniDFSCluster#corruptBlockOnDataNodes}} does not assure that the datanodes recognize the corrupt block. I think it would be good to read the file, to make sure {{ClientProtocol#reportBadBlocks}} is called after creating the corrupt block. {code:java}
final int blockFilesCorrupted = miniCluster.corruptBlockOnDataNodes(block);
assertEquals("Fail to corrupt all replicas for block " + block,
    replFactor, blockFilesCorrupted);
try {
  IOUtils.copyBytes(fs.open(file), new IOUtils.NullOutputStream(), conf, true);
} catch (IOException ie) {
  assertTrue(ie instanceof ChecksumException);
}
{code} > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch > > > After HDFS-11083, the test {{TestDFSAdmi}} fails sometimes dueto timed out. > The stack > infos(https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timed out is happened in {{GenericTestUtils.waitFor}}. We can make a > improvement in the logic of waiting the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Attachment: HDFS-11122.002.patch Sorry, the test based on the v001 patch fails sometimes as well. I looked into this; it was doing frequent block reports when the timeout happened. The time interval of {{GenericTestUtils.waitFor}} should be adjusted as well if we add the line {{miniCluster.triggerBlockReports()}}. Posted the v002 patch. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to a timeout. > The stack > infos (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the > logic of waiting for the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
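The interaction described above — a poll interval that must be tuned against an overall timeout — can be sketched as a generic poll-with-timeout loop. This is a minimal, self-contained illustration of {{GenericTestUtils.waitFor}}-style semantics, not the actual Hadoop implementation; the class and parameter names here are illustrative.

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Polls `check` every checkEveryMillis until it returns true or until
    // waitForMillis elapses. If the poll interval is large relative to the
    // overall timeout (the issue discussed above), few checks happen and a
    // slow condition is more likely to be reported as a timeout.
    public static boolean waitFor(BooleanSupplier check,
                                  long checkEveryMillis,
                                  long waitForMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (System.currentTimeMillis() < deadline) {
            if (check.getAsBoolean()) {
                return true; // condition met before the deadline
            }
            Thread.sleep(checkEveryMillis);
        }
        return check.getAsBoolean(); // one final check at the deadline
    }
}
```

Shrinking the poll interval (or triggering the event being waited for, as {{miniCluster.triggerBlockReports()}} does) both reduce the chance of hitting the deadline.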
[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652614#comment-15652614 ] Hadoop QA commented on HDFS-11056: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 27s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 3s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 55s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 21s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 130 unchanged - 2 fixed = 132 total (was 132) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2270 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 58s{color} | {color:red} The patch 139 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 34s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}133m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_111 Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.web.TestHttpsFileSystem | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.server.datanode.TestBlockReplacement | | JDK v1.7.0_111 Failed junit tests |
[jira] [Updated] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations
[ https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-10872: --- Attachment: HDFS-10872.005.patch Now that HADOOP-13782 has provided us with a metrics class giving fast concurrent access to {{MutableRate}} metrics ({{MutableRatesWithAggregation}}), the patch has been refactored to make use of it. This required a minor modification to {{MetricsRegistry}}: it now exports a {{newMutableRatesWithAggregation}} method so that {{FSNamesystem}} can create a new metrics object and pass it into {{FSNamesystemLock}}. This is necessary because the lock hold metrics should be emitted within the {{FSNamesystem}} metrics registry but are generated within {{FSNamesystemLock}}, so we pass the metrics object down into the lock for modification. Attaching v005 patch. > Add MutableRate metrics for FSNamesystemLock operations > --- > > Key: HDFS-10872 > URL: https://issues.apache.org/jira/browse/HDFS-10872 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: FSLockPerf.java, HDFS-10872.000.patch, > HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, > HDFS-10872.004.patch, HDFS-10872.005.patch > > > Add metrics for FSNamesystemLock operations to see, overall, how long each > operation is holding the lock for. Use MutableRate metrics for now. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
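The design choice above — metrics owned and emitted by one class but recorded inside another, with the metrics object handed down at construction time — can be sketched as follows. This is an illustrative stand-in, not the actual Hadoop {{MutableRatesWithAggregation}}/{{FSNamesystemLock}} code; all names below are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Stand-in for a concurrent rates metric: accumulates (count, total ms)
// per operation name, safe to call from many threads.
class RatesRecorder {
    private final Map<String, long[]> rates = new ConcurrentHashMap<>();
    void add(String op, long elapsedMillis) {
        rates.compute(op, (k, v) -> {
            if (v == null) v = new long[2];
            v[0]++;                 // sample count
            v[1] += elapsedMillis;  // total time attributed to this op
            return v;
        });
    }
    long count(String op) { return rates.getOrDefault(op, new long[2])[0]; }
}

// Stand-in for the lock class: it does not own the recorder — the owning
// class creates it (so samples land in the owner's metrics registry) and
// passes it in, mirroring FSNamesystem handing metrics to FSNamesystemLock.
class TimedLock {
    private final ReentrantLock lock = new ReentrantLock();
    private final RatesRecorder recorder;
    private long acquiredAt; // sketch only: assumes single-holder access

    TimedLock(RatesRecorder recorder) { this.recorder = recorder; }

    void lock() {
        lock.lock();
        acquiredAt = System.currentTimeMillis();
    }

    void unlock(String op) {
        long held = System.currentTimeMillis() - acquiredAt;
        lock.unlock();
        recorder.add(op, held); // attribute the hold time to the operation
    }
}
```

The key point is the direction of the dependency: the lock records into a metrics object it was given, rather than registering metrics itself.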
[jira] [Updated] (HDFS-11120) TestEncryptionZones should waitActive
[ https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-11120: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) +1. Committed to trunk, branch-2 and branch-2.8. Thanks [~jzhuge] for the fix! > TestEncryptionZones should waitActive > - > > Key: HDFS-11120 > URL: https://issues.apache.org/jira/browse/HDFS-11120 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 2.8.0 >Reporter: Xiao Chen >Assignee: John Zhuge >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11120.001.patch, HDFS-11120.002.patch > > > Happened to notice this. > {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. > There's also a test case that does an unnecessary waitActive: > {code} > cluster.restartNameNode(true); > cluster.waitActive(); > {code} > We should fix this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.
[ https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652584#comment-15652584 ] Hadoop QA commented on HDFS-11087: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 105 unchanged - 2 fixed = 105 total (was 107) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 16s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_111 Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | | JDK v1.7.0_111 Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Issue | HDFS-11087 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838251/HDFS-11087-branch-2.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 6657d7c280bd 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26
[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652528#comment-15652528 ] Hadoop QA commented on HDFS-9: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch 
passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 13 new + 398 unchanged - 1 fixed = 411 total (was 399) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 76m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-9 | | GITHUB PR | https://github.com/apache/hadoop/pull/155 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 55344928ec61 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 59ee8b7 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17492/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/17492/artifact/patchprocess/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17492/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17492/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs
[jira] [Commented] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs
[ https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652515#comment-15652515 ] Hadoop QA commented on HDFS-11029: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 58s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 13s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 4 unchanged - 0 fixed = 9 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 37s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}113m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation | | | hadoop.hdfs.TestMaintenanceState | | | hadoop.hdfs.TestFileChecksum | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11029 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838242/HDFS-11029-HDFS-10285-01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9de7f92173e4 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / 3adef4f | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17488/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/17488/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt | | unit |
[jira] [Updated] (HDFS-9868) Add ability to read remote cluster configuration for DistCp
[ https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9868: Attachment: HDFS-9868.06.patch I have given this more thought, and am attaching patch 6. I agree with the current implementation of passing in the source cluster config, which means distcp should be executed from the target cluster. This is because I can't find a way to generalize the 'remote' concept for both source and destination. Updated the documents accordingly for clarity. Tested that this works when distcping between two HA clusters, as the new document example shows. Reviews appreciated. > Add ability to read remote cluster configuration for DistCp > --- > > Key: HDFS-9868 > URL: https://issues.apache.org/jira/browse/HDFS-9868 > Project: Hadoop HDFS > Issue Type: New Feature > Components: distcp >Affects Versions: 2.7.1 >Reporter: NING DING >Assignee: NING DING > Attachments: HDFS-9868.05.patch, HDFS-9868.06.patch, > HDFS-9868.1.patch, HDFS-9868.2.patch, HDFS-9868.3.patch, HDFS-9868.4.patch > > > Normally the HDFS cluster is HA enabled. It could take a long time when > copying huge data by distcp. If the source cluster changes its active namenode, > the distcp run will fail. This patch lets DistCp read source cluster > files in HA access mode. A source cluster configuration file needs to be > specified (via the -sourceClusterConf option). 
> The following is an example of the contents of a source cluster
> configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
> The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar
> hdfs://nn2:8020/bar/foo
> {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11123) [SPS] Make storage policy satisfier daemon work on/off dynamically
Uma Maheswara Rao G created HDFS-11123: -- Summary: [SPS] Make storage policy satisfier daemon work on/off dynamically Key: HDFS-11123 URL: https://issues.apache.org/jira/browse/HDFS-11123 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode, namenode Reporter: Uma Maheswara Rao G The idea of this task is to make the SPS daemon thread start and stop dynamically in the Namenode process, without needing to restart the complete Namenode. This will help in the case where an admin wants to switch off the SPS and run the Mover tool externally. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
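The start/stop-without-restart idea can be sketched as a toggleable daemon thread. This is a generic illustration of the pattern only — not the actual SPS code, whose design is what this JIRA is proposing; all names below are hypothetical.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative on/off toggle for a background daemon, in the spirit of
// switching a satisfier thread on and off while the host process keeps running.
class ToggleableDaemon {
    private final AtomicBoolean running = new AtomicBoolean(false);
    private volatile Thread worker;

    synchronized void start() {
        if (!running.compareAndSet(false, true)) {
            return; // already running; start is idempotent
        }
        worker = new Thread(() -> {
            while (running.get()) {
                try {
                    Thread.sleep(50); // placeholder for one unit of daemon work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return; // stop() interrupts us to exit promptly
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    synchronized void stop() throws InterruptedException {
        if (!running.compareAndSet(true, false)) {
            return; // not running; stop is idempotent
        }
        worker.interrupt();
        worker.join(); // wait for the worker to exit before returning
    }

    boolean isRunning() { return running.get(); }
}
```

The {{AtomicBoolean}} plus idempotent start/stop keeps repeated admin commands safe; the real feature would additionally need to drain or persist any in-flight work.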
[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652412#comment-15652412 ] Anu Engineer commented on HDFS-11119: - [~arpitagarwal] Thank you for updating the patch. +1, pending jenkins. > Support for parallel checking of StorageLocations on DataNode startup > - > > Key: HDFS-11119 > URL: https://issues.apache.org/jira/browse/HDFS-11119 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > > The {{AsyncChecker}} support introduced by HDFS-11114 can be used to > parallelize checking {{StorageLocation}}s on Datanode startup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
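The idea of checking all storage locations concurrently at startup, rather than serially, can be sketched with a plain {{ExecutorService}}. This is an illustrative sketch only — the real patch builds on the AsyncChecker support referenced above — and all names below are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrative parallel health check of storage locations: submit one
// check per location, then collect results with a per-check timeout so
// one hung disk cannot stall the whole startup.
class ParallelChecker {
    static List<String> healthyLocations(List<String> locations,
                                         Predicate<String> isHealthy,
                                         long timeoutMillis) throws InterruptedException {
        ExecutorService pool =
            Executors.newFixedThreadPool(Math.max(1, locations.size()));
        try {
            List<Future<Boolean>> results = locations.stream()
                .map(loc -> pool.submit(() -> isHealthy.test(loc)))
                .collect(Collectors.toList());
            List<String> healthy = new ArrayList<>();
            for (int i = 0; i < locations.size(); i++) {
                try {
                    if (results.get(i).get(timeoutMillis, TimeUnit.MILLISECONDS)) {
                        healthy.add(locations.get(i));
                    }
                } catch (ExecutionException | TimeoutException e) {
                    // a failed or hung check marks the location unhealthy
                }
            }
            return healthy;
        } finally {
            pool.shutdownNow();
        }
    }
}
```

Compared to a serial loop, total startup cost approaches the slowest single check rather than the sum of all checks.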
[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.
[ https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652333#comment-15652333 ] Erik Krogen commented on HDFS-11087: Whoops, thank you! > NamenodeFsck should check if the output writer is still writable. > - > > Key: HDFS-11087 > URL: https://issues.apache.org/jira/browse/HDFS-11087 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Erik Krogen > Attachments: HDFS-11087-branch-2.000.patch, > HDFS-11087.branch-2.000.patch > > > {{NamenodeFsck}} keeps running even after the client was interrupted. So if > you start {{fsck /}} on a large namespace and kill the client, the NameNode > will keep traversing the tree for hours although there is nobody to receive > the result. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.
[ https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-11087: --- Attachment: HDFS-11087-branch-2.000.patch > NamenodeFsck should check if the output writer is still writable. > - > > Key: HDFS-11087 > URL: https://issues.apache.org/jira/browse/HDFS-11087 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Erik Krogen > Attachments: HDFS-11087-branch-2.000.patch, > HDFS-11087.branch-2.000.patch > > > {{NamenodeFsck}} keeps running even after the client was interrupted. So if > you start {{fsck /}} on a large namespace and kill the client, the NameNode > will keep traversing the tree for hours although there is nobody to receive > the result. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.
[ https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652327#comment-15652327 ] Mingliang Liu commented on HDFS-11087: -- HDFS-11087.branch-2.000.patch -> HDFS-11087-branch-2.000.patch > NamenodeFsck should check if the output writer is still writable. > - > > Key: HDFS-11087 > URL: https://issues.apache.org/jira/browse/HDFS-11087 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Erik Krogen > Attachments: HDFS-11087.branch-2.000.patch > > > {{NamenodeFsck}} keeps running even after the client was interrupted. So if > you start {{fsck /}} on a large namespace and kill the client, the NameNode > will keep traversing the tree for hours although there is nobody to receive > the result. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-11056: --- Attachment: HDFS-11056.branch-2.7.patch Attach branch-2.7 patch for precommit check. > Concurrent append and read operations lead to checksum error > > > Key: HDFS-11056 > URL: https://issues.apache.org/jira/browse/HDFS-11056 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, httpfs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, > HDFS-11056.branch-2.7.patch, HDFS-11056.branch-2.patch, > HDFS-11056.reproduce.patch > > > If there are two clients, one of them open-append-closes a file continuously, > while the other open-read-closes the same file continuously, the reader > eventually gets a checksum error in the data read. > On my local Mac, it takes a few minutes to produce the error. This happens to > httpfs clients, but there's no reason not to believe this happens to any append > clients. > I have a unit test that demonstrates the checksum error. Will attach later. 
> Relevant log: > {quote} > 2016-10-25 15:34:45,153 INFO audit - allowed=trueugi=weichiu > (auth:SIMPLE) ip=/127.0.0.1 cmd=opensrc=/tmp/bar.txt > dst=nullperm=null proto=rpc > 2016-10-25 15:34:45,155 INFO DataNode - Receiving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: > /127.0.0.1:51130 dest: /127.0.0.1:50131 > 2016-10-25 15:34:45,155 INFO FsDatasetImpl - Appending to FinalizedReplica, > blk_1073741825_1182, FINALIZED > getNumBytes() = 182 > getBytesOnDisk() = 182 > getVisibleLength()= 182 > getVolume() = > /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1 > getBlockURI() = > file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825 > 2016-10-25 15:34:45,167 INFO DataNode - opReadBlock > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > 2016-10-25 15:34:45,167 WARN DataNode - > DatanodeRegistration(127.0.0.1:50131, > datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, > infoSecurePort=0, ipcPort=50134, > storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got > exception while serving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, > newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197) > 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error > processing READ_BLOCK operation src: /127.0.0.1:51121 dst: /127.0.0.1:50131 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success > 2016-10-25 15:34:45,170 WARN DFSClient - Found Checksum error for >
[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.
[ https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652321#comment-15652321 ] Hadoop QA commented on HDFS-11087: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 5m 34s{color} | {color:red} Docker failed to build yetus/hadoop:b59b8b7. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11087 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838247/HDFS-11087.branch-2.000.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17489/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > NamenodeFsck should check if the output writer is still writable. > - > > Key: HDFS-11087 > URL: https://issues.apache.org/jira/browse/HDFS-11087 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Erik Krogen > Attachments: HDFS-11087.branch-2.000.patch > > > {{NamenodeFsck}} keeps running even after the client was interrupted. So if > you start {{fsck /}} on a large namespace and kill the client, the NameNode > will keep traversing the tree for hours although there is nobody to receive > the result. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652303#comment-15652303 ] Arpit Agarwal commented on HDFS-11119: -- Thank you for the review [~anu]. I have updated the PR with a commit to address your comments. A few responses below: # bq. This is milliseconds right ? but if that is hardcoded in code, then why take an "m" if all of these are time units in milliseconds. bq. In hdfs-default.xml also can we specify that the time unit is in milliseconds. We use Configuration#getTimeDuration which supports suffixes for common time units (m implies minutes). However you are right that this is not at all obvious. I updated the descriptions in hdfs-default.xml with pointers to the list of supported suffixes. # bq. Should we Log.error this case too ? Good point. This error is logged as a fatal exception by the DataNode so we shouldn't need a separate message. I made the behavior consistent for both failure cases. Let me know what you think. # bq. Can we take a ThreadFactory so we can set the name of threads in this pool ? Plus, Are these threads daemons ? Good catch. Fixed both points. # bq. nit : I am presuming the change in Datanode startup is coming in a later patch ? Correct. > Support for parallel checking of StorageLocations on DataNode startup > - > > Key: HDFS-11119 > URL: https://issues.apache.org/jira/browse/HDFS-11119 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > > The {{AsyncChecker}} support introduced by HDFS-11114 can be used to > parallelize checking {{StorageLocation}} s on Datanode startup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
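The suffix semantics discussed above come from Hadoop's {{Configuration#getTimeDuration}}, which accepts the suffixes ns/us/ms/s/m/h/d, so a default like "10m" means ten minutes, not ten milliseconds. The sketch below is an illustrative, self-contained re-implementation of that suffix handling (it is NOT Hadoop's actual code) to show why the review's "15 milliseconds" reading was wrong:

```java
import java.util.concurrent.TimeUnit;

public class TimeDurationDemo {
    // Illustrative re-implementation of the suffix parsing performed by
    // Configuration#getTimeDuration -- not the actual Hadoop implementation.
    // Supported suffixes (per the Hadoop docs): ns, us, ms, s, m, h, d.
    static long toMillis(String value) {
        String[] suffixes = {"ns", "us", "ms", "s", "m", "h", "d"};
        TimeUnit[] units = {TimeUnit.NANOSECONDS, TimeUnit.MICROSECONDS,
                TimeUnit.MILLISECONDS, TimeUnit.SECONDS, TimeUnit.MINUTES,
                TimeUnit.HOURS, TimeUnit.DAYS};
        for (int i = 0; i < suffixes.length; i++) {
            if (value.endsWith(suffixes[i])) {
                long n = Long.parseLong(
                        value.substring(0, value.length() - suffixes[i].length()).trim());
                return units[i].toMillis(n);
            }
        }
        // A bare number is interpreted in the caller-supplied default unit;
        // here we assume milliseconds for simplicity.
        return Long.parseLong(value);
    }

    public static void main(String[] args) {
        // The "10m" and "15m" defaults from the patch are minutes:
        System.out.println(toMillis("10m")); // 600000
        System.out.println(toMillis("15m")); // 900000
    }
}
```

Note the check order matters: "ms" must be tested before the bare "s" and "m" suffixes, otherwise "500ms" would be misread as minutes or seconds.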
[jira] [Updated] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.
[ https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-11087: --- Attachment: HDFS-11087.branch-2.000.patch Attaching v000 patch. Since {{checkError}} internally calls {{flush}}, probably best not to do it _too_ frequently. We already flush after processing every 100 files, so I piggybacked off of that, replacing the call to {{flush}} with a call to {{checkError}} which will then throw an exception if the stream has been closed / otherwise failed. > NamenodeFsck should check if the output writer is still writable. > - > > Key: HDFS-11087 > URL: https://issues.apache.org/jira/browse/HDFS-11087 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Erik Krogen > Attachments: HDFS-11087.branch-2.000.patch > > > {{NamenodeFsck}} keeps running even after the client was interrupted. So if > you start {{fsck /}} on a large namespace and kill the client, the NameNode > will keep traversing the tree for hours although there is nobody to receive > the result. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
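The behavior the patch above relies on can be demonstrated with plain {{java.io.PrintWriter}}: it swallows IOExceptions internally and sets a trouble flag, and {{checkError()}} flushes the stream and reports that flag. The failing stream in this minimal sketch is a hypothetical stand-in for a disconnected fsck client, not HDFS code:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;

public class CheckErrorDemo {
    // Stand-in for the fsck output stream: starts failing once "closed",
    // like a client that went away mid-fsck.
    static class ClosableStream extends OutputStream {
        boolean closed = false;
        @Override public void write(int b) throws IOException {
            if (closed) throw new IOException("client went away");
        }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        ClosableStream out = new ClosableStream();
        PrintWriter writer = new PrintWriter(out);

        writer.println("path 1: HEALTHY");
        // checkError() flushes; the write succeeds, so no error yet.
        System.out.println(writer.checkError()); // false

        out.close();                        // simulate the client disconnecting
        writer.println("path 2: HEALTHY");  // IOException swallowed by PrintWriter
        // checkError() flushes again; the flush fails and sets the trouble flag.
        System.out.println(writer.checkError()); // true
    }
}
```

Once {{checkError()}} returns true, the caller can abort the traversal instead of writing into the void, which is exactly what the patch makes {{NamenodeFsck}} do every 100 files.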
[jira] [Updated] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.
[ https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-11087: --- Status: Patch Available (was: Open) > NamenodeFsck should check if the output writer is still writable. > - > > Key: HDFS-11087 > URL: https://issues.apache.org/jira/browse/HDFS-11087 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Erik Krogen > > {{NamenodeFsck}} keeps running even after the client was interrupted. So if > you start {{fsck /}} on a large namespace and kill the client, the NameNode > will keep traversing the tree for hours although there is nobody to receive > the result. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-11056: --- Fix Version/s: 3.0.0-alpha2 2.8.0 > Concurrent append and read operations lead to checksum error > > > Key: HDFS-11056 > URL: https://issues.apache.org/jira/browse/HDFS-11056 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, httpfs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, > HDFS-11056.branch-2.patch, HDFS-11056.reproduce.patch > > > If there are two clients, one of them open-append-closes a file continuously, > while the other open-read-closes the same file continuously, the reader > eventually gets a checksum error in the data read. > On my local Mac, it takes a few minutes to produce the error. This happens to > httpfs clients, but there's no reason not to believe this happens to any append > clients. > I have a unit test that demonstrates the checksum error. Will attach later. 
> Relevant log: > {quote} > 2016-10-25 15:34:45,153 INFO audit - allowed=trueugi=weichiu > (auth:SIMPLE) ip=/127.0.0.1 cmd=opensrc=/tmp/bar.txt > dst=nullperm=null proto=rpc > 2016-10-25 15:34:45,155 INFO DataNode - Receiving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: > /127.0.0.1:51130 dest: /127.0.0.1:50131 > 2016-10-25 15:34:45,155 INFO FsDatasetImpl - Appending to FinalizedReplica, > blk_1073741825_1182, FINALIZED > getNumBytes() = 182 > getBytesOnDisk() = 182 > getVisibleLength()= 182 > getVolume() = > /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1 > getBlockURI() = > file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825 > 2016-10-25 15:34:45,167 INFO DataNode - opReadBlock > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > 2016-10-25 15:34:45,167 WARN DataNode - > DatanodeRegistration(127.0.0.1:50131, > datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, > infoSecurePort=0, ipcPort=50134, > storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got > exception while serving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, > newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197) > 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error > processing READ_BLOCK operation src: /127.0.0.1:51121 dst: /127.0.0.1:50131 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success > 2016-10-25 15:34:45,170 WARN DFSClient - Found Checksum error for > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 from >
[jira] [Updated] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs
[ https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-11029: --- Attachment: HDFS-11029-HDFS-10285-01.patch Thank you so much, [~rakeshr] for the quick reviews. Here is the patch which addresses the comments except #2. For #2, I will make them configurable later along with other parameters in another JIRA. If you notice, I already added a TODO. Thanks > [SPS]:Provide retry mechanism for the blocks which were failed while moving > its storage at DNs > -- > > Key: HDFS-11029 > URL: https://issues.apache.org/jira/browse/HDFS-11029 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > Attachments: HDFS-11029-HDFS-10285-00.patch, > HDFS-11029-HDFS-10285-01.patch > > > When the DN co-ordinator finds that some of the blocks associated with a trackedID could not > be moved to their target storages due to some errors, a retry may work in some cases; > for example, if the target node has no space, then retrying by finding another target can > work. > So, based on the movement result flag (SUCCESS/FAILURE) from the DN co-ordinator, > the NN would retry by scanning the blocks again. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode
[ https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652123#comment-15652123 ] Arpit Agarwal commented on HDFS-11114: -- Thanks Steve. [~kihwal], I can't explain that failure. The test is supposed to be deterministic. I ran a few hundred iterations locally on two different machines and didn't see a repro. Will watch out for more instances of failures. > Support for running async disk checks in DataNode > - > > Key: HDFS-11114 > URL: https://issues.apache.org/jira/browse/HDFS-11114 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Fix For: 3.0.0-alpha2 > > > Introduce support for running async checks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651987#comment-15651987 ] Anu Engineer edited comment on HDFS-11119 at 11/9/16 8:45 PM: -- [~arpitagarwal] Thanks for the patch. Some minor comments The way time is specified is little confusing. Just wanted to make sure that my understanding is correct. {noformat} public static final String DFS_DATANODE_DISK_CHECK_TIMEOUT_DEFAULT ="10m"; public static final String DFS_DATANODE_DISK_CHECK_MIN_GAP_DEFAULT ="15m"; {noformat} This is milliseconds right ? but if that is hardcoded in code, then why take an "m" if all of these are time units in milliseconds. When I first read code I read 15m as 15 minutes and later realized that it is 15 milliseconds. I think we should pick a larger min gap and also if possible specify that the time unit is not flexible. In hdfs-default.xml also can we specify that the time unit is in milliseconds. It is obvious from code, but I am worried some user might try to specify something like H. {{StorageLocationsChecker.java#check}} {noformat} if (goodLocations.size() == 0) { throw new IOException("All directories in " + DFS_DATANODE_DATA_DIR_KEY + " are invalid: " + failedLocations); } {noformat} Should we Log.error this case too ? {{StorageLocationsChecker.java}} Executors.newCachedThreadPool() Can we take a ThreadFactory so we can set the name of threads in this pool ? Plus, Are these threads daemons ? nit : I am presuming the change in Datanode startup is coming in a later patch ? was (Author: anu): [~arpitagarwal] Thanks for the patch. Some minor comments The way time is specified is little confusing. Just wanted to make sure that my understanding is correct. {noformat} public static final String DFS_DATANODE_DISK_CHECK_TIMEOUT_DEFAULT ="10m"; public static final String DFS_DATANODE_DISK_CHECK_MIN_GAP_DEFAULT ="15m"; {noformat} This is milliseconds right ? 
but if that is hardcoded in code, then why take an "m" if all of these are time units in milliseconds. When I first read code I read 15m as 15 minutes and later realized that it is 15 milliseconds. I think we should pick a larger min gap and also if possible specify that the time unit is not flexible. In hdfs-default.xml also can we specify that the time unit is in milliseconds. It is obvious from code, but I am worried some user might try to specify something like H. {{StorageLocationsChecker.java#check}} {noformat} if (goodLocations.size() == 0) { throw new IOException("All directories in " + DFS_DATANODE_DATA_DIR_KEY + " are invalid: " + failedLocations); } {noformat} Should we Log.error this case too ? {{StorageLocationsChecker.java}} Executors.newCachedThreadPool() Can we a ThreadFactory so we can set the name of threads in this pool ? Plus, Are these threads daemons ? nit : I am presuming the change in Datanode startup is coming in a later patch ? > Support for parallel checking of StorageLocations on DataNode startup > - > > Key: HDFS-11119 > URL: https://issues.apache.org/jira/browse/HDFS-11119 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > > The {{AsyncChecker}} support introduced by HDFS-11114 can be used to > parallelize checking {{StorageLocation}} s on Datanode startup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup
[ https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651987#comment-15651987 ] Anu Engineer commented on HDFS-11119: - [~arpitagarwal] Thanks for the patch. Some minor comments The way time is specified is little confusing. Just wanted to make sure that my understanding is correct. {noformat} public static final String DFS_DATANODE_DISK_CHECK_TIMEOUT_DEFAULT ="10m"; public static final String DFS_DATANODE_DISK_CHECK_MIN_GAP_DEFAULT ="15m"; {noformat} This is milliseconds right ? but if that is hardcoded in code, then why take an "m" if all of these are time units in milliseconds. When I first read code I read 15m as 15 minutes and later realized that it is 15 milliseconds. I think we should pick a larger min gap and also if possible specify that the time unit is not flexible. In hdfs-default.xml also can we specify that the time unit is in milliseconds. It is obvious from code, but I am worried some user might try to specify something like H. {{StorageLocationsChecker.java#check}} {noformat} if (goodLocations.size() == 0) { throw new IOException("All directories in " + DFS_DATANODE_DATA_DIR_KEY + " are invalid: " + failedLocations); } {noformat} Should we Log.error this case too ? {{StorageLocationsChecker.java}} Executors.newCachedThreadPool() Can we take a ThreadFactory so we can set the name of threads in this pool ? Plus, Are these threads daemons ? nit : I am presuming the change in Datanode startup is coming in a later patch ? > Support for parallel checking of StorageLocations on DataNode startup > - > > Key: HDFS-11119 > URL: https://issues.apache.org/jira/browse/HDFS-11119 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > > The {{AsyncChecker}} support introduced by HDFS-11114 can be used to > parallelize checking {{StorageLocation}} s on Datanode startup. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
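The ThreadFactory suggestion in the review above can be sketched with plain {{java.util.concurrent}}. The pool and the thread-name prefix here are hypothetical stand-ins for the checker pool, not code from the actual HDFS-11119 patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedDaemonThreads {
    // Builds a ThreadFactory that names each thread with the given prefix and
    // marks it as a daemon, per the two review points: named threads are
    // identifiable in stack dumps, and daemon threads never block JVM shutdown.
    static ThreadFactory factory(String prefix) {
        final AtomicInteger count = new AtomicInteger();
        return r -> {
            Thread t = new Thread(r, prefix + "-" + count.incrementAndGet());
            t.setDaemon(true);
            return t;
        };
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical prefix; the real pool name lives in the patch itself.
        ExecutorService pool =
                Executors.newCachedThreadPool(factory("StorageLocationChecker"));
        Future<String> name = pool.submit(() -> Thread.currentThread().getName());
        System.out.println(name.get()); // StorageLocationChecker-1
        pool.shutdown();
    }
}
```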
[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell
[ https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651924#comment-15651924 ] Anu Engineer commented on HDFS-8307: Is this issue fixed in the current trunk ? if not can you please provide a patch for that branch too ? > Spurious DNS Queries from hdfs shell > > > Key: HDFS-8307 > URL: https://issues.apache.org/jira/browse/HDFS-8307 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Andres Perez >Priority: Trivial > Labels: ha > Fix For: 2.7.4 > > Attachments: HDFS-8307-branch-2.7.patch, HDFS-8307.001.patch > > > With HA configured the hdfs shell (org.apache.hadoop.fs.FsShell) seems to > issue a DNS query for the cluster Name. if fs.defaultFS is set to > hdfs://mycluster, then the shell seems to issue a DNS query for > mycluster.FQDN or mycluster. > since mycluster not a machine name DNS query always fails with > "DNS 85 Standard query response 0x2aeb No such name" > Repro Steps: > # Setup a HA cluster > # Log on to any node > # Run wireshark monitoring port 53 - "sudo tshark 'port 53'" > # Run "sudo -u hdfs hdfs dfs -ls /" > # You should be able to see DNS queries to mycluster.FQDN in wireshark -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell
[ https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651878#comment-15651878 ] Andres Perez commented on HDFS-8307: Looking at the code the {{NameNodeProxies}} class was reworked in 2.8, that is why I think it would not apply to the branch after 2.7 > Spurious DNS Queries from hdfs shell > > > Key: HDFS-8307 > URL: https://issues.apache.org/jira/browse/HDFS-8307 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Andres Perez >Priority: Trivial > Labels: ha > Fix For: 2.7.4 > > Attachments: HDFS-8307-branch-2.7.patch, HDFS-8307.001.patch > > > With HA configured the hdfs shell (org.apache.hadoop.fs.FsShell) seems to > issue a DNS query for the cluster Name. if fs.defaultFS is set to > hdfs://mycluster, then the shell seems to issue a DNS query for > mycluster.FQDN or mycluster. > since mycluster not a machine name DNS query always fails with > "DNS 85 Standard query response 0x2aeb No such name" > Repro Steps: > # Setup a HA cluster > # Log on to any node > # Run wireshark monitoring port 53 - "sudo tshark 'port 53'" > # Run "sudo -u hdfs hdfs dfs -ls /" > # You should be able to see DNS queries to mycluster.FQDN in wireshark -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell
[ https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651871#comment-15651871 ] Hadoop QA commented on HDFS-8307: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HDFS-8307 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-8307 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838214/HDFS-8307.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17487/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Spurious DNS Queries from hdfs shell > > > Key: HDFS-8307 > URL: https://issues.apache.org/jira/browse/HDFS-8307 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Andres Perez >Priority: Trivial > Labels: ha > Fix For: 2.7.4 > > Attachments: HDFS-8307-branch-2.7.patch, HDFS-8307.001.patch > > > With HA configured the hdfs shell (org.apache.hadoop.fs.FsShell) seems to > issue a DNS query for the cluster Name. if fs.defaultFS is set to > hdfs://mycluster, then the shell seems to issue a DNS query for > mycluster.FQDN or mycluster. 
> since mycluster not a machine name DNS query always fails with > "DNS 85 Standard query response 0x2aeb No such name" > Repro Steps: > # Setup a HA cluster > # Log on to any node > # Run wireshark monitoring port 53 - "sudo tshark 'port 53'" > # Run "sudo -u hdfs hdfs dfs -ls /" > # You should be able to see DNS queries to mycluster.FQDN in wireshark -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-8307) Spurious DNS Queries from hdfs shell
[ https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-8307: --- Attachment: HDFS-8307.001.patch [~aaperezl] Thanks for providing the patch. The patch looks good to me overall. I am re-attaching your patch with a name that will get this patch run against the trunk. Once we have a run without failures, I will commit this patch. > Spurious DNS Queries from hdfs shell > > > Key: HDFS-8307 > URL: https://issues.apache.org/jira/browse/HDFS-8307 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Andres Perez >Priority: Trivial > Labels: ha > Fix For: 2.7.4 > > Attachments: HDFS-8307-branch-2.7.patch, HDFS-8307.001.patch > > > With HA configured the hdfs shell (org.apache.hadoop.fs.FsShell) seems to > issue a DNS query for the cluster Name. if fs.defaultFS is set to > hdfs://mycluster, then the shell seems to issue a DNS query for > mycluster.FQDN or mycluster. > since mycluster not a machine name DNS query always fails with > "DNS 85 Standard query response 0x2aeb No such name" > Repro Steps: > # Setup a HA cluster > # Log on to any node > # Run wireshark monitoring port 53 - "sudo tshark 'port 53'" > # Run "sudo -u hdfs hdfs dfs -ls /" > # You should be able to see DNS queries to mycluster.FQDN in wireshark -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651765#comment-15651765 ] Mingliang Liu commented on HDFS-11122: -- Seems still failing? > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} fails sometimes due to a timeout. > The stack > trace (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can make an > improvement in the logic of waiting for the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell
[ https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651764#comment-15651764 ] Andres Perez commented on HDFS-8307: [~anu] [~brahmareddy] Can you please review the patch and provide any feedback? > Spurious DNS Queries from hdfs shell > > > Key: HDFS-8307 > URL: https://issues.apache.org/jira/browse/HDFS-8307 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Andres Perez >Priority: Trivial > Labels: ha > Fix For: 2.7.4 > > Attachments: HDFS-8307-branch-2.7.patch > > > With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to > issue a DNS query for the cluster name. If fs.defaultFS is set to > hdfs://mycluster, then the shell seems to issue a DNS query for > mycluster.FQDN or mycluster. > Since mycluster is not a machine name, the DNS query always fails with > "DNS 85 Standard query response 0x2aeb No such name" > Repro Steps: > # Set up an HA cluster > # Log on to any node > # Run wireshark monitoring port 53 - "sudo tshark 'port 53'" > # Run "sudo -u hdfs hdfs dfs -ls /" > # You should be able to see DNS queries to mycluster.FQDN in wireshark -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11120) TestEncryptionZones should waitActive
[ https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651752#comment-15651752 ] John Zhuge commented on HDFS-11120: --- TestDFSAdmin failure is unrelated, tracked by HDFS-11122. > TestEncryptionZones should waitActive > - > > Key: HDFS-11120 > URL: https://issues.apache.org/jira/browse/HDFS-11120 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 2.8.0 >Reporter: Xiao Chen >Assignee: John Zhuge >Priority: Minor > Attachments: HDFS-11120.001.patch, HDFS-11120.002.patch > > > Happened to notice this. > {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. > There's also a test case that does an unnecessary waitActive: > {code} > cluster.restartNameNode(true); > cluster.waitActive(); > {code} > We should fix this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-11122: - Issue Type: Sub-task (was: Bug) Parent: HDFS-10891 > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to timing out. > The stack > trace (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 30000 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the > logic of waiting for the corrupt blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on
[ https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651719#comment-15651719 ] Rakesh R commented on HDFS-10885: - Thanks [~zhouwei] for the patch. Please take care of the comments below. # Can we add a test case to verify that StoragePolicySatisfier gets stopped because Mover is running? One idea is, you could first start NN with SPS disabled and then start the Mover process (maybe you can simulate Mover's existence by just creating the MOVER_ID_PATH path). Then restart NN with SPS enabled. I think SPS will log an error message and shut itself down. The test can assert by querying {{#isStoragePolicySatisfierActive()}} # Please add a log to capture the exception details. {code} +} catch (IOException e) { + ret = false; +} {code} # Default values are different in hdfs-default.xml and DFSConfigs.java, can we make these the same? {code} + public static final boolean DFS_NAMENODE_SPS_ENABLED_DEFAULT = true; + dfs.namenode.sps.enabled + false {code} # What if Mover sends an RPC call to a Standby namenode? I think {{nnrpc#isStoragePolicySatisfierActive()}} should throw StandbyException, right? Otherwise, Mover thinks that SPS is not running and can continue with its startup by creating the MOVER_ID path. Also, we need to ensure SPS runs only in the Active NN. I'm OK to do this task via a separate jira if you feel so. 
> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier > is on > -- > > Key: HDFS-10885 > URL: https://issues.apache.org/jira/browse/HDFS-10885 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Wei Zhou >Assignee: Wei Zhou > Fix For: HDFS-10285 > > Attachments: HDFS-10800-HDFS-10885-00.patch, > HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, > HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch, > HDFS-10885-HDFS-10285.05.patch, HDFS-10885-HDFS-10285.06.patch, > HDFS-10885-HDFS-10285.07.patch > > > These two can not work at the same time to avoid conflicts and fight with > each other. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
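Review comment #2 above asks for the caught IOException to be logged instead of silently swallowed. A minimal illustration of that change — the class and method names are hypothetical, and the real patch would use Hadoop's own LOG field rather than java.util.logging:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrates review comment #2 above: log the exception rather than
// dropping it. Class/method names are hypothetical; the java.util.logging
// setup just keeps the sketch self-contained.
public class MoverIdCheck {
    private static final Logger LOG = Logger.getLogger(MoverIdCheck.class.getName());

    static boolean checkMoverId(boolean throwIo) {
        boolean ret = true;
        try {
            if (throwIo) {
                // Stand-in for a failed filesystem check of the MOVER_ID path.
                throw new IOException("cannot read MOVER_ID path");
            }
        } catch (IOException e) {
            // Before: the exception was discarded. After: capture the details
            // so an admin can tell why the check reported false.
            LOG.log(Level.WARNING, "Failed to check Mover ID path", e);
            ret = false;
        }
        return ret;
    }

    public static void main(String[] args) {
        System.out.println(checkMoverId(false)); // succeeds, returns true
        System.out.println(checkMoverId(true));  // logs a warning, returns false
    }
}
```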
[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651541#comment-15651541 ] Hudson commented on HDFS-11056: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10802 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10802/]) HDFS-11056. Concurrent append and read operations lead to checksum (weichiu: rev c619e9b43fd00ba0e59a98ae09685ff719bb722b) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java > Concurrent append and read operations lead to checksum error > > > Key: HDFS-11056 > URL: https://issues.apache.org/jira/browse/HDFS-11056 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, httpfs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, > HDFS-11056.branch-2.patch, HDFS-11056.reproduce.patch > > > If there are two clients, one of them opens, appends to, and closes a file continuously, > while the other opens, reads, and closes the same file continuously, the reader > eventually gets a checksum error in the data read. > On my local Mac, it takes a few minutes to produce the error. This happens to > httpfs clients, but there's no reason not to believe this happens to any append > clients. > I have a unit test that demonstrates the checksum error. Will attach later. 
> Relevant log: > {quote} > 2016-10-25 15:34:45,153 INFO audit - allowed=true ugi=weichiu > (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/tmp/bar.txt > dst=null perm=null proto=rpc > 2016-10-25 15:34:45,155 INFO DataNode - Receiving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: > /127.0.0.1:51130 dest: /127.0.0.1:50131 > 2016-10-25 15:34:45,155 INFO FsDatasetImpl - Appending to FinalizedReplica, > blk_1073741825_1182, FINALIZED > getNumBytes() = 182 > getBytesOnDisk() = 182 > getVisibleLength()= 182 > getVolume() = > /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1 > getBlockURI() = > file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825 > 2016-10-25 15:34:45,167 INFO DataNode - opReadBlock > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > 2016-10-25 15:34:45,167 WARN DataNode - > DatanodeRegistration(127.0.0.1:50131, > datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, > infoSecurePort=0, ipcPort=50134, > storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got > exception while serving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, > newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197) > 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error > processing READ_BLOCK operation src: /127.0.0.1:51121 dst: /127.0.0.1:50131 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at >
[jira] [Updated] (HDFS-11056) Concurrent append and read operations lead to checksum error
[ https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-11056: --- Release Note: Load the last partial chunk checksum properly into memory when converting a finalized/temporary replica to an rbw replica. This ensures a concurrent reader reads the correct checksum that matches the data before the update. > Concurrent append and read operations lead to checksum error > > > Key: HDFS-11056 > URL: https://issues.apache.org/jira/browse/HDFS-11056 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, httpfs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, > HDFS-11056.branch-2.patch, HDFS-11056.reproduce.patch > > > If there are two clients, one of them opens, appends to, and closes a file continuously, > while the other opens, reads, and closes the same file continuously, the reader > eventually gets a checksum error in the data read. > On my local Mac, it takes a few minutes to produce the error. This happens to > httpfs clients, but there's no reason not to believe this happens to any append > clients. > I have a unit test that demonstrates the checksum error. Will attach later. 
> Relevant log: > {quote} > 2016-10-25 15:34:45,153 INFO audit - allowed=true ugi=weichiu > (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/tmp/bar.txt > dst=null perm=null proto=rpc > 2016-10-25 15:34:45,155 INFO DataNode - Receiving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: > /127.0.0.1:51130 dest: /127.0.0.1:50131 > 2016-10-25 15:34:45,155 INFO FsDatasetImpl - Appending to FinalizedReplica, > blk_1073741825_1182, FINALIZED > getNumBytes() = 182 > getBytesOnDisk() = 182 > getVisibleLength()= 182 > getVolume() = > /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1 > getBlockURI() = > file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825 > 2016-10-25 15:34:45,167 INFO DataNode - opReadBlock > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > 2016-10-25 15:34:45,167 WARN DataNode - > DatanodeRegistration(127.0.0.1:50131, > datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, > infoSecurePort=0, ipcPort=50134, > storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got > exception while serving > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, > newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197) > 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error > processing READ_BLOCK operation src: /127.0.0.1:51121 dst: /127.0.0.1:50131 > java.io.IOException: No data exists for block > BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773) > at > org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289) > at java.lang.Thread.run(Thread.java:745) > 2016-10-25 15:34:45,168 INFO FSNamesystem - > updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success > 2016-10-25 15:34:45,170
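The release note above talks about the "last partial chunk checksum" of a replica. The partial chunk is simply the tail of the block that does not fill a complete bytesPerChecksum window; a standalone sketch of that arithmetic (not the actual FsDatasetImpl code):

```java
// Standalone arithmetic sketch of the "last partial chunk" mentioned in the
// release note above. Not Hadoop's actual FsDatasetImpl/replica code.
public class PartialChunk {
    // Number of bytes in the trailing partial chunk, or 0 if the block
    // length is an exact multiple of the checksum chunk size.
    static long partialChunkLength(long blockLen, int bytesPerChecksum) {
        return blockLen % bytesPerChecksum;
    }

    // Offset at which the partial chunk starts (== blockLen when there is
    // no partial chunk).
    static long partialChunkOffset(long blockLen, int bytesPerChecksum) {
        return blockLen - partialChunkLength(blockLen, bytesPerChecksum);
    }

    public static void main(String[] args) {
        // The 182-byte replica from the log above with the default
        // 512-byte checksum chunk: the whole replica is one partial chunk.
        System.out.println(partialChunkLength(182, 512)); // 182
        System.out.println(partialChunkOffset(182, 512)); // 0
    }
}
```

Because an append rewrites that partial chunk and its checksum in place, the fix loads the old partial-chunk checksum into memory before the conversion, so a concurrent reader still sees a checksum that matches the pre-append data.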
[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode
[ https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651285#comment-15651285 ] Kihwal Lee commented on HDFS-11114: --- This failed recently. https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/220/testReport/junit/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testContextIsPassed/ {noformat} Stacktrace java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertFalse(Assert.java:64) at org.junit.Assert.assertFalse(Assert.java:74) at org.apache.hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker.testContextIsPassed(TestThrottledAsyncChecker.java:174) Standard Output 2016-11-09 09:25:11,786 [pool-2-thread-1] INFO checker.TestThrottledAsyncChecker (TestThrottledAsyncChecker.java:check(245)) - LatchedCheckable org.apache.hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker$LatchedCheckable@7aeb663a waiting. 
2016-11-09 09:25:11,790 [pool-2-thread-1] INFO checker.TestThrottledAsyncChecker (TestThrottledAsyncChecker.java:onFailure(272)) - onFailure callback invoked for org.apache.hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker$LatchedCheckable@7aeb663a with exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1302) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) at org.apache.hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker$LatchedCheckable.check(TestThrottledAsyncChecker.java:246) at org.apache.hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker$LatchedCheckable.check(TestThrottledAsyncChecker.java:239) at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:130) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 2016-11-09 09:25:11,817 [pool-5-thread-1] INFO checker.TestThrottledAsyncChecker (TestThrottledAsyncChecker.java:check(245)) - LatchedCheckable org.apache.hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker$LatchedCheckable@12aed2de waiting. 
{noformat} > Support for running async disk checks in DataNode > - > > Key: HDFS-11114 > URL: https://issues.apache.org/jira/browse/HDFS-11114 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Fix For: 3.0.0-alpha2 > > > Introduce support for running async checks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
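ThrottledAsyncChecker, whose flaky test appears above, schedules disk checks but suppresses requests for a target that was checked too recently. A toy, synchronous sketch of just the throttling decision — the real class is asynchronous and returns futures; everything here is illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the throttling idea behind ThrottledAsyncChecker: a target is
// re-checked only if its last check is older than minGapMs. Illustrative and
// synchronous; the real Hadoop class schedules checks asynchronously.
public class ThrottledChecker {
    private final long minGapMs;
    private final Map<String, Long> lastCheck = new HashMap<>();

    ThrottledChecker(long minGapMs) {
        this.minGapMs = minGapMs;
    }

    // Returns true if a new check was started for this target,
    // false if the request was throttled.
    boolean maybeCheck(String target, long nowMs) {
        Long last = lastCheck.get(target);
        if (last != null && nowMs - last < minGapMs) {
            return false; // throttled: checked too recently
        }
        lastCheck.put(target, nowMs);
        return true;
    }

    public static void main(String[] args) {
        ThrottledChecker c = new ThrottledChecker(1000);
        System.out.println(c.maybeCheck("disk1", 0));    // true: first check
        System.out.println(c.maybeCheck("disk1", 500));  // false: within the gap
        System.out.println(c.maybeCheck("disk1", 2000)); // true: gap elapsed
    }
}
```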
[jira] [Commented] (HDFS-10206) getBlockLocations might not sort datanodes properly by distance
[ https://issues.apache.org/jira/browse/HDFS-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651089#comment-15651089 ] Nandakumar commented on HDFS-10206: --- {{NetworkTopology#sortByDistance}} uses {{NetworkTopology#getWeight}} to calculate the distance between the reader and a node. Additional logic is added in {{NetworkTopology#getWeight}} to calculate the distance based on the networkLocation of the reader and the node when the following conditions are not satisfied: bq. reader.equals(node) & isOnSameRack(reader, node) This will work for a DFSClient machine that is not a datanode, since the distance calculation depends on networkLocation and not the parent Node. Please review the patch. Thanks, Nanda > getBlockLocations might not sort datanodes properly by distance > --- > > Key: HDFS-10206 > URL: https://issues.apache.org/jira/browse/HDFS-10206 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ming Ma >Assignee: Nandakumar > Attachments: HDFS-10206.000.patch > > > If the DFSClient machine is not a datanode, but it shares its rack with some > datanodes of the HDFS block requested, {{DatanodeManager#sortLocatedBlocks}} > might not put the local-rack datanodes at the beginning of the sorted list. > That is because the function didn't call {{networktopology.add(client);}} to > properly set the node's parent node; something required by > {{networktopology.sortByDistance}} to compute the distance between two nodes in > the same topology tree. > Another issue with {{networktopology.sortByDistance}} is it only > distinguishes local rack from remote rack, but it doesn't support general > distance calculation to tell how remote the rack is. 
> {noformat} > NetworkTopology.java > protected int getWeight(Node reader, Node node) { > // 0 is local, 1 is same rack, 2 is off rack > // Start off by initializing to off rack > int weight = 2; > if (reader != null) { > if (reader.equals(node)) { > weight = 0; > } else if (isOnSameRack(reader, node)) { > weight = 1; > } > } > return weight; > } > {noformat} > HDFS-10203 has suggested moving the sorting from namenode to DFSClient to > address another issue. Regardless of where we do the sorting, we still need to > fix the issues outlined here. > Note that BlockPlacementPolicyDefault shares the same NetworkTopology object > used by DatanodeManager and requires Nodes stored in the topology to be > {{DatanodeDescriptor}} for block placement. So we need to make sure we don't > pollute the NetworkTopology if we plan to fix it on the server side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
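The comment above proposes deriving the weight from networkLocation strings when the reader is not part of the topology tree. A hedged sketch of that idea — comparing rack paths textually with the same 0/1/2 weights as the quoted getWeight — which is not the actual HDFS-10206 patch:

```java
// Sketch of weight-by-networkLocation, the idea discussed above for readers
// that are not datanodes: 0 = local node, 1 = same rack, 2 = off rack.
// Compares location strings directly; NOT the actual HDFS-10206 patch code.
public class LocationWeight {
    static int getWeight(String readerHost, String readerLoc,
                         String nodeHost, String nodeLoc) {
        if (readerHost.equals(nodeHost)) {
            return 0; // same machine
        } else if (readerLoc.equals(nodeLoc)) {
            return 1; // same rack, judged purely from networkLocation strings
        }
        return 2; // off rack
    }

    public static void main(String[] args) {
        // A client on /rack1 that is not itself a datanode still gets
        // its rack-local datanode ranked ahead of a remote one:
        System.out.println(getWeight("client1", "/rack1", "dn1", "/rack1")); // 1
        System.out.println(getWeight("client1", "/rack1", "dn2", "/rack2")); // 2
    }
}
```

String comparison of networkLocation avoids adding the client to the shared NetworkTopology, which matters because (as the description notes) BlockPlacementPolicyDefault expects only DatanodeDescriptor nodes in that tree.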
[jira] [Updated] (HDFS-10206) getBlockLocations might not sort datanodes properly by distance
[ https://issues.apache.org/jira/browse/HDFS-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-10206: -- Attachment: HDFS-10206.000.patch > getBlockLocations might not sort datanodes properly by distance > --- > > Key: HDFS-10206 > URL: https://issues.apache.org/jira/browse/HDFS-10206 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ming Ma >Assignee: Nandakumar > Attachments: HDFS-10206.000.patch > > > If the DFSClient machine is not a datanode, but it shares its rack with some > datanodes of the HDFS block requested, {{DatanodeManager#sortLocatedBlocks}} > might not put the local-rack datanodes at the beginning of the sorted list. > That is because the function didn't call {{networktopology.add(client);}} to > properly set the node's parent node; something required by > {{networktopology.sortByDistance}} to compute the distance between two nodes in > the same topology tree. > Another issue with {{networktopology.sortByDistance}} is it only > distinguishes local rack from remote rack, but it doesn't support general > distance calculation to tell how remote the rack is. > {noformat} > NetworkTopology.java > protected int getWeight(Node reader, Node node) { > // 0 is local, 1 is same rack, 2 is off rack > // Start off by initializing to off rack > int weight = 2; > if (reader != null) { > if (reader.equals(node)) { > weight = 0; > } else if (isOnSameRack(reader, node)) { > weight = 1; > } > } > return weight; > } > {noformat} > HDFS-10203 has suggested moving the sorting from namenode to DFSClient to > address another issue. Regardless of where we do the sorting, we still need to > fix the issues outlined here. > Note that BlockPlacementPolicyDefault shares the same NetworkTopology object > used by DatanodeManager and requires Nodes stored in the topology to be > {{DatanodeDescriptor}} for block placement. So we need to make sure we don't > pollute the NetworkTopology if we plan to fix it on the server side. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650939#comment-15650939 ] Hadoop QA commented on HDFS-11122: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11122 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838153/HDFS-11122.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9d00063aa71c 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 283fa33 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17486/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17486/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17486/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments:
[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650794#comment-15650794 ] Hadoop QA commented on HDFS-11116: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 5s{color} | {color:green} root generated 0 new + 691 unchanged - 3 fixed = 691 total (was 694) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 37s{color} | {color:green} root: The patch generated 0 new + 66 unchanged - 1 fixed = 66 total (was 67) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 15s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}146m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR | | | hadoop.hdfs.TestRollingUpgrade | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11116 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838143/HDFS-11116.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c1db11f28bcb 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 283fa33 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17485/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17485/testReport/ | | modules | C: hadoop-common-project/hadoop-common
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Attachment: HDFS-11122.001.patch > TestDFSAdmin.testReportCommand fails due to timed out > - > > Key: HDFS-11122 > URL: https://issues.apache.org/jira/browse/HDFS-11122 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11122.001.patch > > > After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to a timeout. > The stack trace (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): > {code} > java.lang.Exception: test timed out after 3 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) > at > org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) > {code} > The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the logic that waits for the corrupt blocks.
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Status: Patch Available (was: Open) Attaching an initial patch that makes a simple improvement, mainly focused on two points: * Adjust the timeout value for the test. * Trigger a block report before getting the block's locations. CC [~xiaobingo] and [~liuml07]. Any thoughts? Thanks!
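For context on the first point, the polling loop in {{GenericTestUtils.waitFor}} can be sketched with a minimal, self-contained stand-in (this is not the Hadoop class; the class name and the demo condition are illustrative only). A larger timeout only widens the polling window, which is why the patch also nudges the system under test with a block report before waiting:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Sketch of the waitFor polling pattern: re-check a condition every
// checkEveryMillis until it holds or the overall deadline expires.
public class WaitFor {
    public static void waitFor(BooleanSupplier check,
                               long checkEveryMillis,
                               long waitForMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException(
                    "Timed out waiting after " + waitForMillis + " ms");
            }
            // Matches the Thread.sleep frame in the reported stack trace.
            Thread.sleep(checkEveryMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms; a generous deadline avoids flakiness.
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 5_000);
        System.out.println("condition met"); // prints "condition met"
    }
}
```

The sketch shows why a condition that never becomes true (e.g. a corrupt block that was never reported to the NameNode) burns the whole deadline before failing.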
[jira] [Updated] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
[ https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11122: - Description: After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to a timeout. The stack trace (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/): {code} java.lang.Exception: test timed out after 3 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) at org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) {code} The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the logic that waits for the corrupt blocks. was: After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to a timeout. The stack trace: {code} java.lang.Exception: test timed out after 3 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) at org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) {code} The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the logic that waits for the corrupt blocks.
[jira] [Created] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out
Yiqun Lin created HDFS-11122: Summary: TestDFSAdmin.testReportCommand fails due to timed out Key: HDFS-11122 URL: https://issues.apache.org/jira/browse/HDFS-11122 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 3.0.0-alpha1 Reporter: Yiqun Lin Assignee: Yiqun Lin Priority: Minor After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to a timeout. The stack trace: {code} java.lang.Exception: test timed out after 3 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268) at org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540) {code} The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the logic that waits for the corrupt blocks.
[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode
[ https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650677#comment-15650677 ] Steve Loughran commented on HDFS-11114: --- Yetus doesn't like mixing .patch and PR files; once you add a PR, patches get ignored. In some other patches (HADOOP-13560) we've ended up creating new JIRAs just to ensure that the testing ran against what we wanted. I've now switched back to doing .patch files; a few extra lines of work, but that's all. > Support for running async disk checks in DataNode > - > > Key: HDFS-11114 > URL: https://issues.apache.org/jira/browse/HDFS-11114 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Fix For: 3.0.0-alpha2 > > > Introduce support for running async checks.
[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11116: - Release Note: ViewFileSystem#getServerDefaults(Path) throws NotInMountpointException instead of FileNotFoundException for an unmounted path. (was: The APIs FileSystem#getDefaultBlockSize(), FileSystem#getDefaultReplication() and FileSystem#getServerDefaults() have been deprecated. Replace these deprecated APIs with the recommended APIs.) > Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue > - > > Key: HDFS-11116 > URL: https://issues.apache.org/jira/browse/HDFS-11116 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch, > HDFS-11116.003.patch > > > There were some Jenkins warnings related to TestViewFsDefaultValue in each > Jenkins build. > {code} > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9] > [deprecation] getDefaultBlockSize() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9] > [deprecation] getDefaultReplication() in FileSystem has been deprecated > [WARNING] > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43] > [deprecation] getServerDefaults() in FileSystem has been deprecated > {code} > We should use the method {{getDefaultBlockSize(Path)}} in place of the > deprecated API {{getDefaultBlockSize}}. The same applies to > {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a > not-in-mountpoint path in the filesystem to trigger the > {{NotInMountpointException}} in the test.
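The migration the warnings above call for can be illustrated with a minimal stand-in (these classes are simplified sketches, not Hadoop's {{FileSystem}} or viewfs, and the per-path rule is invented for the demo): the deprecated no-argument accessors cannot say which mounted file system the caller means, while the {{Path}}-qualified overloads let a path-dependent implementation select, or reject, the mount point:

```java
import java.nio.file.Path;

// Simplified stand-in (NOT the Hadoop class) showing the API migration:
// callers move from the deprecated no-argument accessor to the
// Path-qualified overload, so the defaults can depend on where the
// path resolves.
class DefaultsFs {
    /** Deprecated form: one global answer, regardless of mount point. */
    @Deprecated
    long getDefaultBlockSize() {
        return 128L * 1024 * 1024; // 128 MB
    }

    /** Replacement form: the Path selects which defaults to report. */
    long getDefaultBlockSize(Path p) {
        // Hypothetical per-path rule, for illustration only.
        return p.startsWith("/archive") ? 256L * 1024 * 1024
                                        : 128L * 1024 * 1024;
    }
}

public class DeprecationDemo {
    public static void main(String[] args) {
        DefaultsFs fs = new DefaultsFs();
        // Old call site: fs.getDefaultBlockSize()  -> deprecation warning.
        // New call site supplies the path being asked about:
        long bs = fs.getDefaultBlockSize(Path.of("/archive/data.bin"));
        System.out.println(bs); // prints 268435456
    }
}
```

In viewfs the Path matters even more: a path outside every mount point gives the real implementation nothing to resolve, which is why the test can use a not-in-mountpoint path to trigger the exception.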
[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650503#comment-15650503 ] Akira Ajisaka commented on HDFS-11116: -- Thank you, and I'll update this.
[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650499#comment-15650499 ] Yiqun Lin commented on HDFS-11116: -- Thanks [~ajisakaa] for the review! Attaching a new patch to fix the checkstyle warnings. {quote} would you write a release note to document the incompatibility? {quote} Done; feel free to update it if you think it needs any changes.
[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11116: - Attachment: HDFS-11116.003.patch
[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11116: - Release Note: The APIs FileSystem#getDefaultBlockSize(), FileSystem#getDefaultReplication() and FileSystem#getServerDefaults() have been deprecated. Replace these deprecated APIs with the recommended APIs.
[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11116: - Hadoop Flags: Incompatible change Issue Type: Improvement (was: Bug) Hi [~linyiqun], would you write a release note to document the incompatibility?
[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
[ https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650408#comment-15650408 ] Akira Ajisaka commented on HDFS-11116: -- Would you fix the checkstyle warnings? I'm +1 once that is addressed. Adding the incompatible-change flag.
[jira] [Commented] (HDFS-11120) TestEncryptionZones should waitActive
[ https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650362#comment-15650362 ] Hadoop QA commented on HDFS-11120: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 42 unchanged - 2 fixed = 42 total (was 44) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}101m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e809691 | | JIRA Issue | HDFS-11120 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838131/HDFS-11120.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4fd1803938dd 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ed0beba | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17484/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17484/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestEncryptionZones should waitActive > - > > Key: HDFS-11120 > URL: https://issues.apache.org/jira/browse/HDFS-11120 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 2.8.0 >Reporter: Xiao Chen