[jira] [Commented] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
[ https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048716#comment-16048716 ] Hadoop QA commented on HDFS-11646:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 1m 31s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 34s | trunk passed |
| +1 | compile | 17m 0s | trunk passed |
| +1 | checkstyle | 2m 7s | trunk passed |
| +1 | mvnsite | 2m 52s | trunk passed |
| -1 | findbugs | 1m 25s | hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. |
| +1 | javadoc | 2m 13s | trunk passed |
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 4s | the patch passed |
| +1 | compile | 12m 39s | the patch passed |
| +1 | cc | 12m 39s | the patch passed |
| +1 | javac | 12m 39s | the patch passed |
| +1 | checkstyle | 2m 15s | the patch passed |
| +1 | mvnsite | 3m 32s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | findbugs | 5m 21s | the patch passed |
| +1 | javadoc | 2m 11s | the patch passed |
| +1 | unit | 7m 48s | hadoop-common in the patch passed. |
| +1 | unit | 1m 21s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 96m 10s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | 181m 7s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11646 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872917/HDFS-11646-004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc |
| uname | Linux cb72613f8e0b 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6ed54f3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19902/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html |
| unit |
[jira] [Commented] (HDFS-11972) CBlocks use wrong OPT env vars
[ https://issues.apache.org/jira/browse/HDFS-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048700#comment-16048700 ] Anu Engineer commented on HDFS-11972:

Thank you for filing this. I will clean up both CBlockServer and Ozone.

> CBlocks use wrong OPT env vars
>
> Key: HDFS-11972
> URL: https://issues.apache.org/jira/browse/HDFS-11972
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Allen Wittenauer
> Assignee: Mukul Kumar Singh
>
> Current codebase does:
> {code}
> hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
> ...
> hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
> {code}
> This code block breaks consistency with the rest of the shell scripts:
> a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
> b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
> Also, a description of these env vars should be in hadoop-env.sh for documentation purposes.

-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
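The generic convention the issue describes — that a correctly named *_OPTS variable is appended to HADOOP_OPTS automatically, with no per-script append — can be sketched roughly as follows. This is a simplified, hypothetical stand-in, not the real hadoop-functions.sh implementation; the function name and details are illustrative only.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the generic _OPTS convention: for an HDFS subcommand
# such as "cblockserver", the shell framework derives the variable name
# HDFS_CBLOCKSERVER_OPTS and appends its value to HADOOP_OPTS automatically,
# which is why per-script appends like the ones quoted above are redundant.
append_subcommand_opts() {
  local project=$1 subcmd=$2
  local varname
  # derive e.g. HDFS_CBLOCKSERVER_OPTS from "hdfs" + "cblockserver"
  varname=$(echo "${project}_${subcmd}_OPTS" | tr '[:lower:]' '[:upper:]')
  # indirect expansion reads the value of the derived variable, if set
  HADOOP_OPTS="${HADOOP_OPTS} ${!varname:-}"
}

# Usage: setting the correctly named variable is all a user needs to do.
HADOOP_OPTS="-Dbase.opt=1"
HDFS_CBLOCKSERVER_OPTS="-Xmx2g"
append_subcommand_opts hdfs cblockserver
echo "${HADOOP_OPTS}"   # -Dbase.opt=1 -Xmx2g
```

With a mechanism like this in place, the quoted HADOOP_CBLOCK_OPTS / HADOOP_JSCSI_OPTS appends are both misnamed and unnecessary.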
[jira] [Assigned] (HDFS-11972) CBlocks use wrong OPT env vars
[ https://issues.apache.org/jira/browse/HDFS-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh reassigned HDFS-11972: Assignee: Mukul Kumar Singh

> CBlocks use wrong OPT env vars
>
> Key: HDFS-11972
> URL: https://issues.apache.org/jira/browse/HDFS-11972
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Allen Wittenauer
> Assignee: Mukul Kumar Singh
>
> Current codebase does:
> {code}
> hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
> ...
> hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
> {code}
> This code block breaks consistency with the rest of the shell scripts:
> a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
> b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
> Also, a description of these env vars should be in hadoop-env.sh for documentation purposes.
[jira] [Updated] (HDFS-11912) Add a snapshot unit test with randomized file IO operations
[ https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] George Huang updated HDFS-11912: Labels: TestGap (was: )

> Add a snapshot unit test with randomized file IO operations
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
> Issue Type: Test
> Components: hdfs
> Reporter: George Huang
> Assignee: George Huang
> Priority: Minor
> Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch
>
> Adding a snapshot unit test with randomized file IO operations.
[jira] [Commented] (HDFS-11972) CBlocks use wrong OPT env vars
[ https://issues.apache.org/jira/browse/HDFS-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048660#comment-16048660 ] Allen Wittenauer commented on HDFS-11972:

Looks like Ozone has similar problems. *sigh*

> CBlocks use wrong OPT env vars
>
> Key: HDFS-11972
> URL: https://issues.apache.org/jira/browse/HDFS-11972
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Allen Wittenauer
>
> Current codebase does:
> {code}
> hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
> ...
> hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
> {code}
> This code block breaks consistency with the rest of the shell scripts:
> a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
> b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
> Also, a description of these env vars should be in hadoop-env.sh for documentation purposes.
[jira] [Updated] (HDFS-11972) CBlocks use wrong OPT env vars
[ https://issues.apache.org/jira/browse/HDFS-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-11972: Summary: CBlocks use wrong OPT env vars (was: cblockserver uses wrong OPT env var)

> CBlocks use wrong OPT env vars
>
> Key: HDFS-11972
> URL: https://issues.apache.org/jira/browse/HDFS-11972
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Allen Wittenauer
>
> Current codebase does:
> {code}
> hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
> ...
> hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
> {code}
> This code block breaks consistency with the rest of the shell scripts:
> a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
> b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
> Also, a description of these env vars should be in hadoop-env.sh for documentation purposes.
[jira] [Updated] (HDFS-11972) cblockserver uses wrong OPT env var
[ https://issues.apache.org/jira/browse/HDFS-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-11972:

Description:
Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
...
hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
Also, a description of these env vars should be in hadoop-env.sh for documentation purposes.

was:
Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
...
hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
Also, a description of this env var should be in hadoop-env.sh so that it is properly documented.

> cblockserver uses wrong OPT env var
>
> Key: HDFS-11972
> URL: https://issues.apache.org/jira/browse/HDFS-11972
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Allen Wittenauer
>
> Current codebase does:
> {code}
> hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
> ...
> hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
> {code}
> This code block breaks consistency with the rest of the shell scripts:
> a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
> b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
> Also, a description of these env vars should be in hadoop-env.sh for documentation purposes.
[jira] [Updated] (HDFS-11972) cblockserver uses wrong OPT env var
[ https://issues.apache.org/jira/browse/HDFS-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-11972:

Description:
Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
...
hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
Also, a description of this env var should be in hadoop-env.sh so that it is properly documented.

was:
Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS
b) HDFS_CBLOCKSERVER_OPTS is already/automatically appended; there is no need to do it specifically
Also, a description of this env var should be in hadoop-env.sh so that it is properly documented.

> cblockserver uses wrong OPT env var
>
> Key: HDFS-11972
> URL: https://issues.apache.org/jira/browse/HDFS-11972
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Allen Wittenauer
>
> Current codebase does:
> {code}
> hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
> ...
> hadoop_debug "Appending HADOOP_JSCSI_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_JSCSI_OPTS}"
> {code}
> This code block breaks consistency with the rest of the shell scripts:
> a) It should be HDFS_CBLOCKSERVER_OPTS, HDFS_JSCSI_OPTS
> b) _OPTS, unless they are a deprecated form or some other special case, are already/automatically appended; there is no need to do them specifically
> Also, a description of this env var should be in hadoop-env.sh so that it is properly documented.
[jira] [Updated] (HDFS-11972) cblockserver uses wrong OPT env var
[ https://issues.apache.org/jira/browse/HDFS-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-11972:

Description:
Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS
b) HDFS_CBLOCKSERVER_OPTS is already/automatically appended; there is no need to do it specifically

was:
Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS
b) HDFS_CBLOCKSERVER_OPTS is already/automatically appended; there is no need to do it specifically

> cblockserver uses wrong OPT env var
>
> Key: HDFS-11972
> URL: https://issues.apache.org/jira/browse/HDFS-11972
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Allen Wittenauer
>
> Current codebase does:
> {code}
> hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
> {code}
> This code block breaks consistency with the rest of the shell scripts:
> a) It should be HDFS_CBLOCKSERVER_OPTS
> b) HDFS_CBLOCKSERVER_OPTS is already/automatically appended; there is no need to do it specifically
[jira] [Updated] (HDFS-11972) cblockserver uses wrong OPT env var
[ https://issues.apache.org/jira/browse/HDFS-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-11972:

Description:
Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS
b) HDFS_CBLOCKSERVER_OPTS is already/automatically appended; there is no need to do it specifically
Also, a description of this env var should be in hadoop-env.sh so that it is properly documented.

was:
Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS
b) HDFS_CBLOCKSERVER_OPTS is already/automatically appended; there is no need to do it specifically

> cblockserver uses wrong OPT env var
>
> Key: HDFS-11972
> URL: https://issues.apache.org/jira/browse/HDFS-11972
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Allen Wittenauer
>
> Current codebase does:
> {code}
> hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
> HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
> {code}
> This code block breaks consistency with the rest of the shell scripts:
> a) It should be HDFS_CBLOCKSERVER_OPTS
> b) HDFS_CBLOCKSERVER_OPTS is already/automatically appended; there is no need to do it specifically
> Also, a description of this env var should be in hadoop-env.sh so that it is properly documented.
[jira] [Created] (HDFS-11972) cblockserver uses wrong OPT env var
Allen Wittenauer created HDFS-11972:

Summary: cblockserver uses wrong OPT env var
Key: HDFS-11972
URL: https://issues.apache.org/jira/browse/HDFS-11972
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Allen Wittenauer

Current codebase does:
{code}
hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
{code}
This code block breaks consistency with the rest of the shell scripts:
a) It should be HDFS_CBLOCKSERVER_OPTS
b) HDFS_CBLOCKSERVER_OPTS is already/automatically appended; there is no need to do it specifically
[jira] [Updated] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
[ https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuichun updated HDFS-11646: Attachment: HDFS-11646-004.patch

> Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
>
> Key: HDFS-11646
> URL: https://issues.apache.org/jira/browse/HDFS-11646
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: erasure-coding
> Reporter: SammiChen
> Assignee: luhuichun
> Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11646-001.patch, HDFS-11646-002.patch, HDFS-11646-003.patch, HDFS-11646-004.patch
>
> Add a -E option to "ls" to show the erasure coding policy of files and directories, leveraging the "number_of_replicas" column.
[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048591#comment-16048591 ] Manoj Govindassamy commented on HDFS-10999:

The above test failures are not related to the patch; they pass locally for me.

> Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: erasure-coding
> Affects Versions: 3.0.0-alpha1
> Reporter: Wei-Chiu Chuang
> Assignee: Manoj Govindassamy
> Labels: hdfs-ec-3.0-nice-to-have, supportability
> Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, HDFS-10999.03.patch, HDFS-10999.04.patch, HDFS-10999.05.patch
>
> Per HDFS-9857, it seems in the Hadoop 3 world people prefer the more generic term "low redundancy" to the old-fashioned "under replicated". But this term is still being used in messages in several places, such as the web UI, dfsadmin, and fsck. We should probably change them to avoid confusion.
> Filing this jira to discuss it.
[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations
[ https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048583#comment-16048583 ] Manoj Govindassamy commented on HDFS-11912:

[~ghuangups], thanks for working on this. Looks good overall. A few comments and questions below.

1. The operation type, just like the weight, can be part of the constructor, so we can avoid checking what string the enum value's name starts with to identify the operation and get random operations.
{noformat}
private enum Operations {
  FileSystem_CreateFile(2 /*operation weight*/),
  FileSystem_DeleteFile(2),
  FileSystem_RenameFile(2),
  ...
  private static int sumWeights(OperationType type) {
    ..
    if (value.name().startsWith(type.name())) {
{noformat}
2. The random operation initialization on line 153 and line 157 is not needed.
3. In {{createFile}}, line 616, the loop limit says TOTAL_BLOCKS, but it is in fact treated like a max file count.
4. In {{createFile}}, line 607: some top-level directory might have been set as snapshottable earlier. Later, when you enable a snapshot for a nested directory under it, the call can fail with the exception below. Are you ignoring the exception?
{noformat}
if (s.isAncestorDirectory(dir)) {
  throw new SnapshotException(
      "Nested snapshottable directories not allowed: path=" + path
          + ", the subdirectory " + s.getFullPathName()
          + " is already a snapshottable directory.");
}
if (dir.isAncestorDirectory(s)) {
  throw new SnapshotException(
      "Nested snapshottable directories not allowed: path=" + path
          + ", the ancestor " + s.getFullPathName()
          + " is already a snapshottable directory.");
}
{noformat}
5. Line 280: we can assert here; otherwise, there is an issue with creating the random operation.
6. Line 304: same here.
7. Line 318: we can add a unique number suffix to the created directory so as to differentiate where the failure happened.
8. Line 366: the rename dir suffix can also take a unique number for differentiation.
9. Lines 551, 552: isn't the FileStatus[] from listStatus already sorted? Can you please check?

> Add a snapshot unit test with randomized file IO operations
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
> Issue Type: Test
> Components: hdfs
> Reporter: George Huang
> Assignee: George Huang
> Priority: Minor
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch
>
> Adding a snapshot unit test with randomized file IO operations.
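The weight-per-operation idea in comment 1 amounts to a cumulative-weight draw: sum the weights, draw a random number in [0, total), and walk the operations until the running sum exceeds the draw. A minimal sketch of that selection logic (operation names and weights here are illustrative, not the patch's actual values; the real test uses a Java enum):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of weighted random operation selection: each operation
# carries a weight, and a draw in [0, total) walks the cumulative weights,
# so an operation with weight 2 is picked twice as often as one with weight 1.
ops=(CreateFile DeleteFile RenameFile CreateSnapshot)
weights=(2 2 2 1)

total=0
for w in "${weights[@]}"; do total=$((total + w)); done

pick_op() {
  local draw=$((RANDOM % total)) acc=0 i
  for i in "${!ops[@]}"; do
    acc=$((acc + weights[i]))
    if [ "${draw}" -lt "${acc}" ]; then
      echo "${ops[i]}"
      return
    fi
  done
}

pick_op   # prints one of the four operations, biased by weight
```

Carrying both the type and the weight in the operation itself, as the reviewer suggests, removes the need to recover the type by parsing the operation's name.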
[jira] [Commented] (HDFS-11949) Add testcase for ensuring that FsShell can't move file to the target directory that file exists
[ https://issues.apache.org/jira/browse/HDFS-11949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048578#comment-16048578 ] legend commented on HDFS-11949:

Hi [~yzhangal], could you help review? The test case helps ensure that the "put" command is correct. The Hadoop QA error isn't caused by the patch.

> Add testcase for ensuring that FsShell can't move file to the target directory that file exists
>
> Key: HDFS-11949
> URL: https://issues.apache.org/jira/browse/HDFS-11949
> Project: Hadoop HDFS
> Issue Type: Test
> Components: test
> Affects Versions: 3.0.0-alpha4
> Reporter: legend
> Priority: Minor
> Attachments: HDFS-11949.patch
>
> moveFromLocal returns an error when moving a file to a target directory where the file already exists, so we need to add a test case to check it.
[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048569#comment-16048569 ] Hadoop QA commented on HDFS-10999:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 23s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 15 new or modified test files. |
| 0 | mvndep | 0m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 34s | trunk passed |
| +1 | compile | 1m 28s | trunk passed |
| +1 | checkstyle | 0m 54s | trunk passed |
| +1 | mvnsite | 1m 38s | trunk passed |
| +1 | findbugs | 3m 24s | trunk passed |
| +1 | javadoc | 1m 6s | trunk passed |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 34s | the patch passed |
| +1 | compile | 1m 43s | the patch passed |
| +1 | cc | 1m 43s | the patch passed |
| -1 | javac | 1m 43s | hadoop-hdfs-project generated 36 new + 55 unchanged - 0 fixed = 91 total (was 55) |
| -0 | checkstyle | 0m 53s | hadoop-hdfs-project: The patch generated 2 new + 1053 unchanged - 79 fixed = 1055 total (was 1132) |
| +1 | mvnsite | 1m 36s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 32s | the patch passed |
| +1 | javadoc | 1m 6s | the patch passed |
| +1 | unit | 1m 19s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 94m 0s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 131m 28s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
| | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10999 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872904/HDFS-10999.05.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux 775a647eedd2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6ed54f3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/19901/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19901/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| unit |
[jira] [Updated] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files
[ https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11082: --- Priority: Critical (was: Major) > Erasure Coding : Provide replicated EC policy to just replicating the files > --- > > Key: HDFS-11082 > URL: https://issues.apache.org/jira/browse/HDFS-11082 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Rakesh R >Assignee: SammiChen >Priority: Critical > Labels: hdfs-ec-3.0-must-do > > The idea of this jira is to provide a new {{replicated EC policy}} so that we > can override the EC policy on a parent directory and go back to just > replicating the files based on replication factors. > Thanks [~andrew.wang] for the > [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files
[ https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11082: --- Labels: hdfs-ec-3.0-must-do (was: ) We've been talking with some internal teams about erasure coding, and they flagged this as a must-do. Hoping we can revisit this JIRA in time for beta1. > Erasure Coding : Provide replicated EC policy to just replicating the files > --- > > Key: HDFS-11082 > URL: https://issues.apache.org/jira/browse/HDFS-11082 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Rakesh R >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > > The idea of this jira is to provide a new {{replicated EC policy}} so that we > can override the EC policy on a parent directory and go back to just > replicating the files based on replication factors. > Thanks [~andrew.wang] for the > [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11789) Maintain Short-Circuit Read Statistics
[ https://issues.apache.org/jira/browse/HDFS-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048517#comment-16048517 ] Arpit Agarwal edited comment on HDFS-11789 at 6/13/17 11:37 PM: Thanks for updating the patch [~hanishakoneru]. A few comments: # Typos in {{SHORT_CIRCUIT_READ_LATECNY_METRIC_REGISTERD_NAME}}. # The test-only method {{getShortCircuitReadRollingAverages}} should be tagged with {{\@VisibleForTesting}}. Also it should be package-private if possible. # I think we can eliminate the METRICS_ENABLED config key. We can enable the metric when the sampling percentage is > 0. # Let's avoid static initialization of {{BlockReaderLocal#metrics}}. One issue is that it adds the overhead of a MutableRollingAverages object (and its rates roller thread) to all clients, most of which may never enable SCR statistics. I think your original approach of initializing it within a lock on construction was fine; the overhead of that lock compared to the overhead of cloning a file descriptor via system calls should be minimal. Still reviewing the tests. was (Author: arpitagarwal): Thanks for updating the patch [~hanishakoneru]. A few comments: # Typos in {{SHORT_CIRCUIT_READ_LATECNY_METRIC_REGISTERD_NAME}}. # The test-only method {{getShortCircuitReadRollingAverages}} should be tagged with {{\@VisibleForTesting}}. Also it should be package-private if possible. # I think we can eliminate the METRICS_SAMPLING_PERCENTAGE_KEY config key. We can enable the metric when the sampling percentage is > 0. # Let's avoid static initialization of {{BlockReaderLocal#metrics}}. One issue is that it adds the overhead of a MutableRollingAverages object (and its rates roller thread) to all clients, most of which may never enable SCR statistics. I think your original approach of initializing it within a lock on construction was fine; the overhead of that lock compared to the overhead of cloning a file descriptor via system calls should be minimal. 
Still reviewing the tests. > Maintain Short-Circuit Read Statistics > -- > > Key: HDFS-11789 > URL: https://issues.apache.org/jira/browse/HDFS-11789 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11789.001.patch, HDFS-11789.002.patch > > > If a disk or controller hardware is faulty then short-circuit read requests > can stall indefinitely while reading from the file descriptor. Currently > there is no way to detect when short-circuit read requests are slow or > blocked. > This Jira proposes that each BlockReaderLocal maintain read statistics while > it is active by measuring the time taken for a pre-determined fraction of > read requests. These per-reader stats can be aggregated into global stats > when the reader is closed. The aggregate statistics can be exposed via JMX. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11789) Maintain Short-Circuit Read Statistics
[ https://issues.apache.org/jira/browse/HDFS-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048517#comment-16048517 ] Arpit Agarwal edited comment on HDFS-11789 at 6/13/17 11:35 PM: Thanks for updating the patch [~hanishakoneru]. A few comments: # Typos in {{SHORT_CIRCUIT_READ_LATECNY_METRIC_REGISTERD_NAME}}. # The test-only method {{getShortCircuitReadRollingAverages}} should be tagged with {{\@VisibleForTesting}}. Also it should be package-private if possible. # I think we can eliminate the METRICS_SAMPLING_PERCENTAGE_KEY config key. We can enable the metric when the sampling percentage is > 0. # Let's avoid static initialization of {{BlockReaderLocal#metrics}}. One issue is that it adds the overhead of a MutableRollingAverages object (and its rates roller thread) to all clients, most of which may never enable SCR statistics. I think your original approach of initializing it within a lock on construction was fine; the overhead of that lock compared to the overhead of cloning a file descriptor via system calls should be minimal. Still reviewing the tests. was (Author: arpitagarwal): Thanks for updating the patch [~hanishakoneru]. A few comments: # Typos in {{SHORT_CIRCUIT_READ_LATECNY_METRIC_REGISTERD_NAME}}. # The test-only method {{getShortCircuitReadRollingAverages}} should be tagged with {{\@VisibleForTesting}}. # I think we can eliminate the METRICS_SAMPLING_PERCENTAGE_KEY config key. We can enable the metric when the sampling percentage is > 0. # Let's avoid static initialization of {{BlockReaderLocal#metrics}}. One issue is that it adds the overhead of a MutableRollingAverages object (and its rates roller thread) to all clients, most of which may never enable SCR statistics. I think your original approach of initializing it within a lock on construction was fine; the overhead of that lock compared to the overhead of cloning a file descriptor via system calls should be minimal. Still reviewing the tests. 
> Maintain Short-Circuit Read Statistics > -- > > Key: HDFS-11789 > URL: https://issues.apache.org/jira/browse/HDFS-11789 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11789.001.patch, HDFS-11789.002.patch > > > If a disk or controller hardware is faulty then short-circuit read requests > can stall indefinitely while reading from the file descriptor. Currently > there is no way to detect when short-circuit read requests are slow or > blocked. > This Jira proposes that each BlockReaderLocal maintain read statistics while > it is active by measuring the time taken for a pre-determined fraction of > read requests. These per-reader stats can be aggregated into global stats > when the reader is closed. The aggregate statistics can be exposed via JMX. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11789) Maintain Short-Circuit Read Statistics
[ https://issues.apache.org/jira/browse/HDFS-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048517#comment-16048517 ] Arpit Agarwal commented on HDFS-11789: -- Thanks for updating the patch [~hanishakoneru]. A few comments: # Typos in {{SHORT_CIRCUIT_READ_LATECNY_METRIC_REGISTERD_NAME}}. # The test-only method {{getShortCircuitReadRollingAverages}} should be tagged with {{\@VisibleForTesting}}. # I think we can eliminate the METRICS_SAMPLING_PERCENTAGE_KEY config key. We can enable the metric when the sampling percentage is > 0. # Let's avoid static initialization of {{BlockReaderLocal#metrics}}. One issue is that it adds the overhead of a MutableRollingAverages object (and its rates roller thread) to all clients, most of which may never enable SCR statistics. I think your original approach of initializing it within a lock on construction was fine; the overhead of that lock compared to the overhead of cloning a file descriptor via system calls should be minimal. Still reviewing the tests. > Maintain Short-Circuit Read Statistics > -- > > Key: HDFS-11789 > URL: https://issues.apache.org/jira/browse/HDFS-11789 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11789.001.patch, HDFS-11789.002.patch > > > If a disk or controller hardware is faulty then short-circuit read requests > can stall indefinitely while reading from the file descriptor. Currently > there is no way to detect when short-circuit read requests are slow or > blocked. > This Jira proposes that each BlockReaderLocal maintain read statistics while > it is active by measuring the time taken for a pre-determined fraction of > read requests. These per-reader stats can be aggregated into global stats > when the reader is closed. The aggregate statistics can be exposed via JMX. 
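Point 4 of the review (lazy, lock-guarded initialization gated on the sampling percentage, instead of a static initializer plus a separate enable key) can be sketched as follows. This is a hypothetical illustration: `ScrMetricsHolder`, `RollingAverages`, and `getOrCreate` are stand-ins, not the actual `BlockReaderLocal` code or Hadoop's `MutableRollingAverages` API.

```java
/**
 * Hypothetical sketch of the review suggestion: create the shared metrics
 * object lazily, under a lock, and only when the configured sampling
 * percentage is greater than zero. Clients that never enable SCR statistics
 * then pay no rolling-averages (or roller-thread) overhead, and no separate
 * "metrics enabled" config key is needed.
 */
class ScrMetricsHolder {

  /** Stand-in for Hadoop's MutableRollingAverages. */
  static class RollingAverages {
    void add(String name, long latencyNanos) {
      // A real implementation would record the sample into rolling windows.
    }
  }

  private static final Object LOCK = new Object();
  private static volatile RollingAverages metrics; // null until first needed

  /** Returns the shared metrics instance, or null when sampling is disabled. */
  static RollingAverages getOrCreate(int samplingPercentage) {
    if (samplingPercentage <= 0) {
      return null; // sampling percentage == 0 doubles as "metrics disabled"
    }
    RollingAverages m = metrics;
    if (m == null) {
      // Lock taken on construction only; cheap next to the system calls
      // needed to clone a file descriptor for a short-circuit read.
      synchronized (LOCK) {
        if (metrics == null) {
          metrics = new RollingAverages();
        }
        m = metrics;
      }
    }
    return m;
  }
}
```

The double-checked locking here relies on the `volatile` field; callers simply treat a `null` return as "statistics disabled".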
[jira] [Updated] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-10999: -- Attachment: HDFS-10999.05.patch Thanks for the review [~eddyxu]. Attached v05 patch to address the following. Please take a look. 1. Fixed {{BlockStats#toString}} and {{ECBlockGroupsStats#toString}} to use StringBuilder 2. Fixed checkstyle issues > Introduce separate stats for Replicated and Erasure Coded Blocks apart from > the current Aggregated stats > > > Key: HDFS-10999 > URL: https://issues.apache.org/jira/browse/HDFS-10999 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have, supportability > Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, > HDFS-10999.03.patch, HDFS-10999.04.patch, HDFS-10999.05.patch > > > Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic > term "low redundancy" to the old-fashioned "under replicated". But this term > is still being used in messages in several places, such as web ui, dfsadmin > and fsck. We should probably change them to avoid confusion. > File this jira to discuss it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
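The `StringBuilder`-based `toString()` the review asked for can be illustrated with a minimal sketch; the class and field names below are illustrative stand-ins, not the actual `BlockStats` fields.

```java
/**
 * Minimal sketch of a StringBuilder-based toString(), as suggested in the
 * review, instead of repeated String concatenation. Field names are
 * hypothetical, not the real BlockStats/ECBlockGroupsStats members.
 */
class BlockStatsExample {
  private final long lowRedundancyBlocks;
  private final long corruptBlocks;
  private final long missingBlocks;

  BlockStatsExample(long lowRedundancy, long corrupt, long missing) {
    this.lowRedundancyBlocks = lowRedundancy;
    this.corruptBlocks = corrupt;
    this.missingBlocks = missing;
  }

  @Override
  public String toString() {
    // One StringBuilder, chained appends: avoids the intermediate String
    // objects that repeated '+' concatenation in a long expression creates.
    StringBuilder sb = new StringBuilder("BlockStats=[");
    sb.append("LowRedundancyBlocks=").append(lowRedundancyBlocks)
      .append(", CorruptBlocks=").append(corruptBlocks)
      .append(", MissingBlocks=").append(missingBlocks)
      .append(']');
    return sb.toString();
  }
}
```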
[jira] [Commented] (HDFS-11789) Maintain Short-Circuit Read Statistics
[ https://issues.apache.org/jira/browse/HDFS-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048431#comment-16048431 ] Hadoop QA commented on HDFS-11789: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 
28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 44 new + 47 unchanged - 0 fixed = 91 total (was 47) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 57s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}131m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11789 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872890/HDFS-11789.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 89a80f177256 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8633ef8 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19900/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/19900/artifact/patchprocess/whitespace-tabs.txt | | unit |
[jira] [Commented] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations
[ https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048419#comment-16048419 ] Hadoop QA commented on HDFS-11670: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 24m 55s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 44s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 
0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}173m 56s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.namenode.TestSecurityTokenEditLog | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.server.namenode.TestFileTruncate | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11670 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872888/HDFS-11670-HDFS-10285.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ce46164273d2 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git 
revision | HDFS-10285 / 6d428ed | | Default Java | 1.8.0_131 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/19899/artifact/patchprocess/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19899/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19899/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19899/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [SPS]: Add CLI command for satisfy storage policy operations > > > Key: HDFS-11670 >
[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048393#comment-16048393 ] Lei (Eddy) Xu commented on HDFS-10999: -- Hi, Manoj, The latest patch LGTM. Some minor issues: * We might want to use {{StringBuilder}} in {{BlockStats#toString}} +1 pending the minor change. > Introduce separate stats for Replicated and Erasure Coded Blocks apart from > the current Aggregated stats > > > Key: HDFS-10999 > URL: https://issues.apache.org/jira/browse/HDFS-10999 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have, supportability > Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, > HDFS-10999.03.patch, HDFS-10999.04.patch > > > Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic > term "low redundancy" to the old-fashioned "under replicated". But this term > is still being used in messages in several places, such as web ui, dfsadmin > and fsck. We should probably change them to avoid confusion. > File this jira to discuss it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048387#comment-16048387 ] James Clampffer commented on HDFS-11971: Nice find on the linking fix. I'd prefer it if the file rename changes were pushed into another patch, though, since they make it really hard to see what functional changes are in this diff. If we want to continue to support the C API, I think C-based tools and examples should live in their own directory; ideally they should also be forced to build in C99 mode. One of the issues with the C code now is that some C++ has leaked in, since everything builds with -std=c++11; the last time I tried to link a pure C application against hdfs.h I ran into some issues. > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-11971.HDFS-8707.000.patch > > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a standalone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file produces a warning "conversion to > ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not > allow libhdfs++ to be compiled as part of a codebase that treats such > warnings as errors (can be fixed with a simple cast). > 2. In CMakeLists.txt file (in libhdfspp directory) we do > find_package(Threads) however we do not link it to the targets (e.g. > hdfspp_static), which causes the build to fail with pthread errors. After the > Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}. > 3. All the tools and examples fail to build as part of a standalone libhdfs++ > because they are missing multiple libraries such as protobuf, ssl, pthread, > etc. 
This happens because we link them to a shared library hdfspp instead of > hdfspp_static library. We should either link all the tools and examples to > hdfspp_static library or explicitly add linking to all missing libraries for > each tool/example. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048372#comment-16048372 ] Lei (Eddy) Xu commented on HDFS-11916: -- Hey, [~tasanuma0829] Thanks a lot for the patch. Could you help clarify the purpose of this test? My understanding is that using a random policy each time might cause flaky tests. For example, if one EC policy implementation has a bug, it might be hard to reproduce in a subsequent Jenkins run. Some small nits: {code} private static ErasureCodingPolicy ecPolicy; {code} Maybe we can just not use {{static}} here? {code} public TestErasureCodingPoliciesWithRandomECPolicy() { ecPolicy = StripedFileTestUtil.getRandomNonDefaultECPolicy(); LOG.info(ecPolicy); } {code} Could you add more context to the {{LOG.info()}} message? > Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a > random EC policy > > > Key: HDFS-11916 > URL: https://issues.apache.org/jira/browse/HDFS-11916 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11916.1.patch, HDFS-11916.2.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
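The two nits above (a per-instance rather than static policy field, and a log line carrying enough context to reproduce a flaky run) can be sketched like this. Everything here is illustrative: the class, the seed parameter, and the policy-name array are stand-ins for `StripedFileTestUtil.getRandomNonDefaultECPolicy()` and the real test constructor.

```java
import java.util.Random;

/**
 * Hypothetical sketch of the review suggestions: keep the randomly chosen
 * EC policy in an instance field (not a static one shared across tests),
 * and log it with context so a flaky run can be reproduced.
 */
class RandomEcPolicyPicker {
  // Illustrative policy names; a real test would ask StripedFileTestUtil.
  private static final String[] POLICIES = {
      "RS-3-2-1024k", "RS-LEGACY-6-3-1024k", "XOR-2-1-1024k"};

  private final String ecPolicy; // per-instance, not static

  RandomEcPolicyPicker(long seed) {
    this.ecPolicy = POLICIES[new Random(seed).nextInt(POLICIES.length)];
  }

  /** Log line with context, rather than logging the bare policy object. */
  String logLine() {
    return "Running test with erasure coding policy: " + ecPolicy;
  }

  String getEcPolicy() {
    return ecPolicy;
  }
}
```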
[jira] [Commented] (HDFS-11797) BlockManager#createLocatedBlocks() can throw ArrayIndexOutofBoundsException when corrupt replicas are inconsistent
[ https://issues.apache.org/jira/browse/HDFS-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048370#comment-16048370 ] Wei-Chiu Chuang commented on HDFS-11797: Reviewed again. I believe this is resolved via HDFS-11445. Also verified that for every place where a block is removed from BlocksMap, it is also removed from CorruptReplicasMaps. So I think we can close as a dup of HDFS-11445. > BlockManager#createLocatedBlocks() can throw ArrayIndexOutofBoundsException > when corrupt replicas are inconsistent > -- > > Key: HDFS-11797 > URL: https://issues.apache.org/jira/browse/HDFS-11797 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kuhu Shukla >Assignee: Kuhu Shukla >Priority: Critical > Attachments: HDFS-11797.001.patch > > > The calculation for {{numMachines}} can be too small (causing > ArrayIndexOutOfBoundsException) or too large (causing NPE (HDFS-9958)) if data > structures find inconsistent numbers of corrupt replicas. This was earlier > found related to failed storages. This JIRA tracks a change that works for > all possible cases of inconsistencies. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11682) TestBalancer#testBalancerWithStripedFile is flaky
[ https://issues.apache.org/jira/browse/HDFS-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048341#comment-16048341 ] Lei (Eddy) Xu commented on HDFS-11682: -- Hi, [~manojg] Thanks for taking care of the review. I am catching {{TimeoutException}} around {{waitForBalancer}} only, because a timeout thrown from {{waitForHeartBeat()}} is not due to the miscalculation of disk usage, so it should not be ignored. > TestBalancer#testBalancerWithStripedFile is flaky > - > > Key: HDFS-11682 > URL: https://issues.apache.org/jira/browse/HDFS-11682 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Attachments: HDFS-11682.00.patch, HDFS-11682.01.patch, > IndexOutOfBoundsException.log, timeout.log > > > Saw this fail in two different ways on a precommit run, but pass locally. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
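The distinction described above, tolerating a timeout from the balancer wait but treating a heartbeat-wait timeout as a real failure, can be sketched as control flow. The method bodies are stubs, not the actual `TestBalancer` code, and `runOnce` is a hypothetical driver.

```java
import java.util.concurrent.TimeoutException;

/**
 * Hypothetical sketch of the control flow: a timeout from waitForHeartBeat()
 * propagates as a failure, while a timeout from waitForBalancer() is the
 * known disk-usage miscalculation and is tolerated (e.g. retried).
 */
class BalancerWaitSketch {
  static void waitForHeartBeat() throws TimeoutException {
    // Stub: the real test waits for datanode heartbeats to settle.
  }

  static void waitForBalancer() throws TimeoutException {
    // Stub: simulate the balancer failing to converge in time.
    throw new TimeoutException("balancer not converged within timeout");
  }

  static String runOnce() {
    try {
      waitForHeartBeat();
    } catch (TimeoutException e) {
      // NOT swallowed: a heartbeat timeout is a genuine test failure.
      throw new AssertionError("heartbeat timeout is a real failure", e);
    }
    try {
      waitForBalancer();
      return "balanced";
    } catch (TimeoutException e) {
      return "retry"; // only this wait tolerates a timeout
    }
  }
}
```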
[jira] [Commented] (HDFS-11966) [SPS] Correct the log in BlockStorageMovementAttemptedItems#blockStorageMovementResultCheck
[ https://issues.apache.org/jira/browse/HDFS-11966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048300#comment-16048300 ] Hadoop QA commented on HDFS-11966: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 32s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 59s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.hdfs.server.namenode.TestFileTruncate | | | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11966 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872886/HDFS-11966-HDFS-10285-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cb6046ec7066 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / 6d428ed | | Default Java | 1.8.0_131 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19898/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19898/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19898/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [SPS] Correct the log in > BlockStorageMovementAttemptedItems#blockStorageMovementResultCheck > --- > > Key: HDFS-11966 > URL: https://issues.apache.org/jira/browse/HDFS-11966 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode
[jira] [Updated] (HDFS-11789) Maintain Short-Circuit Read Statistics
[ https://issues.apache.org/jira/browse/HDFS-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11789: -- Attachment: HDFS-11789.002.patch Thanks for the review [~arpitagarwal]. Patch v02 addresses your comments. > Maintain Short-Circuit Read Statistics > -- > > Key: HDFS-11789 > URL: https://issues.apache.org/jira/browse/HDFS-11789 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11789.001.patch, HDFS-11789.002.patch > > > If a disk or controller hardware is faulty then short-circuit read requests > can stall indefinitely while reading from the file descriptor. Currently > there is no way to detect when short-circuit read requests are slow or > blocked. > This Jira proposes that each BlockReaderLocal maintain read statistics while > it is active by measuring the time taken for a pre-determined fraction of > read requests. These per-reader stats can be aggregated into global stats > when the reader is closed. The aggregate statistics can be exposed via JMX. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
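The sampling scheme described above (time a pre-determined fraction of reads, aggregate per reader, expose the aggregate later) can be sketched roughly as follows; the class and method names here are illustrative only, not taken from the patch:

```java
// Hypothetical sketch of per-reader sampled read statistics: measure
// 1 out of every N reads and aggregate the sampled latencies so they
// can later be exposed, e.g. via JMX, when the reader is closed.
public class SampledReadStats {
    private final int sampleEvery;   // sample 1 out of every N reads
    private long readCount = 0;      // total reads seen
    private long sampledReads = 0;   // reads actually timed
    private long sampledNanos = 0;   // sum of sampled latencies

    public SampledReadStats(int sampleEvery) {
        this.sampleEvery = sampleEvery;
    }

    /** Decide whether the current read should be timed. */
    public boolean shouldSample() {
        return (readCount++ % sampleEvery) == 0;
    }

    /** Record the elapsed time of one sampled read. */
    public void record(long elapsedNanos) {
        sampledReads++;
        sampledNanos += elapsedNanos;
    }

    /** Average latency over the sampled reads, in nanoseconds. */
    public long averageSampledNanos() {
        return sampledReads == 0 ? 0 : sampledNanos / sampledReads;
    }
}
```

Because only a fraction of reads are timed, the per-read overhead stays small while slow or blocked descriptors still show up in the aggregate.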
[jira] [Updated] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations
[ https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-11670: -- Attachment: HDFS-11670-HDFS-10285.004.patch Thanks [~rakeshr] for the review. Attached the updated patch. Please review. > [SPS]: Add CLI command for satisfy storage policy operations > > > Key: HDFS-11670 > URL: https://issues.apache.org/jira/browse/HDFS-11670 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-11670-HDFS-10285.001.patch, > HDFS-11670-HDFS-10285.002.patch, HDFS-11670-HDFS-10285.003.patch, > HDFS-11670-HDFS-10285.004.patch > > > This jira is to discuss and implement a set of satisfy-storage-policy > sub-commands. The sub-commands are: > # Schedule blocks to move based on the file/directory policy: > {code}hdfs storagepolicies -satisfyStoragePolicy [-path <path>]{code} > # It is also good to have one command to check whether SPS is enabled or not. Based on this the > user can decide whether to run the Mover: > {code} > hdfs storagepolicies -isSPSRunning > {code}
[jira] [Updated] (HDFS-11966) [SPS] Correct the log in BlockStorageMovementAttemptedItems#blockStorageMovementResultCheck
[ https://issues.apache.org/jira/browse/HDFS-11966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-11966: -- Attachment: HDFS-11966-HDFS-10285-002.patch Thanks [~rakeshr] for the review. Attached the updated patch. > [SPS] Correct the log in > BlockStorageMovementAttemptedItems#blockStorageMovementResultCheck > --- > > Key: HDFS-11966 > URL: https://issues.apache.org/jira/browse/HDFS-11966 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Minor > Attachments: HDFS-11966-HDFS-10285-001.patch, > HDFS-11966-HDFS-10285-002.patch > > > {{BlockStorageMovementAttemptedItems#blockStorageMovementResultCheck}} prints a > confusing log when block movement succeeds. > Logs > > 2017-06-10 17:33:20,690 INFO > org.apache.hadoop.hdfs.server.namenode.BlockStorageMovementAttemptedItems: > Blocks storage movement is SUCCESS for the track id: 16386 reported from > co-ordinating datanode.{color:red} But the trackID doesn't exists in > storageMovementAttemptedItems list{color}
[jira] [Commented] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048112#comment-16048112 ] Hadoop QA commented on HDFS-11971: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 40s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} compile {color} | {color:green} 7m 3s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 11s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5ae34ac | | JIRA Issue | HDFS-11971 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872880/HDFS-11971.HDFS-8707.000.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 19daa1b77064 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-8707 / 40e3290 | | Default Java | 1.7.0_131 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_131 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 | | JDK v1.7.0_131 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19897/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | 
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19897/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-11971.HDFS-8707.000.patch > > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file produces a warning "conversion to > ‘char’ from ‘int’ may alter its
[jira] [Updated] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-11971: - Attachment: HDFS-11971.HDFS-8707.000.patch In this patch I fixed the conversion warning and added linking to CMAKE_THREAD_LIBS_INIT in hdfspp and to hdfspp_static in the examples and tools. In the examples I also removed the redundant per-example directories (to be consistent with tools and tests) and made the naming of the example executables consistent as well. I did not address the find_package(Protobuf) issue yet. > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-11971.HDFS-8707.000.patch > > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file produces a warning "conversion to > ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not > allow libhdfs++ to be compiled as part of the codebase that treats such > warnings as errors (can be fixed with a simple cast). > 2. In CMakeLists.txt file (in libhdfspp directory) we do > find_package(Threads) however we do not link it to the targets (e.g. > hdfspp_static), which causes the build to fail with pthread errors. After the > Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}. > 3. All the tools and examples fail to build as part of a standalone libhdfs++ > because they are missing multiple libraries such as protobuf, ssl, pthread, > etc. This happens because we link them to a shared library hdfspp instead of > hdfspp_static library. 
We should either link all the tools and examples to > hdfspp_static library or explicitly add linking to all missing libraries for > each tool/example.
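The linking fixes described in points 2 and 3 above could be sketched in CMake along these lines; the target names other than hdfspp_static are placeholders, not the actual CMakeLists.txt contents:

```cmake
# Link the thread library found by find_package(Threads) into the
# library target, so consumers no longer fail with pthread errors.
find_package(Threads REQUIRED)
target_link_libraries(hdfspp_static ${CMAKE_THREAD_LIBS_INIT})

# Link tools and examples against the static library so transitive
# dependencies (protobuf, ssl, pthread, ...) are pulled in.
# "my_example" is a placeholder target name.
add_executable(my_example my_example.cc)
target_link_libraries(my_example hdfspp_static)
```

Linking against hdfspp_static rather than the shared hdfspp keeps each tool and example from having to list every missing library by hand.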
[jira] [Updated] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-11971: - Status: Patch Available (was: Open) > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file produces a warning "conversion to > ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not > allow libhdfs++ to be compiled as part of the codebase that treats such > warnings as errors (can be fixed with a simple cast). > 2. In CMakeLists.txt file (in libhdfspp directory) we do > find_package(Threads) however we do not link it to the targets (e.g. > hdfspp_static), which causes the build to fail with pthread errors. After the > Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}. > 3. All the tools and examples fail to build as part of a standalone libhdfs++ > because they are missing multiple libraries such as protobuf, ssl, pthread, > etc. This happens because we link them to a shared library hdfspp instead of > hdfspp_static library. We should either link all the tools and examples to > hdfspp_static library or explicitly add linking to all missing libraries for > each tool/example.
[jira] [Updated] (HDFS-10785) libhdfs++: Implement the rest of the tools
[ https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-10785: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed to HDFS-8707. Thanks [~anatoli.shein]! > libhdfs++: Implement the rest of the tools > -- > > Key: HDFS-10785 > URL: https://issues.apache.org/jira/browse/HDFS-10785 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-10785.HDFS-8707.000.patch, > HDFS-10785.HDFS-8707.001.patch, HDFS-10785.HDFS-8707.002.patch, > HDFS-10785.HDFS-8707.003.patch, HDFS-10785.HDFS-8707.004.patch, > HDFS-10785.HDFS-8707.005.patch, HDFS-10785.HDFS-8707.006.patch, > HDFS-10785.HDFS-8707.007.patch, HDFS-10785.HDFS-8707.008.patch, > HDFS-10785.HDFS-8707.009.patch, HDFS-10785.HDFS-8707.010.patch, > HDFS-10785.HDFS-8707.011.patch, HDFS-10785.HDFS-8707.012.patch, > HDFS-10785.HDFS-8707.013.patch > >
[jira] [Commented] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048007#comment-16048007 ] Anatoli Shein commented on HDFS-11971: -- Just discovered another small portability problem. If we link libhdfspp against a specific installation of protobuf while another, incompatible version of protobuf is installed system-wide, the libhdfspp build crashes with protobuf compatibility errors. To fix that we should not do "find_package(Protobuf)" if the protobuf variables have already been set manually. > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file produces a warning "conversion to > ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not > allow libhdfs++ to be compiled as part of the codebase that treats such > warnings as errors (can be fixed with a simple cast). > 2. In CMakeLists.txt file (in libhdfspp directory) we do > find_package(Threads) however we do not link it to the targets (e.g. > hdfspp_static), which causes the build to fail with pthread errors. After the > Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}. > 3. All the tools and examples fail to build as part of a standalone libhdfs++ > because they are missing multiple libraries such as protobuf, ssl, pthread, > etc. This happens because we link them to a shared library hdfspp instead of > hdfspp_static library. We should either link all the tools and examples to > hdfspp_static library or explicitly add linking to all missing libraries for > each tool/example. 
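The guard suggested in the comment above could look roughly like this; the variable names follow CMake's standard FindProtobuf module, and the exact condition eventually used in the patch may differ:

```cmake
# Only search for a system protobuf when the caller has not already
# pointed the build at a specific installation by setting the
# FindProtobuf variables manually.
if(NOT PROTOBUF_LIBRARY OR NOT PROTOBUF_INCLUDE_DIR)
  find_package(Protobuf REQUIRED)
endif()
```

This keeps a manually specified protobuf from being silently overridden by an incompatible system-wide version.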
[jira] [Commented] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
[ https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047987#comment-16047987 ] Hadoop QA commented on HDFS-11646: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 45s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 23s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 55s{color} | {color:orange} root: The patch generated 8 new + 138 unchanged - 0 fixed = 146 total (was 138) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 27s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 20s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 39s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}137m 59s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11646 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872851/HDFS-11646-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc | | uname | Linux 35a0f0e312bf 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8633ef8 | | Default Java | 1.8.0_131 | | findbugs |
[jira] [Commented] (HDFS-11797) BlockManager#createLocatedBlocks() can throw ArrayIndexOutofBoundsException when corrupt replicas are inconsistent
[ https://issues.apache.org/jira/browse/HDFS-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047968#comment-16047968 ] Brahma Reddy Battula commented on HDFS-11797: - Yes, it's almost hitting HDFS-11445; now {{BlockInfoContiguousUnderConstruction#setGenerationStampAndVerifyReplicas()}} and {{BlockInfoContiguousUnderConstruction#commitBlock()}} return the list of stale replicas, and these stored blockInfos are then removed in BlockManager. > BlockManager#createLocatedBlocks() can throw ArrayIndexOutofBoundsException > when corrupt replicas are inconsistent > -- > > Key: HDFS-11797 > URL: https://issues.apache.org/jira/browse/HDFS-11797 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kuhu Shukla >Assignee: Kuhu Shukla >Priority: Critical > Attachments: HDFS-11797.001.patch > > > The calculation for {{numMachines}} can be too small (causing > ArrayIndexOutOfBoundsException) or too large (causing NPE (HDFS-9958)) if the data > structures report an inconsistent number of corrupt replicas. This was earlier > found to be related to failed storages. This JIRA tracks a change that works for > all possible cases of inconsistencies.
[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization
[ https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047946#comment-16047946 ] Hadoop QA commented on HDFS-11647: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 47s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 24s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 59s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 37s{color} | {color:orange} root: The patch generated 1 new + 135 unchanged - 0 fixed = 136 total (was 135) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 7s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 19s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}185m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.TestDFSAdminWithHA | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11647 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872834/HDFS-11647-004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc | | uname | Linux 7ce60a4afd05 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed
[ https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047945#comment-16047945 ] Wei-Chiu Chuang commented on HDFS-11303: Hi [~zhangchen], thanks for filing the jira and posting the patch. I'd like to help review the patch. The fix itself looks good. Here are my quick comments, mostly cosmetic: 1. I am not so sure about the test: you mentioned the client had timeouts connecting to DNs, but the test throws ChecksumException -- it is not clear to me whether the client exhibits the same symptom in both scenarios. Perhaps you can use {{DFSClientFaultInjector.readFromDatanodeDelay}} to insert a delay instead? 2. Could you add a test timeout limit? For example, a 10-second timeout: {{\@Test(timeout=10000)}} 3. It appears to me that the test expects to throw BlockMissingException. Instead of the following: {code} } catch (BlockMissingException e) { assertTrue(true); } {code} Would you mind updating the test to use ExpectedException to assert that the exception is expected? 4. {code} if (true) { System.out.println("-- throw Checksum Exception"); throw new ChecksumException("ChecksumException test", 100); } {code} Please remove {{if(true)}} and use DFSClient.LOG instead of System.out.println for log printing. 5. There's a slight code conflict due to HDFS-11708. Please rebase the patch. > Hedged read might hang infinitely if read data from all DN failed > -- > > Key: HDFS-11303 > URL: https://issues.apache.org/jira/browse/HDFS-11303 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.0.0-alpha1 >Reporter: Chen Zhang >Assignee: Chen Zhang > Attachments: HDFS-11303-001.patch, HDFS-11303-001.patch, > HDFS-11303-002.patch, HDFS-11303-002.patch > > > Hedged read will read from one DN first; if that times out, it then reads from other DNs > simultaneously. 
> If reads from all DNs fail, this bug leaves the future list non-empty (the > first timed-out request remains in the list), and the loop hangs infinitely -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
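The hang described above comes down to a loop that waits on a completion queue while its count of pending reads never reaches zero. This is not the DFSInputStream code, just a minimal, hypothetical Java sketch (class and method names are illustrative) of the pattern the fix needs: every submitted future, failed or not, must be drained so the loop is bounded by the number of submitted tasks.

```java
import java.util.concurrent.*;

// Hypothetical sketch, not HDFS code: a hedged-read loop that cannot hang,
// because it decrements the pending count for every completed future,
// including the ones that fail.
public class HedgedReadSketch {
  @SafeVarargs
  static int firstSuccessOrThrow(ExecutorService pool, Callable<Integer>... reads)
      throws InterruptedException {
    CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
    for (Callable<Integer> r : reads) {
      cs.submit(r);
    }
    int pending = reads.length;
    while (pending > 0) {              // bounded: one iteration per submitted read
      Future<Integer> done = cs.take(); // blocks until some read completes
      pending--;
      try {
        return done.get();             // first successful read wins
      } catch (ExecutionException e) {
        // this read failed; keep draining the remaining futures
      }
    }
    // all reads failed: surface an error instead of looping forever
    throw new RuntimeException("all hedged reads failed");
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    try {
      int v = firstSuccessOrThrow(pool,
          () -> { throw new java.io.IOException("DN1 timed out"); },
          () -> 7);
      System.out.println("read=" + v);  // prints read=7
    } finally {
      pool.shutdown();
    }
  }
}
```

The bug in HDFS-11303 corresponds to forgetting the failed-future decrement: if only successes reduce `pending`, an all-failure run spins in `take()` forever.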
[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047920#comment-16047920 ] Hadoop QA commented on HDFS-11736: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 88m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11736 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872849/HDFS-11736.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 104cd1d1afdb 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8633ef8 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19895/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19895/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19895/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > OIV tests should not write outside 'target' directory. > -- > > Key: HDFS-11736 > URL: https://issues.apache.org/jira/browse/HDFS-11736 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Yiqun Lin > Labels: newbie++, test >
[jira] [Updated] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
[ https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuichun updated HDFS-11646: - Attachment: HDFS-11646-003.patch > Add -E option in 'ls' to list erasure coding policy of each file and > directory if applicable > > > Key: HDFS-11646 > URL: https://issues.apache.org/jira/browse/HDFS-11646 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11646-001.patch, HDFS-11646-002.patch, > HDFS-11646-003.patch > > > Add -E option in "ls" to show erasure coding policy of file and directory, > leverage the "number_of_replicas " column. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
[ https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuichun updated HDFS-11646: - Attachment: (was: HADOOP-11646.patch) > Add -E option in 'ls' to list erasure coding policy of each file and > directory if applicable > > > Key: HDFS-11646 > URL: https://issues.apache.org/jira/browse/HDFS-11646 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11646-001.patch, HDFS-11646-002.patch, > HDFS-11646-003.patch > > > Add -E option in "ls" to show erasure coding policy of file and directory, > leverage the "number_of_replicas " column. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047809#comment-16047809 ] Hadoop QA commented on HDFS-11736: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 1s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 53s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2123 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 55s{color} | {color:red} The patch 70 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 56s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}117m 5s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_131 Failed junit tests | hadoop.hdfs.web.TestHttpsFileSystem | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | JDK v1.7.0_131 Failed junit tests | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.TestCacheDirectives | | | hadoop.hdfs.web.TestHttpsFileSystem | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:67e87c9 | | JIRA Issue | HDFS-11736 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872832/HDFS-11736-branch-2.7.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f84b7b11515e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64
[jira] [Comment Edited] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations
[ https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047789#comment-16047789 ] Rakesh R edited comment on HDFS-11670 at 6/13/17 12:23 PM: --- [~surendrasingh], patch looks almost OK to me. Please take care of the comments below.
# The documentation is not updated with the latest command changes. Please take care of that.
{code}
[-storagePolicySatisfierStatus]

### Storage Policy Satisfier Status

Check the status of Storage Policy Satisfier in namenode. If it is running, return 'Enabled'. Otherwise return 'Disabled'.

* Command: hdfs storagepolicies -storagePolicySatisfierStatus
{code}
# For the {{isSPSRunning}} command, how about adding validation for any unnecessary args passed? Refer [CryptoAdmin.java#L176|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java#L176]
{code}
if (!args.isEmpty()) {
  System.err.print("Can't understand arguments: " +
      Joiner.on(" ").join(args) + "\n");
  System.err.println("Usage is " + getLongUsage());
  return 1;
}
{code}
was (Author: rakeshr): [~surendrasingh], patch looks almost OK to me. Please take care of the comments below.
# The documentation is not updated with the latest command changes. Please take care of that.
{code}
[-storagePolicySatisfierStatus]

### Storage Policy Satisfier Status

Check the status of Storage Policy Satisfier in namenode. If it is running, return 'Enabled'. Otherwise return 'Disabled'.

* Command: hdfs storagepolicies -storagePolicySatisfierStatus
{code}
# Probably, we could add validation for any unnecessary args passed.
Refer [CryptoAdmin.java#L176|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java#L176]
{code}
if (!args.isEmpty()) {
  System.err.print("Can't understand arguments: " +
      Joiner.on(" ").join(args) + "\n");
  System.err.println("Usage is " + getLongUsage());
  return 1;
}
{code}
> [SPS]: Add CLI command for satisfy storage policy operations > > > Key: HDFS-11670 > URL: https://issues.apache.org/jira/browse/HDFS-11670 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-11670-HDFS-10285.001.patch, > HDFS-11670-HDFS-10285.002.patch, HDFS-11670-HDFS-10285.003.patch > > > This jira is to discuss and implement a set of satisfy storage policy > sub-commands. Following is the list of sub-commands: > # Schedule blocks to move based on file/directory policy: > {code}hdfs storagepolicies -satisfyStoragePolicy -path ]{code} > # It's good to have one command to check whether SPS is enabled or not. Based on this, > the user can decide whether to run the Mover: > {code} > hdfs storagepolicies -isSPSRunning > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations
[ https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047789#comment-16047789 ] Rakesh R commented on HDFS-11670: - [~surendrasingh], patch looks almost OK to me. Please take care of the comments below.
# The documentation is not updated with the latest command changes. Please take care of that.
{code}
[-storagePolicySatisfierStatus]

### Storage Policy Satisfier Status

Check the status of Storage Policy Satisfier in namenode. If it is running, return 'Enabled'. Otherwise return 'Disabled'.

* Command: hdfs storagepolicies -storagePolicySatisfierStatus
{code}
# Probably, we could add validation for any unnecessary args passed. Refer [CryptoAdmin.java#L176|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java#L176]
{code}
if (!args.isEmpty()) {
  System.err.print("Can't understand arguments: " +
      Joiner.on(" ").join(args) + "\n");
  System.err.println("Usage is " + getLongUsage());
  return 1;
}
{code}
> [SPS]: Add CLI command for satisfy storage policy operations > > > Key: HDFS-11670 > URL: https://issues.apache.org/jira/browse/HDFS-11670 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-11670-HDFS-10285.001.patch, > HDFS-11670-HDFS-10285.002.patch, HDFS-11670-HDFS-10285.003.patch > > > This jira is to discuss and implement a set of satisfy storage policy > sub-commands. Following is the list of sub-commands: > # Schedule blocks to move based on file/directory policy: > {code}hdfs storagepolicies -satisfyStoragePolicy -path ]{code} > # It's good to have one command to check whether SPS is enabled or not.
Based on this, the > user can decide whether to run the Mover: > {code} > hdfs storagepolicies -isSPSRunning > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
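The argument-validation pattern the review above points to can be sketched as a tiny standalone method. This is not the HDFS-11670 patch; the method name {{runIsSpsRunning}} and the usage string are hypothetical, and the real CryptoAdmin code uses Guava's {{Joiner}} and a {{getLongUsage()}} helper where this sketch inlines plain strings.

```java
import java.util.*;

// Hypothetical sketch of the "reject unnecessary args" check suggested in
// the review; names are illustrative, not taken from HDFS code.
public class ArgCheckSketch {
  // A no-argument subcommand should fail with a non-zero exit code when
  // leftover arguments are present, instead of silently ignoring them.
  static int runIsSpsRunning(List<String> args) {
    if (!args.isEmpty()) {
      System.err.println("Can't understand arguments: " + String.join(" ", args));
      System.err.println("Usage is: hdfs storagepolicies -isSPSRunning");
      return 1;
    }
    // ... would query the namenode for SPS status here ...
    return 0;
  }

  public static void main(String[] args) {
    System.out.println(runIsSpsRunning(Collections.emptyList()));     // prints 0
    System.out.println(runIsSpsRunning(Arrays.asList("extra", "arg"))); // prints 1
  }
}
```

The design point is the early return: validation failures exit before any RPC is attempted, which is the behavior the CryptoAdmin reference enforces.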
[jira] [Updated] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11736: - Attachment: HDFS-11736.003.patch > OIV tests should not write outside 'target' directory. > -- > > Key: HDFS-11736 > URL: https://issues.apache.org/jira/browse/HDFS-11736 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Yiqun Lin > Labels: newbie++, test > Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, > HDFS-11736.003.patch, HDFS-11736-branch-2.7.001.patch, > HDFS-11736-branch-2.7.002.patch > > > A few tests use {{Files.createTempDir()}} from Guava package, but do not set > {{java.io.tmpdir}} system property. Thus the temp directory is created in > unpredictable places and is not being cleaned up by {{mvn clean}}. > This was probably introduced in {{TestOfflineImageViewer}} and then > replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11736: - Attachment: (was: HDFS-11736.003.patch) > OIV tests should not write outside 'target' directory. > -- > > Key: HDFS-11736 > URL: https://issues.apache.org/jira/browse/HDFS-11736 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Yiqun Lin > Labels: newbie++, test > Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, > HDFS-11736.003.patch, HDFS-11736-branch-2.7.001.patch, > HDFS-11736-branch-2.7.002.patch > > > A few tests use {{Files.createTempDir()}} from Guava package, but do not set > {{java.io.tmpdir}} system property. Thus the temp directory is created in > unpredictable places and is not being cleaned up by {{mvn clean}}. > This was probably introduced in {{TestOfflineImageViewer}} and then > replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11841) libc.so.6 detected double free on TestDFSIO write when erasure coding and enabled Intel ISA-L
[ https://issues.apache.org/jira/browse/HDFS-11841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liaoyuxiangqin updated HDFS-11841: -- Description: when i execute hadoop jar hadoop-mapreduce-client-jobclient-3.0.0-alpha3-SNAPSHOT-tests.jar TestDFSIO -write -nrFiles 1 -size 1MB on above environment, glibc detected double free or corruption, detail information as follows: {pane} File System Counters FILE: Number of bytes read=489391103089 FILE: Number of bytes written=42020064724 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=5751218285 HDFS: Number of bytes written=52456018688978 HDFS: Number of read operations=150165017 HDFS: Number of large read operations=0 HDFS: Number of write operations=150065006 Map-Reduce Framework Map input records=1 Map output records=5 Map output bytes=707116 Map output materialized bytes=867116 Input split bytes=1288890 Combine input records=0 Combine output records=0 Reduce input groups=5 Reduce shuffle bytes=867116 Reduce input records=5 Reduce output records=5 Spilled Records=10 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=34575 Total committed heap usage (bytes)=46386259689472 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=1148890 File Output Format Counters Bytes Written=88 glibc detected /home/hadoop/jdk1.8.0_111/bin/java: double free or corruption (out): 0x7fcd84a6c6c0 === Backtrace: = /lib64/libc.so.6[0x3d1fe724a6] /lib64/libc.so.6(cfree+0x6c)[0x3d1fe771bc] [0x7fcdb138e8e6] === Memory map: 0040-00401000 r-xp 08:01 3720754 /home/hadoop/jdk1.8.0_111/bin/java 0060-00601000 rw-p 08:01 3720754 /home/hadoop/jdk1.8.0_111/bin/java 0204f000-0460 rw-p 00:00 0 [heap] 64940-6f210 rw-p 00:00 0 6f210-74318 ---p 00:00 0 74318-7c000 rw-p 00:00 0 7c000-7c03c rw-p 00:00 0 7c03c-8 ---p 00:00 0 3d1fa0-3d1fa21000 r-xp 08:01 
8126508 /lib64/ld-2.14.1.so 3d1fc2-3d1fc21000 r--p 0002 08:01 8126508 /lib64/ld-2.14.1.so 3d1fc21000-3d1fc23000 rw-p 00021000 08:01 8126508 /lib64/ld-2.14.1.so 3d1fe0-3d1ff83000 r-xp 08:01 8126562 /lib64/libc-2.14.1.so 3d1ff83000-3d20182000 ---p 00183000 08:01 8126562 /lib64/libc-2.14.1.so /home/hadoop/jdk1.8.0_111/jre/lib/jsse.jar 7fcd9260a000-7fcd9260f000 r--s 00044000 08:01 3852117 /home/gy/hadoop-3.0.0-alpha3-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.0.0-alpha3-SNAPSHOT.jar 7fcd9260f000-7fcd92612000 r--s 0001f000 08:01 3852191 /home/gy/hadoop-3.0.0-alpha3-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-registry-3.0.0-alpha3-SNAPSHOT.jarAborted {panel} was: when i execute hadoop jar hadoop-mapreduce-client-jobclient-3.0.0-alpha3-SNAPSHOT-tests.jar TestDFSIO -write -nrFiles 1 -size 1MB on above environment, glibc detected double free or corruption, detail information as follows: {pane} File System Counters FILE: Number of bytes read=489391103089 FILE: Number of bytes written=42020064724 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=5751218285 HDFS: Number of bytes written=52456018688978 HDFS: Number of read operations=150165017 HDFS: Number of large read operations=0 HDFS: Number of write operations=150065006 Map-Reduce Framework Map input records=1 Map output records=5 Map output bytes=707116 Map output materialized bytes=867116 Input split bytes=1288890 Combine input records=0 Combine output records=0 Reduce
[jira] [Commented] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
[ https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047748#comment-16047748 ] Hadoop QA commented on HDFS-11646: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-11646 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11646 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872836/HADOOP-11646.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19894/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add -E option in 'ls' to list erasure coding policy of each file and > directory if applicable > > > Key: HDFS-11646 > URL: https://issues.apache.org/jira/browse/HDFS-11646 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HADOOP-11646.patch, HDFS-11646-001.patch, > HDFS-11646-002.patch > > > Add -E option in "ls" to show erasure coding policy of file and directory, > leverage the "number_of_replicas " column. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
[ https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuichun updated HDFS-11646: - Attachment: HADOOP-11646.patch > Add -E option in 'ls' to list erasure coding policy of each file and > directory if applicable > > > Key: HDFS-11646 > URL: https://issues.apache.org/jira/browse/HDFS-11646 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HADOOP-11646.patch, HDFS-11646-001.patch, > HDFS-11646-002.patch > > > Add -E option in "ls" to show erasure coding policy of file and directory, > leverage the "number_of_replicas " column. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11966) [SPS] Correct the log in BlockStorageMovementAttemptedItems#blockStorageMovementResultCheck
[ https://issues.apache.org/jira/browse/HDFS-11966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047734#comment-16047734 ] Rakesh R commented on HDFS-11966: - Thanks [~surendrasingh] for the contribution. Would you mind fixing the minor checkstyle warning reported? Apart from that, +1 for the patch. > [SPS] Correct the log in > BlockStorageMovementAttemptedItems#blockStorageMovementResultCheck > --- > > Key: HDFS-11966 > URL: https://issues.apache.org/jira/browse/HDFS-11966 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Minor > Attachments: HDFS-11966-HDFS-10285-001.patch > > > {{BlockStorageMovementAttemptedItems#blockStorageMovementResultCheck}} prints a > confusing log when block movement succeeds. > Logs > > 2017-06-10 17:33:20,690 INFO > org.apache.hadoop.hdfs.server.namenode.BlockStorageMovementAttemptedItems: > Blocks storage movement is SUCCESS for the track id: 16386 reported from > co-ordinating datanode.{color:red} But the trackID doesn't exists in > storageMovementAttemptedItems list{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization
[ https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuichun updated HDFS-11647: - Attachment: HDFS-11647-004.patch > Add -E option in hdfs "count" command to show erasure policy summarization > -- > > Key: HDFS-11647 > URL: https://issues.apache.org/jira/browse/HDFS-11647 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, > HDFS-11647-003.patch, HDFS-11647-004.patch > > > Add -E option in hdfs "count" command to show erasure policy summarization -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11736: - Attachment: HDFS-11736-branch-2.7.002.patch HDFS-11736.003.patch Thanks [~ajisakaa], attaching the new patch to fix the checkstyle warnings. > OIV tests should not write outside 'target' directory. > -- > > Key: HDFS-11736 > URL: https://issues.apache.org/jira/browse/HDFS-11736 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Yiqun Lin > Labels: newbie++, test > Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, > HDFS-11736.003.patch, HDFS-11736-branch-2.7.001.patch, > HDFS-11736-branch-2.7.002.patch > > > A few tests use {{Files.createTempDir()}} from Guava package, but do not set > {{java.io.tmpdir}} system property. Thus the temp directory is created in > unpredictable places and is not being cleaned up by {{mvn clean}}. > This was probably introduced in {{TestOfflineImageViewer}} and then > replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11679) Ozone: SCM CLI: Implement list container command
[ https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047685#comment-16047685 ] Hadoop QA commented on HDFS-11679: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 14s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile 
{color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}151m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.cblock.TestBufferManager | | | hadoop.ozone.scm.node.TestNodeManager | | | hadoop.cblock.TestCBlockReadWrite | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11679 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872817/HDFS-11679-HDFS-7240.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 303d550a9aea 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 0a05da9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19891/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19891/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output |
[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047663#comment-16047663 ] Akira Ajisaka commented on HDFS-11736: -- Would you remove unused imports to fix checkstyle warnings? I'm +1 if that is addressed. > OIV tests should not write outside 'target' directory. > -- > > Key: HDFS-11736 > URL: https://issues.apache.org/jira/browse/HDFS-11736 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Yiqun Lin > Labels: newbie++, test > Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, > HDFS-11736-branch-2.7.001.patch > > > A few tests use {{Files.createTempDir()}} from Guava package, but do not set > {{java.io.tmpdir}} system property. Thus the temp directory is created in > unpredictable places and is not being cleaned up by {{mvn clean}}. > This was probably introduced in {{TestOfflineImageViewer}} and then > replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
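The fix pattern described in HDFS-11736 can be sketched as follows. This is a hypothetical illustration, not the actual patch: instead of Guava's {{Files.createTempDir()}}, which honors {{java.io.tmpdir}} and may land outside the build tree, resolve test temp directories under Maven's {{target}} tree (Hadoop tests conventionally key this off the {{test.build.data}} system property; the default below is an assumption for illustration), so {{mvn clean}} removes them.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TestTempDirs {
    // Resolve a per-test temp directory under the build's 'target' tree so
    // 'mvn clean' cleans it up, rather than relying on java.io.tmpdir.
    static Path testTempDir(String testName) {
        String base = System.getProperty("test.build.data", "target/test/data");
        Path dir = Paths.get(base, testName);
        try {
            Files.createDirectories(dir); // idempotent, creates parents too
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return dir;
    }

    public static void main(String[] args) {
        Path dir = testTempDir("TestOfflineImageViewer");
        System.out.println(dir);
    }
}
```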
[jira] [Updated] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location
[ https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-11946: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I have committed this. Thanks, Nandakumar! > Ozone: Containers in different datanodes are mapped to the same location > > > Key: HDFS-11946 > URL: https://issues.apache.org/jira/browse/HDFS-11946 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Nandakumar > Attachments: HDFS-11946-HDFS-7240.000.patch > > > This is a problem in unit tests. Containers with the same container name in > different datanodes are mapped to the same local path location. As a result, > the first datanode will be able to succeed creating the container file but > the remaining datanodes will fail to create the container file with > FileAlreadyExistsException.
[jira] [Commented] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location
[ https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047636#comment-16047636 ] Tsz Wo Nicholas Sze commented on HDFS-11946: +1 the patch looks good. The container paths now depend on the datanode, as shown below.
{code}
2017-06-13 17:21:39,608 [StateMachineUpdater-127.0.0.1:57876] INFO - Created a new container. File: /Users/szetszwo/hadoop/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/target/test/data/MiniOzoneClustere408d468-109c-4793-9ceb-11f1722a588d/ab790656-eef3-4ab6-8b25-434601c5cbb3/cont-meta-dn-1/repository/86535e98-683f-4e59-b102-1f84778684fd.container
2017-06-13 17:21:39,677 [StateMachineUpdater-127.0.0.1:57871] INFO - Created a new container. File: /Users/szetszwo/hadoop/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/target/test/data/MiniOzoneClustere408d468-109c-4793-9ceb-11f1722a588d/ab790656-eef3-4ab6-8b25-434601c5cbb3/cont-meta-dn-0/repository/86535e98-683f-4e59-b102-1f84778684fd.container
2017-06-13 17:21:39,678 [StateMachineUpdater-127.0.0.1:57882] INFO - Created a new container. File: /Users/szetszwo/hadoop/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/target/test/data/MiniOzoneClustere408d468-109c-4793-9ceb-11f1722a588d/ab790656-eef3-4ab6-8b25-434601c5cbb3/cont-meta-dn-2/repository/86535e98-683f-4e59-b102-1f84778684fd.container
{code}
> Ozone: Containers in different datanodes are mapped to the same location > > > Key: HDFS-11946 > URL: https://issues.apache.org/jira/browse/HDFS-11946 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Nandakumar > Attachments: HDFS-11946-HDFS-7240.000.patch > > > This is a problem in unit tests. Containers with the same container name in > different datanodes are mapped to the same local path location. As a result, > the first datanode will be able to succeed creating the container file but > the remaining datanodes will fail to create the container file with > FileAlreadyExistsException. 
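The collision and its fix can be illustrated with a small sketch (the class and method names below are hypothetical, chosen only to make the idea concrete): including a per-datanode component such as {{cont-meta-dn-<index>}} in the container metadata path, matching the layout visible in the committed patch's log output, gives each datanode in a MiniOzoneCluster a distinct file for the same container name.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ContainerPaths {
    // Hypothetical sketch: include a per-datanode directory in the container
    // metadata path so two datanodes sharing one local filesystem (as in a
    // MiniOzoneCluster) never map the same container name to the same file.
    static Path containerFile(Path clusterRoot, int datanodeIndex, String containerName) {
        return clusterRoot
            .resolve("cont-meta-dn-" + datanodeIndex) // per-datanode component
            .resolve("repository")
            .resolve(containerName + ".container");
    }

    public static void main(String[] args) {
        Path root = Paths.get("target", "test", "data", "MiniOzoneCluster");
        // The same container name on two datanodes resolves to two distinct
        // files, so only FileAlreadyExistsException-free creation remains.
        System.out.println(containerFile(root, 0, "86535e98"));
        System.out.println(containerFile(root, 1, "86535e98"));
    }
}
```

Without the per-datanode component, both calls would return the same path, reproducing the FileAlreadyExistsException described in the issue.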
[jira] [Commented] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location
[ https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047580#comment-16047580 ] Hadoop QA commented on HDFS-11946: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.ozone.scm.TestXceiverClientManager | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11946 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872771/HDFS-11946-HDFS-7240.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2aa899179681 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 0a05da9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19890/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19890/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19890/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Containers in different datanodes are mapped to the same location > > > Key: HDFS-11946 > URL: https://issues.apache.org/jira/browse/HDFS-11946 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Nandakumar > Attachments:
[jira] [Updated] (HDFS-11679) Ozone: SCM CLI: Implement list container command
[ https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuanbo Liu updated HDFS-11679: -- Attachment: HDFS-11679-HDFS-7240.003.patch > Ozone: SCM CLI: Implement list container command > > > Key: HDFS-11679 > URL: https://issues.apache.org/jira/browse/HDFS-11679 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Labels: command-line > Attachments: HDFS-11679-HDFS-7240.001.patch, > HDFS-11679-HDFS-7240.002.patch, HDFS-11679-HDFS-7240.003.patch > > > Implement the command to list containers > {code} > hdfs scm -container list -start [-count <100> | -end > ]{code} > Lists all containers known to SCM. The option -start allows the listing to > start from a specified container and -count controls the number of entries > returned but it is mutually exclusive with the -end option which returns keys > from the -start to -end range.
[jira] [Commented] (HDFS-11679) Ozone: SCM CLI: Implement list container command
[ https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047530#comment-16047530 ] Yuanbo Liu commented on HDFS-11679: --- After HDFS-11926, I propose using "-prefix" instead of "-end" for consistency. > Ozone: SCM CLI: Implement list container command > > > Key: HDFS-11679 > URL: https://issues.apache.org/jira/browse/HDFS-11679 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Labels: command-line > Attachments: HDFS-11679-HDFS-7240.001.patch, > HDFS-11679-HDFS-7240.002.patch > > > Implement the command to list containers > {code} > hdfs scm -container list -start [-count <100> | -end > ]{code} > Lists all containers known to SCM. The option -start allows the listing to > start from a specified container and -count controls the number of entries > returned but it is mutually exclusive with the -end option which returns keys > from the -start to -end range.
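The listing semantics under discussion (-start, -count, and the proposed -prefix) can be modeled with a short sketch. The method name and the in-memory sorted container set are hypothetical, used only to make the pagination and prefix-filter behavior concrete; the real command is backed by SCM, not a local collection.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class ListContainersSketch {
    // Hypothetical model of the list-container semantics: begin the scan at
    // '-start' (inclusive), keep only names matching '-prefix' when one is
    // given, and stop after '-count' entries.
    static List<String> list(TreeSet<String> containers, String start, int count, String prefix) {
        List<String> page = new ArrayList<>();
        for (String name : containers.tailSet(start, true)) {
            if (prefix != null && !name.startsWith(prefix)) {
                continue; // outside the requested prefix range
            }
            page.add(name);
            if (page.size() >= count) {
                break; // page size cap reached
            }
        }
        return page;
    }

    public static void main(String[] args) {
        TreeSet<String> containers = new TreeSet<>(List.of("alpha-1", "alpha-2", "beta-1"));
        System.out.println(list(containers, "alpha-1", 2, null));    // count-limited page
        System.out.println(list(containers, "alpha-1", 10, "beta")); // prefix-filtered page
    }
}
```

The sketch also shows why -count and a range-style option are naturally exclusive: one bounds the page by size, the other by name.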
[jira] [Updated] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location
[ https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-11946: -- Status: Patch Available (was: Open) > Ozone: Containers in different datanodes are mapped to the same location > > > Key: HDFS-11946 > URL: https://issues.apache.org/jira/browse/HDFS-11946 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Nandakumar > Attachments: HDFS-11946-HDFS-7240.000.patch > > > This is a problem in unit tests. Containers with the same container name in > different datanodes are mapped to the same local path location. As a result, > the first datanode will be able to succeed creating the container file but > the remaining datanodes will fail to create the container file with > FileAlreadyExistsException.
[jira] [Commented] (HDFS-11939) Ozone : add read/write random access to Chunks of a key
[ https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047480#comment-16047480 ] Hadoop QA commented on HDFS-11939: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 30s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile 
{color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | Inconsistent synchronization of org.apache.hadoop.scm.storage.ChunkOutputChannel.buffer; locked 68% of time Unsynchronized access at ChunkOutputChannel.java:68% of time Unsynchronized access at ChunkOutputChannel.java:[line 232] | | Failed junit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.cblock.TestBufferManager | | | hadoop.ozone.scm.node.TestNodeManager | | | hadoop.cblock.TestCBlockReadWrite | | | hadoop.ozone.container.ozoneimpl.TestRatisManager | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | | | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis | | | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11939 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872799/HDFS-11939-HDFS-7240.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d9b0d2569129 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |