[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Virajith Jalaparti updated HDFS-11902:
--------------------------------------
    Attachment: HDFS-11902-HDFS-9806.004.patch

Fixed checkstyle issues with earlier patch.

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> ---------------------------------------------------------
>
>                 Key: HDFS-11902
>                 URL: https://issues.apache.org/jira/browse/HDFS-11902
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Virajith Jalaparti
>            Assignee: Virajith Jalaparti
>         Attachments: HDFS-11902-HDFS-9806.001.patch, HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, HDFS-11902-HDFS-9806.004.patch
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform almost the same function on the Namenode and Datanode respectively. This JIRA is to merge them into one.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Virajith Jalaparti updated HDFS-11902:
--------------------------------------
    Status: Patch Available  (was: Open)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> ---------------------------------------------------------
>
>                 Key: HDFS-11902
>                 URL: https://issues.apache.org/jira/browse/HDFS-11902
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Virajith Jalaparti
>            Assignee: Virajith Jalaparti
>         Attachments: HDFS-11902-HDFS-9806.001.patch, HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, HDFS-11902-HDFS-9806.004.patch
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform almost the same function on the Namenode and Datanode respectively. This JIRA is to merge them into one.
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Virajith Jalaparti updated HDFS-11902:
--------------------------------------
    Status: Open  (was: Patch Available)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> ---------------------------------------------------------
>
>                 Key: HDFS-11902
>                 URL: https://issues.apache.org/jira/browse/HDFS-11902
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Virajith Jalaparti
>            Assignee: Virajith Jalaparti
>         Attachments: HDFS-11902-HDFS-9806.001.patch, HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform almost the same function on the Namenode and Datanode respectively. This JIRA is to merge them into one.
[jira] [Created] (HDFS-12273) Federation UI
Íñigo Goiri created HDFS-12273:
----------------------------------

             Summary: Federation UI
                 Key: HDFS-12273
                 URL: https://issues.apache.org/jira/browse/HDFS-12273
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Íñigo Goiri
            Assignee: Íñigo Goiri
[jira] [Commented] (HDFS-11554) [Documentation] Router-based federation documentation
[ https://issues.apache.org/jira/browse/HDFS-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117825#comment-16117825 ]

Hadoop QA commented on HDFS-11554:
----------------------------------

| (/) *{color:green}+1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 6s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11554 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880781/HDFS-11554-001.patch |
| Optional Tests | asflicense mvnsite xml |
| uname | Linux 379261d96988 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8d3fd81 |
| modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20589/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> [Documentation] Router-based federation documentation
> ------------------------------------------------------
>
>                 Key: HDFS-11554
>                 URL: https://issues.apache.org/jira/browse/HDFS-11554
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: fs
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Minor
>         Attachments: HDFS-11554-000.patch, HDFS-11554-001.patch
>
> Documentation describing Router-based HDFS federation (e.g., how to start a federated cluster).
[jira] [Updated] (HDFS-11554) [Documentation] Router-based federation documentation
[ https://issues.apache.org/jira/browse/HDFS-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HDFS-11554:
---------------------------------
    Attachment: HDFS-11554-001.patch

> [Documentation] Router-based federation documentation
> ------------------------------------------------------
>
>                 Key: HDFS-11554
>                 URL: https://issues.apache.org/jira/browse/HDFS-11554
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: fs
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Minor
>         Attachments: HDFS-11554-000.patch, HDFS-11554-001.patch
>
> Documentation describing Router-based HDFS federation (e.g., how to start a federated cluster).
[jira] [Commented] (HDFS-11554) [Documentation] Router-based federation documentation
[ https://issues.apache.org/jira/browse/HDFS-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1611#comment-1611 ]

Hadoop QA commented on HDFS-11554:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 1m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 8s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11554 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880730/HDFS-11554-000.patch |
| Optional Tests | asflicense mvnsite xml |
| uname | Linux 92b300655c0c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8d3fd81 |
| mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/20588/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/20588/artifact/patchprocess/whitespace-eol.txt |
| modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20588/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> [Documentation] Router-based federation documentation
> ------------------------------------------------------
>
>                 Key: HDFS-11554
>                 URL: https://issues.apache.org/jira/browse/HDFS-11554
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: fs
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Minor
>         Attachments: HDFS-11554-000.patch
>
> Documentation describing Router-based HDFS federation (e.g., how to start a federated cluster).
[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117776#comment-16117776 ]

Hadoop QA commented on HDFS-12221:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-project-dist hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client-modules/hadoop-client-runtime hadoop-client-modules/hadoop-client-minicluster {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 20s{color} | {color:green} root generated 0 new + 1373 unchanged - 6 fixed = 1373 total (was 1379) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 29s{color} | {color:orange} root: The patch generated 5 new + 7 unchanged - 0 fixed = 12 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 9s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-project-dist hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client-modules/hadoop-client-runtime hadoop-client-modules/hadoop-client-minicluster {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s{color} | {color:green} hadoop-project-dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 8s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 47s{color} |
[jira] [Updated] (HDFS-11554) [Documentation] Router-based federation documentation
[ https://issues.apache.org/jira/browse/HDFS-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HDFS-11554:
---------------------------------
    Status: Patch Available  (was: Open)

> [Documentation] Router-based federation documentation
> ------------------------------------------------------
>
>                 Key: HDFS-11554
>                 URL: https://issues.apache.org/jira/browse/HDFS-11554
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: fs
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Minor
>         Attachments: HDFS-11554-000.patch
>
> Documentation describing Router-based HDFS federation (e.g., how to start a federated cluster).
[jira] [Commented] (HDFS-12134) libhdfs++: Add a synchronization interface for the GSSAPI
[ https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117754#comment-16117754 ]

Deepak Majeti commented on HDFS-12134:
--------------------------------------

+1 LGTM

> libhdfs++: Add a synchronization interface for the GSSAPI
> ----------------------------------------------------------
>
>                 Key: HDFS-12134
>                 URL: https://issues.apache.org/jira/browse/HDFS-12134
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: James Clampffer
>         Attachments: HDFS-12134.HDFS-8707.000.patch, HDFS-12134.HDFS-8707.001.patch, HDFS-12134.HDFS-8707.002.patch, HDFS-12134.HDFS-8707.003.patch, HDFS-12134.HDFS-8707.004.patch
>
> Bits of the GSSAPI that Cyrus SASL uses aren't thread safe. There needs to be a way for a client application to share a lock with this library in order to prevent race conditions. It can be done using event callbacks through the C API, but we can provide something more robust (RAII) in the C++ API.
> Proposed client-supplied lock, pretty much the C++17 Lockable concept. Use a default if one isn't provided. This would be scoped at the process level, since it's unlikely there will be multiple instances of libgssapi unless someone puts some effort in with dlopen/dlsym.
> {code}
> class LockProvider
> {
>   virtual ~LockProvider() {}
>   // allow client application to deny access to the lock
>   virtual bool try_lock() = 0;
>   virtual void unlock() = 0;
> };
> {code}
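For illustration, the Lockable-style interface proposed in the issue above could be paired with a default provider and an RAII guard roughly as follows. This is only a sketch of the idea under discussion: `DefaultLockProvider` and `ScopedGssapiLock` are hypothetical names invented here, not classes from the actual patch.

```cpp
#include <atomic>
#include <cassert>

// The client-supplied lock interface proposed in the issue: essentially
// the C++17 Lockable concept, minus a blocking lock().
class LockProvider {
 public:
  virtual ~LockProvider() {}
  // Allow the client application to deny access to the lock.
  virtual bool try_lock() = 0;
  virtual void unlock() = 0;
};

// Hypothetical default used when the client does not supply a provider.
// An atomic flag keeps try_lock() well-defined even when the same thread
// probes a lock it already holds (std::mutex would make that UB).
class DefaultLockProvider : public LockProvider {
 public:
  bool try_lock() override {
    return !flag_.test_and_set(std::memory_order_acquire);
  }
  void unlock() override { flag_.clear(std::memory_order_release); }

 private:
  std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};

// Hypothetical RAII guard: the "more robust (RAII)" C++ API the issue
// mentions. The lock is released automatically when the guard goes out
// of scope, even if the GSSAPI call made under it throws.
class ScopedGssapiLock {
 public:
  explicit ScopedGssapiLock(LockProvider& provider)
      : provider_(provider), held_(provider.try_lock()) {}
  ~ScopedGssapiLock() {
    if (held_) provider_.unlock();
  }
  bool held() const { return held_; }

 private:
  LockProvider& provider_;
  bool held_;
};
```

A caller would wrap each non-thread-safe GSSAPI call in a `ScopedGssapiLock` and check `held()` before proceeding, since the interface explicitly allows the client application to deny access to the lock.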
[jira] [Commented] (HDFS-10646) Federation admin tool
[ https://issues.apache.org/jira/browse/HDFS-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117747#comment-16117747 ]

Hadoop QA commented on HDFS-10646:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 10s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10467 has 10 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 403 unchanged - 0 fixed = 404 total (was 403) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 59s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10646 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880752/HDFS-10646-HDFS-10467-002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml shellcheck shelldocs findbugs checkstyle cc |
| uname | Linux e650fc6d38e3 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-10467 / 64044a4 |
| Default Java | 1.8.0_131 |
| shellcheck | v0.4.6 |
| findbugs | v3.1.0-RC1 |
| findbugs |
[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117712#comment-16117712 ]

Hadoop QA commented on HDFS-11902:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 54s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 18s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 17s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} HDFS-9806 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 1s{color} | {color:orange} root: The patch generated 2 new + 434 unchanged - 3 fixed = 436 total (was 437) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 4s{color} | {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 25s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
| | hadoop.hdfs.server.blockmanagement.TestNodeCount |
| | hadoop.hdfs.TestDatanodeRegistration |
| | hadoop.hdfs.server.blockmanagement.TestBlockManager |
| | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | hadoop.hdfs.TestFileChecksum |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
| | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
| | hadoop.hdfs.server.namenode.TestMetaSave |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.TestMaintenanceState |
| | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
| |
[jira] [Commented] (HDFS-11035) Better documentation for maintenance mode and upgrade domain
[ https://issues-test.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090130#comment-16090130 ]

Ming Ma commented on HDFS-11035:
--------------------------------

Given these features are related to existing concepts such as decommission and block placement, we can include a description of these features in the relevant sections of the existing *.md files.

> Better documentation for maintenance mode and upgrade domain
> -------------------------------------------------------------
>
>                 Key: HDFS-11035
>                 URL: https://issues-test.apache.org/jira/browse/HDFS-11035
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, documentation
>    Affects Versions: 2.9.0
>            Reporter: Wei-Chiu Chuang
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing documentation about these two features is scarce, and the implementation has evolved from the original design doc. Looking at the code and Javadoc, I still don't quite get how I can put datanodes into maintenance mode or set up an upgrade domain.
> Filing this jira to propose that we write an up-to-date description of these two features.
[jira] [Commented] (HDFS-9388) Refactor decommission related code to support maintenance state for datanodes
[ https://issues.apache.org/jira/browse/HDFS-9388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117682#comment-16117682 ]

Hadoop QA commented on HDFS-9388:
---------------------------------

> Refactor decommission related code to support maintenance state for datanodes
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-9388
>                 URL: https://issues.apache.org/jira/browse/HDFS-9388
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Ming Ma
>            Assignee: Manoj Govindassamy
>             Fix For: 2.9.0, 3.0.0-beta1
>         Attachments: HDFS-9388.01.patch, HDFS-9388.02.patch
>
> Lots of code can be shared between the existing decommission functionality and the to-be-added maintenance state support for datanodes. To make it easier to add maintenance state support, let us first modify the existing code to make it more general.
[jira] [Comment Edited] (HDFS-9388) Refactor decommission related code to support maintenance state for datanodes
[ https://issues-test.apache.org/jira/browse/HDFS-9388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077367#comment-16077367 ] Ming Ma edited comment on HDFS-9388 at 8/8/17 1:06 AM: --- Thanks [~manojg]. was (Author: mingma): 1. Thanks [~manojg]. > Refactor decommission related code to support maintenance state for datanodes > - > > Key: HDFS-9388 > URL: https://issues-test.apache.org/jira/browse/HDFS-9388 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Ming Ma >Assignee: Manoj Govindassamy > Attachments: HDFS-9388.01.patch, HDFS-9388.02.patch > > > Lots of code can be shared between the existing decommission functionality > and to-be-added maintenance state support for datanodes. To make it easier to > add maintenance state support, let us first modify the existing code to make > it more general. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed
[ https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117622#comment-16117622 ] Wei-Chiu Chuang commented on HDFS-11303: [~zhangchen] ping. Would you still like to work on this patch? Thanks > Hedged read might hang infinitely if read data from all DN failed > -- > > Key: HDFS-11303 > URL: https://issues.apache.org/jira/browse/HDFS-11303 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.0.0-alpha1 >Reporter: Chen Zhang >Assignee: Chen Zhang > Attachments: HDFS-11303-001.patch, HDFS-11303-001.patch, > HDFS-11303-002.patch, HDFS-11303-002.patch > > > Hedged read reads from one DN first; if that read times out, it then reads > from other DNs simultaneously. > If reads from all DNs fail, this bug leaves the future list non-empty (the > first timed-out request is left in the list), and the read hangs in the loop > infinitely. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
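The failure mode is easy to model outside HDFS. Below is a minimal, self-contained Java sketch (not the actual DFSClient hedged-read code) of the corrected loop shape: each completed future must be removed from the tracking list, otherwise a loop conditioned on the list being non-empty would spin forever once every datanode read has failed.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HedgedReadSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        List<Future<String>> futures = new ArrayList<>();

        // Three hypothetical "datanode reads" that all fail, mimicking the
        // all-DNs-failed case from the bug report.
        for (int i = 0; i < 3; i++) {
            final int dn = i;
            futures.add(cs.submit(() -> {
                throw new IOException("read from DN " + dn + " failed");
            }));
        }

        String data = null;
        // Key point: drain each completed future from the tracking list.
        // If failed futures stayed in the list, this loop would never
        // terminate once all reads had failed.
        while (data == null && !futures.isEmpty()) {
            Future<String> done = cs.take(); // blocks until one finishes
            futures.remove(done);
            try {
                data = done.get();
            } catch (ExecutionException e) {
                System.out.println("failed: " + e.getCause().getMessage());
            }
        }
        System.out.println(data == null ? "all datanodes failed" : "got: " + data);
        pool.shutdown();
    }
}
```

The sketch exits after three failures instead of hanging; the real fix has to preserve the same invariant for the first (timed-out) request's future.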
[jira] [Updated] (HDFS-10646) Federation admin tool
[ https://issues.apache.org/jira/browse/HDFS-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-10646: --- Attachment: HDFS-10646-HDFS-10467-002.patch * Removed internal setConfiguration and perfBenchmark * {{printMounts}} to static * Changed names for addMount/removeMount/listMounts > Federation admin tool > - > > Key: HDFS-10646 > URL: https://issues.apache.org/jira/browse/HDFS-10646 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Affects Versions: HDFS-10467 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: HDFS-10646-HDFS-10467-000.patch, > HDFS-10646-HDFS-10467-001.patch, HDFS-10646-HDFS-10467-002.patch > > > Tools for administrators to manage HDFS federation. This includes managing > the mount table and decommissioning subclusters. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10646) Federation admin tool
[ https://issues.apache.org/jira/browse/HDFS-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117608#comment-16117608 ] Chris Douglas commented on HDFS-10646: -- This looks like a good first cut of the admin tool. Just a few nits: * These are vestigial? There are also some {{SetConfiguration}} PB messages, not sure if/how these are used... {noformat} +} else if ("-perfBenchmark".equalsIgnoreCase(cmd)) { // [snip] +} else if ("-setConfiguration".equalsIgnoreCase(cmd)) { {noformat} * {{printMounts}} can be static * Totally subjective, but the commands seem a little verbose. addMount/removeMount/listMounts could be add/rm/list without losing much > Federation admin tool > - > > Key: HDFS-10646 > URL: https://issues.apache.org/jira/browse/HDFS-10646 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Affects Versions: HDFS-10467 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: HDFS-10646-HDFS-10467-000.patch, > HDFS-10646-HDFS-10467-001.patch > > > Tools for administrators to manage HDFS federation. This includes managing > the mount table and decommissioning subclusters. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
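The naming nit need not break anyone's scripts: a dispatcher can accept both the long and the short verb and map them onto one canonical operation. A self-contained toy sketch of that idea (all command names here are illustrative, not the actual RouterAdmin flags):

```java
import java.util.Map;

public class AdminDispatchSketch {
    // Map both verbose and terse spellings (as suggested in the review)
    // onto one canonical operation. Names are hypothetical.
    private static final Map<String, String> COMMANDS = Map.of(
        "-addMount", "add", "-add", "add",
        "-removeMount", "rm", "-rm", "rm",
        "-listMounts", "ls", "-ls", "ls");

    static String dispatch(String cmd) {
        String op = COMMANDS.get(cmd);
        if (op == null) {
            throw new IllegalArgumentException("unknown command: " + cmd);
        }
        return op;
    }

    public static void main(String[] args) {
        // Old and new spellings resolve to the same operation.
        System.out.println(dispatch("-addMount"));
        System.out.println(dispatch("-rm"));
    }
}
```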
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Open (was: Patch Available) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Patch Available (was: Open) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Attachment: HDFS-11902-HDFS-9806.003.patch Posting the rebased patch. > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11554) [Documentation] Router-based federation documentation
[ https://issues.apache.org/jira/browse/HDFS-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-11554: --- Attachment: HDFS-11554-000.patch First proposal for documentation. > [Documentation] Router-based federation documentation > - > > Key: HDFS-11554 > URL: https://issues.apache.org/jira/browse/HDFS-11554 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-11554-000.patch > > > Documentation describing Router-based HDFS federation (e.g., how to start a > federated cluster). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12221) Replace xcerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Yadav updated HDFS-12221: -- Status: Patch Available (was: Open) > Replace xcerces in XmlEditsVisitor > --- > > Key: HDFS-12221 > URL: https://issues.apache.org/jira/browse/HDFS-12221 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Ajay Yadav > Attachments: fsimage_hdfs-12221.xml, HDFS-12221.01.patch, > HDFS-12221.02.patch, HDFS-12221.03.patch > > > XmlEditsVisitor should use new XML capability in the newer JDK, to make JAR > shading easier (HADOOP-14672) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11957) Enable POSIX ACL inheritance by default
[ https://issues.apache.org/jira/browse/HDFS-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117494#comment-16117494 ] John Zhuge commented on HDFS-11957: --- Committing to trunk tomorrow if there is no objection. > Enable POSIX ACL inheritance by default > --- > > Key: HDFS-11957 > URL: https://issues.apache.org/jira/browse/HDFS-11957 > Project: Hadoop HDFS > Issue Type: Improvement > Components: security >Affects Versions: 3.0.0-alpha2 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HDFS-11957.001.patch, HDFS-11957.002.patch > > > It is time to enable POSIX ACL inheritance by default. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
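The semantics being enabled are the POSIX.1e inheritance rules: when the parent directory carries a default ACL, the child's initial permissions are filtered by that default ACL rather than by the client's umask (the legacy behavior applied the umask regardless). A toy Java model of the two behaviors, heavily simplified and not the NameNode code:

```java
public class AclInheritSketch {
    // Toy model: with POSIX inheritance and a parent default ACL present,
    // the requested mode is masked by the default ACL; otherwise the
    // classic "mode & ~umask" rule applies. Real HDFS also copies the
    // ACL entries themselves; only the mode arithmetic is shown here.
    static int childMode(Integer parentDefaultAcl, int requested, int umask,
                         boolean posixInheritance) {
        if (posixInheritance && parentDefaultAcl != null) {
            return requested & parentDefaultAcl; // umask ignored
        }
        return requested & ~umask;               // legacy behavior
    }

    public static void main(String[] args) {
        int requested = 0666, umask = 022, defaultAcl = 0664;
        System.out.printf("legacy: %o%n",
            childMode(defaultAcl, requested, umask, false));
        System.out.printf("posix: %o%n",
            childMode(defaultAcl, requested, umask, true));
    }
}
```

With the feature off, a file created with mode 0666 under that parent ends up 0644; with it on, the default ACL wins and the file gets 0664.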
[jira] [Commented] (HDFS-6939) Support path-based filtering of inotify events
[ https://issues.apache.org/jira/browse/HDFS-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117482#comment-16117482 ] Ming Ma commented on HDFS-6939: --- Yeah, we can include this feature if it provides value. A couple of questions: * Each getEditsFromTxid RPC call ends up sending the filter over the wire, so a filter with lots of paths has a perf impact. Do we need to support a large number of paths per call? * In the future there could be other types of filters, e.g. a) based on FsEditLogOp type; b) support for different logical operators OR, AND, etc. To make it extensible, perhaps we can define an interface with the signature shouldNotify(FsEditLogOp) and provide the path-based PathBasedInotifyFilter for now. Then InotifyFSEditLogOpTranslator will be simpler by checking shouldNotify upfront; if we need to add path-and-editop-based filtering, we can just add PathAndOpBasedInotifyFilter without changing InotifyFSEditLogOpTranslator. * DFSClient's existing getInotifyEventStream methods are only used by DistributedFileSystem. So you don't need to keep these old methods on DFSClient; instead, have DistributedFileSystem's old getInotifyEventStream methods call DFSClient's new methods. Also, maybe we can consider deprecating DistributedFileSystem's old getInotifyEventStream methods. > Support path-based filtering of inotify events > -- > > Key: HDFS-6939 > URL: https://issues.apache.org/jira/browse/HDFS-6939 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client, namenode, qjm >Reporter: James Thomas >Assignee: Surendra Singh Lilhore > Attachments: HDFS-6939-001.patch > > > Users should be able to specify that they only want events involving > particular paths. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
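The extensibility suggestion in the comment above can be sketched as follows. This is a toy model, not Hadoop code: `EditOp` stands in for `FsEditLogOp`, and the interface and class names simply mirror the ones proposed in the comment:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class InotifyFilterSketch {
    // Toy stand-in for FsEditLogOp: only the path matters here.
    static class EditOp {
        final String path;
        EditOp(String path) { this.path = path; }
    }

    // The extensible filter interface proposed in the comment.
    interface InotifyFilter {
        boolean shouldNotify(EditOp op);
    }

    // Path-based implementation. Other filters (op-type based, AND/OR
    // combinators) could implement the same interface later without
    // touching the translator that calls shouldNotify().
    static class PathBasedInotifyFilter implements InotifyFilter {
        private final Set<String> prefixes;
        PathBasedInotifyFilter(String... prefixes) {
            this.prefixes = new HashSet<>(Arrays.asList(prefixes));
        }
        @Override
        public boolean shouldNotify(EditOp op) {
            for (String p : prefixes) {
                if (op.path.startsWith(p)) {
                    return true;
                }
            }
            return false;
        }
    }

    public static void main(String[] args) {
        InotifyFilter filter = new PathBasedInotifyFilter("/data/logs");
        System.out.println(filter.shouldNotify(new EditOp("/data/logs/app.log")));
        System.out.println(filter.shouldNotify(new EditOp("/tmp/scratch")));
    }
}
```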
[jira] [Updated] (HDFS-12264) DataNode uses a deprecated method IoUtils#cleanup.
[ https://issues.apache.org/jira/browse/HDFS-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12264: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-beta1 2.9.0 Status: Resolved (was: Patch Available) Committed. Thanks for the contribution [~ajayydv]. > DataNode uses a deprecated method IoUtils#cleanup. > -- > > Key: HDFS-12264 > URL: https://issues.apache.org/jira/browse/HDFS-12264 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Yadav >Assignee: Ajay Yadav > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: HDFS-12264.01.patch > > > DataNode uses a deprecated method IoUtils#cleanup. It can be replaced with > IoUtils#cleanupWithLogger. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
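For reference, the replacement is a logger-first varargs close helper; the real method is approximately `IOUtils.cleanupWithLogger(Logger, java.io.Closeable...)` in `org.apache.hadoop.io`. The sketch below models its shape with a stand-in `Log` interface so it runs without Hadoop or SLF4J on the classpath:

```java
import java.io.Closeable;
import java.io.IOException;

public class CleanupSketch {
    // Stand-in for org.slf4j.Logger, so the sketch is self-contained.
    interface Log { void warn(String msg); }

    // Approximate shape of IOUtils.cleanupWithLogger(Logger, Closeable...):
    // close everything, swallow IOExceptions, log instead of throwing.
    static void cleanupWithLogger(Log log, Closeable... closeables) {
        for (Closeable c : closeables) {
            if (c == null) continue; // nulls are tolerated
            try {
                c.close();
            } catch (IOException e) {
                if (log != null) log.warn("Exception in closing " + c);
            }
        }
    }

    public static void main(String[] args) {
        Closeable ok = () -> System.out.println("closed");
        Closeable bad = () -> { throw new IOException("boom"); };
        // One good stream, one failing stream, one null: all handled.
        cleanupWithLogger(msg -> System.out.println("WARN: " + msg), ok, bad, null);
    }
}
```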
[jira] [Updated] (HDFS-12221) Replace xcerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Yadav updated HDFS-12221: -- Status: Open (was: Patch Available) > Replace xcerces in XmlEditsVisitor > --- > > Key: HDFS-12221 > URL: https://issues.apache.org/jira/browse/HDFS-12221 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Ajay Yadav > Attachments: fsimage_hdfs-12221.xml, HDFS-12221.01.patch, > HDFS-12221.02.patch, HDFS-12221.03.patch > > > XmlEditsVisitor should use new XML capability in the newer JDK, to make JAR > shading easier (HADOOP-14672) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12221) Replace xcerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Yadav updated HDFS-12221: -- Status: Patch Available (was: Open) > Replace xcerces in XmlEditsVisitor > --- > > Key: HDFS-12221 > URL: https://issues.apache.org/jira/browse/HDFS-12221 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Ajay Yadav > Attachments: fsimage_hdfs-12221.xml, HDFS-12221.01.patch, > HDFS-12221.02.patch, HDFS-12221.03.patch > > > XmlEditsVisitor should use new XML capability in the newer JDK, to make JAR > shading easier (HADOOP-14672) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12221) Replace xcerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Yadav updated HDFS-12221: -- Status: Open (was: Patch Available) > Replace xcerces in XmlEditsVisitor > --- > > Key: HDFS-12221 > URL: https://issues.apache.org/jira/browse/HDFS-12221 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Ajay Yadav > Attachments: fsimage_hdfs-12221.xml, HDFS-12221.01.patch, > HDFS-12221.02.patch, HDFS-12221.03.patch > > > XmlEditsVisitor should use new XML capability in the newer JDK, to make JAR > shading easier (HADOOP-14672) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117438#comment-16117438 ] Ajay Yadav commented on HDFS-12221: --- [~fight4gold] Uploaded the patch again after fixing the conflicts. "${runningWithNative}" is not in the new patch. Please review. > Replace xcerces in XmlEditsVisitor > --- > > Key: HDFS-12221 > URL: https://issues.apache.org/jira/browse/HDFS-12221 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Ajay Yadav > Attachments: fsimage_hdfs-12221.xml, HDFS-12221.01.patch, > HDFS-12221.02.patch, HDFS-12221.03.patch > > > XmlEditsVisitor should use new XML capability in the newer JDK, to make JAR > shading easier (HADOOP-14672) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12221) Replace xcerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Yadav updated HDFS-12221: -- Attachment: HDFS-12221.03.patch > Replace xcerces in XmlEditsVisitor > --- > > Key: HDFS-12221 > URL: https://issues.apache.org/jira/browse/HDFS-12221 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Ajay Yadav > Attachments: fsimage_hdfs-12221.xml, HDFS-12221.01.patch, > HDFS-12221.02.patch, HDFS-12221.03.patch > > > XmlEditsVisitor should use new XML capability in the newer JDK, to make JAR > shading easier (HADOOP-14672) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11554) [Documentation] Router-based federation documentation
[ https://issues.apache.org/jira/browse/HDFS-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-11554: --- Summary: [Documentation] Router-based federation documentation (was: Federation Router documentation) > [Documentation] Router-based federation documentation > - > > Key: HDFS-11554 > URL: https://issues.apache.org/jira/browse/HDFS-11554 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > > Documentation describing Router-based HDFS federation (e.g., how to start a > federated cluster). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12264) DataNode uses a deprecated method IoUtils#cleanup.
[ https://issues.apache.org/jira/browse/HDFS-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117389#comment-16117389 ] Hadoop QA commented on HDFS-12264: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 
10m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 32s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12264 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880702/HDFS-12264.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 476855bc6f1c 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / adb84f3 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20584/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20584/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > DataNode uses a deprecated method IoUtils#cleanup. > -- > > Key: HDFS-12264 > URL: https://issues.apache.org/jira/browse/HDFS-12264 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Yadav >Assignee: Ajay Yadav > Attachments: HDFS-12264.01.patch > > > DataNode uses a deprecated method IoUtils#cleanup. It can be replaced with > IoUtils#cleanupWithLogger. -- This message was sent by Atlassian JIRA
[jira] [Commented] (HDFS-11957) Enable POSIX ACL inheritance by default
[ https://issues.apache.org/jira/browse/HDFS-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117373#comment-16117373 ] Andrew Wang commented on HDFS-11957: John, do you think we can get this in? > Enable POSIX ACL inheritance by default > --- > > Key: HDFS-11957 > URL: https://issues.apache.org/jira/browse/HDFS-11957 > Project: Hadoop HDFS > Issue Type: Improvement > Components: security >Affects Versions: 3.0.0-alpha2 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HDFS-11957.001.patch, HDFS-11957.002.patch > > > It is time to enable POSIX ACL inheritance by default. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.
[ https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12093: -- Resolution: Fixed Status: Resolved (was: Patch Available) > [READ] Share remoteFS between ProvidedReplica instances. > > > Key: HDFS-12093 > URL: https://issues.apache.org/jira/browse/HDFS-12093 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs > Attachments: HDFS-12093-HDFS-9806.001.patch, > HDFS-12093-HDFS-9806.002.patch > > > When a Datanode comes online using Provided storage, it fills the > {{ReplicaMap}} with the known replicas. With Provided Storage, this includes > {{ProvidedReplica}} instances. Each of these objects, in their constructor, > will construct an FileSystem using the Service Provider. This can result in > contacting the remote file system and checking that the credentials are > correct and that the data is there. For large systems this is a prohibitively > expensive operation to perform per replica. > Instead, the {{ProvidedVolumeImpl}} should own the reference to the > {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on > their creation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.
[ https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117339#comment-16117339 ] Virajith Jalaparti commented on HDFS-12093: --- The checkstyle errors are because of more than 7 parameters for the constructors of {{ProvidedReplica}} and {{FinalizedProvidedReplica}}. The failing tests are unrelated. The ASF license warnings are not caused by the patch. Committing v002 to the feature branch. > [READ] Share remoteFS between ProvidedReplica instances. > > > Key: HDFS-12093 > URL: https://issues.apache.org/jira/browse/HDFS-12093 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs > Attachments: HDFS-12093-HDFS-9806.001.patch, > HDFS-12093-HDFS-9806.002.patch > > > When a Datanode comes online using Provided storage, it fills the > {{ReplicaMap}} with the known replicas. With Provided Storage, this includes > {{ProvidedReplica}} instances. Each of these objects, in their constructor, > will construct an FileSystem using the Service Provider. This can result in > contacting the remote file system and checking that the credentials are > correct and that the data is there. For large systems this is a prohibitively > expensive operation to perform per replica. > Instead, the {{ProvidedVolumeImpl}} should own the reference to the > {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on > their creation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117294#comment-16117294 ] Hadoop QA commented on HDFS-12237: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-8707 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 43s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 10s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 38s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 45s{color} | {color:green} HDFS-8707 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 59s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 59s{color} | {color:green} the patch passed 
{color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 57s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 19s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}150m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:3117e2a | | JIRA Issue | HDFS-12237 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880676/HDFS-12237.HDFS-8707.001.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux af9f0d76708d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-8707 / 3117e2a | | Default Java | 1.7.0_131 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_144 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 | | JDK v1.7.0_131 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20581/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20581/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from > source > > > Key: HDFS-12237 > URL: https://issues.apache.org/jira/browse/HDFS-12237 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments:
[jira] [Commented] (HDFS-12264) DataNode uses a deprecated method IoUtils#cleanup.
[ https://issues.apache.org/jira/browse/HDFS-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117263#comment-16117263 ] Arpit Agarwal commented on HDFS-12264: -- +1 pending Jenkins. > DataNode uses a deprecated method IoUtils#cleanup. > -- > > Key: HDFS-12264 > URL: https://issues.apache.org/jira/browse/HDFS-12264 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Yadav >Assignee: Ajay Yadav > Attachments: HDFS-12264.01.patch > > > DataNode uses a deprecated method IoUtils#cleanup. It can be replaced with > IoUtils#cleanupWithLogger. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
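The substitution described in HDFS-12264 can be illustrated with a small self-contained sketch. The class below is a stand-in for {{org.apache.hadoop.io.IOUtils#cleanupWithLogger}} written for illustration only (the real method takes an org.slf4j.Logger; the StringBuilder log here is a hypothetical substitute), showing the null-safe, non-throwing close semantics that both the deprecated {{cleanup}} and its replacement provide:

```java
import java.io.Closeable;
import java.io.IOException;

// Illustrative stand-in for IOUtils#cleanupWithLogger: close every Closeable,
// tolerating nulls, and record (rather than propagate) any IOException.
// The real Hadoop method logs through an org.slf4j.Logger instead of a
// StringBuilder; this sketch only demonstrates the contract.
public class CleanupSketch {
    public static void cleanupWithLogger(StringBuilder log, Closeable... closeables) {
        for (Closeable c : closeables) {
            if (c == null) {
                continue; // null entries are tolerated, as in IOUtils#cleanup
            }
            try {
                c.close();
            } catch (IOException e) {
                // swallow the exception but leave a trace for the operator
                log.append("Exception in closing ").append(c).append('\n');
            }
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Closeable ok = () -> { };
        Closeable bad = () -> { throw new IOException("disk gone"); };
        cleanupWithLogger(log, ok, null, bad); // never throws
        System.out.println(log.length() > 0);  // the failing close was recorded
    }
}
```

Call sites in the DataNode would then replace `IOUtils.cleanup(LOG, streams)` with `IOUtils.cleanupWithLogger(LOG, streams)`, the only difference being the slf4j logger type.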
[jira] [Comment Edited] (HDFS-12264) DataNode uses a deprecated method IoUtils#cleanup.
[ https://issues.apache.org/jira/browse/HDFS-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117258#comment-16117258 ] Arpit Agarwal edited comment on HDFS-12264 at 8/7/17 9:01 PM: -- [~arpitagarwal],[~anu] Could you please review patch for HDFS-12264. was (Author: ajayydv): [~aagarwa],[~anu] Could you please review patch for HDFS-12264. > DataNode uses a deprecated method IoUtils#cleanup. > -- > > Key: HDFS-12264 > URL: https://issues.apache.org/jira/browse/HDFS-12264 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Yadav >Assignee: Ajay Yadav > Attachments: HDFS-12264.01.patch > > > DataNode uses a deprecated method IoUtils#cleanup. It can be replaced with > IoUtils#cleanupWithLogger. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12264) DataNode uses a deprecated method IoUtils#cleanup.
[ https://issues.apache.org/jira/browse/HDFS-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117258#comment-16117258 ] Ajay Yadav commented on HDFS-12264: --- [~aagarwa],[~anu] Could you please review the patch for HDFS-12264. > DataNode uses a deprecated method IoUtils#cleanup. > -- > > Key: HDFS-12264 > URL: https://issues.apache.org/jira/browse/HDFS-12264 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Yadav >Assignee: Ajay Yadav > Attachments: HDFS-12264.01.patch > > > DataNode uses a deprecated method IoUtils#cleanup. It can be replaced with > IoUtils#cleanupWithLogger. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12264) DataNode uses a deprecated method IoUtils#cleanup.
[ https://issues.apache.org/jira/browse/HDFS-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Yadav updated HDFS-12264: -- Attachment: HDFS-12264.01.patch > DataNode uses a deprecated method IoUtils#cleanup. > -- > > Key: HDFS-12264 > URL: https://issues.apache.org/jira/browse/HDFS-12264 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Yadav >Assignee: Ajay Yadav > Attachments: HDFS-12264.01.patch > > > DataNode uses a deprecated method IoUtils#cleanup. It can be replaced with > IoUtils#cleanupWithLogger. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12264) DataNode uses a deprecated method IoUtils#cleanup.
[ https://issues.apache.org/jira/browse/HDFS-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Yadav updated HDFS-12264: -- Status: Patch Available (was: In Progress) > DataNode uses a deprecated method IoUtils#cleanup. > -- > > Key: HDFS-12264 > URL: https://issues.apache.org/jira/browse/HDFS-12264 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Yadav >Assignee: Ajay Yadav > Attachments: HDFS-12264.01.patch > > > DataNode uses a deprecated method IoUtils#cleanup. It can be replaced with > IoUtils#cleanupWithLogger. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-12264) DataNode uses a deprecated method IoUtils#cleanup.
[ https://issues.apache.org/jira/browse/HDFS-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12264 started by Ajay Yadav. - > DataNode uses a deprecated method IoUtils#cleanup. > -- > > Key: HDFS-12264 > URL: https://issues.apache.org/jira/browse/HDFS-12264 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Yadav >Assignee: Ajay Yadav > Attachments: HDFS-12264.01.patch > > > DataNode uses a deprecated method IoUtils#cleanup. It can be replaced with > IoUtils#cleanupWithLogger. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-12237: - Status: In Progress (was: Patch Available) > libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from > source > > > Key: HDFS-12237 > URL: https://issues.apache.org/jira/browse/HDFS-12237 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-12237.HDFS-8707.000.patch, > HDFS-12237.HDFS-8707.001.patch > > > Looks like the PROTOC_IS_COMPATIBLE check fails when the Protobuf library is > built from source. This happens because the check is performed during the > cmake phase, while the protobuf library needed for this test is built from > source only during the make phase, so the check fails with "ld: cannot find > -lprotobuf" because the library has not been built yet. We should probably restrict this > test to run only when the Protobuf library is already present and not > being built from source. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-12237: - Status: Patch Available (was: In Progress) > libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from > source > > > Key: HDFS-12237 > URL: https://issues.apache.org/jira/browse/HDFS-12237 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-12237.HDFS-8707.000.patch, > HDFS-12237.HDFS-8707.001.patch > > > Looks like the PROTOC_IS_COMPATIBLE check fails when the Protobuf library is > built from source. This happens because the check is performed during the > cmake phase, while the protobuf library needed for this test is built from > source only during the make phase, so the check fails with "ld: cannot find > -lprotobuf" because the library has not been built yet. We should probably restrict this > test to run only when the Protobuf library is already present and not > being built from source. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
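The restriction proposed in the issue description could look roughly like the CMake fragment below. This is a sketch only: the file path `cmake/protoc_check.cc` and the exact variable names are assumptions following common FindProtobuf conventions, not the actual libhdfs++ build files.

```cmake
# Only run the protoc/libprotobuf compatibility link test when a prebuilt
# protobuf library was found at cmake time. If protobuf is being built from
# source during the make phase, -lprotobuf does not exist yet and the
# try_compile would fail with "ld: cannot find -lprotobuf".
if(PROTOBUF_LIBRARY)
  try_compile(PROTOC_IS_COMPATIBLE
    ${CMAKE_BINARY_DIR}/protoc_check
    ${CMAKE_SOURCE_DIR}/cmake/protoc_check.cc
    LINK_LIBRARIES ${PROTOBUF_LIBRARY})
  if(NOT PROTOC_IS_COMPATIBLE)
    message(WARNING "protoc and libprotobuf appear to be incompatible")
  endif()
else()
  message(STATUS "protobuf built from source; skipping PROTOC_IS_COMPATIBLE check")
endif()
```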
[jira] [Commented] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.
[ https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117230#comment-16117230 ] Hadoop QA commented on HDFS-12093: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 35s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
javac {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 14 unchanged - 0 fixed = 17 total (was 14) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 94m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12093 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880688/HDFS-12093-HDFS-9806.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f8c3f62d0b10 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-9806 / 77b671c | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/20582/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20582/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20582/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/20582/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20582/console | |
[jira] [Commented] (HDFS-12271) Incorrect statement in Downgrade section of HDFS Rolling Upgrade document
[ https://issues.apache.org/jira/browse/HDFS-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117197#comment-16117197 ] Hadoop QA commented on HDFS-12271: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12271 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880694/HDFS-12271.000.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 895dfa05cc00 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / adb84f3 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20583/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Incorrect statement in Downgrade section of HDFS Rolling Upgrade document > - > > Key: HDFS-12271 > URL: https://issues.apache.org/jira/browse/HDFS-12271 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12271.000.patch > > > In {{HDFS Rolling Upgrade}} document under {{Downgrade}} section, instruction > given for {{Downgrade Active and Standby NNs}} has the following statement > bq. Shutdown and upgrade NN1 > which should be > bq. Shutdown and downgrade NN1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12272) chooseRemoteRack() semantics broken in trunk
[ https://issues.apache.org/jira/browse/HDFS-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117168#comment-16117168 ] Kihwal Lee commented on HDFS-12272: --- My apologies. I was looking at the wrong code base. > chooseRemoteRack() semantics broken in trunk > > > Key: HDFS-12272 > URL: https://issues.apache.org/jira/browse/HDFS-12272 > Project: Hadoop HDFS > Issue Type: Bug > Components: block placement >Affects Versions: 3.0.0-alpha3 >Reporter: Kihwal Lee >Priority: Critical > > The {{chooseRemoteRack()}} method in the default block placement policy was > designed to pick from the maximum number of racks. E.g., if asked to pick 2 and > there are 2 or more racks available, the two will be placed on different racks. This > wasn't implicit or accidental semantics; there was specific logic in > {{chooseRandom()}} that made it happen. > This behavior is broken after HDFS-11530, as that logic was removed from > {{chooseRandom()}}. Now the result is unpredictable: sometimes the replicas > end up in the same rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault
[ https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117163#comment-16117163 ] Kihwal Lee commented on HDFS-11530: --- Sorry, never mind. I was looking at the wrong branch. > Use HDFS specific network topology to choose datanode in > BlockPlacementPolicyDefault > > > Key: HDFS-11530 > URL: https://issues.apache.org/jira/browse/HDFS-11530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Fix For: 3.0.0-alpha4 > > Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, > HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch, > HDFS-11530.006.patch, HDFS-11530.007.patch, HDFS-11530.008.patch, > HDFS-11530.009.patch, HDFS-11530.010.patch, HDFS-11530.011.patch, > HDFS-11530.012.patch, HDFS-11530.013.patch, HDFS-11530.014.patch, > HDFS-11530.015.patch > > > The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482. > But this method is contained in the new topology {{DFSNetworkTopology}}, which is > specific to HDFS. We should update this and let > {{BlockPlacementPolicyDefault}} use the new approach, since the original one is > inefficient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-12272) chooseRemoteRack() semantics broken in trunk
[ https://issues.apache.org/jira/browse/HDFS-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee resolved HDFS-12272. --- Resolution: Invalid > chooseRemoteRack() semantics broken in trunk > > > Key: HDFS-12272 > URL: https://issues.apache.org/jira/browse/HDFS-12272 > Project: Hadoop HDFS > Issue Type: Bug > Components: block placement >Affects Versions: 3.0.0-alpha3 >Reporter: Kihwal Lee >Priority: Critical > > The {{chooseRemoteRack()}} method in the default block placement policy was > designed to pick from the maximum number of racks. E.g., if asked to pick 2 and > there are 2 or more racks available, the two will be placed on different racks. This > wasn't implicit or accidental semantics; there was specific logic in > {{chooseRandom()}} that made it happen. > This behavior is broken after HDFS-11530, as that logic was removed from > {{chooseRandom()}}. Now the result is unpredictable: sometimes the replicas > end up in the same rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12272) chooseRemoteRack() semantics broken in trunk
Kihwal Lee created HDFS-12272: - Summary: chooseRemoteRack() semantics broken in trunk Key: HDFS-12272 URL: https://issues.apache.org/jira/browse/HDFS-12272 Project: Hadoop HDFS Issue Type: Bug Components: block placement Affects Versions: 3.0.0-alpha3 Reporter: Kihwal Lee Priority: Critical The {{chooseRemoteRack()}} method in the default block placement policy was designed to pick from the maximum number of racks. E.g., if asked to pick 2 and there are 2 or more racks available, the two will be placed on different racks. This wasn't implicit or accidental semantics; there was specific logic in {{chooseRandom()}} that made it happen. This behavior is broken after HDFS-11530, as that logic was removed from {{chooseRandom()}}. Now the result is unpredictable: sometimes the replicas end up in the same rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault
[ https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117145#comment-16117145 ] Kihwal Lee commented on HDFS-11530: --- This breaks the existing {{chooseRandom()}} behavior. It used to pick from the maximum number of racks. When {{chooseRemoteRack()}} is called with more than one replica, it used to pick from multiple racks if available. Now it is random, so sometimes they end up in the same rack. > Use HDFS specific network topology to choose datanode in > BlockPlacementPolicyDefault > > > Key: HDFS-11530 > URL: https://issues.apache.org/jira/browse/HDFS-11530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Fix For: 3.0.0-alpha4 > > Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, > HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch, > HDFS-11530.006.patch, HDFS-11530.007.patch, HDFS-11530.008.patch, > HDFS-11530.009.patch, HDFS-11530.010.patch, HDFS-11530.011.patch, > HDFS-11530.012.patch, HDFS-11530.013.patch, HDFS-11530.014.patch, > HDFS-11530.015.patch > > > The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482. > But this method is contained in the new topology {{DFSNetworkTopology}}, which is > specific to HDFS. We should update this and let > {{BlockPlacementPolicyDefault}} use the new approach, since the original one is > inefficient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
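The rack-spreading semantics discussed in this comment can be illustrated with a toy selector. This is a deliberate simplification, not the actual {{NetworkTopology}}/{{BlockPlacementPolicyDefault}} code; node and rack names are made up. The point it demonstrates is the old contract: prefer nodes on racks not yet holding a replica, and fall back to a purely random pick only when no unused rack remains.

```java
import java.util.*;

// Toy model of rack-spread selection: when two racks are available and two
// remote replicas are requested, the two picks always land on different racks.
public class RackSpreadSketch {
    static List<String> chooseRemote(Map<String, String> nodeToRack,
                                     int count, Random rng) {
        List<String> chosen = new ArrayList<>();
        Set<String> usedRacks = new HashSet<>();
        List<String> candidates = new ArrayList<>(nodeToRack.keySet());
        Collections.sort(candidates);          // deterministic base order
        Collections.shuffle(candidates, rng);  // then randomize
        for (int i = 0; i < count; i++) {
            String pick = null;
            for (String node : candidates) {   // preferred: unused rack
                if (!chosen.contains(node)
                        && !usedRacks.contains(nodeToRack.get(node))) {
                    pick = node;
                    break;
                }
            }
            if (pick == null) {                // fallback: any remaining node
                for (String node : candidates) {
                    if (!chosen.contains(node)) { pick = node; break; }
                }
            }
            if (pick == null) break;           // topology exhausted
            chosen.add(pick);
            usedRacks.add(nodeToRack.get(pick));
        }
        return chosen;
    }

    public static void main(String[] args) {
        Map<String, String> topo = new HashMap<>();
        topo.put("dn1", "/rack1"); topo.put("dn2", "/rack1");
        topo.put("dn3", "/rack2"); topo.put("dn4", "/rack2");
        List<String> picks = chooseRemote(topo, 2, new Random(42));
        // With two racks available, the picks are never on the same rack.
        System.out.println(topo.get(picks.get(0)).equals(topo.get(picks.get(1))));
    }
}
```

Dropping the `usedRacks` preference and keeping only the fallback loop reproduces the post-HDFS-11530 behavior the reporter describes: the second replica then lands on the first replica's rack with some probability.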
[jira] [Updated] (HDFS-12271) Incorrect statement in Downgrade section of HDFS Rolling Upgrade document
[ https://issues.apache.org/jira/browse/HDFS-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12271: -- Attachment: HDFS-12271.000.patch > Incorrect statement in Downgrade section of HDFS Rolling Upgrade document > - > > Key: HDFS-12271 > URL: https://issues.apache.org/jira/browse/HDFS-12271 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12271.000.patch > > > In {{HDFS Rolling Upgrade}} document under {{Downgrade}} section, instruction > given for {{Downgrade Active and Standby NNs}} has the following statement > bq. Shutdown and upgrade NN1 > which should be > bq. Shutdown and downgrade NN1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12271) Incorrect statement in Downgrade section of HDFS Rolling Upgrade document
[ https://issues.apache.org/jira/browse/HDFS-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12271: -- Status: Patch Available (was: Open) > Incorrect statement in Downgrade section of HDFS Rolling Upgrade document > - > > Key: HDFS-12271 > URL: https://issues.apache.org/jira/browse/HDFS-12271 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12271.000.patch > > > In {{HDFS Rolling Upgrade}} document under {{Downgrade}} section, instruction > given for {{Downgrade Active and Standby NNs}} has the following statement > bq. Shutdown and upgrade NN1 > which should be > bq. Shutdown and downgrade NN1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12271) Incorrect statement in Downgrade section of HDFS Rolling Upgrade document
Nandakumar created HDFS-12271: - Summary: Incorrect statement in Downgrade section of HDFS Rolling Upgrade document Key: HDFS-12271 URL: https://issues.apache.org/jira/browse/HDFS-12271 Project: Hadoop HDFS Issue Type: Bug Components: documentation Reporter: Nandakumar Assignee: Nandakumar Priority: Minor In {{HDFS Rolling Upgrade}} document under {{Downgrade}} section, instruction given for {{Downgrade Active and Standby NNs}} has the following statement bq. Shutdown and upgrade NN1 which should be bq. Shutdown and downgrade NN1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10646) Federation admin tool
[ https://issues.apache.org/jira/browse/HDFS-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117077#comment-16117077 ] Hadoop QA commented on HDFS-10646: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 1s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDFS-10467 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 10s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} HDFS-10467 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 19s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10467 has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} HDFS-10467 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 2s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 2 new + 409 unchanged - 0 fixed = 411 total (was 409) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 49s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 404 unchanged - 0 fixed = 405 total (was 404) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 26s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | | Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-10646 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880663/HDFS-10646-HDFS-10467-001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml shellcheck shelldocs findbugs checkstyle cc | | uname | Linux fa6478028f84 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.
[ https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12093: -- Status: Open (was: Patch Available) > [READ] Share remoteFS between ProvidedReplica instances. > > > Key: HDFS-12093 > URL: https://issues.apache.org/jira/browse/HDFS-12093 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs > Attachments: HDFS-12093-HDFS-9806.001.patch, > HDFS-12093-HDFS-9806.002.patch > > > When a Datanode comes online using Provided storage, it fills the > {{ReplicaMap}} with the known replicas. With Provided Storage, this includes > {{ProvidedReplica}} instances. Each of these objects, in its constructor, > will construct a FileSystem using the Service Provider. This can result in > contacting the remote file system and checking that the credentials are > correct and that the data is there. For large systems this is a prohibitively > expensive operation to perform per replica. > Instead, the {{ProvidedVolumeImpl}} should own the reference to the > {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on > their creation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.
[ https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12093: -- Attachment: HDFS-12093-HDFS-9806.002.patch Posting the patch rebased on latest version of feature branch. > [READ] Share remoteFS between ProvidedReplica instances. > > > Key: HDFS-12093 > URL: https://issues.apache.org/jira/browse/HDFS-12093 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs > Attachments: HDFS-12093-HDFS-9806.001.patch, > HDFS-12093-HDFS-9806.002.patch > > > When a Datanode comes online using Provided storage, it fills the > {{ReplicaMap}} with the known replicas. With Provided Storage, this includes > {{ProvidedReplica}} instances. Each of these objects, in their constructor, > will construct an FileSystem using the Service Provider. This can result in > contacting the remote file system and checking that the credentials are > correct and that the data is there. For large systems this is a prohibitively > expensive operation to perform per replica. > Instead, the {{ProvidedVolumeImpl}} should own the reference to the > {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on > their creation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.
[ https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12093: -- Status: Patch Available (was: Open) > [READ] Share remoteFS between ProvidedReplica instances. > > > Key: HDFS-12093 > URL: https://issues.apache.org/jira/browse/HDFS-12093 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs > Attachments: HDFS-12093-HDFS-9806.001.patch, > HDFS-12093-HDFS-9806.002.patch > > > When a Datanode comes online using Provided storage, it fills the > {{ReplicaMap}} with the known replicas. With Provided Storage, this includes > {{ProvidedReplica}} instances. Each of these objects, in its constructor, > will construct a FileSystem using the Service Provider. This can result in > contacting the remote file system and checking that the credentials are > correct and that the data is there. For large systems this is a prohibitively > expensive operation to perform per replica. > Instead, the {{ProvidedVolumeImpl}} should own the reference to the > {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on > their creation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
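The fix proposed above — the volume owning a single {{remoteFS}} handle and handing it to each replica at creation — can be sketched in miniature as follows. This is a toy model only: {{RemoteFs}}, {{ProvidedReplica}} and {{ProvidedVolume}} here are illustrative stand-ins, not the actual HDFS classes, and the connection counter merely simulates the expensive per-replica FileSystem construction described in the issue.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-ins only; these are not the real HDFS classes.
class RemoteFs {
  // Counts how many times a "remote connection" was established.
  static final AtomicInteger CONNECTS = new AtomicInteger();
  RemoteFs() { CONNECTS.incrementAndGet(); } // simulates auth + remote check
}

class ProvidedReplica {
  private final RemoteFs remoteFs;
  // The volume injects the shared handle; the replica no longer builds its own.
  ProvidedReplica(RemoteFs shared) { this.remoteFs = shared; }
  RemoteFs fs() { return remoteFs; }
}

class ProvidedVolume {
  private final RemoteFs remoteFs = new RemoteFs(); // one connection per volume

  List<ProvidedReplica> loadReplicas(int count) {
    List<ProvidedReplica> replicas = new ArrayList<>();
    for (int i = 0; i < count; i++) {
      replicas.add(new ProvidedReplica(remoteFs)); // share, don't reconnect
    }
    return replicas;
  }

  public static void main(String[] args) {
    ProvidedVolume volume = new ProvidedVolume();
    List<ProvidedReplica> replicas = volume.loadReplicas(1000);
    // Many replicas, but only the single connection made by the volume.
    System.out.println("connections=" + RemoteFs.CONNECTS.get()
        + " shared=" + (replicas.get(0).fs() == replicas.get(999).fs()));
  }
}
```

With per-replica construction the counter would equal the replica count; with the shared reference it stays at one per volume, which is the point of the patch.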
[jira] [Updated] (HDFS-12091) [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to the correct external storage
[ https://issues.apache.org/jira/browse/HDFS-12091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12091: -- Resolution: Fixed Status: Resolved (was: Patch Available) > [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to > the correct external storage > -- > > Key: HDFS-12091 > URL: https://issues.apache.org/jira/browse/HDFS-12091 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12091-HDFS-9806.001.patch, > HDFS-12091-HDFS-9806.002.patch > > > A {{ProvidedVolumeImpl}} can only serve blocks that "belong" to it. i.e., for > blocks served from a {{ProvidedVolumeImpl}}, the {{baseURI}} of the > {{ProvidedVolumeImpl}} should be a prefix of the URI of the blocks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12091) [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to the correct external storage
[ https://issues.apache.org/jira/browse/HDFS-12091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117015#comment-16117015 ] Virajith Jalaparti commented on HDFS-12091: --- Thanks [~ehiggs]! Committed the patch to the feature branch. > [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to > the correct external storage > -- > > Key: HDFS-12091 > URL: https://issues.apache.org/jira/browse/HDFS-12091 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12091-HDFS-9806.001.patch, > HDFS-12091-HDFS-9806.002.patch > > > A {{ProvidedVolumeImpl}} can only serve blocks that "belong" to it. i.e., for > blocks served from a {{ProvidedVolumeImpl}}, the {{baseURI}} of the > {{ProvidedVolumeImpl}} should be a prefix of the URI of the blocks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
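The committed check — that the {{baseURI}} of the {{ProvidedVolumeImpl}} is a prefix of each served block's URI — can be sketched like this. The helper name is hypothetical and this is not the actual {{ProvidedVolumeImpl}} code; it only illustrates the prefix rule, including the trailing-slash detail needed to avoid false matches.

```java
import java.net.URI;

// Hypothetical helper illustrating the baseURI-prefix rule described above.
class VolumeUriCheck {
  static boolean belongsToVolume(URI baseUri, URI blockUri) {
    String base = baseUri.normalize().toString();
    // Require a trailing slash so "file:///data" does not match "file:///database".
    if (!base.endsWith("/")) {
      base += "/";
    }
    return blockUri.normalize().toString().startsWith(base);
  }

  public static void main(String[] args) {
    System.out.println(belongsToVolume(URI.create("file:///data"),
        URI.create("file:///data/blk_1")));     // block under the volume's baseURI
    System.out.println(belongsToVolume(URI.create("file:///data"),
        URI.create("file:///database/blk_1"))); // different storage, must not match
  }
}
```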
[jira] [Commented] (HDFS-11814) Benchmark and tune for preferred default cell size
[ https://issues.apache.org/jira/browse/HDFS-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117009#comment-16117009 ] Andrew Wang commented on HDFS-11814: Hi Wei, thanks for working on this. Did you test with a highly concurrent workload? One concern is that performance will degrade with a lot of seeks. It would be good to look at disk statistics to make sure disk throughput is fully saturated during the run. > Benchmark and tune for preferred default cell size > - > > Key: HDFS-11814 > URL: https://issues.apache.org/jira/browse/HDFS-11814 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: Wei Zhou > Labels: hdfs-ec-3.0-must-do > Attachments: RS-Read.png, RS-Write.png > > > Doing some benchmarking to see which cell size is more desirable, other than the > current 64k -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12270) Allow more spreading of replicas during block placement
[ https://issues.apache.org/jira/browse/HDFS-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee reassigned HDFS-12270: - Assignee: Kihwal Lee > Allow more spreading of replicas during block placement > --- > > Key: HDFS-12270 > URL: https://issues.apache.org/jira/browse/HDFS-12270 > Project: Hadoop HDFS > Issue Type: Improvement > Components: block placement >Reporter: Kihwal Lee >Assignee: Kihwal Lee > > The default block placement places the first replica locally if possible, > then on a node in a remote rack, and finally another node in the remote rack. > If more than 3 replicas are requested, the rest are spread across available > racks. This strategy was chosen to minimize the inter-rack traffic and be > able to tolerate a rack-level failure such as switch outages. > This can tolerate a single rack failure, but if there also is a node outage > (double failure), having missing blocks is highly likely. Although network > bandwidth is still limited resource, it is less so than in the past. Some > users might want increased data availability at the price of increased > inter-rack traffic. > This can be achieved by using the upgrade domain feature, but a simple tweak > in the default policy can enable this, in case one does not want to go with > the upgrade domain. > I propose introducing a new config to control this. > Rack placement level 0: default. Current behavior. > Rack placement level 1: Use minimum 3 racks, if available. Allow existing > blocks to remain as is. > Rack placement level 2: Use minimum 3 racks, if available. Apply this policy > to all replication verification. (e.g. replication queue initialization) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12270) Allow more spreading of replicas during block placement
Kihwal Lee created HDFS-12270: - Summary: Allow more spreading of replicas during block placement Key: HDFS-12270 URL: https://issues.apache.org/jira/browse/HDFS-12270 Project: Hadoop HDFS Issue Type: Improvement Components: block placement Reporter: Kihwal Lee The default block placement places the first replica locally if possible, then on a node in a remote rack, and finally another node in the remote rack. If more than 3 replicas are requested, the rest are spread across available racks. This strategy was chosen to minimize the inter-rack traffic and be able to tolerate a rack-level failure such as switch outages. This can tolerate a single rack failure, but if there also is a node outage (double failure), having missing blocks is highly likely. Although network bandwidth is still limited resource, it is less so than in the past. Some users might want increased data availability at the price of increased inter-rack traffic. This can be achieved by using the upgrade domain feature, but a simple tweak in the default policy can enable this, in case one does not want to go with the upgrade domain. I propose introducing a new config to control this. Rack placement level 0: default. Current behavior. Rack placement level 1: Use minimum 3 racks, if available. Allow existing blocks to remain as is. Rack placement level 2: Use minimum 3 racks, if available. Apply this policy to all replication verification. (e.g. replication queue initialization) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
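The three proposed levels reduce to a rack-count selection along the following lines. This is a sketch only: the class and method are hypothetical, the real policy would live in the block placement code, and level 2's extra behavior (applying the rule during replication verification, e.g. replication queue initialization) is not modeled here. Level 0 reflects the current default of two racks for a three-replica block.

```java
// Hypothetical sketch of the proposed "rack placement level" knob.
// Level 0: current behavior (first replica local, remaining two on one remote
// rack, i.e. 2 racks for 3 replicas); levels 1 and 2: minimum of 3 racks if
// the cluster has them.
class RackPlacement {
  static int targetRacks(int level, int availableRacks) {
    int wanted = (level == 0) ? 2 : 3;
    return Math.min(wanted, availableRacks);
  }

  public static void main(String[] args) {
    System.out.println(targetRacks(0, 10)); // current default
    System.out.println(targetRacks(1, 10)); // spread across three racks
    System.out.println(targetRacks(2, 2));  // only two racks available
  }
}
```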
[jira] [Updated] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-12237: - Attachment: HDFS-12237.HDFS-8707.001.patch Reattaching the patch. > libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from > source > > > Key: HDFS-12237 > URL: https://issues.apache.org/jira/browse/HDFS-12237 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-12237.HDFS-8707.000.patch, > HDFS-12237.HDFS-8707.001.patch > > > Looks like the PROTOC_IS_COMPATIBLE check fails when the Protobuf library is > built from source. This happens because the check is performed during the > cmake phase, and the protobuf library needed for this test is built from > source only during the make phase, so the check fails with "ld: cannot find > -lprotobuf" because it was not built yet. We should probably restrict this > test to run only in cases when the Protobuf library is already present and not > being built from source. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein reopened HDFS-12237: -- > libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from > source > > > Key: HDFS-12237 > URL: https://issues.apache.org/jira/browse/HDFS-12237 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-12237.HDFS-8707.000.patch > > > Looks like the PROTOC_IS_COMPATIBLE check fails when Protobuf library is > built from source. This happens because the check if performed during the > cmake phase, and the protobuf library needed for this test is build from > source only during the make phase, so the check fails with "ld: cannot find > -lprotobuf" because it was not built yet. We should probably restrict this > test to run only in cases when Protobuf library is already present and not > being built from source. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-12237: - Status: Patch Available (was: Reopened) > libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from > source > > > Key: HDFS-12237 > URL: https://issues.apache.org/jira/browse/HDFS-12237 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-12237.HDFS-8707.000.patch > > > Looks like the PROTOC_IS_COMPATIBLE check fails when Protobuf library is > built from source. This happens because the check if performed during the > cmake phase, and the protobuf library needed for this test is build from > source only during the make phase, so the check fails with "ld: cannot find > -lprotobuf" because it was not built yet. We should probably restrict this > test to run only in cases when Protobuf library is already present and not > being built from source. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12244) Ozone: the static cache provided by ContainerCache does not work in Unit tests
[ https://issues.apache.org/jira/browse/HDFS-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116978#comment-16116978 ] Tsz Wo Nicholas Sze commented on HDFS-12244: Yes, this is something more. The problem is that there is only one cache returned by ContainerCache.getInstance(conf) since it returns the static field {{ContainerCache.cache}}. When there are multiple datanodes, they are all using the same cache object, which does not seem to be shared by design. > Ozone: the static cache provided by ContainerCache does not work in Unit > tests > --- > > Key: HDFS-12244 > URL: https://issues.apache.org/jira/browse/HDFS-12244 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Anu Engineer > > Since a cluster may have >1 datanodes, a static ContainerCache is shared > among the datanodes. When one datanode shuts down, the cache is shut down, > so the other datanodes cannot use the cache any more. It results in > "leveldb.DBException: Closed" > {code} > org.iq80.leveldb.DBException: Closed > at org.fusesource.leveldbjni.internal.JniDB.get(JniDB.java:75) > at org.apache.hadoop.utils.LevelDBStore.get(LevelDBStore.java:109) > at > org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.getKey(KeyManagerImpl.java:116) > at > org.apache.hadoop.ozone.container.common.impl.Dispatcher.handleGetSmallFile(Dispatcher.java:677) > at > org.apache.hadoop.ozone.container.common.impl.Dispatcher.smallFileHandler(Dispatcher.java:293) > at > org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:121) > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94) > ... > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
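The failure mode can be reproduced in miniature with any process-wide singleton. The sketch below is not the real {{ContainerCache}} API — the names and the exception are illustrative stand-ins — but it shows the shape of the bug: two simulated datanodes get the same static instance, so the first shutdown closes the cache under the second one.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a JVM-wide singleton cache; not the real ContainerCache API.
class SharedCache {
  private static final SharedCache INSTANCE = new SharedCache();
  private final Map<String, String> entries = new HashMap<>();
  private boolean closed = false;

  static SharedCache getInstance() { return INSTANCE; }

  synchronized void put(String key, String value) { entries.put(key, value); }

  synchronized String get(String key) {
    if (closed) {
      // Mirrors the "leveldb.DBException: Closed" seen in the unit tests.
      throw new IllegalStateException("Closed");
    }
    return entries.get(key);
  }

  synchronized void shutdown() { closed = true; }

  public static void main(String[] args) {
    SharedCache dn1 = SharedCache.getInstance(); // "datanode 1"
    SharedCache dn2 = SharedCache.getInstance(); // "datanode 2": same object!
    dn2.put("containerKey", "data");
    dn1.shutdown(); // datanode 1 shuts down...
    try {
      dn2.get("containerKey"); // ...and datanode 2's reads now fail
    } catch (IllegalStateException e) {
      System.out.println("datanode 2 sees: " + e.getMessage());
    }
  }
}
```

Scoping the cache per datanode (or per volume) instead of per JVM removes the interference between MiniCluster datanodes.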
[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116867#comment-16116867 ] Lei (Eddy) Xu commented on HDFS-12221: -- [~ajayydv] Thanks for the patch. Please rebase your branch and fix conflicts with trunk. Also is this a duplicated line? {code} ${runningWithNative} {code} > Replace xcerces in XmlEditsVisitor > --- > > Key: HDFS-12221 > URL: https://issues.apache.org/jira/browse/HDFS-12221 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Ajay Yadav > Attachments: fsimage_hdfs-12221.xml, HDFS-12221.01.patch, > HDFS-12221.02.patch > > > XmlEditsVisitor should use new XML capability in the newer JDK, to make JAR > shading easier (HADOOP-14672) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10646) Federation admin tool
[ https://issues.apache.org/jira/browse/HDFS-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HDFS-10646: --- Attachment: HDFS-10646-HDFS-10467-001.patch * Fixed unit test * Fixed checkstyle * Fixed whitespace > Federation admin tool > - > > Key: HDFS-10646 > URL: https://issues.apache.org/jira/browse/HDFS-10646 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Affects Versions: HDFS-10467 >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HDFS-10646-HDFS-10467-000.patch, > HDFS-10646-HDFS-10467-001.patch > > > Tools for administrators to manage HDFS federation. This includes managing > the mount table and decommission subclusters. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12172) Reduce EZ lookup overhead
[ https://issues.apache.org/jira/browse/HDFS-12172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116825#comment-16116825 ] Daryn Sharp commented on HDFS-12172: I'll take a look today. > Reduce EZ lookup overhead > - > > Key: HDFS-12172 > URL: https://issues.apache.org/jira/browse/HDFS-12172 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.7.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments: HDFS-12172.01.patch, HDFS-12172.patch > > > A number of inefficiencies exist in EZ lookups. These are amplified by > frequent operations like list status. Once one encryption zone exists, all > operations take the performance penalty. > Ex. Operations should not perform redundant lookups. EZ path reconstruction > should be lazy since it's not required in the common case. Renames do not > need to reallocate new IIPs to check parent dirs for EZ. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
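One of the ideas listed above — making EZ path reconstruction lazy because it is not needed in the common case — amounts to memoizing an expensive computation behind a holder that pays the cost at most once, and only on demand. A generic sketch (not the actual FSDirectory/EZ code; the path string is made up):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Generic lazy holder illustrating "compute the EZ path only when asked".
class Lazy<T> {
  private final Supplier<T> supplier;
  private T value;
  private boolean computed = false;

  Lazy(Supplier<T> supplier) { this.supplier = supplier; }

  synchronized T get() {
    if (!computed) {          // pay the reconstruction cost at most once,
      value = supplier.get(); // and only if a caller actually needs it
      computed = true;
    }
    return value;
  }

  public static void main(String[] args) {
    AtomicInteger cost = new AtomicInteger();
    Lazy<String> ezPath = new Lazy<>(() -> {
      cost.incrementAndGet(); // stands in for walking inodes to rebuild the path
      return "/warehouse/zone1";
    });
    // Common case (e.g. a plain list status): path never requested, nothing paid.
    System.out.println("cost before get: " + cost.get());
    ezPath.get();
    ezPath.get();
    System.out.println("cost after two gets: " + cost.get());
  }
}
```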
[jira] [Commented] (HDFS-12134) libhdfs++: Add a synchronization interface for the GSSAPI
[ https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116781#comment-16116781 ] James Clampffer commented on HDFS-12134: w.r.t. lack of new and modified tests: Added a bunch, they just don't get picked up by test4tests yet; hoping to figure that out soon. If anyone knows their way around the build system and wants to contribute, that'd be really appreciated. HDFS-12168 filed for getting it working. > libhdfs++: Add a synchronization interface for the GSSAPI > - > > Key: HDFS-12134 > URL: https://issues.apache.org/jira/browse/HDFS-12134 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: James Clampffer >Assignee: James Clampffer > Attachments: HDFS-12134.HDFS-8707.000.patch, > HDFS-12134.HDFS-8707.001.patch, HDFS-12134.HDFS-8707.002.patch, > HDFS-12134.HDFS-8707.003.patch, HDFS-12134.HDFS-8707.004.patch > > > Bits of the GSSAPI that Cyrus Sasl uses aren't thread safe. There needs to > be a way for a client application to share a lock with this library in order > to prevent race conditions. It can be done using event callbacks through the > C API but we can provide something more robust (RAII) in the C++ API. > Proposed: a client-supplied lock, pretty much the C++17 Lockable concept. Use a > default if one isn't provided. This would be scoped at the process level > since it's unlikely that there will be multiple instances of libgssapi unless someone puts > some effort in with dlopen/dlsym. > {code} > class LockProvider > { > public: > virtual ~LockProvider() {} > // allow client application to deny access to the lock > virtual bool try_lock() = 0; > virtual void unlock() = 0; > }; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12005) Ozone: Web interface for SCM
[ https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116767#comment-16116767 ] Elek, Marton commented on HDFS-12005: - The current plan is to share the common part of the KSM web interface with the SCM (jvm args, uptime, RPC latency). As a first step I would add the following SCM specific stats: * Size openContainers from the BlockManager * Table of the Nodes of the NodesManager * Aggregated NodeStats from NodeManager > Ozone: Web interface for SCM > > > Key: HDFS-12005 > URL: https://issues.apache.org/jira/browse/HDFS-12005 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > > This is a proposal about how a web interface could be implemented for SCM (and > later for KSM) similar to the namenode ui. > 1. JS framework > There are three big options here. > A.) One is to use a full featured web framework with all the webpack/npm > minify/uglify magic. At build time the webpack/npm scripts are run and the > result is added to the jar file. > B.) It could be simplified if the generated minified/uglified js files are > added to the project at commit time. It requires an additional step for every > new patch (to generate the new minified javascripts) but doesn't require > additional JS build tools during the build. > C.) The third option is to make it as simple as possible, similar to the > current namenode ui, which uses javascript but every dependency is committed > (without JS minify/uglify and other preprocessing). > I prefer the third one as: > * I have seen a lot of problems during frequent builds of older tez-ui > versions (bower version mismatch, npm version mismatch, npm transitive > dependency problems, proxy problems with older versions). All of them could be > fixed, but that requires additional JS/NPM magic/knowledge. Without an additional npm > build step the hdfs project's build can be kept simpler. 
> * The complexity of the planned SCM/KSM ui (hopefully it will remain simple) > doesn't require a more sophisticated model. (Eg. we don't need JS require as we > need only a few controllers) > * HDFS developers are mostly backend developers and not JS developers > 2. Frameworks > The big advantage of a more modern JS framework is the simplified > programming model (for example with two-way databinding). I suggest using a > more modern framework (not just jquery) which supports plain js (not just > ECMA2015/2016/typescript) and just including the required js files in the > projects (similar to the included bootstrap, or as the existing namenode ui > works). > > * React could be a good candidate but it requires more libraries as it's just > a ui framework; even the REST calls need a separate library. It could be used > with plain javascript instead of JSX and classes, but it is not straightforward, and > it's more verbose. > > * Ember is used in yarnui2 but the main strength of Ember is the CLI, > which couldn't be used for the simplified approach easily. I think Ember is > best with the A.) option > * Angular 1 is a good candidate (but not so fancy). In case of angular 1 > the component based approach should be used (in that case it could later be > easier to migrate to angular 2 or react) > * The mainstream side of Angular 2 uses typescript; it could work with > plain JS but would require additional knowledge, and most of the tutorials and > documentation show the typescript approach. > I suggest using angular 1 or react. Maybe angular is easier to use as we don't > need to emulate JSX with function calls; simple HTML templates could be used. > 3. Backend > I would prefer the approach of the existing namenode ui where the backend is > just the jmx endpoint. To keep it as simple as possible I suggest trying to > avoid a dedicated REST backend if possible. Later we can use the REST apis of > SCM/KSM once they are implemented. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-12005) Ozone: Web interface for SCM
[ https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12005 started by Elek, Marton. --- > Ozone: Web interface for SCM > > > Key: HDFS-12005 > URL: https://issues.apache.org/jira/browse/HDFS-12005 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > > This is a propsal about how a web interface could be implemented for SCM (and > later for KSM) similar to the namenode ui. > 1. JS framework > There are three big option here. > A.) One is to use a full featured web framework with all the webpack/npm > minify/uglify magic. Build time the webpack/npm scripts should be run and the > result will be added to the jar file. > B.) It could be simplified if the generated minified/uglified js files are > added to the project on commit time. It requires an additional step for every > new patch (to generate the new minified javascripts) but doesn't require > additional JS build tools during the build. > C.) The third option is to make it as simple as possible similar to the > current namenode ui which uses javascript but every dependency is commited > (without JS minify/uglify and other preprocessing). > I prefer to the third one as: > * I have seen a lot of problems during frequent builds od older tez-ui > versions (bower version mismatch, npm version mismatch, npm transitive > dependency problems, proxy problem with older versions). All they could be > fixed but requires additional JS/NPM magic/knowledge. Without additional npm > build step the hdfs projects build could be kept more simple. > * The complexity of the planned SCM/KSM ui (hopefully it will remain simple) > doesn't require more sophisticated model. (Eg. we don't need JS require as we > need only a few controllers) > * HDFS developers mostly backend developers and not JS developers > 2. 
Frameworks > The big advantages of a more modern JS framework is the simplified > programming model (for example with two way databinding) I suggest to use a > more modern framework (not just jquery) which supports plain js (not just > ECMA2015/2016/typescript) and just include the required js files in the > projects (similar to the included bootstrap or as the existing namenode ui > works). > > * React could be a good candidate but it requires more library as it's just > a ui framework, even the REST calls need separated library. It could be used > with plain javascript instead of JSX and classes but not straightforward, and > it's more verbose. > > * Ember is used in yarnui2 but the main strength of the ember is the CLI > which couldn't be used for the simplified approach easily. I think ember is > best with the A.) option > * Angular 1 is a good candidate (but not so fancy). In case of angular 1 > the component based approach should be used (in that case later it could be > easier to migrate to angular 2 or react) > * The mainstream side of Angular 2 uses typescript, it could work with > plain JS but it could require additional knowledge, most of the tutorials and > documentation shows the typescript approach. > I suggest to use angular 1 or react. Maybe angular is easier to use as don't > need to emulate JSX with function calls, simple HTML templates could be used. > 3. Backend > I would prefer the approach of the existing namenode ui where the backend is > just the jmx endpoint. To keep it as simple as possible I suggest to try to > avoid dedicated REST backend if possible. Later we can use REST api of > SCM/KSM if they will be implemented. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116731#comment-16116731 ] Anatoli Shein edited comment on HDFS-12237 at 8/7/17 3:24 PM: -- Added a test to check if protobuf library exists (searches for {code} google::protobuf::compiler::Parser::Parser() {code} in the protobuf library). Also added {code} set (CMAKE_REQUIRED_INCLUDES ${PROTOBUF_INCLUDE_DIRS}) {code} to resolve errors when protobuf library is placed in the uncommon location. was (Author: anatoli.shein): Added a test to check if protobuf library exists (searches for {code} google::protobuf::compiler::Parser::Parser() {code} in the protobuf library). Also added set {code} (CMAKE_REQUIRED_INCLUDES ${PROTOBUF_INCLUDE_DIRS}) {code} to resolve errors when protobuf library is placed in the uncommon location. > libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from > source > > > Key: HDFS-12237 > URL: https://issues.apache.org/jira/browse/HDFS-12237 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-12237.HDFS-8707.000.patch > > > Looks like the PROTOC_IS_COMPATIBLE check fails when Protobuf library is > built from source. This happens because the check if performed during the > cmake phase, and the protobuf library needed for this test is build from > source only during the make phase, so the check fails with "ld: cannot find > -lprotobuf" because it was not built yet. We should probably restrict this > test to run only in cases when Protobuf library is already present and not > being built from source. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116731#comment-16116731 ] Anatoli Shein edited comment on HDFS-12237 at 8/7/17 3:23 PM: -- Added a test to check if protobuf library exists (searches for {code} google::protobuf::compiler::Parser::Parser() {code} in the protobuf library). Also added set {code} (CMAKE_REQUIRED_INCLUDES ${PROTOBUF_INCLUDE_DIRS}) {code} to resolve errors when protobuf library is placed in the uncommon location. was (Author: anatoli.shein): Added a test to check if protobuf library exists (searches for {code:c++} google::protobuf::compiler::Parser::Parser() {code} in the protobuf library). Also added set {code} (CMAKE_REQUIRED_INCLUDES ${PROTOBUF_INCLUDE_DIRS}) {code} to resolve errors when protobuf library is placed in the uncommon location. > libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from > source > > > Key: HDFS-12237 > URL: https://issues.apache.org/jira/browse/HDFS-12237 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-12237.HDFS-8707.000.patch > > > Looks like the PROTOC_IS_COMPATIBLE check fails when Protobuf library is > built from source. This happens because the check if performed during the > cmake phase, and the protobuf library needed for this test is build from > source only during the make phase, so the check fails with "ld: cannot find > -lprotobuf" because it was not built yet. We should probably restrict this > test to run only in cases when Protobuf library is already present and not > being built from source. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116731#comment-16116731 ] Anatoli Shein edited comment on HDFS-12237 at 8/7/17 3:23 PM:
--
Added a test to check if protobuf library exists (searches for {code:c++} google::protobuf::compiler::Parser::Parser() {code} in the protobuf library). Also added set {code} (CMAKE_REQUIRED_INCLUDES ${PROTOBUF_INCLUDE_DIRS}) {code} to resolve errors when protobuf library is placed in the uncommon location.

was (Author: anatoli.shein):
Added a test to check if protobuf library exists (searches for google::protobuf::compiler::Parser::Parser() in the protobuf library). Also added set (CMAKE_REQUIRED_INCLUDES ${PROTOBUF_INCLUDE_DIRS}) to resolve errors when protobuf library is placed in the uncommon location.
[jira] [Comment Edited] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116731#comment-16116731 ] Anatoli Shein edited comment on HDFS-12237 at 8/7/17 3:22 PM:
--
Added a test to check if protobuf library exists (searches for google::protobuf::compiler::Parser::Parser() in the protobuf library). Also added set (CMAKE_REQUIRED_INCLUDES ${PROTOBUF_INCLUDE_DIRS}) to resolve errors when protobuf library is placed in the uncommon location.

was (Author: anatoli.shein):
Added a test to check if protobuf library exists (searches for google::protobuf::compiler::Parser::Parser() in the protobuf library).
[jira] [Updated] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-12237:
-
Attachment: HDFS-12237.HDFS-8707.000.patch

Added a test to check if protobuf library exists (searches for google::protobuf::compiler::Parser::Parser() in the protobuf library).
[jira] [Resolved] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source
[ https://issues.apache.org/jira/browse/HDFS-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein resolved HDFS-12237.
--
Resolution: Fixed
[jira] [Commented] (HDFS-8387) Erasure Coding: Revisit the long and int datatypes usage in striping logic
[ https://issues.apache.org/jira/browse/HDFS-8387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116628#comment-16116628 ] Rakesh R commented on HDFS-8387:
[~andrew.wang] Following are the different cases where long is cast to int in {{StripedBlockUtil.java}}. Apart from the first case, all the other cases look mostly OK to me; please let me know your feedback.
# I think only the case below needs to be fixed, and it should use {{long lastStripeDataLen}}: {code} Line 177: final int lastStripeDataLen = (int)(dataSize % stripeSize); {code}
# Since the block size is a long, casting is needed here: {code} Line 182: final int numStripes = (int) ((dataSize - 1) / stripeSize + 1); Line 231: int cellIdxInBlk = (int) (offsetInBlk / cellSize); {code}
# Uses ByteBuffer, which takes an integer datatype for position, limit, etc., so casting is needed here: {code} Line 320: (int) (rangeStartInBlockGroup % ((long) cellSize * dataBlkNum)); Line 328: int overLapLen = (int) (overlapEnd - overlapStart + 1); Line 331: final int pos = (int) (bufOffset + overlapStart - cellStart); {code}
# Uses an int datatype to represent indexing, so integer casting is OK: {code} Line 401: int firstCellIdxInBG = (int) (rangeStartInBlockGroup / cellSize); Line 402: int lastCellIdxInBG = (int) (rangeEndInBlockGroup / cellSize); {code}
# The cell size is an int, so the result will be within int range, and int datatypes are used for storing the values: {code} Line 406: final int firstCellOffset = (int) (rangeStartInBlockGroup % cellSize); Line 408: (int) Math.min(cellSize - (rangeStartInBlockGroup % cellSize), len); Line 412: final int lastCellSize = (int) (rangeEndInBlockGroup % cellSize) + 1; Line 521: int overLapLen = (int) (overlapEnd - overlapStart + 1); Line 531: (int) (done + overlapStart - cellStart), overLapLen); Line 919: return (int) (reportedBlock.getBlockId() & {code}

> Erasure Coding: Revisit the long and int datatypes usage in striping logic
>
> Key: HDFS-8387
> URL: https://issues.apache.org/jira/browse/HDFS-8387
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Rakesh R
> Assignee: Rakesh R
> Labels: hdfs-ec-3.0-nice-to-have
>
> The idea of this jira is to revisit the usage of {{long}} and {{int}} data types in the striping logic.
> Related discussion [here|https://issues.apache.org/jira/browse/HDFS-8294?focusedCommentId=14540788=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14540788] in HDFS-8294
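The distinction drawn above between case 1 and the other cases can be shown with a small standalone sketch (hypothetical names, not the actual {{StripedBlockUtil}} code): a {{long % int}} remainder always fits in an {{int}}, while a {{long % long}} remainder can wrap negative after the narrowing cast.

```java
public class CastDemo {
    // Safe pattern: the remainder is in [0, stripeSize), which fits in an int.
    static int remainderAsInt(long dataSize, int stripeSize) {
        return (int) (dataSize % stripeSize);
    }

    public static void main(String[] args) {
        System.out.println(remainderAsInt(100L, 64)); // prints 36

        // Unsafe pattern: with a long divisor the remainder itself may exceed
        // Integer.MAX_VALUE, and the narrowing cast silently wraps negative.
        long bigRemainder = 3L * 1024 * 1024 * 1024; // 3 GiB
        System.out.println((int) bigRemainder); // prints -1073741824
    }
}
```

This is why keeping {{lastStripeDataLen}} as a {{long}} (case 1) avoids the risk when the stripe size is itself a {{long}}.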
[jira] [Commented] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file
[ https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116490#comment-16116490 ] Elek, Marton commented on HDFS-12162:
-
LGTM. I just tested the HDFS-12139 patch, and the REST call worked as the patch describes.

> Update listStatus document to describe the behavior when the argument is a file
>
> Key: HDFS-12162
> URL: https://issues.apache.org/jira/browse/HDFS-12162
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs, httpfs
> Reporter: Yongjun Zhang
> Assignee: Ajay Yadav
> Attachments: HDFS-12162.01.patch, Screen Shot 2017-08-03 at 11.01.46 AM.png, Screen Shot 2017-08-03 at 11.02.19 AM.png
>
> The listStatus method can take either a directory path or a file path as input; however, currently both the javadoc and the external documentation describe it as only taking a directory as input. This jira is to update the documentation about the behavior when the argument is a file path.
> Thanks [~xiaochen] for the review and discussion in HDFS-12139; creating this jira is the result of our discussion there.
[jira] [Comment Edited] (HDFS-12198) Document missing namenode metrics that were added recently
[ https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116447#comment-16116447 ] Yiqun Lin edited comment on HDFS-12198 at 8/7/17 11:45 AM:
---
Thanks [~ajisakaa], attaching the patch for branch-2. I noticed that HADOOP-14502 wasn't committed in branch-2, so I only added the metrics related to the other two JIRAs and the missing block report MutableQuantiles metric in the patch.

was (Author: linyiqun):
Thanks [~ajisakaa], attach the patch for branch-2. I noticed one thing that HADOOP-14502 didn't committed in branch-2. So I only added related metrics to other two JIRAs add missing block report MutableQuantiles metric in patch.

> Document missing namenode metrics that were added recently
>
> Key: HDFS-12198
> URL: https://issues.apache.org/jira/browse/HDFS-12198
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: documentation
> Affects Versions: 3.0.0-alpha4
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Priority: Minor
> Fix For: 3.0.0-beta1
> Attachments: HDFS-12198.001.patch, HDFS-12198-branch-2.001.patch
>
> There are some namenode metrics added recently that haven't been documented in {{Metrics.md}}. The following are the metrics and related JIRAs:
> *HDFS-12043*:
> {noformat}
> @Metric ("Number of successful re-replications")
> MutableCounterLong successfulReReplications;
> @Metric ("Number of times we failed to schedule a block re-replication.")
> MutableCounterLong numTimesReReplicationNotScheduled;
> @Metric("Number of timed out block re-replications")
> MutableCounterLong timeoutReReplications;
> {noformat}
> *HDFS-11907*:
> {noformat}
> @Metric("Resource check time") private MutableRate resourceCheckTime;
> private final MutableQuantiles[] resourceCheckTimeQuantiles;
> {noformat}
> *HADOOP-14502*:
> {noformat}
> @Metric("Number of blockReports from individual storages")
> final MutableRate storageBlockReport;
> final MutableQuantiles[] storageBlockReportQuantiles;
> {noformat}
[jira] [Commented] (HDFS-12198) Document missing namenode metrics that were added recently
[ https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116472#comment-16116472 ] Hadoop QA commented on HDFS-12198: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5e40efe | | JIRA Issue | HDFS-12198 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880629/HDFS-12198-branch-2.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux a46c0b2edc33 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / cfdf297 | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20579/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HDFS-12036) Add audit log for some erasure coding operations
[ https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HDFS-12036:
-
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
Status: Resolved (was: Patch Available)

Committed to trunk. Thanks [~HuafengWang] for the contribution and [~jojochuang] for the review.

> Add audit log for some erasure coding operations
>
> Key: HDFS-12036
> URL: https://issues.apache.org/jira/browse/HDFS-12036
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Affects Versions: 3.0.0-alpha4
> Reporter: Wei-Chiu Chuang
> Assignee: Huafeng Wang
> Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
> Attachments: HDFS-12036.001.patch, HDFS-12036.002.patch, HDFS-12036.003.patch
>
> These three FSNameSystem operations do not yet record audit logs. I am not sure how useful these audit logs would be, but thought I should file them so that they don't get dropped if they turn out to be needed.
[jira] [Updated] (HDFS-12036) Add audit log for some erasure coding operations
[ https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HDFS-12036:
-
Summary: Add audit log for some erasure coding operations (was: Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs)
[jira] [Comment Edited] (HDFS-12198) Document missing namenode metrics that were added recently
[ https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116447#comment-16116447 ] Yiqun Lin edited comment on HDFS-12198 at 8/7/17 11:28 AM:
---
Thanks [~ajisakaa], attach the patch for branch-2. I noticed one thing that HADOOP-14502 didn't committed in branch-2. So I only added related metrics to other two JIRAs add missing block report MutableQuantiles metric in patch.

was (Author: linyiqun):
Thanks [~ajisakaa], attach the patch for branch-2. I noticed one thing that HADOOP-14502 didn't committed in branch-2. So I only added metrics related to other two JIRAs in patch.
[jira] [Updated] (HDFS-12198) Document missing namenode metrics that were added recently
[ https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12198:
-
Attachment: HDFS-12198-branch-2.001.patch
[jira] [Updated] (HDFS-12198) Document missing namenode metrics that were added recently
[ https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12198:
-
Attachment: (was: HDFS-12198-branch-2.001.patch)
[jira] [Updated] (HDFS-12198) Document missing namenode metrics that were added recently
[ https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12198:
-
Attachment: HDFS-12198-branch-2.001.patch

Thanks [~ajisakaa], attach the patch for branch-2. I noticed one thing that HADOOP-14502 didn't committed in branch-2. So I only added metrics related to other two JIRAs in patch.
[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background
[ https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116400#comment-16116400 ] Hadoop QA commented on HDFS-12196: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 59s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 96m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.cblock.TestCBlockReadWrite | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.cblock.TestBufferManager | | | hadoop.ozone.web.client.TestKeys | | Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis | | | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12196 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880610/HDFS-12196-HDFS-7240.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 09e7cc8cc12f 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / b153dbb | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20578/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20578/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-12036) Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs
[ https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116386#comment-16116386 ] Kai Zheng commented on HDFS-12036: -- Thanks Huafeng! The idea to do it separately sounds good to me. The latest patch LGTM and +1. > Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, > getErasureCodingCodecs > -- > > Key: HDFS-12036 > URL: https://issues.apache.org/jira/browse/HDFS-12036 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha4 >Reporter: Wei-Chiu Chuang >Assignee: Huafeng Wang > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12036.001.patch, HDFS-12036.002.patch, > HDFS-12036.003.patch > > > These three FSNameSystem operations do not yet record audit logs. I am not > sure how useful these audit logs would be, but thought I should file them so > that they don't get dropped if they turn out to be needed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12036) Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs
[ https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116381#comment-16116381 ] Huafeng Wang commented on HDFS-12036: - Hi Kai, the function is defined in ClientProtocol so I think it should be fixed in another issue. I just created one: https://issues.apache.org/jira/browse/HDFS-12269 > Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, > getErasureCodingCodecs > -- > > Key: HDFS-12036 > URL: https://issues.apache.org/jira/browse/HDFS-12036 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha4 >Reporter: Wei-Chiu Chuang >Assignee: Huafeng Wang > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12036.001.patch, HDFS-12036.002.patch, > HDFS-12036.003.patch > > > These three FSNameSystem operations do not yet record audit logs. I am not > sure how useful these audit logs would be, but thought I should file them so > that they don't get dropped if they turn out to be needed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12269) Better to return a Map rather than HashMap in getErasureCodingCodecs
Huafeng Wang created HDFS-12269:
---
Summary: Better to return a Map rather than HashMap in getErasureCodingCodecs
Key: HDFS-12269
URL: https://issues.apache.org/jira/browse/HDFS-12269
Project: Hadoop HDFS
Issue Type: Improvement
Components: erasure-coding
Reporter: Huafeng Wang
Assignee: Huafeng Wang
Priority: Minor

Currently the getErasureCodingCodecs function defined in ClientProtocol returns a HashMap:
{code:java}
HashMap<String, String> getErasureCodingCodecs() throws IOException;
{code}
It's better to return a Map.
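The change being requested here can be illustrated with a small, Hadoop-free sketch. The class below and the codec entries in it are made up for illustration; only the return-type choice mirrors the JIRA. Declaring the interface type `Map` in the signature keeps the concrete container an implementation detail, so it can later change without breaking callers.

```java
import java.util.HashMap;
import java.util.Map;

public class CodecRegistry {

    // Returning Map (the interface) instead of HashMap (the concrete class)
    // lets the implementation swap in TreeMap, an immutable map, etc. later.
    public Map<String, String> getErasureCodingCodecs() {
        Map<String, String> codecs = new HashMap<>();
        // Illustrative entries only; not the actual Hadoop codec table.
        codecs.put("rs", "rs_native,rs_java");
        codecs.put("xor", "xor_native,xor_java");
        return codecs;
    }
}
```

Callers written against `Map<String, String>` are unaffected by whichever concrete map the registry builds internally.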
[jira] [Comment Edited] (HDFS-12199) Ozone: OzoneFileSystem: OzoneFileystem initialization code
[ https://issues.apache.org/jira/browse/HDFS-12199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116366#comment-16116366 ] Weiwei Yang edited comment on HDFS-12199 at 8/7/17 10:02 AM:
-
Hi [~msingh], thanks for working on this. I may be able to get more details from the design doc you are going to upload to HDFS-11704; until that becomes available, a few comments/questions.

*Constants*
Can we name the scheme {{oz}} instead of {{ozfs}}? I checked some existing FS implementations; their schemes are like "oss", "adl", "s3a", etc., and none of them contains an "fs" suffix.

*OzoneFileSystem*
1. Since oz FS is initiated per bucket, we need to ensure bucket existence during {{initialize}}, but there seems to be no code to check this.
2. Lines 84 - 88, question: if a client passes, e.g. {{/vol/bucket/ppp}}, as the URI path to init oz FS, is this a valid configuration or an exception to throw?
3. Line 99, line 109: can we move the constants "http://" and "/user" to the {{Constants}} class?
4. Can you implement at least one FS API in {{OzoneFileSystem}} to prove this is working?
Thank you.

> Ozone: OzoneFileSystem: OzoneFileystem initialization code
> --
>
> Key: HDFS-12199
> URL: https://issues.apache.org/jira/browse/HDFS-12199
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12199-HDFS-7240.001.patch
>
>
> This jira will be used to add OzoneFileSystem initialization code. This jira
> is based out of HDFS-11704. I will attach a design document to HDFS-11704.
[jira] [Commented] (HDFS-12199) Ozone: OzoneFileSystem: OzoneFileystem initialization code
[ https://issues.apache.org/jira/browse/HDFS-12199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116366#comment-16116366 ] Weiwei Yang commented on HDFS-12199:
-
Hi [~msingh], thanks for working on this. I may be able to get more details from the design doc you are going to upload to HDFS-11704; until that becomes available, a few comments/questions.

*Constants*
Can we name the scheme {{oz}} instead of {{ozfs}}? I checked some existing FS implementations; their schemes are like "oss", "adl", "s3a", etc., and none of them contains an "fs" suffix.

*OzoneFileSystem*
1. Since oz FS is initiated per bucket, we need to ensure bucket existence during {{initialize}}, but there seems to be no code to check this.
2. Lines 84 - 88, question: if a client passes, e.g. /vol/bucket/ppp, as the URI path to init oz FS, is this a valid configuration or an exception to throw?
3. Line 99, line 109: can we move the constants "http://" and "/user" to the {{Constants}} class?
4. Can you implement at least one FS API in {{OzoneFileSystem}} to prove this is working?
Thank you.

> Ozone: OzoneFileSystem: OzoneFileystem initialization code
> --
>
> Key: HDFS-12199
> URL: https://issues.apache.org/jira/browse/HDFS-12199
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12199-HDFS-7240.001.patch
>
>
> This jira will be used to add OzoneFileSystem initialization code. This jira
> is based out of HDFS-11704. I will attach a design document to HDFS-11704.
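The two initialization checks asked for in review points 1 and 2 (verify the bucket exists during {{initialize}}, and decide what to do with a URI that carries a trailing sub-path such as /vol/bucket/ppp) can be sketched as follows. Everything here is hypothetical: the class name, the /volume/bucket URI shape, and the bucketExists lookup are stand-ins, not the actual OzoneFileSystem API.

```java
import java.io.IOException;
import java.net.URI;

// Hypothetical sketch of the checks requested in the review;
// the real OzoneFileSystem and Ozone client APIs differ.
public class OzoneFsInitSketch {

    // Made-up stand-in for an Ozone client call that looks up a bucket.
    private boolean bucketExists(String volume, String bucket) {
        return "vol".equals(volume) && "bucket".equals(bucket);
    }

    // Point 2: reject URIs whose path is anything other than /<volume>/<bucket>.
    // Point 1: fail fast in initialize() when the bucket does not exist.
    public void initialize(URI name) throws IOException {
        String[] parts = name.getPath().split("/");
        // For "/vol/bucket", split yields ["", "vol", "bucket"]; a trailing
        // sub-path such as "/vol/bucket/ppp" yields a fourth element.
        if (parts.length != 3) {
            throw new IOException(
                "URI must name exactly a volume and bucket: " + name);
        }
        if (!bucketExists(parts[1], parts[2])) {
            throw new IOException("Bucket does not exist: " + name.getPath());
        }
    }
}
```

Throwing from initialize() for a sub-path (rather than silently accepting it) is one possible answer to question 2; the JIRA leaves that decision open.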
[jira] [Updated] (HDFS-12198) Document missing namenode metrics that were added recently
[ https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-12198:
-
Summary: Document missing namenode metrics that were added recently (was: Document missing namenode metrics that added recently)
Fix Version/s: 3.0.0-beta1
Committed this to trunk.

> Document missing namenode metrics that were added recently
> --
>
> Key: HDFS-12198
> URL: https://issues.apache.org/jira/browse/HDFS-12198
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: documentation
> Affects Versions: 3.0.0-alpha4
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12198.001.patch
>
>
> There are some namenode metrics added recently that haven't been documented in
> {{Metrics.md}}. In total, the following metrics and related JIRAs:
> *HDFS-12043*:
> {noformat}
> @Metric ("Number of successful re-replications")
> MutableCounterLong successfulReReplications;
> @Metric ("Number of times we failed to schedule a block re-replication.")
> MutableCounterLong numTimesReReplicationNotScheduled;
> @Metric("Number of timed out block re-replications")
> MutableCounterLong timeoutReReplications;
> {noformat}
> *HDFS-11907*:
> {noformat}
> @Metric("Resource check time") private MutableRate resourceCheckTime;
> private final MutableQuantiles[] resourceCheckTimeQuantiles;
> {noformat}
> *HADOOP-14502*:
> {noformat}
> @Metric("Number of blockReports from individual storages")
> final MutableRate storageBlockReport;
> final MutableQuantiles[] storageBlockReportQuantiles;
> {noformat}
[jira] [Commented] (HDFS-12198) Document missing namenode metrics that added recently
[ https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116343#comment-16116343 ] Akira Ajisaka commented on HDFS-12198:
--
+1, committing to trunk. Hi [~linyiqun], would you create a patch for branch-2?

> Document missing namenode metrics that added recently
> -
>
> Key: HDFS-12198
> URL: https://issues.apache.org/jira/browse/HDFS-12198
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: documentation
> Affects Versions: 3.0.0-alpha4
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Priority: Minor
> Attachments: HDFS-12198.001.patch
>
>
> There are some namenode metrics added recently that haven't been documented in
> {{Metrics.md}}. In total, the following metrics and related JIRAs:
> *HDFS-12043*:
> {noformat}
> @Metric ("Number of successful re-replications")
> MutableCounterLong successfulReReplications;
> @Metric ("Number of times we failed to schedule a block re-replication.")
> MutableCounterLong numTimesReReplicationNotScheduled;
> @Metric("Number of timed out block re-replications")
> MutableCounterLong timeoutReReplications;
> {noformat}
> *HDFS-11907*:
> {noformat}
> @Metric("Resource check time") private MutableRate resourceCheckTime;
> private final MutableQuantiles[] resourceCheckTimeQuantiles;
> {noformat}
> *HADOOP-14502*:
> {noformat}
> @Metric("Number of blockReports from individual storages")
> final MutableRate storageBlockReport;
> final MutableQuantiles[] storageBlockReportQuantiles;
> {noformat}
[jira] [Commented] (HDFS-12036) Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs
[ https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116327#comment-16116327 ] Kai Zheng commented on HDFS-12036:
--
Thanks [~HuafengWang] for the update! Just noticed another minor issue in the existing code. Could you fix it by the way? Thanks!
{code}
 * Get available erasure coding codecs and corresponding coders.
 */
 HashMap<String, String> getErasureCodingCodecs() throws IOException {
+  final String operationName = "getErasureCodingCodecs";
+  boolean success = false;
   checkOperation(OperationCategory.READ);
   readLock();
   try {
     checkOperation(OperationCategory.READ);
-    return FSDirErasureCodingOp.getErasureCodingCodecs(this);
+    final HashMap<String, String> ret =
+        FSDirErasureCodingOp.getErasureCodingCodecs(this);
+    success = true;
{code}
The function would be better to return a Map instead of a HashMap, and likewise for {{FSDirErasureCodingOp.getErasureCodingCodecs}}.

> Add audit log for getErasureCodingPolicy, getErasureCodingPolicies,
> getErasureCodingCodecs
> --
>
> Key: HDFS-12036
> URL: https://issues.apache.org/jira/browse/HDFS-12036
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Affects Versions: 3.0.0-alpha4
> Reporter: Wei-Chiu Chuang
> Assignee: Huafeng Wang
> Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12036.001.patch, HDFS-12036.002.patch,
> HDFS-12036.003.patch
>
>
> These three FSNameSystem operations do not yet record audit logs. I am not
> sure how useful these audit logs would be, but thought I should file them so
> that they don't get dropped if they turn out to be needed.
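For context, the success-flag shape in the patch snippet above (record the operation name, flip a flag only after the guarded call returns, then log exactly once in a finally block) can be sketched outside Hadoop like this. The logAuditEvent below is a made-up stand-in; FSNamesystem's real method takes more arguments, and the real code also holds the read lock shown in the diff.

```java
import java.util.HashMap;
import java.util.Map;

public class AuditedOps {

    // Made-up stand-in for FSNamesystem#logAuditEvent; kept observable
    // here so the pattern can be demonstrated without a logging framework.
    static String lastAudit;

    static void logAuditEvent(boolean success, String operationName) {
        lastAudit = "op=" + operationName + " success=" + success;
    }

    static final Map<String, String> CODECS = new HashMap<>();

    // The pattern from the patch: success stays false if the body throws,
    // and the audit record is emitted exactly once, in finally, whether
    // the operation succeeded or not.
    public static Map<String, String> getErasureCodingCodecs() {
        final String operationName = "getErasureCodingCodecs";
        boolean success = false;
        try {
            Map<String, String> ret = new HashMap<>(CODECS);
            success = true;
            return ret;
        } finally {
            logAuditEvent(success, operationName);
        }
    }
}
```

Setting `success = true` only after the guarded call returns is what makes a thrown exception show up as `success=false` in the audit trail.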
[jira] [Commented] (HDFS-11814) Benchmark and tune for prefered default cell size
[ https://issues.apache.org/jira/browse/HDFS-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116318#comment-16116318 ] Wei Zhou commented on HDFS-11814:
-
From the test cases above, there is no need to change the default cell size (64KB). Thanks!

> Benchmark and tune for prefered default cell size
> -
>
> Key: HDFS-11814
> URL: https://issues.apache.org/jira/browse/HDFS-11814
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: SammiChen
> Assignee: Wei Zhou
> Labels: hdfs-ec-3.0-must-do
> Attachments: RS-Read.png, RS-Write.png
>
>
> Doing some benchmarking to see which cell size is more desirable, other than
> current 64k