[jira] [Commented] (HDFS-15464) ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links
[ https://issues.apache.org/jira/browse/HDFS-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17155892#comment-17155892 ] Hadoop QA commented on HDFS-15464: -- (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 26m 6s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 1s | No case conflicting files found. |
| 0 | markdownlint | 0m 0s | markdownlint was not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 54s | Maven dependency ordering for branch |
| +1 | mvninstall | 21m 35s | trunk passed |
| +1 | compile | 20m 22s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 | compile | 17m 23s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 | checkstyle | 2m 52s | trunk passed |
| +1 | mvnsite | 2m 46s | trunk passed |
| +1 | shadedclient | 21m 36s | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 0m 37s | hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 | javadoc | 0m 40s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 | javadoc | 1m 41s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 0 | spotbugs | 3m 9s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 5m 15s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 22s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 59s | the patch passed |
| +1 | compile | 19m 52s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 | javac | 19m 52s | the patch passed |
| +1 | compile | 17m 18s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 | javac | 17m 18s | the patch passed |
| +1 | checkstyle | 2m 55s | the patch passed |
| +1 | mvnsite | 2m 45s | the patch passed |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 37s | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 0m 37s | hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 | javadoc |
[jira] [Commented] (HDFS-15464) ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links
[ https://issues.apache.org/jira/browse/HDFS-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17155791#comment-17155791 ] Hadoop QA commented on HDFS-15464: -- (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 33s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| 0 | markdownlint | 0m 0s | markdownlint was not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 5s | Maven dependency ordering for branch |
| +1 | mvninstall | 25m 29s | trunk passed |
| +1 | compile | 27m 43s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 | compile | 21m 16s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 | checkstyle | 3m 35s | trunk passed |
| +1 | mvnsite | 3m 25s | trunk passed |
| +1 | shadedclient | 25m 9s | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 0m 39s | hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 | javadoc | 0m 44s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 | javadoc | 1m 56s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 0 | spotbugs | 4m 24s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 7m 2s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 24s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 35s | the patch passed |
| +1 | compile | 25m 39s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 | javac | 25m 39s | the patch passed |
| +1 | compile | 21m 25s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 | javac | 21m 25s | the patch passed |
| +1 | checkstyle | 3m 30s | the patch passed |
| +1 | mvnsite | 3m 37s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 18m 36s | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 0m 46s | hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 | javadoc |
[jira] [Comment Edited] (HDFS-13082) cookieverf mismatch error over NFS gateway on Linux
[ https://issues.apache.org/jira/browse/HDFS-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140885#comment-17140885 ] Daniel Howard edited comment on HDFS-13082 at 7/10/20, 7:28 PM: I am running into this as well on Ubuntu 20.04. I can confirm that setting *{{nfs.aix.compatibility.mode.enabled}}* to *{{true}}* resolves this problem. For example:
{{0-15:58 djh@c24-03-06 ~> *ls /hadoop/wxxxs/data/*}}
{{# No files listed}}
{{0-16:01 djh@c24-03-06 ~> *touch /hadoop/wxxxs/data/foo*}}
{{0-16:01 djh@c24-03-06 ~> *ls /hadoop/wxxxs/data/*}}
{{foo packed-hbfs/ raw/ tmp/}}
{{0-16:01 djh@c24-03-06 ~> *rm /hadoop/wxxxs/data/foo*}}
{{0-16:01 djh@c24-03-06 ~> *ls /hadoop/wxxxs/data/*}}
{{packed-hbfs/ raw/ tmp/}}
Writing to this directory forced the NFS server to return the correct directory contents. I have a bunch of this in the log:
{{2020-06-19 16:01:35,281 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: cookieverf mismatch. request cookieverf: 1591897331315 dir cookieverf: 1592428367587}}
{{2020-06-19 16:01:35,287 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: cookieverf mismatch. request cookieverf: 1591897331315 dir cookieverf: 1592428367587}}
{{2020-06-19 16:01:35,454 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: cookieverf mismatch. request cookieverf: 1591897331315 dir cookieverf: 1592428367587}}
If AIX compatibility is enabled, the log messages change FROM {{ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: cookieverf mismatch.[...]}} TO {{WARN org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: AIX compatibility mode enabled...}} Judging by the code in {{RpcProgramNfs3.java}}, if the {{cookieverf}} does not match a directory's {{mtime}}, the Nfs3 server normally returns an error to the client. In AIX compatibility mode, the Nfs3 server instead logs a warning and constructs the response it would have constructed had there been no {{cookieverf}} mismatch. What does this all mean? I don't know, but I am working to see if I can trigger an empty-directory situation with AIX compat enabled.
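To make the branch described above concrete, here is a minimal sketch in plain Java. The names and types are illustrative only, not the actual {{RpcProgramNfs3}} signatures:
{code:java}
/** Sketch of the READDIR verifier check described above (illustrative,
 *  not the actual RpcProgramNfs3 code). The verifier is effectively the
 *  directory's modification time. */
public class CookieverfSketch {
  static boolean acceptReaddir(long requestCookieverf, long dirMtimeMillis,
                               boolean aixCompatibilityMode) {
    if (requestCookieverf == dirMtimeMillis) {
      return true;                 // verifier matches: serve the listing
    }
    if (aixCompatibilityMode) {
      // AIX compat mode: warn, then answer as if the verifier had matched.
      System.err.println("WARN: AIX compatibility mode enabled ...");
      return true;
    }
    // Normal mode: the client gets an error back instead of a listing,
    // which is how the empty 'ls' output above can arise.
    System.err.println("ERROR: cookieverf mismatch. request cookieverf: "
        + requestCookieverf + " dir cookieverf: " + dirMtimeMillis);
    return false;
  }
}
{code}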
was (Author: dannyman): I am running into this as well on Ubuntu 20.04. I am in the process of testing the AIX compatibility mode.
> cookieverf mismatch error over NFS gateway on Linux > --- > > Key: HDFS-13082 > URL: https://issues.apache.org/jira/browse/HDFS-13082 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.3 >Reporter: Dan Moraru >Priority: Minor > > Running 'ls' on some directories over an HDFS-NFS gateway sometimes fails to > list the contents of those directories. Running 'ls' on those same > directories mounted via FUSE works. The NFS gateway logs errors like the > following: > 2018-01-29 11:53:01,130 ERROR org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: > cookieverf mismatch. request cookieverf: 1513390944415 dir cookieverf: > 1516920857335 > Reviewing >
[jira] [Updated] (HDFS-15464) ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links
[ https://issues.apache.org/jira/browse/HDFS-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-15464: --- Status: Patch Available (was: Open) > ViewFsOverloadScheme should work when -fs option pointing to remote cluster > without mount links > --- > > Key: HDFS-15464 > URL: https://issues.apache.org/jira/browse/HDFS-15464 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: viewfsOverloadScheme >Affects Versions: 3.2.1 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > > When users try to connect to a remote cluster from an env where > ViewFSOverloadScheme is enabled, fs init expects at least one mount link to > be configured in order to succeed. > Unfortunately, you might not have configured any mount links for that remote > cluster in your current env; you would have configured mount points only for > your local clusters. > In this case, fs init will fail because no mount points are configured in > the mount table for that remote cluster URI's authority. > One idea: when there are no mount links configured, we should just treat > that as the default cluster, which can be achieved by automatically treating > it as the fallback option (a config sketch follows this message). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
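As context for the fallback idea above: ViewFS already has a fallback-link concept keyed by {{fs.viewfs.mounttable.<authority>.linkFallback}}; whether the patch reuses that mechanism is not confirmed here. A minimal sketch with made-up cluster names and URIs:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: cluster names and URIs are made up. With ViewFsOverloadScheme
// on hdfs://, a fallback link lets an authority that has no explicit mount
// links resolve everything against the remote cluster's root.
public class FallbackLinkSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.hdfs.impl",
        "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
    // No fs.viewfs.mounttable.remotecluster.link.* entries are configured;
    // everything under hdfs://remotecluster falls back to its root:
    conf.set("fs.viewfs.mounttable.remotecluster.linkFallback",
        "hdfs://remotecluster/");
    FileSystem fs = new Path("hdfs://remotecluster/tmp").getFileSystem(conf);
    System.out.println(fs.getUri());
  }
}
{code}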
[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17155584#comment-17155584 ] Hadoop QA commented on HDFS-15025: -- (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 2m 6s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 1s | No case conflicting files found. |
| 0 | prototool | 0m 0s | prototool was not available. |
| 0 | markdownlint | 0m 0s | markdownlint was not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 16 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 15s | Maven dependency ordering for branch |
| +1 | mvninstall | 25m 52s | trunk passed |
| +1 | compile | 22m 54s | trunk passed |
| +1 | checkstyle | 3m 38s | trunk passed |
| +1 | mvnsite | 4m 49s | trunk passed |
| +1 | shadedclient | 26m 47s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 3m 22s | trunk passed |
| 0 | spotbugs | 3m 38s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 9m 27s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 25s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 54s | the patch passed |
| +1 | compile | 21m 25s | the patch passed |
| -1 | cc | 21m 25s | root generated 21 new + 141 unchanged - 21 fixed = 162 total (was 162) |
| +1 | javac | 21m 25s | the patch passed |
| -0 | checkstyle | 3m 3s | root: The patch generated 4 new + 725 unchanged - 4 fixed = 729 total (was 729) |
| +1 | mvnsite | 3m 44s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 15m 37s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 21s | the patch passed |
| +1 | findbugs | 8m 15s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 25s | hadoop-common in the patch passed. |
| +1 | unit | 2m 11s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 108m 46s | hadoop-hdfs in the patch passed. |
| +1 |
[jira] [Updated] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hadoop_hdfs_hw updated HDFS-15025: -- Attachment: HDFS-15025.003.patch Status: Patch Available (was: Open) > Applying NVDIMM storage media to HDFS > - > > Key: HDFS-15025 > URL: https://issues.apache.org/jira/browse/HDFS-15025 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, hdfs >Reporter: hadoop_hdfs_hw >Priority: Major > Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, > HDFS-15025.002.patch, HDFS-15025.003.patch, NVDIMM_patch(WIP).patch > > > Non-volatile NVDIMM memory is faster than SSD and can be used alongside RAM, > DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves the > response rate of HDFS but also ensures the reliability of the data. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
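Assuming the patch follows the existing storage-type conventions (the [SSD]/[ARCHIVE] tags on {{dfs.datanode.data.dir}} and the storage-policy API), usage might look like the sketch below. The [NVDIMM] tag, the ALL_NVDIMM policy name, and the paths are hypothetical placeholders; the actual names are whatever the final patch defines:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: mirrors how [SSD]/[ARCHIVE] volumes are declared today.
// [NVDIMM] and ALL_NVDIMM are hypothetical until the patch lands.
public class NvdimmUsageSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Tag one DataNode volume as NVDIMM-backed (path is illustrative):
    conf.set("dfs.datanode.data.dir",
        "[DISK]/data/hdfs/disk,[NVDIMM]/mnt/pmem0/hdfs");

    // Pin a hot directory to the proposed NVDIMM-only storage policy:
    FileSystem fs = FileSystem.get(conf);
    fs.setStoragePolicy(new Path("/hot-tables"), "ALL_NVDIMM");
  }
}
{code}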
[jira] [Updated] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hadoop_hdfs_hw updated HDFS-15025: -- Status: Open (was: Patch Available) > Applying NVDIMM storage media to HDFS > - > > Key: HDFS-15025 > URL: https://issues.apache.org/jira/browse/HDFS-15025 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, hdfs >Reporter: hadoop_hdfs_hw >Priority: Major > Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, > HDFS-15025.002.patch, NVDIMM_patch(WIP).patch > > > Non-volatile NVDIMM memory is faster than SSD and can be used alongside RAM, > DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves the > response rate of HDFS but also ensures the reliability of the data. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14498) LeaseManager can loop forever on the file for which create has failed
[ https://issues.apache.org/jira/browse/HDFS-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17155256#comment-17155256 ] Stephen O'Donnell commented on HDFS-14498: -- [~hexiaoqiao] Thanks for following up. I think we should be good to commit this in another day or two. We encountered this problem in a CDH-5.16 cluster, which is a heavily patched 2.6 build. So I believe this issue has existed forever, although it occurs only rarely. I agree we should cherry pick to all active branches (2.10, 3.1, 3.2, 3.3 and trunk). Provided the cherry-pick is clean and the new tests pass on each branch, we should be OK. > LeaseManager can loop forever on the file for which create has failed > -- > > Key: HDFS-14498 > URL: https://issues.apache.org/jira/browse/HDFS-14498 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.9.0 >Reporter: Sergey Shelukhin >Assignee: Stephen O'Donnell >Priority: Major > Attachments: HDFS-14498.001.patch, HDFS-14498.002.patch > > > The logs from file creation are long gone due to infinite lease logging; > however, it presumably failed... the client who was trying to write this file > is definitely long dead. > The version includes HDFS-4882. > We get this log pattern repeating infinitely: > {noformat} > 2019-05-16 14:00:16,893 INFO > [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] > org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease. Holder: > DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1] has expired hard > limit > 2019-05-16 14:00:16,893 INFO > [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease. > Holder: DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1], src= > 2019-05-16 14:00:16,893 WARN > [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] > org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.internalReleaseLease: > Failed to release lease for file . Committed blocks are waiting to be > minimally replicated. Try again later. > 2019-05-16 14:00:16,893 WARN > [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] > org.apache.hadoop.hdfs.server.namenode.LeaseManager: Cannot release the path > in the lease [Lease. Holder: DFSClient_NONMAPREDUCE_-20898906_61, > pending creates: 1]. It will be retried. > org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* > NameSystem.internalReleaseLease: Failed to release lease for file . > Committed blocks are waiting to be minimally replicated. Try again later. > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3357) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:573) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:509) > at java.lang.Thread.run(Thread.java:745) > $ grep -c "Recovering.*DFSClient_NONMAPREDUCE_-20898906_61, pending creates: > 1" hdfs_nn* > hdfs_nn.log:1068035 > hdfs_nn.log.2019-05-16-14:1516179 > hdfs_nn.log.2019-05-16-15:1538350 > {noformat} > Aside from an actual bug fix, it might make sense to make LeaseManager not > log so much, in case there are more bugs like this... -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
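For readers skimming the digest, the failure mode in the quoted report has roughly this shape. This is a paraphrase in illustrative Java, not the actual LeaseManager code:
{code:java}
import java.io.IOException;
import java.util.List;

/** Illustrative skeleton of the retry behavior described above;
 *  not the actual LeaseManager code. */
abstract class LeaseMonitorSketch {
  abstract List<String> leasesPastHardLimit();
  abstract void internalReleaseLease(String lease) throws IOException;

  void runOnePass() {
    for (String lease : leasesPastHardLimit()) {
      try {
        internalReleaseLease(lease);   // throws while committed blocks
                                       // remain under-replicated
      } catch (IOException e) {
        // The lease stays queued, so every monitor pass repeats this
        // warning -- nothing bounds the retries, hence the millions of
        // "It will be retried" lines in the grep counts above.
        System.err.println("Cannot release lease " + lease
            + ". It will be retried. " + e);
      }
    }
  }
}
{code}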
[jira] [Comment Edited] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x
[ https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154414#comment-17154414 ] fengwu edited comment on HDFS-13596 at 7/10/20, 7:09 AM: - [~_ph], at first I had the same view as you, when I saw this comment: https://issues.apache.org/jira/browse/HDFS-13596?focusedCommentId=16911102=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16911102 I tested a downgrade from HDFS 3.2.1 to 2.8.2 successfully, but 2.7 ~ 2.7.2 failed. So this can be understood as: the current 3.x can only be rolling-downgraded to 2.8+, agreed? [~hanishakoneru] [~ferhui] was (Author: fengwu99): [~_ph], at first I had the same view as you, when I saw this comment: https://issues.apache.org/jira/browse/HDFS-13596?focusedCommentId=16911102=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16911102 I tested a downgrade from HDFS 3.2.1 to 2.8.2 successfully, but 2.7 failed. > NN restart fails after RollingUpgrade from 2.x to 3.x > - > > Key: HDFS-13596 > URL: https://issues.apache.org/jira/browse/HDFS-13596 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Fei Hui >Priority: Blocker > Fix For: 3.3.0, 3.2.1, 3.1.3 > > Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, > HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, > HDFS-13596.006.patch, HDFS-13596.007.patch, HDFS-13596.008.patch, > HDFS-13596.009.patch, HDFS-13596.010.patch > > > After rollingUpgrade NN from 2.x to 3.x, if the NN is restarted, it fails > while replaying edit logs. > * After NN is started with rollingUpgrade, the layoutVersion written to > editLogs (before finalizing the upgrade) is the pre-upgrade layout version > (so as to support downgrade). > * When writing transactions to log, NN writes as per the current layout > version. In 3.x, erasureCoding bits are added to the editLog transactions. > * So any edit log written after the upgrade and before finalizing the > upgrade will have the old layout version but the new format of transactions. > * When NN is restarted and the edit logs are replayed, the NN reads the old > layout version from the editLog file. When parsing the transactions, it > assumes that the transactions are also from the previous layout and hence > skips parsing the erasureCoding bits. > * This cascades into reading the wrong set of bits for other fields and > leads to NN shutting down. 
> Sample error output: > {code:java} > java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected > length 16 > at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) > at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74) > at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86) > at > org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163) > at > org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643) > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710) > 2018-05-17 19:10:06,522 WARN > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception > loading fsimage > java.io.IOException: java.lang.IllegalStateException: Cannot skip to less > than the current value (=16389), where newValue=16388 > at >
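The bullet list in the quoted description above can be illustrated with a toy decoder. The field names, threshold, and record layout here are all made up for illustration; the real logic lives in the FSEditLogOp readers keyed on NameNodeLayoutVersion:
{code:java}
import java.nio.ByteBuffer;

/** Toy illustration of the layout-version skew described above; this is
 *  NOT the real edit-log format. Suppose the new layout adds one
 *  "erasure coding" byte per record, gated on the stated layout version. */
public class LayoutSkewDemo {
  static void readRecord(ByteBuffer buf, int statedLayoutVersion) {
    long txid = buf.getLong();
    // HDFS layout versions are negative; more negative = newer.
    // The -64 threshold is made up for this demo.
    if (statedLayoutVersion <= -64) {
      byte ecPolicy = buf.get();   // field that only new-layout writers emit
    }
    // If the writer emitted the EC byte but the file still carries the old
    // (pre-upgrade) layout version, the branch above is skipped and this
    // read consumes the wrong bytes -- cascading into errors like
    // "Invalid clientId - length is 0 expected length 16".
    byte[] clientId = new byte[16];
    buf.get(clientId);
  }
}
{code}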
[jira] [Updated] (HDFS-14744) RBF: Non secured routers should not log in error mode when UGI is default.
[ https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangzhaohui updated HDFS-14744: --- Summary: RBF: Non secured routers should not log in error mode when UGI is default. (was: cc) > RBF: Non secured routers should not log in error mode when UGI is default. > -- > > Key: HDFS-14744 > URL: https://issues.apache.org/jira/browse/HDFS-14744 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: CR Hota >Assignee: CR Hota >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-14744.001.patch > > > RouterClientProtocol#getMountPointStatus logs an error when groups are not > found for the default web user dr.who. The line should be logged at "error" > level for secured clusters; for unsecured clusters, we may want to log at > "debug", or else logs fill up with this non-critical line: > {{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: > Cannot get the remote user: There is no primary group for UGI dr.who > (auth:SIMPLE)}} > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
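The proposed behavior — error only when security is on — could look like the sketch below. The placement and method name are illustrative; the real change would be somewhere in RouterClientProtocol#getMountPointStatus:
{code:java}
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch of the proposed logging change (illustrative, not the patch). */
class MountPointStatusLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(MountPointStatusLogSketch.class);

  static void logUgiFailure(Exception e) {
    if (UserGroupInformation.isSecurityEnabled()) {
      // Secured cluster: a missing group for the caller is worth an error.
      LOG.error("Cannot get the remote user: {}", e.getMessage());
    } else {
      // Unsecured cluster: dr.who has no groups by design; keep it quiet.
      LOG.debug("Cannot get the remote user: {}", e.getMessage());
    }
  }
}
{code}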
[jira] [Updated] (HDFS-14744) cc
[ https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangzhaohui updated HDFS-14744: --- Summary: cc (was: RBF: Non secured routers should not log in error mode when UGI is default.) > cc > -- > > Key: HDFS-14744 > URL: https://issues.apache.org/jira/browse/HDFS-14744 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: CR Hota >Assignee: CR Hota >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-14744.001.patch > > > RouterClientProtocol#getMountPointStatus logs an error when groups are not > found for the default web user dr.who. The line should be logged at "error" > level for secured clusters; for unsecured clusters, we may want to log at > "debug", or else logs fill up with this non-critical line: > {{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: > Cannot get the remote user: There is no primary group for UGI dr.who > (auth:SIMPLE)}} > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x
[ https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154414#comment-17154414 ] fengwu edited comment on HDFS-13596 at 7/10/20, 7:00 AM: - [~_ph], at first I had the same view as you, when I saw this comment: https://issues.apache.org/jira/browse/HDFS-13596?focusedCommentId=16911102=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16911102 I tested a downgrade from HDFS 3.2.1 to 2.8.2 successfully, but 2.7 failed. was (Author: fengwu99): [~_ph], at first I had the same view as you, when I saw this comment: [https://issues.apache.org/jira/browse/HDFS-13596?focusedCommentId=16911102=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16911102] > NN restart fails after RollingUpgrade from 2.x to 3.x > - > > Key: HDFS-13596 > URL: https://issues.apache.org/jira/browse/HDFS-13596 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Fei Hui >Priority: Blocker > Fix For: 3.3.0, 3.2.1, 3.1.3 > > Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, > HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, > HDFS-13596.006.patch, HDFS-13596.007.patch, HDFS-13596.008.patch, > HDFS-13596.009.patch, HDFS-13596.010.patch > > > After rollingUpgrade NN from 2.x to 3.x, if the NN is restarted, it fails > while replaying edit logs. > * After NN is started with rollingUpgrade, the layoutVersion written to > editLogs (before finalizing the upgrade) is the pre-upgrade layout version > (so as to support downgrade). > * When writing transactions to log, NN writes as per the current layout > version. In 3.x, erasureCoding bits are added to the editLog transactions. > * So any edit log written after the upgrade and before finalizing the > upgrade will have the old layout version but the new format of transactions. > * When NN is restarted and the edit logs are replayed, the NN reads the old > layout version from the editLog file. When parsing the transactions, it > assumes that the transactions are also from the previous layout and hence > skips parsing the erasureCoding bits. > * This cascades into reading the wrong set of bits for other fields and > leads to NN shutting down. 
> Sample error output: > {code:java} > java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected > length 16 > at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) > at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74) > at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86) > at > org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163) > at > org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643) > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710) > 2018-05-17 19:10:06,522 WARN > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception > loading fsimage > java.io.IOException: java.lang.IllegalStateException: Cannot skip to less > than the current value (=16389), where newValue=16388 > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at
[jira] [Commented] (HDFS-8432) Introduce a minimum compatible layout version to allow downgrade in more rolling upgrade use cases.
[ https://issues.apache.org/jira/browse/HDFS-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17155189#comment-17155189 ] fengwu commented on HDFS-8432: -- Hi, [~heliangjun]! Can the datanode be downgraded successfully during your upgrade? In my test of a rolling downgrade from 3.1.3 to 2.7.2, the namenode succeeded, but the datanode failed (2.8+ succeeded), because the datanode layout version differs: it is -56 in HDFS 2.7.
{code:java}
2020-07-06 14:45:01,313 WARN org.apache.hadoop.hdfs.server.common.Storage: org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /data/hadoop/dfs. Reported: -57. Expecting = -56.
2020-07-06 14:45:01,315 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/hadoop/dfs/in_use.lock acquired by nodename 21258@test-v03
2020-07-06 14:45:01,315 WARN org.apache.hadoop.hdfs.server.common.Storage: org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /data/hadoop/dfs. Reported: -57. Expecting = -56.
2020-07-06 14:45:01,315 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to test-v01/10.110.228.21:8020. Exiting. java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1358)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1323)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:748)
{code}
> Introduce a minimum compatible layout version to allow downgrade in more > rolling upgrade use cases. > --- > > Key: HDFS-8432 > URL: https://issues.apache.org/jira/browse/HDFS-8432 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, rolling upgrades >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Major > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-8432-HDFS-Downgrade-Extended-Support.pdf, > HDFS-8432-branch-2.002.patch, HDFS-8432-branch-2.003.patch, > HDFS-8432.001.patch, HDFS-8432.002.patch > > > Maintain the prior layout version during the upgrade window and reject > attempts to use new features until after the upgrade has been finalized. > This guarantees that the prior software version can read the fsimage and edit > logs if the administrator decides to downgrade. This will make downgrade > usable for the majority of NameNode layout version changes, which just > involve introduction of new edit log operations. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
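The failed downgrade above follows from the DataNode's strict storage-version check. A simplified sketch — illustrative, not the actual DataStorage code, which also handles upgrades and federation; the version numbers are taken from the log above (the 3.1.3 volume reports -57, the 2.7.2 DataNode expects -56, and 2.8+ evidently accepts -57):
{code:java}
import java.io.IOException;

/** Simplified sketch of the check behind the IncorrectVersionException
 *  quoted above; not the actual DataStorage code. */
public class DataNodeLayoutCheckSketch {
  static void verifyLayout(int reported, int expected) throws IOException {
    // e.g. reported = -57 (volume written by 3.1.3), expected = -56 (2.7.2):
    if (reported != expected) {
      throw new IOException("Unexpected version of storage directory."
          + " Reported: " + reported + ". Expecting = " + expected + ".");
    }
  }
}
{code}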
[jira] [Updated] (HDFS-11696) Fix warnings from Spotbugs in hadoop-hdfs
[ https://issues.apache.org/jira/browse/HDFS-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HDFS-11696: Fix Version/s: 2.10.1 > Fix warnings from Spotbugs in hadoop-hdfs > - > > Key: HDFS-11696 > URL: https://issues.apache.org/jira/browse/HDFS-11696 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Fix For: 3.0.0-beta1, 2.10.1 > > Attachments: HADOOP-14337.001.patch, HADOOP-14337.002.patch, > HADOOP-14337.003.patch, HDFS-11696.004.patch, HDFS-11696.005.patch, > HDFS-11696.006.patch, HDFS-11696.007.patch, HDFS-11696.008.patch, > HDFS-11696.009.patch, HDFS-11696.010.patch, findbugsHtml.html > > > In total, 12 findbugs issues were generated after switching from FindBugs > to SpotBugs across the project in HADOOP-14316. This JIRA focuses on cleaning > up the warnings within the scope of HDFS (mainly in hadoop-hdfs > and hadoop-hdfs-client). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11696) Fix warnings from Spotbugs in hadoop-hdfs
[ https://issues.apache.org/jira/browse/HDFS-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17155184#comment-17155184 ] Masatake Iwasaki commented on HDFS-11696: - cherry-picked to branch-2.10. > Fix warnings from Spotbugs in hadoop-hdfs > - > > Key: HDFS-11696 > URL: https://issues.apache.org/jira/browse/HDFS-11696 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Fix For: 3.0.0-beta1, 2.10.1 > > Attachments: HADOOP-14337.001.patch, HADOOP-14337.002.patch, > HADOOP-14337.003.patch, HDFS-11696.004.patch, HDFS-11696.005.patch, > HDFS-11696.006.patch, HDFS-11696.007.patch, HDFS-11696.008.patch, > HDFS-11696.009.patch, HDFS-11696.010.patch, findbugsHtml.html > > > In total, 12 findbugs issues were generated after switching from FindBugs > to SpotBugs across the project in HADOOP-14316. This JIRA focuses on cleaning > up the warnings within the scope of HDFS (mainly in hadoop-hdfs > and hadoop-hdfs-client). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org