[jira] [Updated] (HDFS-15470) Added more unit tests to validate rename behaviour across snapshots

2020-07-20 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15470:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~jnp] for the review. I have committed this.

> Added more unit tests to validate rename behaviour across snapshots
> ---
>
> Key: HDFS-15470
> URL: https://issues.apache.org/jira/browse/HDFS-15470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.0.4
>
> Attachments: HDFS-15470.000.patch, HDFS-15470.001.patch, 
> HDFS-15470.002.patch
>
>
> HDFS-15313 fixes a critical issue where a sequence of snapshot deletes could 
> delete data in the active fs. The idea is to add more tests to verify the 
> behaviour.
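
A minimal sketch of the kind of test this adds (hypothetical, not one of the 
attached patches; it assumes a running MiniDFSCluster in a {{cluster}} field 
and JUnit's assertTrue):

{code:java}
@Test
public void testRenameAcrossSnapshots() throws Exception {
  final DistributedFileSystem dfs = cluster.getFileSystem();
  final Path dir = new Path("/dir");
  dfs.mkdirs(dir);
  dfs.allowSnapshot(dir);
  final Path file = new Path(dir, "file1");
  DFSTestUtil.createFile(dfs, file, 1024, (short) 1, 0);
  dfs.createSnapshot(dir, "s0");
  // Rename inside the snapshottable dir, then delete the older snapshot.
  dfs.rename(file, new Path(dir, "file2"));
  dfs.createSnapshot(dir, "s1");
  dfs.deleteSnapshot(dir, "s0");
  // The renamed file must survive in the active fs.
  assertTrue(dfs.exists(new Path(dir, "file2")));
}
{code}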






[jira] [Commented] (HDFS-10648) Expose Balancer metrics through Metrics2

2020-07-20 Thread Leon Gao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161727#comment-17161727
 ] 

Leon Gao commented on HDFS-10648:
-

[~hemanthboyina] Rebased. Saw some test failures, but they look unrelated. 
Please take a look.

> Expose Balancer metrics through Metrics2
> 
>
> Key: HDFS-10648
> URL: https://issues.apache.org/jira/browse/HDFS-10648
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover, metrics
>Reporter: Mark Wagner
>Assignee: Leon Gao
>Priority: Major
>  Labels: metrics
>
> The Balancer currently prints progress information to the console. For 
> deployments that run the balancer frequently, it would be helpful to collect 
> those metrics for publishing to the available sinks. 
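
For reference, a sketch of what a Metrics2 source for the Balancer could look 
like (class and metric names are illustrative, not the committed API):

{code:java}
@Metrics(about = "Balancer activity", context = "dfs")
public class BalancerMetrics {
  // Counters updated by the balancer as it schedules and completes moves.
  @Metric("Bytes moved by the balancer") MutableCounterLong bytesMoved;
  @Metric("Blocks moved by the balancer") MutableCounterLong blocksMoved;

  static BalancerMetrics create() {
    // Registering the source makes it visible to all configured sinks.
    return DefaultMetricsSystem.instance().register(
        "Balancer", "Balancer activity", new BalancerMetrics());
  }
}
{code}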






[jira] [Commented] (HDFS-15478) When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161673#comment-17161673
 ] 

Hadoop QA commented on HDFS-15478:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
2s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
47s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
14s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
12s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 2 
extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
24s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
25s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-common in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Assigned] (HDFS-15481) Ordered snapshot deletion: garbage collect deleted snapshots

2020-07-20 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze reassigned HDFS-15481:
-

Assignee: Tsz-wo Sze

> Ordered snapshot deletion: garbage collect deleted snapshots
> 
>
> Key: HDFS-15481
> URL: https://issues.apache.org/jira/browse/HDFS-15481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>
> When the earliest snapshot is actually deleted, if the subsequent snapshots 
> are already marked as deleted, those snapshots can also be actually removed 
> from the file system. In this JIRA, we implement a mechanism to garbage 
> collect these snapshots.
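
A sketch of the garbage-collection idea (the helper names below are 
hypothetical, not from a patch):

{code:java}
// Repeatedly remove the earliest snapshot while it is already marked deleted;
// stop at the first snapshot that is still live.
while (!snapshotList.isEmpty() && isMarkedAsDeleted(snapshotList.get(0))) {
  actuallyDeleteSnapshot(snapshotList.get(0));
}
{code}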






[jira] [Updated] (HDFS-15478) When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.

2020-07-20 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-15478:
---
Status: Patch Available  (was: Open)

> When Empty mount points, we are assigning fallback link to self. But it 
> should not use full URI for target fs.
> --
>
> Key: HDFS-15478
> URL: https://issues.apache.org/jira/browse/HDFS-15478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> When we detect empty mount tables, we automatically assign a fallback fs 
> initialized from the same uri. Currently we are using the given uri as-is for 
> creating the target fs. 
> When creating the target fs, we use a chrooted fs, which sets the path from 
> the uri as its base directory. So this can make the path wrong when the fs is 
> initialized with a uri containing a path.
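
A minimal sketch of the proposed direction (assuming the standard java.net.URI 
and FileSystem APIs; not the actual patch):

{code:java}
// Drop the path component before creating the fallback target fs, so the
// chrooted fs does not treat the initialized path as its base directory.
URI base = URI.create(uri.getScheme() + "://" + uri.getAuthority() + "/");
FileSystem targetFs = FileSystem.get(base, conf);
{code}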






[jira] [Updated] (HDFS-15404) ShellCommandFencer should expose info about source

2020-07-20 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-15404:
--
Fix Version/s: 3.1.5
   3.4.0
   3.3.1
   2.10.1
   3.2.2
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> ShellCommandFencer should expose info about source
> --
>
> Key: HDFS-15404
> URL: https://issues.apache.org/jira/browse/HDFS-15404
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15404.001.patch, HDFS-15404.002.patch, 
> HDFS-15404.003.patch, HDFS-15404.004.patch, HDFS-15404.005.patch, 
> HDFS-15404.006.patch
>
>
> Currently the HA fencing logic in ShellCommandFencer exposes environment 
> variables about only the fencing target, i.e. the $target_* variables 
> mentioned in this [document 
> page|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html].
>  
> Sometimes it is also useful to expose info about the fencing source node. One 
> use case: it would allow the source and target nodes to identify themselves 
> separately and run different commands/scripts.
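
A hypothetical sketch of exposing $source_* the same way $target_* is exposed 
(the helper name is illustrative, not the actual patch):

{code:java}
// Populate prefix-named env vars (e.g. "source_host") for the fencing script.
private static void addInfoAsEnvVars(HAServiceTarget node, String prefix,
    Map<String, String> env) {
  env.put(prefix + "host", node.getAddress().getHostName());
  env.put(prefix + "port", String.valueOf(node.getAddress().getPort()));
}
// e.g. addInfoAsEnvVars(source, "source_", env);
//      addInfoAsEnvVars(target, "target_", env);
{code}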






[jira] [Commented] (HDFS-15404) ShellCommandFencer should expose info about source

2020-07-20 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161585#comment-17161585
 ] 

Chen Liang commented on HDFS-15404:
---

I have committed the v006 patch to trunk, branch-3.3/3.2/3.1 and branch-2.10. 
Thanks Konstantin for the review!

> ShellCommandFencer should expose info about source
> --
>
> Key: HDFS-15404
> URL: https://issues.apache.org/jira/browse/HDFS-15404
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15404.001.patch, HDFS-15404.002.patch, 
> HDFS-15404.003.patch, HDFS-15404.004.patch, HDFS-15404.005.patch, 
> HDFS-15404.006.patch
>
>
> Currently the HA fencing logic in ShellCommandFencer exposes environment 
> variables about only the fencing target, i.e. the $target_* variables 
> mentioned in this [document 
> page|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html].
>  
> Sometimes it is also useful to expose info about the fencing source node. One 
> use case: it would allow the source and target nodes to identify themselves 
> separately and run different commands/scripts.






[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-20 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161558#comment-17161558
 ] 

Tsz-wo Sze commented on HDFS-15480:
---

BTW, there are a few XAttr constants defined in HdfsServerConstants such as 
XATTR_ERASURECODING_POLICY.  Should we also define the new constant there?
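
For example (the constant name below is illustrative, not from the patch):

{code:java}
// Hypothetical: defined alongside XATTR_ERASURECODING_POLICY in
// HdfsServerConstants.
String XATTR_SNAPSHOT_DELETED = "system.hdfs.snapshot.deleted";
{code}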

> Ordered snapshot deletion: record snapshot deletion in XAttr
> 
>
> Key: HDFS-15480
> URL: https://issues.apache.org/jira/browse/HDFS-15480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-15480.000.patch
>
>
> In this JIRA, the behavior of deleting the non-earliest snapshots will be 
> changed to marking them as deleted in XAttr but not actually deleting them.  
> Note that
> # The marked-for-deletion snapshots will be garbage collected later on; see 
> HDFS-15481.
> # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.






[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-20 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161553#comment-17161553
 ] 

Tsz-wo Sze commented on HDFS-15480:
---

[~shashikant], thanks for working on this.

After marking the snapshot as deleted, it should not call deleteSnapshot(..) 
anymore.

> Ordered snapshot deletion: record snapshot deletion in XAttr
> 
>
> Key: HDFS-15480
> URL: https://issues.apache.org/jira/browse/HDFS-15480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-15480.000.patch
>
>
> In this JIRA, the behavior of deleting the non-earliest snapshots will be 
> changed to marking them as deleted in XAttr but not actually deleting them.  
> Note that
> # The marked-for-deletion snapshots will be garbage collected later on; see 
> HDFS-15481.
> # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.






[jira] [Commented] (HDFS-15404) ShellCommandFencer should expose info about source

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161504#comment-17161504
 ] 

Hudson commented on HDFS-15404:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18457 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18457/])
HDFS-15404. ShellCommandFencer should expose info about source. (vagarychen: 
rev 3833c616e087518196bcb77ac2479c66a0b188d8)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestShellCommandFencer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestFailoverController.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ShellCommandFencer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FailoverController.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/NodeFencer.java


> ShellCommandFencer should expose info about source
> --
>
> Key: HDFS-15404
> URL: https://issues.apache.org/jira/browse/HDFS-15404
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15404.001.patch, HDFS-15404.002.patch, 
> HDFS-15404.003.patch, HDFS-15404.004.patch, HDFS-15404.005.patch, 
> HDFS-15404.006.patch
>
>
> Currently the HA fencing logic in ShellCommandFencer exposes environment 
> variables about only the fencing target, i.e. the $target_* variables 
> mentioned in this [document 
> page|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html].
>  
> Sometimes it is also useful to expose info about the fencing source node. One 
> use case: it would allow the source and target nodes to identify themselves 
> separately and run different commands/scripts.






[jira] [Commented] (HDFS-15443) Setting dfs.datanode.max.transfer.threads to a very small value can cause strange failure.

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161492#comment-17161492
 ] 

Hadoop QA commented on HDFS-15443:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
7s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
5s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 4 extant 
findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
|   | hadoop.hdfs.TestGetFileChecksum |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HDFS-Build/29535/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15443 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13007902/HDFS-15443.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux ccc7ca061d61 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 

[jira] [Commented] (HDFS-15470) Added more unit tests to validate rename behaviour across snapshots

2020-07-20 Thread Jitendra Nath Pandey (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161466#comment-17161466
 ] 

Jitendra Nath Pandey commented on HDFS-15470:
-

+1 for the patch.

> Added more unit tests to validate rename behaviour across snapshots
> ---
>
> Key: HDFS-15470
> URL: https://issues.apache.org/jira/browse/HDFS-15470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.0.4
>
> Attachments: HDFS-15470.000.patch, HDFS-15470.001.patch, 
> HDFS-15470.002.patch
>
>
> HDFS-15313 fixes a critical issue where a sequence of snapshot deletes could 
> delete data in the active fs. The idea is to add more tests to verify the 
> behaviour.






[jira] [Commented] (HDFS-15404) ShellCommandFencer should expose info about source

2020-07-20 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161420#comment-17161420
 ] 

Konstantin Shvachko commented on HDFS-15404:


+1 for the v006 patch.

> ShellCommandFencer should expose info about source
> --
>
> Key: HDFS-15404
> URL: https://issues.apache.org/jira/browse/HDFS-15404
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15404.001.patch, HDFS-15404.002.patch, 
> HDFS-15404.003.patch, HDFS-15404.004.patch, HDFS-15404.005.patch, 
> HDFS-15404.006.patch
>
>
> Currently the HA fencing logic in ShellCommandFencer exposes environment 
> variables about only the fencing target, i.e. the $target_* variables 
> mentioned in this [document 
> page|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html].
>  
> Sometimes it is also useful to expose info about the fencing source node. One 
> use case: it would allow the source and target nodes to identify themselves 
> separately and run different commands/scripts.






[jira] [Commented] (HDFS-15439) Setting dfs.mover.retry.max.attempts to negative value will retry forever.

2020-07-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161376#comment-17161376
 ] 

Ayush Saxena commented on HDFS-15439:
-

Thanx [~AMC-team] for the report.
I think we should have a sanity check for {{dfs.mover.retry.max.attempts}}; 
going ahead with an invalid configuration doesn't make sense.
You can add a check: if this value is less than 0, put a warn log and use the 
default value {{DFS_MOVER_RETRY_MAX_ATTEMPTS_DEFAULT}}:

{code:java}
  LOG.warn(DFSConfigKeys.DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY + " is "
  + "configured with a negative value, using default value of "
  + DFSConfigKeys.DFS_MOVER_RETRY_MAX_ATTEMPTS_DEFAULT);
{code}
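
Putting it together, a minimal sketch of the suggested check (the DFSConfigKeys 
constants are the real ones; the surrounding code is approximate):

{code:java}
int retryMaxAttempts = conf.getInt(
    DFSConfigKeys.DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY,
    DFSConfigKeys.DFS_MOVER_RETRY_MAX_ATTEMPTS_DEFAULT);
if (retryMaxAttempts < 0) {
  // Clamp an invalid (negative) setting back to the default.
  LOG.warn(DFSConfigKeys.DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY + " is "
      + "configured with a negative value, using default value of "
      + DFSConfigKeys.DFS_MOVER_RETRY_MAX_ATTEMPTS_DEFAULT);
  retryMaxAttempts = DFSConfigKeys.DFS_MOVER_RETRY_MAX_ATTEMPTS_DEFAULT;
}
{code}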



> Setting dfs.mover.retry.max.attempts to negative value will retry forever.
> --
>
> Key: HDFS-15439
> URL: https://issues.apache.org/jira/browse/HDFS-15439
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: AMC-team
>Priority: Major
> Attachments: HDFS-15439.000.patch
>
>
> The configuration parameter "dfs.mover.retry.max.attempts" defines the 
> maximum number of retries before the mover considers the move failed. There 
> is no checking code, so this parameter accepts any int value.
> Theoretically, setting this value to <=0 should mean no retry at all. 
> However, if you set it to a negative value, the retry-failed check is never 
> satisfied, because the if statement is "*if 
> (retryCount.get() == retryMaxAttempts)*". The retry count is incremented by 
> retryCount.incrementAndGet() after each failure but never *equals* 
> *retryMaxAttempts*. 
> {code:java}
> private Result processNamespace() throws IOException {
>   ... //wait for pending move to finish and retry the failed migration
>   if (hasFailed && !hasSuccess) {
> if (retryCount.get() == retryMaxAttempts) {
>   result.setRetryFailed();
>   LOG.error("Failed to move some block's after "
>   + retryMaxAttempts + " retries.");
>   return result;
> } else {
>   retryCount.incrementAndGet();
> }
>   } else {
> // Reset retry count if no failure.
> retryCount.set(0);
>   }
>   ...
> }
> {code}
> *How to fix*
> Add checking code so that "dfs.mover.retry.max.attempts" accepts only 
> non-negative values, or change the if statement condition to trigger when the 
> retry count reaches or exceeds the max attempts.
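
For the second option, a sketch of the adjusted condition (based on the 
processNamespace() snippet above; not a committed fix):

{code:java}
// With >= instead of ==, a negative retryMaxAttempts fails immediately
// instead of retrying forever.
if (retryCount.get() >= retryMaxAttempts) {
  result.setRetryFailed();
  LOG.error("Failed to move some blocks after "
      + retryMaxAttempts + " retries.");
  return result;
}
{code}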






[jira] [Commented] (HDFS-15438) Setting dfs.disk.balancer.max.disk.errors = 0 will fail the block copy

2020-07-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161369#comment-17161369
 ] 

Ayush Saxena commented on HDFS-15438:
-

Thanx [~AMC-team] for the report.

Even if you get away here, you may get stuck below at L926:
{code:java}
  if (item.getErrorCount() >= getMaxError(item)) {
{code}
The error count and max error will both be zero, so this condition will become 
true and you would ultimately end up setting an error.

Instead of this:

{code:java}
+  (item.getErrorCount() == 0 || item.getErrorCount() < 
getMaxError(item))) {
{code}
Shouldn't we just have:
{code:java}
  item.getErrorCount() <= getMaxError(item) {
{code}

and even tweak the if condition at L926 to remove the '=' sign?
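
Taken together, the suggestion amounts to something like this (a sketch, not a 
committed fix):

{code:java}
// Let the copy loop run while no more than max errors have occurred; with
// max errors = 0 the copy still proceeds until the first error.
while (!iter.atEnd() && item.getErrorCount() <= getMaxError(item)) {
  // ... copy blocks, incrementing the item's error count on IOException ...
}
if (item.getErrorCount() > getMaxError(item)) { // '=' removed at L926
  item.setErrMsg("Error count exceeded.");
}
{code}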

cc [~aengineer], you wrote this up, any pointers?

> Setting dfs.disk.balancer.max.disk.errors = 0 will fail the block copy
> --
>
> Key: HDFS-15438
> URL: https://issues.apache.org/jira/browse/HDFS-15438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: AMC-team
>Priority: Major
> Attachments: HDFS-15438.000.patch
>
>
> In the HDFS disk balancer, the config parameter 
> "dfs.disk.balancer.max.disk.errors" controls the maximum number of errors we 
> can ignore for a specific move between two disks before the move is 
> abandoned.
> The parameter accepts any value >= 0, and setting it to 0 should mean no 
> error tolerance. However, setting the value to 0 simply skips the block copy 
> even when no disk error occurs, because the while loop condition 
> *item.getErrorCount() < getMaxError(item)* is never satisfied.
> {code:java}
> // Gets the next block that we can copy
> private ExtendedBlock getBlockToCopy(FsVolumeSpi.BlockIterator iter,
>  DiskBalancerWorkItem item) {
>   while (!iter.atEnd() && item.getErrorCount() < getMaxError(item)) {
> try {
>   ... //get the block
> }  catch (IOException e) {
> item.incErrorCount();
> }
>if (item.getErrorCount() >= getMaxError(item)) {
> item.setErrMsg("Error count exceeded.");
> LOG.info("Maximum error count exceeded. Error count: {} Max error:{} 
> ",
> item.getErrorCount(), item.getMaxDiskErrors());
>   }
> {code}
> *How to fix*
> Change the while loop condition to support value 0.
>   






[jira] [Commented] (HDFS-15381) Fix typo corrputBlocksFiles to corruptBlocksFiles

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161368#comment-17161368
 ] 

Hudson commented on HDFS-15381:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18454 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18454/])
HDFS-15381. Fix typos corrputBlocksFiles to corruptBlocksFiles. (ayushsaxena: 
rev 6cbd8854ee5f2c33496ac7ae397e366cf136dd07)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java


> Fix typo corrputBlocksFiles to corruptBlocksFiles
> -
>
> Key: HDFS-15381
> URL: https://issues.apache.org/jira/browse/HDFS-15381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-15381.001.patch
>
>
> Fix typos corrputBlocksFiles to corruptBlocksFiles






[jira] [Updated] (HDFS-15381) Fix typo corrputBlocksFiles to corruptBlocksFiles

2020-07-20 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15381:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix typo corrputBlocksFiles to corruptBlocksFiles
> -
>
> Key: HDFS-15381
> URL: https://issues.apache.org/jira/browse/HDFS-15381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-15381.001.patch
>
>
> Fix typos corrputBlocksFiles to corruptBlocksFiles






[jira] [Commented] (HDFS-15381) Fix typo corrputBlocksFiles to corruptBlocksFiles

2020-07-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161360#comment-17161360
 ] 

Ayush Saxena commented on HDFS-15381:
-

Committed to trunk.

Thanx [~bianqi]  for the contribution!!!

> Fix typo corrputBlocksFiles to corruptBlocksFiles
> -
>
> Key: HDFS-15381
> URL: https://issues.apache.org/jira/browse/HDFS-15381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Attachments: HDFS-15381.001.patch
>
>
> Fix typos corrputBlocksFiles to corruptBlocksFiles






[jira] [Updated] (HDFS-15381) Fix typo corrputBlocksFiles to corruptBlocksFiles

2020-07-20 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15381:

Summary: Fix typo corrputBlocksFiles to corruptBlocksFiles  (was: Fix typos 
corrputBlocksFiles to corruptBlocksFiles)

> Fix typo corrputBlocksFiles to corruptBlocksFiles
> -
>
> Key: HDFS-15381
> URL: https://issues.apache.org/jira/browse/HDFS-15381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Attachments: HDFS-15381.001.patch
>
>
> Fix typos corrputBlocksFiles to corruptBlocksFiles






[jira] [Commented] (HDFS-15479) Ordered snapshot deletion: make it a configurable feature

2020-07-20 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161319#comment-17161319
 ] 

Shashikant Banerjee commented on HDFS-15479:


Thanks [~szetszwo] for working on this. The changes look ok.
{code:java}
final Snapshot earliest = snapshottable.getSnapshotList().get(0);
{code}
I think the snapshot list is sorted by name, so the 1st element in the list may 
not be the earliest snapshot? Can you please check?

> Ordered snapshot deletion: make it a configurable feature
> -
>
> Key: HDFS-15479
> URL: https://issues.apache.org/jira/browse/HDFS-15479
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Attachments: h15479_20200719.patch
>
>
> Ordered snapshot deletion is a configurable feature.  In this JIRA, a conf is 
> added.
> When the feature is enabled, only the earliest snapshot can be deleted.  For 
> deleting the non-earliest snapshots, the behavior is temporarily changed to 
> throwing an exception in this JIRA.  In HDFS-15480, the behavior of deleting 
> the non-earliest snapshots will be changed to marking them as deleted.






[jira] [Comment Edited] (HDFS-15479) Ordered snapshot deletion: make it a configurable feature

2020-07-20 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161319#comment-17161319
 ] 

Shashikant Banerjee edited comment on HDFS-15479 at 7/20/20, 3:19 PM:
--

Thanks [~szetszwo] for working on this.
{code:java}
final Snapshot earliest = snapshottable.getSnapshotList().get(0);
{code}
I think the snapshot list is sorted by name, so the 1st element in the list may 
not be the earliest snapshot? Can you please check?


was (Author: shashikant):
Thanks [~szetszwo] for working on this. The changes look ok.
{code:java}
final Snapshot earliest = snapshottable.getSnapshotList().get(0);
{code}
I think the snapshot list is sorted by name, so the 1st element in the list may 
not be the earliest snapshot? Can you please check?

> Ordered snapshot deletion: make it a configurable feature
> -
>
> Key: HDFS-15479
> URL: https://issues.apache.org/jira/browse/HDFS-15479
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Attachments: h15479_20200719.patch
>
>
> Ordered snapshot deletion is a configurable feature.  In this JIRA, a conf is 
> added.
> When the feature is enabled, only the earliest snapshot can be deleted.  For 
> deleting the non-earliest snapshots, the behavior is temporarily changed to 
> throwing an exception in this JIRA.  In HDFS-15480, the behavior of deleting 
> the non-earliest snapshots will be changed to marking them as deleted.






[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-20 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161315#comment-17161315
 ] 

Shashikant Banerjee commented on HDFS-15480:


HDFS-15480.000.patch -> 1st patch.

Will add more tests.

> Ordered snapshot deletion: record snapshot deletion in XAttr
> 
>
> Key: HDFS-15480
> URL: https://issues.apache.org/jira/browse/HDFS-15480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-15480.000.patch
>
>
> In this JIRA, the behavior of deleting the non-earliest snapshots will be 
> changed to marking them as deleted in XAttr but not actually deleting them.  
> Note that
> # The marked-for-deletion snapshots will be garbage collected later on; see 
> HDFS-15481.
> # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.






[jira] [Updated] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-20 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15480:
---
Attachment: HDFS-15480.000.patch

> Ordered snapshot deletion: record snapshot deletion in XAttr
> 
>
> Key: HDFS-15480
> URL: https://issues.apache.org/jira/browse/HDFS-15480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-15480.000.patch
>
>
> In this JIRA, the behavior of deleting the non-earliest snapshots will be 
> changed to marking them as deleted in XAttr but not actually deleting them.  
> Note that
> # The marked-for-deletion snapshots will be garbage collected later on; see 
> HDFS-15481.
> # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.






[jira] [Commented] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete

2020-07-20 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161298#comment-17161298
 ] 

Stephen O'Donnell commented on HDFS-15313:
--

HDFS-15313.branch-2.10.001.patch LGTM for the 2.10 branch. I will commit it 
tomorrow if there are no other comments.

> Ensure inodes in active filesystem are not deleted during snapshot delete
> 
>
> Key: HDFS-15313
> URL: https://issues.apache.org/jira/browse/HDFS-15313
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15313-branch-3.1.001.patch, HDFS-15313.000.patch, 
> HDFS-15313.001.patch, HDFS-15313.branch-2.10.001.patch, 
> HDFS-15313.branch-2.10.patch, HDFS-15313.branch-2.8.patch
>
>
> After HDFS-13101, it was observed in one of our customer deployments that 
> delete snapshot ends up cleaning up inodes from the active fs that are 
> referred to by only one snapshot, as the isLastReference() check for the 
> parent dir introduced in HDFS-13101 may return true in certain cases. The aim 
> of this Jira is to add a check to ensure that inodes referred to in the 
> active fs do not get deleted when a snapshot is deleted.
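
A hypothetical sketch of such a guard (method names illustrative, not the 
committed change):

{code:java}
// Never destroy an inode that is still reachable from the active fs, even if
// isLastReference() on the parent dir returned true.
if (isInActiveFs(inode)) {
  return; // still referenced by the current file system state
}
inode.destroyAndCollectBlocks(reclaimContext);
{code}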






[jira] [Commented] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161162#comment-17161162
 ] 

Hadoop QA commented on HDFS-15313:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.10 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
23s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.10 passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-2.10 passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.10 passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-hdfs in branch-2.10 failed with JDK Oracle 
Corporation-1.7.0_95-b00. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} branch-2.10 passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
31s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} branch-2.10 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
11s{color} | {color:red} 
hadoop-hdfs-project_hadoop-hdfs-jdkOracleCorporation-1.7.0_95-b00 with JDK 
Oracle Corporation-1.7.0_95-b00 generated 9 new + 0 unchanged - 0 fixed = 9 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | 

[jira] [Assigned] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-20 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-15480:
--

Assignee: Shashikant Banerjee

> Ordered snapshot deletion: record snapshot deletion in XAttr
> 
>
> Key: HDFS-15480
> URL: https://issues.apache.org/jira/browse/HDFS-15480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Shashikant Banerjee
>Priority: Major
>
> In this JIRA, the behavior of deleting the non-earliest snapshots will be 
> changed to marking them as deleted in XAttr but not actually deleting them.  
> Note that
> # The marked-for-deletion snapshots will be garbage collected later on; see 
> HDFS-15481.
> # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.






[jira] [Commented] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete

2020-07-20 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161036#comment-17161036
 ] 

Shashikant Banerjee commented on HDFS-15313:


[^HDFS-15313.branch-2.10.001.patch] -> Patch for 2.10 branch

> Ensure inodes in active filesystem are not deleted during snapshot delete
> 
>
> Key: HDFS-15313
> URL: https://issues.apache.org/jira/browse/HDFS-15313
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15313-branch-3.1.001.patch, HDFS-15313.000.patch, 
> HDFS-15313.001.patch, HDFS-15313.branch-2.10.001.patch, 
> HDFS-15313.branch-2.10.patch, HDFS-15313.branch-2.8.patch
>
>
> After HDFS-13101, it was observed in one of our customer deployments that 
> delete snapshot ends up cleaning up inodes from the active fs that are 
> referred to by only one snapshot, as the isLastReference() check for the 
> parent dir introduced in HDFS-13101 may return true in certain cases. The aim 
> of this Jira is to add a check to ensure that inodes referred to in the 
> active fs do not get deleted when a snapshot is deleted.






[jira] [Updated] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete

2020-07-20 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15313:
---
Attachment: HDFS-15313.branch-2.10.001.patch

> Ensure inodes in active filesystem are not deleted during snapshot delete
> 
>
> Key: HDFS-15313
> URL: https://issues.apache.org/jira/browse/HDFS-15313
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15313-branch-3.1.001.patch, HDFS-15313.000.patch, 
> HDFS-15313.001.patch, HDFS-15313.branch-2.10.001.patch, 
> HDFS-15313.branch-2.10.patch, HDFS-15313.branch-2.8.patch
>
>
> After HDFS-13101, it was observed in one of our customer deployments that 
> delete snapshot ends up cleaning up inodes from the active fs that are 
> referred to by only one snapshot, as the isLastReference() check for the 
> parent dir introduced in HDFS-13101 may return true in certain cases. The aim 
> of this Jira is to add a check to ensure that inodes referred to in the 
> active fs do not get deleted when a snapshot is deleted.






[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160995#comment-17160995
 ] 

Hadoop QA commented on HDFS-15098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 13s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 25s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 2 extant findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 10s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 4 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 17m 13s{color} | {color:red} root generated 28 new + 134 unchanged - 28 fixed = 162 total (was 162) {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green} 17m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 13s{color} | {color:red} root generated 4 new + 1948 unchanged - 0 fixed = 1952 total (was 1948) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  2m 51s{color} | {color:orange} root: The patch generated 3 new + 227 unchanged - 8 fixed = 230 total (was 235) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 49s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} 

[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160962#comment-17160962
 ] 

Hadoop QA commented on HDFS-15098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 23m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 58s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 38s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 2 extant findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 4 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m 57s{color} | {color:red} root generated 18 new + 144 unchanged - 18 fixed = 162 total (was 162) {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green} 21m 57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 21m 57s{color} | {color:red} root generated 3 new + 1944 unchanged - 0 fixed = 1947 total (was 1944) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  3m 37s{color} | {color:orange} root: The patch generated 3 new + 229 unchanged - 8 fixed = 232 total (was 237) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit 
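
For context, SM4 is the Chinese national block-cipher standard (GB/T 32907-2016), proposed here as an HDFS encryption algorithm alongside the existing AES/CTR codec. The sketch below is not the patch's code; it only illustrates the primitive with an SM4/CTR round trip, assuming BouncyCastle (bcprov) is on the classpath. The transformation string, provider name, and class name are assumptions about such a setup.

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.security.Security;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

/** Illustrative SM4/CTR round trip via BouncyCastle; not HDFS-15098 code. */
public class Sm4CtrRoundTrip {
  public static void main(String[] args) throws Exception {
    Security.addProvider(new BouncyCastleProvider());

    SecureRandom rng = new SecureRandom();
    byte[] key = new byte[16];  // SM4 uses a 128-bit key
    byte[] iv  = new byte[16];  // 128-bit counter block for CTR mode
    rng.nextBytes(key);
    rng.nextBytes(iv);

    // Encrypt: CTR turns the block cipher into a stream cipher, so no padding.
    Cipher enc = Cipher.getInstance("SM4/CTR/NoPadding", "BC");
    enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "SM4"),
        new IvParameterSpec(iv));
    byte[] ct = enc.doFinal("hello hdfs".getBytes(StandardCharsets.UTF_8));

    // Decrypt with the same key and counter to recover the plaintext.
    Cipher dec = Cipher.getInstance("SM4/CTR/NoPadding", "BC");
    dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "SM4"),
        new IvParameterSpec(iv));
    System.out.println(new String(dec.doFinal(ct), StandardCharsets.UTF_8));
  }
}
{code}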

[jira] [Commented] (HDFS-15463) Add a tool to validate FsImage

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160940#comment-17160940
 ] 

Hudson commented on HDFS-15463:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18451 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18451/])
HDFS-15463. Add a tool to validate FsImage. (#2140) (github: rev 2cec50cf1657672e14541717b8222cecc3ad5dd0)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogInputStream.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsImageValidation.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsImageValidation.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReferenceValidation.java


> Add a tool to validate FsImage
> --
>
> Key: HDFS-15463
> URL: https://issues.apache.org/jira/browse/HDFS-15463
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: FsImageValidation20200709.patch, 
> FsImageValidation20200712.patch, FsImageValidation20200714.patch, 
> FsImageValidation20200715.patch, FsImageValidation20200715b.patch, 
> FsImageValidation20200715c.patch, FsImageValidation20200717b.patch, 
> FsImageValidation20200718.patch, HDFS-15463.000.patch
>
>
> Due to some snapshot-related bugs, an fsimage may become corrupted. Using a corrupted fsimage may further result in data loss.
> In some cases, we found that reference counts are incorrect in some corrupted FsImages. One of the goals of the validation tool is to check reference counts.
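
For illustration, here is a minimal sketch of what such a reference-count validation pass does: walk the tree, tally how many nodes actually point at each shared target, and compare the tally with the count stored in the image. The Node and Target names are hypothetical stand-ins, not the code in FsImageValidation.java or INodeReferenceValidation.java.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative only: recount references and compare with stored counts. */
class RefCountCheck {

  /** Hypothetical stand-in for an inode that may reference a shared target. */
  static class Node {
    final Target target;        // non-null if this node is a reference
    final List<Node> children;  // null for leaves
    Node(Target target, List<Node> children) {
      this.target = target;
      this.children = children;
    }
  }

  /** Hypothetical stand-in for a shared, reference-counted object. */
  static class Target {
    final String id;
    final int storedCount;      // the count persisted in the image
    Target(String id, int storedCount) {
      this.id = id;
      this.storedCount = storedCount;
    }
  }

  /** Walks the tree, recounts references, and reports mismatches. */
  static boolean validate(Node root) {
    Map<Target, Integer> actual = new HashMap<>();
    count(root, actual);
    // Note: a full check would also need the set of all targets, to catch
    // ones whose stored count is nonzero but that no node references at all.
    boolean ok = true;
    for (Map.Entry<Target, Integer> e : actual.entrySet()) {
      if (e.getValue() != e.getKey().storedCount) {
        System.out.printf("CORRUPT: %s stored=%d actual=%d%n",
            e.getKey().id, e.getKey().storedCount, e.getValue());
        ok = false;
      }
    }
    return ok;
  }

  private static void count(Node node, Map<Target, Integer> actual) {
    if (node.target != null) {
      actual.merge(node.target, 1, Integer::sum);  // one more actual referrer
    }
    if (node.children != null) {
      for (Node child : node.children) {
        count(child, actual);
      }
    }
  }
}
{code}

A stored count higher than the recount means the image claims referrers that do not exist; a lower one risks freeing an object still in use. Either mismatch flags corruption before the image is trusted by an active namenode.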



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15463) Add a tool to validate FsImage

2020-07-20 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDFS-15463.

Fix Version/s: 3.4.0
   Resolution: Fixed

> Add a tool to validate FsImage
> --
>
> Key: HDFS-15463
> URL: https://issues.apache.org/jira/browse/HDFS-15463
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: FsImageValidation20200709.patch, 
> FsImageValidation20200712.patch, FsImageValidation20200714.patch, 
> FsImageValidation20200715.patch, FsImageValidation20200715b.patch, 
> FsImageValidation20200715c.patch, FsImageValidation20200717b.patch, 
> FsImageValidation20200718.patch, HDFS-15463.000.patch
>
>
> Due to some snapshot-related bugs, an fsimage may become corrupted. Using a corrupted fsimage may further result in data loss.
> In some cases, we found that reference counts are incorrect in some corrupted FsImages. One of the goals of the validation tool is to check reference counts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org