[jira] [Commented] (HDFS-15796) ConcurrentModificationException error happens on NameNode occasionally

2021-07-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377367#comment-17377367
 ] 

Hudson commented on HDFS-15796:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
4s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 30s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 21m 
52s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  2m 
59s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 

[jira] [Commented] (HDFS-16101) Remove unused variable and IOException in ProvidedStorageMap

2021-07-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17374991#comment-17374991
 ] 

Hudson commented on HDFS-16101:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
46s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 27s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 25m 
15s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m 
25s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 58s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |

[jira] [Commented] (HDFS-16101) Remove unused variable and IOException in ProvidedStorageMap

2021-07-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17374521#comment-17374521
 ] 

Hudson commented on HDFS-16101:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} |  | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  7s{color} 
|  | {color:red} Unprocessed flag(s): --mvn-custom-repos-dir {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/665/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-16101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13027419/HDFS-16101.001.patch |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/665/console |
| versions | git=2.25.1 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Remove unused variable and IOException in ProvidedStorageMap
> ---
>
> Key: HDFS-16101
> URL: https://issues.apache.org/jira/browse/HDFS-16101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16101.001.patch
>
>
> Remove unused variable and IOException in ProvidedStorageMap



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16101) Remove unused variable and IOException in ProvidedStorageMap

2021-07-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17374513#comment-17374513
 ] 

Hudson commented on HDFS-16101:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} |  | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  7s{color} 
|  | {color:red} Unprocessed flag(s): --brief-report-file 
--spotbugs-strict-precheck --html-report-file --mvn-custom-repos --shelldocs 
--mvn-javadoc-goals --mvn-custom-repos-dir {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/664/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-16101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13027419/HDFS-16101.001.patch |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/664/console |
| versions | git=2.25.1 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Remove unused variable and IOException in ProvidedStorageMap
> ---
>
> Key: HDFS-16101
> URL: https://issues.apache.org/jira/browse/HDFS-16101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16101.001.patch
>
>
> Remove unused variable and IOException in ProvidedStorageMap



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16101) Remove unused variable and IOException in ProvidedStorageMap

2021-07-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17374511#comment-17374511
 ] 

Hudson commented on HDFS-16101:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} |  | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m  7s{color} 
|  | {color:red} Unprocessed flag(s): --brief-report-file 
--spotbugs-strict-precheck --html-report-file --mvn-custom-repos --shelldocs 
--mvn-javadoc-goals --mvn-custom-repos-dir {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/662/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-16101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13027419/HDFS-16101.001.patch |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/662/console |
| versions | git=2.25.1 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Remove unused variable and IOException in ProvidedStorageMap
> ---
>
> Key: HDFS-16101
> URL: https://issues.apache.org/jira/browse/HDFS-16101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16101.001.patch
>
>
> Remove unused variable and IOException in ProvidedStorageMap



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15503) File and directory permissions are not able to be modified from WebUI

2020-08-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170327#comment-17170327
 ] 

Hudson commented on HDFS-15503:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18490 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18490/])
HDFS-15503. File and directory permissions are not able to be modified 
(hemanthboyina: rev 82f3ffcd64d25cf3a2f5e280e07140994e0ba8cb)
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/explorer.js
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js


> File and directory permissions are not able to be modified from WebUI
> -
>
> Key: HDFS-15503
> URL: https://issues.apache.org/jira/browse/HDFS-15503
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HDFS-15503.001.patch, HDFS-15503.002.patch, 
> after-HDFS-15503.png, before-HDFS-15503.png
>
>
> After upgrading Bootstrap from 3.3.7 to 3.4.1, the Bootstrap popover content 
> is not shown in the Browse File System Permission column.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15498) Show snapshots deletion status in snapList cmd

2020-08-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169581#comment-17169581
 ] 

Hudson commented on HDFS-15498:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18487 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18487/])
HDFS-15498. Show snapshots deletion status in snapList cmd. (#2181) (github: 
rev d8a2df25ad2a145712910259f4d4079c302c2aef)
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestListSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotStatus.java


> Show snapshots deletion status in snapList cmd
> --
>
> Key: HDFS-15498
> URL: https://issues.apache.org/jira/browse/HDFS-15498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15498.000.patch
>
>
> HDFS-15488 adds a command to list all snapshots for a given snapshottable 
> directory. With the ordered deletion config set, a snapshot can be merely 
> marked as deleted. This Jira aims to add the deletion status to the command 
> output.
>  
> SAMPLE OUTPUT:
> {noformat}
> sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshottableDir
> drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 2 65536 /user
> sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshot /user
> drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 1 ACTIVE 
> /user/.snapshot/s1
> drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:51 0 DELETED 
> /user/.snapshot/s20200727-115156.407{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15229) Truncate info should be logged at INFO level

2020-08-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169472#comment-17169472
 ] 

Hudson commented on HDFS-15229:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18486 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18486/])
HDFS-15229. Truncate info should be logged at INFO level. Contributed by 
(hemanthboyina: rev 528a799a784c95cc702e3547a8a294f00743533b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


>  Truncate info should be logged at INFO level
> -
>
> Key: HDFS-15229
> URL: https://issues.apache.org/jira/browse/HDFS-15229
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15229.001.patch, HDFS-15229.002.patch
>
>
> In the NN log and audit log, we can't find the truncate size.
> Logs related to truncate are captured at DEBUG level, and it is important 
> that the NN log the newLength of the truncate.
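
A minimal sketch of what is being asked for, assuming slf4j (which the NameNode's logging uses) and illustrative names rather than the actual FSNamesystem code:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class TruncateLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(TruncateLogging.class);

  // Log the truncate at INFO, including newLength, so the truncate size
  // is visible in the NN log without enabling DEBUG.
  static void logTruncate(String src, long newLength, String clientName) {
    LOG.info("truncate: src={}, newLength={}, clientName={}",
        src, newLength, clientName);
  }
}
{code}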



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14950) missing libhdfspp libs in dist-package

2020-07-31 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168507#comment-17168507
 ] 

Hudson commented on HDFS-14950:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18483 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18483/])
HDFS-14950. fix missing libhdfspp lib in dist-package (#1947) (github: rev 
e756fe3590906bfd8ffe4ab5cc8b9b24a9b2b4b2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt


> missing libhdfspp libs in dist-package
> --
>
> Key: HDFS-14950
> URL: https://issues.apache.org/jira/browse/HDFS-14950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, libhdfs++
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: fix_libhdfspp_lib.patch
>
>
> A Hadoop build like "mvn package -Pnative" copies the HDFS native libs to 
> target/lib/native. For now it only copies the C client 
> libraries (libhdfs.\{a,so}); the C++ based HDFS client libraries 
> (libhdfspp.\{a,so}) are missing there.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15481) Ordered snapshot deletion: garbage collect deleted snapshots

2020-07-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168118#comment-17168118
 ] 

Hudson commented on HDFS-15481:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18482 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18482/])
HDFS-15481. Ordered snapshot deletion: garbage collect deleted snapshots 
(github: rev 05b3337a4605dcb6904cb3fe2a58e4dc424ef015)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOrderedSnapshotDeletion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDeletionGc.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOrderedSnapshotDeletionGc.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Ordered snapshot deletion: garbage collect deleted snapshots
> 
>
> Key: HDFS-15481
> URL: https://issues.apache.org/jira/browse/HDFS-15481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: h15481_20200723.patch, h15481_20200723b.patch
>
>
> When the earliest snapshot is actually deleted, if the subsequent snapshots 
> are already marked as deleted, those snapshots can also be actually 
> removed from the file system.  In this JIRA, we implement a mechanism to 
> garbage collect these snapshots.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15488) Add a command to list all snapshots for a snapshottable root with snapshot IDs

2020-07-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167343#comment-17167343
 ] 

Hudson commented on HDFS-15488:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18478 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18478/])
HDFS-15488. Add a command to list all snapshots for a snaphottable root 
(github: rev 68287371ccc66da80e6a3d7981ae6c7ce7238920)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/snapshot/LsSnapshot.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsSnapshots.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestListSnapshot.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/protocol/TestReadOnly.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto


> Add a command to list all snapshots for a snapshottable root with snapshot IDs
> -
>
> Key: HDFS-15488
> URL: https://issues.apache.org/jira/browse/HDFS-15488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15488.000.patch
>
>
> Currently, the way to list snapshots is to do an ls on the 
> <snapshottable root>/.snapshot directory. Since creation time is not 
> recorded, there is no way to actually figure out the chronological order of 
> snapshots. The idea here is to add a command to list snapshots for a 
> snapshottable directory along with snapshot IDs, which grow monotonically as 
> snapshots are created in the system. With the snapshot ID, it will be helpful 
> to figure out the chronology of snapshots in the system.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15465) Support WebHDFS accesses to the data stored in secure Datanode through insecure Namenode

2020-07-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165858#comment-17165858
 ] 

Hudson commented on HDFS-15465:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18475 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18475/])
HDFS-15465. Support WebHDFS accesses to the data stored in secure (github: rev 
026dce5334bca3b0aa9b05a6debe72db1e01842e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestDataNodeUGIProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/DataNodeUGIProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java


> Support WebHDFS accesses to the data stored in secure Datanode through 
> insecure Namenode
> 
>
> Key: HDFS-15465
> URL: https://issues.apache.org/jira/browse/HDFS-15465
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: federation, webhdfs
>Reporter: Toshihiko Uchida
>Assignee: Toshihiko Uchida
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: webhdfs-federation.pdf
>
>
> We're federating a secure HDFS cluster with an insecure cluster.
> Using HDFS RPC, we can access the data managed by insecure Namenode and 
> stored in secure Datanode.
> However, it does not work for WebHDFS due to HadoopIllegalArgumentException.
> {code}
> $ curl -i "http://:/webhdfs/v1/?op=OPEN"
> HTTP/1.1 307 TEMPORARY_REDIRECT
> (omitted)
> Location: 
> http://:/webhdfs/v1/?op=OPEN==0
> $ curl -i 
> "http://:/webhdfs/v1/?op=OPEN==0"
> HTTP/1.1 400 Bad Request
> (omitted)
> {"RemoteException":{"exception":"HadoopIllegalArgumentException","javaClassName":"org.apache.hadoop.HadoopIllegalArgumentException","message":"Invalid
>  argument, newValue is null"}}
> {code}
> This is because secure Datanode expects a delegation token, but insecure 
> Namenode does not return it to a client.
> - org.apache.hadoop.security.token.Token.decodeWritable
> {code}
>   private static void decodeWritable(Writable obj,
>  String newValue) throws IOException {
> if (newValue == null) {
>   throw new HadoopIllegalArgumentException(
>   "Invalid argument, newValue is null");
> }
> {code}
> This issue proposes to support such access for WebHDFS as well.
> The attached PDF file [^webhdfs-federation.pdf] depicts our current 
> architecture and proposal.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163139#comment-17163139
 ] 

Hudson commented on HDFS-15480:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18466 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18466/])
HDFS-15480. Ordered snapshot deletion: record snapshot deletion in XAttr 
(github: rev 2d12496643b1b7cfa4eb270ec9b2fcdb78a58798)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java


> Ordered snapshot deletion: record snapshot deletion in XAttr
> 
>
> Key: HDFS-15480
> URL: https://issues.apache.org/jira/browse/HDFS-15480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 1.3.0
>
> Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch, 
> HDFS-15480.002.patch
>
>
> In this JIRA, the behavior of deleting the non-earliest snapshots will be 
> changed to marking them as deleted in an XAttr rather than actually deleting 
> them.  Note that
> # The marked-for-deletion snapshots will be garbage collected later on; see 
> HDFS-15481.
> # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.
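
For context, marking a path via an XAttr looks roughly like the following client-level sketch. The actual patch records deletion in a reserved system XAttr inside the NameNode, so the XAttr name and path here are assumptions for illustration only:

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MarkDeletedSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical snapshottable root; "user.snapshot.deleted" is an
    // illustrative XAttr name, not the reserved system XAttr the patch
    // actually uses inside the NameNode.
    fs.setXAttr(new Path("/user"), "user.snapshot.deleted",
        "s20200727".getBytes(StandardCharsets.UTF_8));
  }
}
{code}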



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15478) When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.

2020-07-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162532#comment-17162532
 ] 

Hudson commented on HDFS-15478:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18463 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18463/])
HDFS-15478: When Empty mount points, we are assigning fallback link to (github: 
rev ac9a07b51aefd0fd3b4602adc844ab0f172835e3)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> When Empty mount points, we are assigning fallback link to self. But it 
> should not use full URI for target fs.
> --
>
> Key: HDFS-15478
> URL: https://issues.apache.org/jira/browse/HDFS-15478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> On detecting empty mount tables, we will automatically assign fallback with 
> the same initialized uri fs. Currently we are using the given uri for 
> creating the target fs. 
> When creating the target fs, we use a chrooted fs which sets the path from 
> the uri as the base directory.  So, this can make the path wrong when the fs 
> is initialized with a path.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15246) ArrayIndexOutOfBoundsException in BlockManager#createLocatedBlock

2020-07-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162155#comment-17162155
 ] 

Hudson commented on HDFS-15246:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18460 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18460/])
HDFS-15246. ArrayIndexOfboundsException in BlockManager (inigoiri: rev 
8b7695bb2628574b4450bac19c12b29db9ee0628)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java


> ArrayIndexOutOfBoundsException in BlockManager#createLocatedBlock
> --
>
> Key: HDFS-15246
> URL: https://issues.apache.org/jira/browse/HDFS-15246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15246-testrepro.patch, HDFS-15246.001.patch, 
> HDFS-15246.002.patch, HDFS-15246.003.patch
>
>
> java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
>  
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1362)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1501)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2047)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:770)
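
The trace shows an index past the end of a length-1 locations array. The committed fix (in FSDirRenameOp, per the file list above) addresses the root cause; the general defensive pattern is sketched below with assumed names, purely for illustration:

{code:java}
// Illustrative guard only, not the actual BlockManager code.
static <T> T locationAt(T[] locations, int index) {
  if (index < 0 || index >= locations.length) {
    throw new IllegalStateException("index " + index
        + " out of range for " + locations.length + " locations");
  }
  return locations[index];
}
{code}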



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15479) Ordered snapshot deletion: make it a configurable feature

2020-07-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161752#comment-17161752
 ] 

Hudson commented on HDFS-15479:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18459 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18459/])
HDFS-15479. Ordered snapshot deletion: make it a configurable feature (github: 
rev d57462f2daee5f057e32219d4123a3f75506d6d4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


> Ordered snapshot deletion: make it a configurable feature
> -
>
> Key: HDFS-15479
> URL: https://issues.apache.org/jira/browse/HDFS-15479
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: h15479_20200719.patch
>
>
> Ordered snapshot deletion is a configurable feature.  In this JIRA, a conf is 
> added.
> When the feature is enabled, only the earliest snapshot can be deleted.  For 
> deleting the non-earliest snapshots, the behavior is temporarily changed to 
> throwing an exception in this JIRA.  In HDFS-15480, the behavior of deleting 
> the non-earliest snapshots will be changed to marking them as deleted.
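
A sketch of how such a feature gate is typically read; the key name here is an assumption inferred from the JIRA summary, and DFSConfigKeys (edited by this patch) holds the authoritative constant:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class OrderedDeletionGate {
  // Assumed key name and default; see DFSConfigKeys in the patch.
  static boolean isOrderedSnapshotDeletionEnabled(Configuration conf) {
    return conf.getBoolean("dfs.namenode.snapshot.deletion.ordered", false);
  }
}
{code}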



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15470) Added more unit tests to validate rename behaviour across snapshots

2020-07-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161743#comment-17161743
 ] 

Hudson commented on HDFS-15470:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18458 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18458/])
HDFS-15470. Added more unit tests to validate rename behaviour across 
(shashikant: rev d9441f95c362214e249b969c9ccc3fb4e8c1709a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java


> Added more unit tests to validate rename behaviour across snapshots
> ---
>
> Key: HDFS-15470
> URL: https://issues.apache.org/jira/browse/HDFS-15470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.0.4
>
> Attachments: HDFS-15470.000.patch, HDFS-15470.001.patch, 
> HDFS-15470.002.patch
>
>
> HDFS-15313 fixes a critical issue that could cause deletion of data in the 
> active fs after a sequence of snapshot deletes. The idea here is to add more 
> tests to verify the behaviour.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15404) ShellCommandFencer should expose info about source

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161504#comment-17161504
 ] 

Hudson commented on HDFS-15404:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18457 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18457/])
HDFS-15404. ShellCommandFencer should expose info about source. (vagarychen: 
rev 3833c616e087518196bcb77ac2479c66a0b188d8)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestShellCommandFencer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestFailoverController.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ShellCommandFencer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FailoverController.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/NodeFencer.java


> ShellCommandFencer should expose info about source
> --
>
> Key: HDFS-15404
> URL: https://issues.apache.org/jira/browse/HDFS-15404
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15404.001.patch, HDFS-15404.002.patch, 
> HDFS-15404.003.patch, HDFS-15404.004.patch, HDFS-15404.005.patch, 
> HDFS-15404.006.patch
>
>
> Currently the HA fencing logic in ShellCommandFencer exposes environment 
> variables about only the fencing target (i.e. the $target_* variables as 
> mentioned in this [document 
> page|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html]).
>  
> But here only the fencing target variables are exposed. Sometimes it 
> is useful to expose info about the fencing source node as well. One use case 
> would allow the source and target nodes to identify themselves separately and 
> run different commands/scripts.
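
The mechanism at work is that the shell fencer exports metadata to the fencing script through the child process environment. A hedged sketch of that mechanism follows; the source_* key mirrors the documented target_* variables but is an assumption, not necessarily the name the patch introduces:

{code:java}
import java.io.IOException;
import java.util.Map;

public class FencerEnvSketch {
  public static Process runFencingScript(String script,
      String targetHost, String sourceHost) throws IOException {
    ProcessBuilder pb = new ProcessBuilder("bash", "-c", script);
    Map<String, String> env = pb.environment();
    env.put("target_host", targetHost); // documented $target_* variable
    env.put("source_host", sourceHost); // assumed new $source_* variable
    return pb.start();
  }
}
{code}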



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15381) Fix typo corrputBlocksFiles to corruptBlocksFiles

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161368#comment-17161368
 ] 

Hudson commented on HDFS-15381:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18454 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18454/])
HDFS-15381. Fix typos corrputBlocksFiles to corruptBlocksFiles. (ayushsaxena: 
rev 6cbd8854ee5f2c33496ac7ae397e366cf136dd07)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java


> Fix typo corrputBlocksFiles to corruptBlocksFiles
> -
>
> Key: HDFS-15381
> URL: https://issues.apache.org/jira/browse/HDFS-15381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-15381.001.patch
>
>
> Fix typos corrputBlocksFiles to corruptBlocksFiles



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15463) Add a tool to validate FsImage

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160940#comment-17160940
 ] 

Hudson commented on HDFS-15463:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18451 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18451/])
HDFS-15463. Add a tool to validate FsImage. (#2140) (github: rev 
2cec50cf1657672e14541717b8222cecc3ad5dd0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsImageValidation.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsImageValidation.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReferenceValidation.java


> Add a tool to validate FsImage
> --
>
> Key: HDFS-15463
> URL: https://issues.apache.org/jira/browse/HDFS-15463
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: FsImageValidation20200709.patch, 
> FsImageValidation20200712.patch, FsImageValidation20200714.patch, 
> FsImageValidation20200715.patch, FsImageValidation20200715b.patch, 
> FsImageValidation20200715c.patch, FsImageValidation20200717b.patch, 
> FsImageValidation20200718.patch, HDFS-15463.000.patch
>
>
> Due to some snapshot-related bugs, an fsimage may become corrupted.  Using a 
> corrupted fsimage may further result in data loss.
> In some cases, we found that reference counts are incorrect in some corrupted 
> fsimages.  One of the goals of the validation tool is to check reference 
> counts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15476) Make AsyncStream class' executor_ member private

2020-07-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160782#comment-17160782
 ] 

Hudson commented on HDFS-15476:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18450 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18450/])
HDFS-15476 Make AsyncStream executor private (#2151) (github: rev 
4101b0c0edab62a2f9fdbeb3071dc602fac45961)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/async_stream.h


> Make AsyncStream class' executor_ member private
> 
>
> Key: HDFS-15476
> URL: https://issues.apache.org/jira/browse/HDFS-15476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build, libhdfs++
>Reporter: Suraj Naik
>Assignee: Suraj Naik
>Priority: Minor
> Fix For: 3.4.0
>
>
> As part of [HDFS-15385|https://issues.apache.org/jira/browse/HDFS-15385] the 
> boost library was upgraded.
> The AsyncStream class has a getter function which returns the executor. 
> Keeping the executor member public makes the getter function's role 
> pointless. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15198) RBF: Add test for MountTableRefresherService failed to refresh other router MountTableEntries in secure mode

2020-07-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160406#comment-17160406
 ] 

Hudson commented on HDFS-15198:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18448 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18448/])
HDFS-15198. RBF: Add test for MountTableRefresherService failed to 
(ayushsaxena: rev 8a9a674ef10a951c073ef17ba6db1ff07cff52cd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefreshSecure.java


> RBF: Add test for MountTableRefresherService failed to refresh other router 
> MountTableEntries in secure mode
> 
>
> Key: HDFS-15198
> URL: https://issues.apache.org/jira/browse/HDFS-15198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15198.001.patch, HDFS-15198.002.patch, 
> HDFS-15198.003.patch, HDFS-15198.004.patch, HDFS-15198.005.patch, 
> HDFS-15198.006.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In HDFS-13443, the mount table cache is updated immediately: the specified 
> router updates its own mount table cache immediately, then updates the other 
> routers' caches via the refreshMountTableEntries RPC. But in secure mode, the 
> other routers' caches can't be refreshed. In the specified router's log, the 
> error looks like this
> {code}
> 2020-02-27 22:59:07,212 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2020-02-27 22:59:07,213 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread: 
> Failed to refresh mount table entries cache at router $host:8111
> java.io.IOException: DestHost:destPort host:8111 , LocalHost:localPort 
> $host/$ip:0. Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.refreshMountTableEntries(RouterAdminProtocolTranslatorPB.java:288)
> at 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread.run(MountTableRefresherThread.java:65)
> 2020-02-27 22:59:07,214 INFO 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver: Added 
> new mount point /test_11 to resolver
> {code}
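
The failure pattern above (no Kerberos TGT on the refresher thread) is commonly addressed in Hadoop by issuing the RPC as the service's login user. A hedged sketch of that approach, not necessarily this patch's exact fix, using the real SecurityUtil.doAsLoginUser helper and a stand-in interface for the RouterAdmin call:

{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.SecurityUtil;

public class SecureRefreshSketch {
  interface RefreshCall {
    boolean refreshMountTableEntries() throws IOException;
  }

  // Runs the refresh RPC with the login user's (i.e. the router's
  // Kerberos) credentials instead of the bare thread context.
  static boolean refreshAsLoginUser(RefreshCall client) throws IOException {
    return SecurityUtil.doAsLoginUser(
        (PrivilegedExceptionAction<Boolean>) client::refreshMountTableEntries);
  }
}
{code}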



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15319) Fix INode#isInLatestSnapshot() API

2020-07-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157899#comment-17157899
 ] 

Hudson commented on HDFS-15319:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18438 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18438/])
HDFS-15319. Fix INode#isInLatestSnapshot() API. Contributed by (shashikant: rev 
85d4718ed737d3bfadf815765336465a7a98bc47)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java


> Fix INode#isInLatestSnapshot() API
> --
>
> Key: HDFS-15319
> URL: https://issues.apache.org/jira/browse/HDFS-15319
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.0.4
>
> Attachments: HDFS-15319.000.patch, HDFS-15319.001.patch
>
>
> isInLatestSnapshot() may return true in cases where an inode's ancestors 
> might not be in the latest snapshot.
> {code:java}
> // if parent is a reference node, parent must be a renamed node. We can 
> // stop the check at the reference node.
> if (parent != null && parent.isReference()) {
>   // TODO: Is it a bug to return true?
>   //   Some ancestor nodes may not be in the latest snapshot.
>   return true;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15385) Upgrade boost library to 1.72

2020-07-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157882#comment-17157882
 ] 

Hudson commented on HDFS-15385:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18436 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18436/])
HDFS-15385 Upgrade boost library to 1.72 (#2051) (github: rev 
cce5a6f6094cefd2e23b73d202cc173cf4fc2cc5)
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/impl/win_iocp_handle_service.ipp
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/logging.cc
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/ssl/old/stream_service.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/type_traits.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/buffered_stream_storage.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/dev_poll_reactor.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/ssl/detail/buffered_handshake_op.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/posix_fd_set_adapter.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/handler_type_requirements.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/keyword_tss_ptr.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/buffered_read_stream_fwd.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/resolver_service.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/winrt_socket_send_op.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/basic_waitable_timer.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/generic/detail/endpoint.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/datagram_socket_service.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/io_service.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/posix/stream_descriptor_service.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/win_iocp_socket_recvmsg_op.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/completion_handler.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/read_until.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/socket_select_interrupter.hpp
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/connection/datanodeconnection.cc
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/ip/impl/address_v4.ipp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/null_socket_service.hpp
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/retry_policy.cc
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/ip/v6_only.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/seq_packet_socket_service.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/generic/stream_protocol.hpp
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2/include/asio/detail/timer_queue.hpp
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/namenode_tracker.cc
* (delete) 

[jira] [Commented] (HDFS-15371) Nonstandard characters exist in NameNode.java

2020-07-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157396#comment-17157396
 ] 

Hudson commented on HDFS-15371:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18433 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18433/])
HDFS-15371. Nonstandard characters exist in NameNode.java (#2032) (github: rev 
bdce75d737bc7d207c777bb0a9e5fc4c9a78cc0a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java


> Nonstandard characters exist in NameNode.java
> -
>
> Key: HDFS-15371
> URL: https://issues.apache.org/jira/browse/HDFS-15371
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: jianghua zhu
>Assignee: Zhao Yi Ming
>Priority: Minor
> Fix For: 3.4.0
>
>
> In NameNode.java, DFS_HA_ZKFC_PORT_KEY has non-standard characters after it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13934) Multipart uploaders to be created through API call to FileSystem/FileContext, not service loader

2020-07-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156776#comment-17156776
 ] 

Hudson commented on HDFS-13934:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18427 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18427/])
HDFS-13934. Multipart uploaders to be created through (stevel: rev 
b9fa5e0182c19adc4ff4cd2d9265a36ce9913178)
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploaderFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
* (delete) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractMultipartUploader.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/S3AMultipartUploader.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/statistics/S3AMultipartUploaderStatistics.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/S3AMultipartUploaderBuilder.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperations.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploaderFactory
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/MultipartUploaderBuilderImpl.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/InternalOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMultipartUploader.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonPathCapabilities.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureIOSupport.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploader
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/statistics/S3AMultipartUploaderStatisticsImpl.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/BulkOperationState.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/DfsPathCapabilities.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestS3AMultipartUploaderSupport.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMultipartUploaderTest.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploader.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestPartialDeleteFailures.java
* (delete) 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploaderFactory
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploaderBuilder.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AMultipartUploaderSupport.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/multipartuploader.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FileSystemMultipartUploader.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FileSystemMultipartUploaderBuilder.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AbstractMultipartUploader.java
* (delete) 

[jira] [Commented] (HDFS-14498) LeaseManager can loop forever on the file for which create has failed

2020-07-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156507#comment-17156507
 ] 

Hudson commented on HDFS-14498:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18426 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18426/])
HDFS-14498 LeaseManager can loop forever on the file for which create 
(hexiaoqiao: rev b97fea65e70bee4f5ea81c544396f8e9fa860ab0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java


> LeaseManager can loop forever on the file for which create has failed 
> --
>
> Key: HDFS-14498
> URL: https://issues.apache.org/jira/browse/HDFS-14498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.9.0
>Reporter: Sergey Shelukhin
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-14498.001.patch, HDFS-14498.002.patch
>
>
> The logs from the file creation are long gone due to the infinite lease 
> logging; however, the create presumably failed... the client that was trying 
> to write this file is definitely long dead.
> The version includes HDFS-4882.
> We get this log pattern repeating infinitely:
> {noformat}
> 2019-05-16 14:00:16,893 INFO 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1] has expired hard 
> limit
> 2019-05-16 14:00:16,893 INFO 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1], src=
> 2019-05-16 14:00:16,893 WARN 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.internalReleaseLease: 
> Failed to release lease for file . Committed blocks are waiting to be 
> minimally replicated. Try again later.
> 2019-05-16 14:00:16,893 WARN 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Cannot release the path 
>  in the lease [Lease.  Holder: DFSClient_NONMAPREDUCE_-20898906_61, 
> pending creates: 1]. It will be retried.
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* 
> NameSystem.internalReleaseLease: Failed to release lease for file . 
> Committed blocks are waiting to be minimally replicated. Try again later.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3357)
>   at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:573)
>   at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:509)
>   at java.lang.Thread.run(Thread.java:745)
> $  grep -c "Recovering.*DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 
> 1" hdfs_nn*
> hdfs_nn.log:1068035
> hdfs_nn.log.2019-05-16-14:1516179
> hdfs_nn.log.2019-05-16-15:1538350
> {noformat}
> Aside from an actual bug fix, it might make sense to make LeaseManager log 
> less, in case there are more bugs like this... (a throttling sketch follows 
> below).
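> A minimal sketch of that kind of log throttling (the field names and the
> one-minute window are assumptions for illustration, not the committed code):
> {code:java}
> private long lastLeaseWarnMs = 0;
> private static final long LEASE_WARN_INTERVAL_MS = 60_000;
>
> private void warnThrottled(String msg) {
>   long now = System.currentTimeMillis();
>   // Emit at most one WARN per interval instead of one per retry.
>   if (now - lastLeaseWarnMs >= LEASE_WARN_INTERVAL_MS) {
>     lastLeaseWarnMs = now;
>     LOG.warn(msg);
>   }
> }
> {code}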



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15447) RBF: Add top owners metrics for delegation tokens

2020-07-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156226#comment-17156226
 ] 

Hudson commented on HDFS-15447:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18425 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18425/])
HDFS-15447 RBF: Add top real owners metrics for delegation tokens (github: rev 
84b74b335c0251afa672643352c6b7ecf003e0fb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RouterMBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/ZKDelegationTokenSecretManagerImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java


> RBF: Add top owners metrics for delegation tokens
> -
>
> Key: HDFS-15447
> URL: https://issues.apache.org/jira/browse/HDFS-15447
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> Over time we have repeatedly seen token-bombarding behavior, either due to 
> mistakes or to a user issuing a huge amount of traffic. Having this metric 
> will make it much faster to figure out which user or service owns these 
> tokens and to stop the behavior (a sketch follows below).
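> A rough sketch of how such a metric could be computed (the field
> {{currentTokens}} and the top-N cutoff are assumptions for illustration):
> {code:java}
> // Count tokens per owner and keep the top-N owners by count.
> Map<String, Long> ownerCounts = new HashMap<>();
> for (AbstractDelegationTokenIdentifier id : currentTokens.keySet()) {
>   ownerCounts.merge(id.getOwner().toString(), 1L, Long::sum);
> }
> List<Map.Entry<String, Long>> topOwners = ownerCounts.entrySet().stream()
>     .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
>     .limit(10)  // top-10 as an example cutoff
>     .collect(Collectors.toList());
> {code}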



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15464) ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links

2020-07-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156219#comment-17156219
 ] 

Hudson commented on HDFS-15464:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18424 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18424/])
HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to 
(github: rev 3e700066394fb9f516e23537d8abb4661409cae1)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsConfig.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> ViewFsOverloadScheme should work when -fs option pointing to remote cluster 
> without mount links
> ---
>
> Key: HDFS-15464
> URL: https://issues.apache.org/jira/browse/HDFS-15464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> When users try to connect to a remote cluster from an environment where 
> ViewFSOverloadScheme is enabled, fs initialization expects at least one mount 
> link in order to succeed.
> Unfortunately, you might not have configured any mount links for that remote 
> cluster in your current environment; you would have configured mount points 
> only for your local clusters.
> In this case fs initialization fails because no mount points are configured 
> in the mount table named by that remote cluster URI's authority.
> One idea is that, when no mount links are configured, we should just treat 
> the target as the default cluster, which can be achieved by automatically 
> considering it the fallback option (see the sketch below).
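> Conceptually (names are assumptions, not the committed change):
> {code:java}
> // If the mount table for the target authority has no links, fall
> // back to treating the -fs URI itself as the default cluster.
> if (mountPoints.isEmpty()) {
>   return FileSystem.get(theUri, conf);  // theUri: the -fs URI
> }
> {code}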



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15462) Add fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml

2020-07-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154941#comment-17154941
 ] 

Hudson commented on HDFS-15462:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18423 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18423/])
HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to (github: rev 
0e694b20b9d59cc46882df506dcea386020b1e4d)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java


> Add fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml
> -
>
> Key: HDFS-15462
> URL: https://issues.apache.org/jira/browse/HDFS-15462
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: configuration, viewfs, viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.4.0
>
>
> HDFS-15394 added the existing impls in core-default.xml except ofs. Let's add 
> ofs to core-default here.
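> For illustration, the effect is roughly the following (the target impl class
> for ofs is an assumption here, based on the pattern of the other entries):
> {code:java}
> Configuration conf = new Configuration();
> // What the new core-default.xml entry wires up, expressed in code:
> conf.setIfUnset("fs.viewfs.overload.scheme.target.ofs.impl",
>     "org.apache.hadoop.fs.ozone.RootedOzoneFileSystem");
> {code}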



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15425) Review Logging of DFSClient

2020-07-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17152868#comment-17152868
 ] 

Hudson commented on HDFS-15425:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18417 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18417/])
HDFS-15425. Review Logging of DFSClient. Contributed by Hongbing Wang. 
(hexiaoqiao: rev 4f26454a7d1b560f959cdb2fb0641147a85642da)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java


> Review Logging of DFSClient
> ---
>
> Key: HDFS-15425
> URL: https://issues.apache.org/jira/browse/HDFS-15425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Hongbing Wang
>Assignee: Hongbing Wang
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HDFS-15425.001.patch, HDFS-15425.002.patch, 
> HDFS-15425.003.patch
>
>
> Review use of SLF4J for DFSClient.LOG. 
> Make the code more concise and readable. 
> Less is more!
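> A typical example of the kind of cleanup such a review produces
> (illustrative only, not a diff from the patch):
> {code:java}
> // Before: manual guard plus string concatenation.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Connecting to datanode " + datanode + " for block " + block);
> }
> // After: SLF4J parameterized logging formats lazily, so the guard
> // and the concatenation can both go away.
> LOG.debug("Connecting to datanode {} for block {}", datanode, block);
> {code}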



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15312) Apply umask when creating directory by WebHDFS

2020-07-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17152431#comment-17152431
 ] 

Hudson commented on HDFS-15312:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18415 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18415/])
HDFS-15312. Apply umask when creating directory by WebHDFS (#2096) (github: rev 
f77bbc2123e3b39117f42e2c9471eb83da98380e)
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/explorer.js
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js


> Apply umask when creating directory by WebHDFS
> --
>
> Key: HDFS-15312
> URL: https://issues.apache.org/jira/browse/HDFS-15312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
> Fix For: 3.4.0
>
>
> WebHDFS methods for creating files and directories always created them with 
> 755 permissions as the default for both files and directories.
> The configured *fs.permissions.umask-mode* was intentionally ignored.
> This Jira is to apply that setting in such scenarios (see the sketch below).
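> The intended arithmetic, as a sketch (example values, not code from the
> patch):
> {code:java}
> // A umask is applied by clearing its bits from the base mode.
> int base = 0777;              // base mode for a new directory
> int umask = 022;              // from fs.permissions.umask-mode
> int applied = base & ~umask;  // 0755, matching the old default
> {code}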



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15449) Optionally ignore port number in mount-table name when picking from initialized uri

2020-07-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17152422#comment-17152422
 ] 

Hudson commented on HDFS-15449:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18414 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18414/])
HDFS-15449. Optionally ignore port number in mount-table name when (github: rev 
dc0626b5f2f2ba0bd3919650ea231cedd424f77a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeHdfsFileSystemContract.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java


> Optionally ignore port number in mount-table name when picking from 
> initialized uri
> ---
>
> Key: HDFS-15449
> URL: https://issues.apache.org/jira/browse/HDFS-15449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> Currently the mount-table name is taken from the URI's authority part, which 
> may contain IP:port or HOST:port. Some users may configure it without a port 
> as well.
> ex: hdfs://ns1 or hdfs://ns1:8020
> It may be a good idea to use only the hostname/IP when users configure the 
> IP:port/HOST:port format, so that we have a unique mount-table name in both 
> cases.
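> A minimal sketch of the proposal (an assumption, not the committed code):
> {code:java}
> // Derive the mount-table name from the host only, so hdfs://ns1
> // and hdfs://ns1:8020 resolve to the same table.
> URI uri = URI.create("hdfs://ns1:8020");
> String mountTableName = uri.getHost();  // "ns1", port dropped
> {code}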



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15417) RBF: Get the datanode report from cache for federation WebHDFS operations

2020-07-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17152369#comment-17152369
 ] 

Hudson commented on HDFS-15417:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18413 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18413/])
HDFS-15417. RBF: Get the datanode report from cache for federation (github: rev 
e820baa6e6f7e850ba62cbf150d760bd0ea6d0e0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterWebHdfsMethods.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java


> RBF: Get the datanode report from cache for federation WebHDFS operations
> -
>
> Key: HDFS-15417
> URL: https://issues.apache.org/jira/browse/HDFS-15417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, rbf, webhdfs
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Major
>
> *Why*
>  For WebHDFS CREATE, OPEN, APPEND and GETFILECHECKSUM operations, router or 
> namenode needs to get the datanodes where the block is located, then redirect 
> the request to one of the datanodes.
> However, this chooseDatanode action is much slower in the router than in the 
> namenode, which directly affects the WebHDFS operations above.
> For namenode WebHDFS it normally takes tens of milliseconds, while the router 
> always takes more than 2 seconds.
> *How*
> Cache the datanode report in the router RPC server and actively refresh it at 
> a configured interval. Only fetch the datanode report when necessary in the 
> router; fetching it is a very expensive operation and is where all the time 
> is spent.
> The report is only needed when we want to exclude some datanodes or find a 
> random datanode for CREATE.
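> A minimal sketch of the caching idea (field and method names are
> assumptions; {{fetchDatanodeReportFromNamenodes}} is a hypothetical helper):
> {code:java}
> private volatile DatanodeInfo[] cachedReport = new DatanodeInfo[0];
>
> void startReportRefresher(long intervalMs) {
>   ScheduledExecutorService scheduler =
>       Executors.newSingleThreadScheduledExecutor();
>   // Refresh the expensive report on a fixed interval...
>   scheduler.scheduleWithFixedDelay(() -> {
>     try {
>       cachedReport = fetchDatanodeReportFromNamenodes();
>     } catch (Exception e) {
>       LOG.warn("Failed to refresh datanode report", e);
>     }
>   }, 0, intervalMs, TimeUnit.MILLISECONDS);
> }
>
> // ...and serve reads from the cached copy.
> DatanodeInfo[] getCachedDatanodeReport() {
>   return cachedReport;
> }
> {code}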



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15451) Restarting name node stuck in safe mode when using provided storage

2020-07-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17152124#comment-17152124
 ] 

Hudson commented on HDFS-15451:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18412 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18412/])
HDFS-15451. Do not discard non-initial block report for provided (github: rev 
834372f4040f1e7a00720da5c40407f9b1423b6d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> Restarting name node stuck in safe mode when using provided storage
> ---
>
> Key: HDFS-15451
> URL: https://issues.apache.org/jira/browse/HDFS-15451
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.1, 3.1.3
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>Priority: Major
> Fix For: 3.4.0
>
>
> When HDFS provided storage is used (dfs.namenode.provided.enabled=true), 
> restarting the name node sometimes leaves it stuck in safe mode.
> The problem is that the data node sends its block report to the name node 
> successfully, but the name node does not process the report properly, so HDFS 
> remains in safe mode due to missing blocks.
> Looking at name node log, this is the sequence of log for a specific data 
> node:
> {code}
> 2020-07-01 19:46:41,997 INFO blockmanagement.BlockReportLeaseManager: 
> Registered DN af19d9e0-7b9b-45e0-9aa6-b2f404098084 (10.244.6.131:9866).
> 2020-07-01 19:46:42,012 DEBUG blockmanagement.BlockReportLeaseManager: 
> Created a new BR lease 0x476aaae689ebbc01 for DN 
> af19d9e0-7b9b-45e0-9aa6-b2f404098084.  numPending = 4
> 2020-07-01 19:46:42,340 INFO BlockStateChange: BLOCK* processReport 
> 0xcc610f42d0218cd9: discarded non-initial block report from 
> DatanodeRegistration(10.244.6.131:9866, 
> datanodeUuid=af19d9e0-7b9b-45e0-9aa6-b2f404098084, infoPort=0, 
> infoSecurePort=9865, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-f49d3421-e04f-40b9-89ef-cf4fee73ad6a;nsid=497894240;c=1572548424451)
>  because namenode still in startup phase
> 2020-07-01 19:46:42,648 WARN blockmanagement.BlockReportLeaseManager: BR 
> lease 0x476aaae689ebbc01 is not valid for DN 
> af19d9e0-7b9b-45e0-9aa6-b2f404098084, because the DN is not in the pending 
> set.
> {code}
> The root cause is that when BlockManager processes a report, it skips 
> processing and removes the lease when storageInfo.getBlockReportCount() > 0:
> {code}
> blockReportLeaseManager.removeLease(node)
> {code}
> This is because every data node reports a DS-PROVIDED storage along with its 
> other storages (like DISK storage). All DS-PROVIDED storages actually point 
> to the same storageInfo, so the second data node sending a block report with 
> DS-PROVIDED will have blockReportCount > 0. Its lease is then removed, and 
> processing future block reports from this node fails at checkLease() with 
> the message "BR lease is not valid" (see the simplified sketch below).
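> In simplified form, the problematic pattern looks like this (paraphrased,
> not the exact BlockManager code):
> {code:java}
> // Because all DS-PROVIDED storages share one storageInfo, the
> // second datanode to report sees a non-zero count here and loses
> // its lease even though it never sent an initial report itself.
> if (storageInfo.getBlockReportCount() > 0) {
>   blockReportLeaseManager.removeLease(node);
> }
> {code}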



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15450) Fix NN trash emptier to work if ViewFSOveroadScheme enabled

2020-07-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151424#comment-17151424
 ] 

Hudson commented on HDFS-15450:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18408 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18408/])
HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. 
(github: rev 55a2ae80dc9b45413febd33840b8a653e3e29440)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java


> Fix NN trash emptier to work if ViewFSOveroadScheme enabled
> ---
>
> Key: HDFS-15450
> URL: https://issues.apache.org/jira/browse/HDFS-15450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> When users add mount links only under fs.defaultFS, in an HA NN the trash 
> emptier is initialized with the NN RPC address as its filesystem URI. It 
> fails to start because no mount links may have been configured for that 
> RPC-address-based URI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15446) CreateSnapshotOp fails during edit log loading for /.reserved/raw/path with error java.io.FileNotFoundException: Directory does not exist: /.reserved/raw/path

2020-07-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151223#comment-17151223
 ] 

Hudson commented on HDFS-15446:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18406 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18406/])
HDFS-15446. CreateSnapshotOp fails during edit log loading for (ayushsaxena: 
rev f86f15cf2003a7c74d6a8dffa4c61236bc0a208a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


> CreateSnapshotOp fails during edit log loading for /.reserved/raw/path with 
> error java.io.FileNotFoundException: Directory does not exist: 
> /.reserved/raw/path 
> ---
>
> Key: HDFS-15446
> URL: https://issues.apache.org/jira/browse/HDFS-15446
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Srinivasu Majeti
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: reserved-word, snapshot
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15446.001.patch, HDFS-15446.002.patch, 
> HDFS-15446.003.patch
>
>
> After allowing snapshot creation for a path, say /app-logs, creating a 
> snapshot on /.reserved/raw/app-logs succeeds. But later, when the Standby 
> Namenode is restarted and tries to load the edit record OP_CREATE_SNAPSHOT, 
> it fails and the Standby Namenode shuts down with the exception 
> "java.io.FileNotFoundException: Directory does not exist: 
> /.reserved/raw/app-logs".
> Here are the steps to reproduce:
> {code:java}
> # hdfs dfs -ls /.reserved/raw/
> Found 15 items
> drwxrwxrwt   - yarn   hadoop  0 2020-06-29 10:27 
> /.reserved/raw/app-logs
> drwxr-xr-x   - hive   hadoop  0 2020-06-29 10:29 /.reserved/raw/prod
> ++
> [root@c3230-node2 ~]# hdfs dfsadmin -allowSnapshot /app-logs
> Allowing snapshot on /app-logs succeeded
> [root@c3230-node2 ~]# hdfs dfsadmin -allowSnapshot /prod
> Allowing snapshot on /prod succeeded
> ++
> # hdfs lsSnapshottableDir
> drwxrwxrwt 0 yarn hadoop 0 2020-06-29 10:27 1 65536 /app-logs
> drwxr-xr-x 0 hive hadoop 0 2020-06-29 10:29 1 65536 /prod
> ++
> [root@c3230-node2 ~]# hdfs dfs -createSnapshot /.reserved/raw/app-logs testSS
> Created snapshot /.reserved/raw/app-logs/.snapshot/testSS
> {code}
> Exception we see in Standby namenode while loading the snapshot creation edit 
> record.
> {code:java}
> 2020-06-29 10:33:25,488 ERROR namenode.NameNode (NameNode.java:main(1715)) - 
> Failed to start namenode.
> java.io.FileNotFoundException: Directory does not exist: 
> /.reserved/raw/app-logs
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:60)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.getSnapshottableRoot(SnapshotManager.java:259)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.createSnapshot(SnapshotManager.java:307)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:772)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:257)
> {code}
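> One illustrative fix direction (an assumption, not necessarily the committed
> patch) is to resolve the /.reserved/raw prefix before looking up the
> snapshottable root:
> {code:java}
> String path = "/.reserved/raw/app-logs";
> final String RAW = "/.reserved/raw";
> if (path.startsWith(RAW)) {
>   path = path.substring(RAW.length());
>   if (path.isEmpty()) {
>     path = "/";
>   }
> }
> // path is now "/app-logs", which the snapshot manager can resolve.
> {code}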



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15430) create should work when parent dir is internalDir and fallback configured.

2020-07-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151210#comment-17151210
 ] 

Hudson commented on HDFS-15430:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18405 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18405/])
HDFS-15430. create should work when parent dir is internalDir and (github: rev 
1f2a80b5e5024aeb7fb1f8c31b8fdd0fdb88bb66)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java


> create should work when parent dir is internalDir and fallback configured.
> ---
>
> Key: HDFS-15430
> URL: https://issues.apache.org/jira/browse/HDFS-15430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> create will not work if the parent dir is an internal mount dir (a non-leaf 
> in the mount path) even when a fallback is configured.
> Since the fallback is available, and if the same tree structure is available 
> in the fallback, we should be able to create the file in the fallback fs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15374) Add documentation for fedbalance tool

2020-07-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149163#comment-17149163
 ] 

Hudson commented on HDFS-15374:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18397 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18397/])
HDFS-15374. Add documentation for fedbalance tool. Contributed by (yqlin: rev 
ff8bb672000980f3de7391e5d268e789d5cbe974)
* (add) hadoop-tools/hadoop-federation-balance/src/site/resources/css/site.css
* (edit) hadoop-project/src/site/site.xml
* (add) 
hadoop-tools/hadoop-federation-balance/src/site/resources/images/BalanceProcedureScheduler.png
* (add) 
hadoop-tools/hadoop-federation-balance/src/site/markdown/HDFSFederationBalance.md


> Add documentation for fedbalance tool
> -
>
> Key: HDFS-15374
> URL: https://issues.apache.org/jira/browse/HDFS-15374
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: BalanceProcedureScheduler.png, 
> FedBalance_Screenshot1.jpg, FedBalance_Screenshot2.jpg, 
> FedBalance_Screenshot3.jpg, HDFS-15374.001.patch, HDFS-15374.002.patch, 
> HDFS-15374.003.patch, HDFS-15374.004.patch, HDFS-15374.005.patch
>
>
> Add documentation for fedbalance tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15410) Add separated config file hdfs-fedbalance-default.xml for fedbalance tool

2020-07-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149156#comment-17149156
 ] 

Hudson commented on HDFS-15410:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18396 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18396/])
HDFS-15410. Add separated config file hdfs-fedbalance-default.xml for (yqlin: 
rev de2cb8626016f22b388da7796082b2e160059cf6)
* (edit) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
* (delete) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/DistCpBalanceOptions.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalanceOptions.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/TestFedBalance.java
* (edit) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalanceConfigs.java
* (edit) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedureScheduler.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/resources/hdfs-fedbalance-default.xml


> Add separated config file hdfs-fedbalance-default.xml for fedbalance tool
> -
>
> Key: HDFS-15410
> URL: https://issues.apache.org/jira/browse/HDFS-15410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15410.001.patch, HDFS-15410.002.patch, 
> HDFS-15410.003.patch, HDFS-15410.004.patch, HDFS-15410.005.patch
>
>
> Add a separated config file named hdfs-fedbalance-default.xml for fedbalance 
> tool configs. It's like the distcp-default.xml for the distcp tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15416) Improve DataStorage#addStorageLocations() for empty locations

2020-07-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149147#comment-17149147
 ] 

Hudson commented on HDFS-15416:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18395 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18395/])
HDFS-15416. Improve DataStorage#addStorageLocations() for empty (hexiaoqiao: 
rev 9ac498e30057de1291c3e3128bceaa1af9547c67)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataStorage.java


> Improve DataStorage#addStorageLocations() for empty locations
> -
>
> Key: HDFS-15416
> URL: https://issues.apache.org/jira/browse/HDFS-15416
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15416.000.patch, HDFS-15416.001.patch
>
>
> successLocations is a list; when it is empty, there is no need to execute 
> loadBlockPoolSliceStorage() at all.
> {code:java}
> try {
>   final List<StorageLocation> successLocations = loadDataStorage(
>       datanode, nsInfo, dataDirs, startOpt, executor);
>   return loadBlockPoolSliceStorage(
>       datanode, nsInfo, successLocations, startOpt, executor);
> } finally {
>   executor.shutdown();
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2020-06-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148728#comment-17148728
 ] 

Hudson commented on HDFS-15160:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18393 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18393/])
HDFS-15160. ReplicaMap, Disk Balancer, Directory Scanner and various (weichiu: 
rev 2a67e2b1a0e3a5f91056f5b977ef9c4c07ba6718)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15160.001.patch, HDFS-15160.002.patch, 
> HDFS-15160.003.patch, HDFS-15160.004.patch, HDFS-15160.005.patch, 
> HDFS-15160.006.patch, HDFS-15160.007.patch, HDFS-15160.008.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>
> Now that we have HDFS-15150, we can start to move some DN operations to use 
> the read lock rather than the write lock to improve concurrency. The first 
> step is to make the changes to ReplicaMap, as many other methods make calls 
> to it.
> This Jira switches read operations against the volume map to use the readLock 
> rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next is the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything that looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in its own 
> Jira.
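> As a generic illustration of the switch (JDK types only; the real change
> uses the datanode's own lock wrappers):
> {code:java}
> final ReadWriteLock datasetLock = new ReentrantReadWriteLock();
>
> // Read-only accesses such as getBlockReports() or deepCopyReplica()
> // can share a read lock and run concurrently.
> datasetLock.readLock().lock();
> try {
>   // read from the replica map
> } finally {
>   datasetLock.readLock().unlock();
> }
>
> // Mutations still take the exclusive write lock.
> datasetLock.writeLock().lock();
> try {
>   // update the replica map
> } finally {
>   datasetLock.writeLock().unlock();
> }
> {code}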



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147233#comment-17147233
 ] 

Hudson commented on HDFS-15421:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18387 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18387/])
HDFS-15421. IBR leak causes standby NN to be stuck in safe mode. (aajisaka: rev 
c71ce7ac3370e220995bad0ae8b59d962c8d30a7)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestUpdateBlockTailing.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestAddBlockTailing.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java


> IBR leak causes standby NN to be stuck in safe mode
> ---
>
> Key: HDFS-15421
> URL: https://issues.apache.org/jira/browse/HDFS-15421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Kihwal Lee
>Assignee: Akira Ajisaka
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15421-000.patch, HDFS-15421-001.patch, 
> HDFS-15421.002.patch, HDFS-15421.003.patch, HDFS-15421.004.patch, 
> HDFS-15421.005.patch, HDFS-15421.006.patch, HDFS-15421.007.patch
>
>
> After HDFS-14941, the update of the global gen stamp is delayed in certain 
> situations. This makes the last set of incremental block reports from an 
> append appear to be "from the future", which causes them to be simply 
> re-queued to the pending DN message queue rather than processed to complete 
> the block. The last set of IBRs will leak and never be cleaned until the NN 
> transitions to active. The size of {{pendingDNMessages}} constantly grows 
> until then.
> If a leak happens while in a startup safe mode, the namenode will never be 
> able to come out of safe mode on its own.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15378) TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight is failing on trunk

2020-06-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17146984#comment-17146984
 ] 

Hudson commented on HDFS-15378:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18386 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18386/])
HDFS-15378. (ayushsaxena: rev 8db38c98a6c6ce9215ea998a2f544b2eabca4340)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReconstructStripedFile.java


> TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight is failing on 
> trunk
> -
>
> Key: HDFS-15378
> URL: https://issues.apache.org/jira/browse/HDFS-15378
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15378.001.patch
>
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/29377/#showFailuresLink]
> [https://builds.apache.org/job/PreCommit-HDFS-Build/29368/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17146627#comment-17146627
 ] 

Hudson commented on HDFS-15436:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18385 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18385/])
HDFS-15436. Default mount table name used by ViewFileSystem should be (github: 
rev bed0a3a37404e9defda13a5bffe5609e72466e46)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsWithAuthorityLocalFs.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java


> Default mount table name used by ViewFileSystem should be configurable
> --
>
> Key: HDFS-15436
> URL: https://issues.apache.org/jira/browse/HDFS-15436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Currently, if no authority is provided and the scheme of the Path doesn't 
> match the scheme of {{fs.defaultFS}}, the mount table used by ViewFileSystem 
> to resolve the path is {{default}}.
> This breaks accesses to paths like {{hdfs:///foo/bar}} (without any 
> authority) when the following configurations are used:
> (1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
> (2) {{fs.hdfs.impl = 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}
> This JIRA proposes to add a new configuration 
> {{fs.viewfs.mounttable.default.name.key}} which is used to get the name of 
> the cluster/mount table when the authority is missing in cases like the 
> above. If not set, the string {{default}} will be used, as today (see the 
> sketch below).
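> Illustratively (the key name is as proposed above; the behavior shown in the
> comment is the proposal, not necessarily the final code):
> {code:java}
> Configuration conf = new Configuration();
> conf.set("fs.viewfs.mounttable.default.name.key", "clustername");
> // With this set, hdfs:///foo/bar (no authority) would resolve
> // against the "clustername" mount table instead of "default".
> {code}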



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15429) mkdirs should work when parent dir is internalDir and fallback configured.

2020-06-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17146128#comment-17146128
 ] 

Hudson commented on HDFS-15429:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18380 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18380/])
HDFS-15429. mkdirs should work when parent dir is an internalDir and (github: 
rev d5e1bb6155496cf9d82e121dd1b65d0072312197)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> mkdirs should work when parent dir is internalDir and fallback configured.
> --
>
> Key: HDFS-15429
> URL: https://issues.apache.org/jira/browse/HDFS-15429
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> mkdir will not work if the parent dir is an internal mount dir (a non-leaf 
> in the mount path) even when a fallback is configured.
> Since the fallback is available, and if the same tree structure is available 
> in the fallback, we should be able to mkdir in the fallback (sketched below).
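> A rough sketch of the intended behavior (the field names here are
> assumptions, not the committed code):
> {code:java}
> // If the parent resolves to an internal (read-only) mount dir and a
> // link fallback is configured, retry against the fallback fs.
> try {
>   return internalDirFs.mkdirs(dir, permission);
> } catch (AccessControlException e) {
>   if (fallbackFs != null) {
>     return fallbackFs.mkdirs(dir, permission);
>   }
>   throw e;
> }
> {code}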



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15383) RBF: Disable watch in ZKDelegationSecretManager for performance

2020-06-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143278#comment-17143278
 ] 

Hudson commented on HDFS-15383:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18377 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18377/])
HDFS-15383. RBF: Add support for router delegation token without watch (github: 
rev 84110d850e2bc2a9ff4afcc7508fecd81cb5b7e5)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/ZKDelegationTokenSecretManagerImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/token/TestZKDelegationTokenSecretManagerImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestZKDelegationTokenSecretManager.java


> RBF: Disable watch in ZKDelegationSecretManager for performance
> ---
>
> Key: HDFS-15383
> URL: https://issues.apache.org/jira/browse/HDFS-15383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.4.0
>
>
> Based on the current design for delegation tokens in a secure Router, the 
> total number of watches for tokens is the product of the number of routers 
> and the number of tokens. This is because ZKDelegationTokenManager uses 
> PathChildrenCache from Curator, which automatically sets the watch, and ZK 
> pushes the sync information to each router. Evaluations have shown that a 
> large number of watches in Zookeeper has a negative performance impact on 
> the Zookeeper server.
> In our practice, when the number of watches exceeds 1.2 million in a single 
> ZK server, there is significant ZK performance degradation. Thus this ticket 
> is to rewrite ZKDelegationTokenManagerImpl.java to explicitly disable the 
> PathChildrenCache and have Routers sync periodically from Zookeeper. This has 
> been working fine at the scale of 10 Routers with 2 million tokens.
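> A minimal sketch of the watch-free approach described above: poll the token 
> parent znode on a schedule instead of relying on PathChildrenCache (the 
> znode path and interval are illustrative):
> {code:java}
> // 'curator' is an existing CuratorFramework client.
> ScheduledExecutorService syncer =
>     Executors.newSingleThreadScheduledExecutor();
> syncer.scheduleWithFixedDelay(() -> {
>   try {
>     // Plain getChildren()/getData() without a watcher: no ZK watch is set.
>     for (String node : curator.getChildren().forPath("/ZKDTSMRoot/tokens")) {
>       byte[] data = curator.getData().forPath("/ZKDTSMRoot/tokens/" + node);
>       // refresh the local token cache from 'data' ...
>     }
>   } catch (Exception e) {
>     // log and retry on the next tick
>   }
> }, 0, 60, TimeUnit.SECONDS);
> {code}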



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15427) Merged ListStatus with Fallback target filesystem and InternalDirViewFS.

2020-06-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142757#comment-17142757
 ] 

Hudson commented on HDFS-15427:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18374 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18374/])
HDFS-15427. Merged ListStatus with Fallback target filesystem and (github: rev 
7c02d1889bbeabc73c95a4c83f0cd204365ff410)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java


> Merged ListStatus with Fallback target filesystem and InternalDirViewFS.
> 
>
> Key: HDFS-15427
> URL: https://issues.apache.org/jira/browse/HDFS-15427
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> Currently ListStatus will not consider the fallback directory when the 
> passed path is an internal directory (except root).
> Since we configured a fallback, we should be able to list fallback 
> directories when the passed path is an internal directory. It should list 
> the union of the fallbackDir and the internalDir, so that fallback 
> directories are not shadowed when the path matches an internal dir.
>  
> The idea here is: if the user configured the default filesystem with a 
> fallback fs, then every operation on a path that has no link should go to 
> the fallback fs. That way users need not configure all paths as mounts from 
> the default fs.
>  
> This will be very useful in the case of ViewFSOverloadScheme.
> In ViewFSOverloadScheme, if you choose your existing cluster to be configured 
> as the fallback fs, then you can configure the desired mount paths to 
> external filesystems, and all other paths go to the fallback.
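> A minimal sketch of the merged listing (variable names are illustrative; the 
> real change lives in ViewFileSystem/ViewFs):
> {code:java}
> // Union of fallback and internal-dir entries; internal (mount) entries win
> // on name collisions, so mount points keep shadowing fallback children.
> Map<String, FileStatus> merged = new LinkedHashMap<>();
> for (FileStatus s : fallbackFs.listStatus(path)) {
>   merged.put(s.getPath().getName(), s);
> }
> for (FileStatus s : internalDirEntries) { // entries from the mount tree
>   merged.put(s.getPath().getName(), s);
> }
> FileStatus[] result = merged.values().toArray(new FileStatus[0]);
> {code}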



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15428) Javadocs fails for hadoop-federation-balance

2020-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17141938#comment-17141938
 ] 

Hudson commented on HDFS-15428:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18373 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18373/])
HDFS-15428. Javadocs fails for hadoop-federation-balance. Contributed by 
(aajisaka: rev 201d734af3992df13bc5f4d47b8869da4fb2b2c5)
* (edit) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/MountTableProcedure.java
* (edit) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedureScheduler.java


> Javadocs fails for hadoop-federation-balance
> 
>
> Key: HDFS-15428
> URL: https://issues.apache.org/jira/browse/HDFS-15428
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HADOOP-17080.000.patch
>
>
> javadoc fails for hadoop-federation-balance
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-federation-balance: MavenReportException: Error while 
> generating Javadoc:
> [ERROR] Exit code: 1 - 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedure.java:124:
>  warning: no @throws for java.io.IOException
> [ERROR]   public abstract boolean execute() throws RetryException, 
> IOException;
> [ERROR]   ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedure.java:129:
>  warning: no @return
> [ERROR]   public long delayMillisBeforeRetry() {
> [ERROR]   ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedure.java:136:
>  warning: no @return
> [ERROR]   protected boolean isSchedulerShutdown() {
> [ERROR] ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedure.java:151:
>  warning: no @return
> [ERROR]   public String nextProcedure() {
> [ERROR] ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedure.java:158:
>  warning: no @return
> [ERROR]   public String name() {
> [ERROR] ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/DistCpProcedure.java:124:
>  warning: no @throws for java.io.IOException
> [ERROR]   public DistCpProcedure(String name, String nextProcedure, long 
> delayDuration,
> [ERROR]  ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/MountTableProcedure.java:46:
>  error: bad use of '>'
> [ERROR]  *   /a/b/c -> {ns:src path:/a/b/c}
> [ERROR]  ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/MountTableProcedure.java:48:
>  error: bad use of '>'
> [ERROR]  *   /a/b/c -> {ns:dst path:/a/b/c}
> [ERROR]  ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/MountTableProcedure.java:66:
>  warning: no @param for name
> [ERROR]   public MountTableProcedure(String name, String nextProcedure,
> [ERROR]  ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/MountTableProcedure.java:66:
>  warning: no @param for nextProcedure
> [ERROR]   public MountTableProcedure(String name, String nextProcedure,
> [ERROR]  ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/MountTableProcedure.java:66:
>  warning: no @param for delayDuration
> [ERROR]   public MountTableProcedure(String name, String nextProcedure,
> [ERROR]  ^
> [ERROR] 
> /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/MountTableProcedure.java:66:
>  warning: no @param for dstNs
> [ERROR]   public MountTableProcedure(String name, String nextProcedure,
> [ERROR]  ^
> [ERROR] 
> 
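> // The [ERROR] lines above are all either missing @return/@throws/@param
> // tags or unescaped '>' in Javadoc. A sketch of the corresponding fixes
> // (the method below is illustrative, not an actual signature):
> /**
>  * Updates a mount entry, e.g. {@literal /a/b/c -> {ns:dst path:/a/b/c}}.
>  * Wrapping the arrow in {@literal} avoids the "bad use of '>'" error.
>  *
>  * @param name the name of the mount entry to update.
>  * @return true if the mount table was updated.
>  * @throws IOException if the mount table cannot be read or written.
>  */
> public boolean updateMountEntry(String name) throws IOException {
>   return true; // stub
> }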

[jira] [Commented] (HDFS-14546) Document block placement policies

2020-06-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17141460#comment-17141460
 ] 

Hudson commented on HDFS-14546:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18371 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18371/])
HDFS-14546. Document block placement policies. Contributed by Amithsha. 
(ayushsaxena: rev 17ffcab5f621400bd8bb47dc8fb29365f6e24ebd)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsBlockPlacementPolicies.md
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/RackFaultTolerant.jpg


> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Fix For: 3.4.0
>
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HDFS-14546-08.patch, 
> HDFS-14546-09.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to describe them, explaining their 
> particularities and, ideally, how to set up each one of them.
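> For reference, a sketch of selecting one of these policies, assuming the 
> {{dfs.block.replicator.classname}} key the NameNode reads for this:
> {code:java}
> Configuration conf = new Configuration();
> // Choose a non-default placement policy (the default is
> // BlockPlacementPolicyDefault).
> conf.set("dfs.block.replicator.classname",
>     "org.apache.hadoop.hdfs.server.blockmanagement."
>         + "BlockPlacementPolicyRackFaultTolerant");
> {code}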



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15418) ViewFileSystemOverloadScheme should represent mount links as non symlinks

2020-06-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140998#comment-17140998
 ] 

Hudson commented on HDFS-15418:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18369 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18369/])
HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as 
(github: rev b27810aa6015253866ccc0ccc7247ad7024c0730)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeHdfsFileSystemContract.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewfsFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> ViewFileSystemOverloadScheme should represent mount links as non symlinks
> -
>
> Key: HDFS-15418
> URL: https://issues.apache.org/jira/browse/HDFS-15418
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> Currently ViewFileSystemOverloadScheme uses the ViewFileSystem default 
> behavior: ViewFS always represents the mount links as symlinks. Since 
> ViewFSOverloadScheme can use any scheme, and that scheme's fs may not 
> support symlinks, the ViewFs symlink behavior can be confusing.
> So, here I propose to represent mount links as non-symlinks in 
> ViewFSOverloadScheme.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15372) Files in snapshots no longer see attribute provider permissions

2020-06-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139441#comment-17139441
 ] 

Hudson commented on HDFS-15372:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18363 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18363/])
Revert "HDFS-15372. Files in snapshots no longer see attribute provider 
(weichiu: rev edf716a5c3ed7f51c994ec8bcc460445f9bb8ece)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
HDFS-15372. Files in snapshots no longer see attribute provider (weichiu: rev 
d50e93ce7b6aba235ecc0143fe2c7a0150a3ceae)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java


> Files in snapshots no longer see attribute provider permissions
> ---
>
> Key: HDFS-15372
> URL: https://issues.apache.org/jira/browse/HDFS-15372
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15372.001.patch, HDFS-15372.002.patch, 
> HDFS-15372.003.patch, HDFS-15372.004.patch, HDFS-15372.005.patch
>
>
> Given a cluster with an authorization provider configured (e.g. Sentry) where 
> the paths covered by the provider are snapshottable, there was a change in 
> behaviour in how the provider permissions and ACLs are applied to files in 
> snapshots between the 2.x branch and Hadoop 3.0.
> E.g., if we have the snapshottable path /data, which is Sentry managed. The 
> ACLs below are provided by Sentry:
> {code}
> hadoop fs -getfacl -R /data
> # file: /data
> # owner: hive
> # group: hive
> user::rwx
> group::rwx
> other::--x
> # file: /data/tab1
> # owner: hive
> # group: hive
> user::rwx
> group::---
> group:flume:rwx
> user:hive:rwx
> group:hive:rwx
> group:testgroup:rwx
> mask::rwx
> other::--x
> /data/tab1
> {code}
> After taking a snapshot, the files in the snapshot do not see the provider 
> permissions:
> {code}
> hadoop fs -getfacl -R /data/.snapshot
> # file: /data/.snapshot
> # owner: 
> # group: 
> user::rwx
> group::rwx
> other::rwx
> # file: /data/.snapshot/snap1
> # owner: hive
> # group: hive
> user::rwx
> group::rwx
> other::--x
> # file: /data/.snapshot/snap1/tab1
> # owner: hive
> # group: hive
> user::rwx
> group::rwx
> other::--x
> {code}
> However pre-Hadoop 3.0 (when the attribute provider etc was extensively 
> refactored) snapshots did get the provider permissions.
> The reason is this code in FSDirectory.java which ultimately calls the 
> attribute provider and passes the path we want permissions for:
> {code}
>   INodeAttributes getAttributes(INodesInPath iip)
>   throws IOException {
> INode node = FSDirectory.resolveLastINode(iip);
> int snapshot = iip.getPathSnapshotId();
> INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
> UserGroupInformation ugi = NameNode.getRemoteUser();
> INodeAttributeProvider ap = this.getUserFilteredAttributeProvider(ugi);
> if (ap != null) {
>   // permission checking sends the full components array including the
>   // first empty component for the root.  however file status
>   // related calls are expected to strip out the root component according
>   // to TestINodeAttributeProvider.
>   byte[][] components = iip.getPathComponents();
>   components = Arrays.copyOfRange(components, 1, components.length);
>   nodeAttrs = ap.getAttributes(components, nodeAttrs);
> }
> return nodeAttrs;
>   }
> {code}
> The line:
> {code}
> INode node = FSDirectory.resolveLastINode(iip);
> {code}
> Picks the last resolved Inode, and if you then call node.getPathComponents 
> for a path like '/data/.snapshot/snap1/tab1', it will return /data/tab1. It 
> resolves the snapshot path to its original location, but it is still the 
> snapshot inode.
> However the logic passes 'iip.getPathComponents' which returns 
> "/user/.snapshot/snap1/tab" to the provider.
> The pre Hadoop 3.0 code passes the inode directly to 

[jira] [Commented] (HDFS-15406) Improve the speed of Datanode Block Scan

2020-06-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139346#comment-17139346
 ] 

Hudson commented on HDFS-15406:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18362 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18362/])
HDFS-15406. Improve the speed of Datanode Block Scan. Contributed by 
(sodonnell: rev 123777823edc98553fcef61f1913ab6e4cd5aa9a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java


> Improve the speed of Datanode Block Scan
> 
>
> Key: HDFS-15406
> URL: https://issues.apache.org/jira/browse/HDFS-15406
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15406.001.patch, HDFS-15406.002.patch
>
>
> In our customer cluster we have approx. 10M blocks in one datanode. 
> For the Datanode to scan all the blocks, it has taken nearly 5 minutes.
> {code:java}
> 2020-06-10 12:17:06,869 | INFO  | 
> java.util.concurrent.ThreadPoolExecutor$Worker@3b4bea70[State = -1, empty 
> queue] | BlockPool BP-1104115233-**.**.**.**-1571300215588 Total blocks: 
> 11149530, missing metadata files:472, missing block files:472, missing blocks 
> in memory:0, mismatched blocks:0 | DirectoryScanner.java:473
> 2020-06-10 12:17:06,869 | WARN  | 
> java.util.concurrent.ThreadPoolExecutor$Worker@3b4bea70[State = -1, empty 
> queue] | Lock held time above threshold: lock identifier: 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl 
> lockHeldTimeMs=329854 ms. Suppressed 0 lock warnings. The stack trace is: 
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
> org.apache.hadoop.util.InstrumentedLock.logWarning(InstrumentedLock.java:148)
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:186)
> org.apache.hadoop.util.InstrumentedLock.unlock(InstrumentedLock.java:133)
> org.apache.hadoop.util.AutoCloseableLock.release(AutoCloseableLock.java:84)
> org.apache.hadoop.util.AutoCloseableLock.close(AutoCloseableLock.java:96)
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:475)
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
>  | InstrumentedLock.java:143 {code}
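> One general way to shorten the lock hold time (a sketch only, not 
> necessarily what the patch does) is to snapshot the replica list under the 
> lock and do the expensive file checks outside it:
> {code:java}
> List<ReplicaInfo> snapshot;
> try (AutoCloseableLock lock = dataset.acquireDatasetLock()) {
>   // Copy references only, keeping the critical section short.
>   snapshot = new ArrayList<>(volumeMap.replicas(bpid));
> }
> // Disk I/O (checking block/meta files) now happens without the lock held.
> for (ReplicaInfo replica : snapshot) {
>   checkBlockAndMetaFiles(replica); // hypothetical helper
> }
> {code}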



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15346) FedBalance tool implementation

2020-06-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139109#comment-17139109
 ] 

Hudson commented on HDFS-15346:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18361 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18361/])
HDFS-15346. FedBalance tool implementation. Contributed by Jinglun. (yqlin: rev 
9cbd76cc775b58dfedb943f971b3307ec5702f13)
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/procedure/UnrecoverableProcedure.java
* (edit) hadoop-tools/hadoop-tools-dist/pom.xml
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalance.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/package-info.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/TestTrashProcedure.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedureScheduler.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceJob.java
* (edit) hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/MultiPhaseProcedure.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/DistCpProcedure.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceProcedure.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/TrashProcedure.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJournalInfoHDFS.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceJournalInfoHDFS.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/TestDistCpProcedure.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/TestMountTableProcedure.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceProcedureConfigKeys.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceProcedure.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJournal.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/BalanceJournal.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/procedure/TestBalanceProcedureScheduler.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/procedure/WaitProcedure.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/RetryProcedure.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/TestBalanceProcedureScheduler.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceProcedureScheduler.java
* (edit) hadoop-project/pom.xml
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/procedure/RecordProcedure.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/WaitProcedure.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/shellprofile.d/hadoop-federation-balance.sh
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/package-info.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/procedure/package-info.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/procedure/MultiPhaseProcedure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/UnrecoverableProcedure.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/FedBalanceContext.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJob.java
* (add) 
hadoop-tools/hadoop-federation-balance/src/test/java/org/apache/hadoop/tools/fedbalance/procedure/RetryProcedure.java
* (add) hadoop-tools/hadoop-federation-balance/pom.xml
* (add) 
hadoop-tools/hadoop-federation-balance/src/main/java/org/apache/hadoop/tools/fedbalance/DistCpBalanceOptions.java
* (add) 

[jira] [Commented] (HDFS-15372) Files in snapshots no longer see attribute provider permissions

2020-06-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17137978#comment-17137978
 ] 

Hudson commented on HDFS-15372:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18354 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18354/])
HDFS-15372. Files in snapshots no longer see attribute provider (weichiu: rev 
730a39d1388548f22f76132a6734d61c24c3eb72)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


> Files in snapshots no longer see attribute provider permissions
> ---
>
> Key: HDFS-15372
> URL: https://issues.apache.org/jira/browse/HDFS-15372
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15372.001.patch, HDFS-15372.002.patch, 
> HDFS-15372.003.patch, HDFS-15372.004.patch, HDFS-15372.005.patch
>
>
> Given a cluster with an authorization provider configured (e.g. Sentry) where 
> the paths covered by the provider are snapshottable, there was a change in 
> behaviour in how the provider permissions and ACLs are applied to files in 
> snapshots between the 2.x branch and Hadoop 3.0.
> E.g., if we have the snapshottable path /data, which is Sentry managed. The 
> ACLs below are provided by Sentry:
> {code}
> hadoop fs -getfacl -R /data
> # file: /data
> # owner: hive
> # group: hive
> user::rwx
> group::rwx
> other::--x
> # file: /data/tab1
> # owner: hive
> # group: hive
> user::rwx
> group::---
> group:flume:rwx
> user:hive:rwx
> group:hive:rwx
> group:testgroup:rwx
> mask::rwx
> other::--x
> /data/tab1
> {code}
> After taking a snapshot, the files in the snapshot do not see the provider 
> permissions:
> {code}
> hadoop fs -getfacl -R /data/.snapshot
> # file: /data/.snapshot
> # owner: 
> # group: 
> user::rwx
> group::rwx
> other::rwx
> # file: /data/.snapshot/snap1
> # owner: hive
> # group: hive
> user::rwx
> group::rwx
> other::--x
> # file: /data/.snapshot/snap1/tab1
> # owner: hive
> # group: hive
> user::rwx
> group::rwx
> other::--x
> {code}
> However pre-Hadoop 3.0 (when the attribute provider etc was extensively 
> refactored) snapshots did get the provider permissions.
> The reason is this code in FSDirectory.java which ultimately calls the 
> attribute provider and passes the path we want permissions for:
> {code}
>   INodeAttributes getAttributes(INodesInPath iip)
>   throws IOException {
> INode node = FSDirectory.resolveLastINode(iip);
> int snapshot = iip.getPathSnapshotId();
> INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
> UserGroupInformation ugi = NameNode.getRemoteUser();
> INodeAttributeProvider ap = this.getUserFilteredAttributeProvider(ugi);
> if (ap != null) {
>   // permission checking sends the full components array including the
>   // first empty component for the root.  however file status
>   // related calls are expected to strip out the root component according
>   // to TestINodeAttributeProvider.
>   byte[][] components = iip.getPathComponents();
>   components = Arrays.copyOfRange(components, 1, components.length);
>   nodeAttrs = ap.getAttributes(components, nodeAttrs);
> }
> return nodeAttrs;
>   }
> {code}
> The line:
> {code}
> INode node = FSDirectory.resolveLastINode(iip);
> {code}
> Picks the last resolved Inode, and if you then call node.getPathComponents 
> for a path like '/data/.snapshot/snap1/tab1', it will return /data/tab1. It 
> resolves the snapshot path to its original location, but it is still the 
> snapshot inode.
> However the logic passes 'iip.getPathComponents' which returns 
> "/user/.snapshot/snap1/tab" to the provider.
> The pre Hadoop 3.0 code passes the inode directly to the provider, and hence 
> it only ever sees the path as "/user/data/tab1".
> It is debatable which path should be passed to the provider - 
> /user/.snapshot/snap1/tab or /data/tab1 in the case of snapshots. However as 
> the behaviour has changed I feel we should ensure the old behaviour is 
> retained.
> It would also be fairly easy to provide a config switch so the provider gets 
> the full snapshot path or the resolved path.
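> A sketch of such a switch inside getAttributes (the config key is 
> hypothetical, purely for illustration):
> {code:java}
> // Hypothetical config key; illustrative only.
> boolean useResolvedPath = conf.getBoolean(
>     "dfs.namenode.attribute.provider.use.resolved.path", false);
> byte[][] components = useResolvedPath
>     ? node.getPathComponents()   // resolved: /data/tab1
>     : iip.getPathComponents();   // full: /data/.snapshot/snap1/tab1
> components = Arrays.copyOfRange(components, 1, components.length);
> nodeAttrs = ap.getAttributes(components, nodeAttrs);
> {code}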



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HDFS-15403) NPE in FileIoProvider#transferToSocketFully

2020-06-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135311#comment-17135311
 ] 

Hudson commented on HDFS-15403:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18351 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18351/])
HDFS-15403. NPE in FileIoProvider#transferToSocketFully. Contributed by 
(tasanuma: rev f41a144077fc0e2d32072e0d088c1abd1897cee5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FileIoProvider.java


> NPE in FileIoProvider#transferToSocketFully
> ---
>
> Key: HDFS-15403
> URL: https://issues.apache.org/jira/browse/HDFS-15403
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15403.001.patch, HDFS-15403.002.patch
>
>
> {code:java}
> [DataXceiver for client  at /127.0.0.1:41904 [Sending block 
> BP-293397713-127.0.1.1-1591535936877:blk_-9223372036854775786_1001]] ERROR 
> datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:34789:DataXceiver 
> error processing READ_BLOCK operation  src: /127.0.0.1:41904 dst: 
> /127.0.0.1:34789[DataXceiver for client  at /127.0.0.1:41904 [Sending block 
> BP-293397713-127.0.1.1-1591535936877:blk_-9223372036854775786_1001]] ERROR 
> datanode.DataNode (DataXceiver.java:run(324)) - 127.0.0.1:34789:DataXceiver 
> error processing READ_BLOCK operation  src: /127.0.0.1:41904 dst: 
> /127.0.0.1:34789java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:283)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:614)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:809)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:756)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:610)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:292)
> at java.lang.Thread.run(Thread.java:748) {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15351) Blocks scheduled count was wrong on truncate

2020-06-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17134868#comment-17134868
 ] 

Hudson commented on HDFS-15351:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18350 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18350/])
HDFS-15351. Blocks scheduled count was wrong on truncate. Contributed by 
(inigoiri: rev 719b53a79dc169a8c52229831dcb011935a8a151)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlocksScheduledCounter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Blocks scheduled count was wrong on truncate 
> -
>
> Key: HDFS-15351
> URL: https://issues.apache.org/jira/browse/HDFS-15351
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15351.001.patch, HDFS-15351.002.patch, 
> HDFS-15351.003.patch
>
>
> On truncate and append we remove the blocks from the Reconstruction Queue.
> On removing the blocks from pending reconstruction, we need to decrement 
> the Blocks Scheduled count.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15387) FSUsage$DF should consider ViewFSOverloadScheme in processPath

2020-06-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17134551#comment-17134551
 ] 

Hudson commented on HDFS-15387:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18349 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18349/])
HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in (github: rev 
785b1def959fab6b8b766410bcd240feee13)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java


> FSUsage$DF should consider ViewFSOverloadScheme in processPath
> --
>
> Key: HDFS-15387
> URL: https://issues.apache.org/jira/browse/HDFS-15387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>
> Currently, for calculating DF, processPath checks whether the scheme is 
> ViewFS; if so, it gets the status from all filesystems and calculates. If 
> not, it will directly call fs.getStatus.
> Here we should treat ViewFSOverloadScheme in the ViewFS flow as well, as 
> sketched below.
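> A minimal sketch of the check (since ViewFileSystemOverloadScheme extends 
> ViewFileSystem, an instanceof test covers both; this assumes the 
> ViewFileSystemUtil#getStatus helper, and 'item' is the shell's PathData):
> {code:java}
> if (item.fs instanceof ViewFileSystem) {
>   // Covers ViewFileSystemOverloadScheme too, which extends ViewFileSystem.
>   Map<ViewFileSystem.MountPoint, FsStatus> statuses =
>       ViewFileSystemUtil.getStatus(item.fs, item.path);
>   // aggregate the per-mount statuses ...
> } else {
>   FsStatus status = item.fs.getStatus(item.path);
> }
> {code}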



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15398) EC: hdfs client hangs due to exception during addBlock

2020-06-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17130294#comment-17130294
 ] 

Hudson commented on HDFS-15398:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18344 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18344/])
HDFS-15398. EC: hdfs client hangs due to exception during addBlock. 
(ayushsaxena: rev b735a777178a3be7924b0ea7c0f61003dc60f16e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamUpdatePipeline.java


> EC: hdfs client hangs due to exception during addBlock
> --
>
> Key: HDFS-15398
> URL: https://issues.apache.org/jira/browse/HDFS-15398
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs-client
>Affects Versions: 3.2.0
>Reporter: Hongbing Wang
>Assignee: Hongbing Wang
>Priority: Critical
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15398.001.patch, HDFS-15398.002.patch, 
> HDFS-15398.003.patch, HDFS-15398.004.patch
>
>
>  In the operation of writing EC files, when the client calls addBlock() 
> applying for the second block group (or >= the second block group) and it 
> happens to exceed quota at this time, the client program will hang forever. 
>  See the demo below:
> {code:java}
> $ hadoop fs -mkdir -p /user/wanghongbing/quota/ec
> $ hdfs dfsadmin -setSpaceQuota 2g /user/wanghongbing/quota
> $ hdfs ec -setPolicy -path /user/wanghongbing/quota/ec -policy RS-6-3-1024k
> Set RS-6-3-1024k erasure coding policy on /user/wanghongbing/quota/ec
> $ hadoop fs -put 800m /user/wanghongbing/quota/ec
> ^@^@^@^@^@^@^@^@^Z
> {code}
> In the case of blocksize=128M, spaceQuota=2g and EC 6-3 policy, a block group 
> needs to apply for 1152M physical space to write 768M logical data. 
> Therefore, writing 800M data will exceed quota when applying for the second 
> block group. At this point, the client will hang forever.
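> A quick check of the arithmetic above:
> {code:java}
> long blockSize = 128L << 20;                         // 128M
> int data = 6, parity = 3;                            // RS-6-3-1024k
> long logicalPerGroup = data * blockSize;             // 768M
> long physicalPerGroup = (data + parity) * blockSize; // 1152M
> // 800M of data needs a 2nd block group, and 2 * 1152M = 2304M > 2g quota,
> // so the quota check fails while allocating the 2nd group.
> {code}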
> The exception stack of client is as follows:
> {code:java}
> java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x8009d5d8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
> at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream$MultipleBlockingQueue.takeWithTimeout(DFSStripedOutputStream.java:117)
> at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.waitEndBlocks(DFSStripedOutputStream.java:453)
> at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:477)
> at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:541)
> - locked <0x8009f758> (a 
> org.apache.hadoop.hdfs.DFSStripedOutputStream)
> at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
> at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
> - locked <0x8009f758> (a 
> org.apache.hadoop.hdfs.DFSStripedOutputStream)
> at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
> - locked <0x8009f758> (a 
> org.apache.hadoop.hdfs.DFSStripedOutputStream)
> at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1182)
> - locked <0x8009f758> (a 
> org.apache.hadoop.hdfs.DFSStripedOutputStream)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:847)
> - locked <0x8009f758> (a 
> org.apache.hadoop.hdfs.DFSStripedOutputStream)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at org.apache.hadoop.io.IOUtils.cleanupWithLogger(IOUtils.java:280)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:298)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:77)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:485)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
> at 
> 

[jira] [Commented] (HDFS-15376) Update the error about command line POST in httpfs documentation

2020-06-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17130167#comment-17130167
 ] 

Hudson commented on HDFS-15376:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18343 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18343/])
HDFS-15376. Update the error about command line POST in httpfs (ayushsaxena: 
rev 635e6a16d0f407eeec470f2d4d3303092961a177)
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/index.md


> Update the error about command line POST in httpfs documentation
> 
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15376.001.patch
>
>
> In the official Hadoop documentation, there is an exception when executing 
> the following command.
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns:
> {quote} *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that creating the directory should use 
> PUT to submit the request.
> I modified the command to use PUT and got the result as follows:
> {quote} {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns:
> {"boolean":true}
> At the same time, the folder is created successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15211) EC: File write hangs during close in case of Exception during updatePipeline

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129286#comment-17129286
 ] 

Hudson commented on HDFS-15211:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18340/])
HDFS-15211. EC: File write hangs during close in case of Exception 
(ayushsaxena: rev 852587456173f208f78d0c95046cfd0d8aa1c01c)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamUpdatePipeline.java


> EC: File write hangs during close in case of Exception during updatePipeline
> 
>
> Key: HDFS-15211
> URL: https://issues.apache.org/jira/browse/HDFS-15211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1, 3.3.0, 3.2.1
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-15211-01.patch, HDFS-15211-02.patch, 
> HDFS-15211-03.patch, HDFS-15211-04.patch, HDFS-15211-05.patch, 
> TestToRepro-01.patch, Thread-Dump, Thread-Dump-02
>
>
> EC file write hangs during file close if there is an exception due to the 
> closure of a slow stream and the number of failed data streamers increases 
> beyond the number of parity blocks.
> In the close, the stream will try to flush all the healthy streamers, but 
> the streamers won't have any result due to the exception, and the streamers 
> will stay stuck.
> Hence the close will also get stuck.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15396) Fix TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir

2020-06-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127746#comment-17127746
 ] 

Hudson commented on HDFS-15396:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18336 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18336/])
HDFS-15396. Fix (github: rev a8610c15c498531bf3c011f1b0ace8ef61f2)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> Fix 
> TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir
> 
>
> Key: HDFS-15396
> URL: https://issues.apache.org/jira/browse/HDFS-15396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> Exception :
> {code:java}
> java.lang.IllegalArgumentException: Can not create a Path from an empty string
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:172)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:184)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$InternalDirOfViewFs.listStatus(ViewFileSystem.java:1207)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:514)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.assertListStatusFinds(FileSystemContractBaseTest.java:867)
>   at 
> org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeHdfsFileSystemContract.testListStatusRootDir(TestViewFileSystemOverloadSchemeHdfsFileSystemContract.java:119)
> {code}
> The reason for the failure is that the mount destination for /user and 
> /append in the test is just the URI, with no further path.
> Thus while listing, in order to fetch the permissions, the destination URI is 
> used to get the path, which resolves to an empty string. Hence the failure.
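> A minimal sketch of a guard for this case (illustrative, not the exact 
> committed fix):
> {code:java}
> String rawPath = targetUri.getPath(); // targetUri is the mount destination
> // A mount target like hdfs://nn1 has an empty path component; fall back
> // to the root instead of constructing Path("").
> Path targetPath = rawPath.isEmpty() ? new Path("/") : new Path(rawPath);
> {code}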



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15394) Add all available fs.viewfs.overload.scheme.target.<scheme>.impl classes in core-default.xml by default.

2020-06-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127375#comment-17127375
 ] 

Hudson commented on HDFS-15394:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18335 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18335/])
HDFS-15394. Add all available (github: rev 
3ca15292c5584ec220b3eeaf76da85d228bcbd8b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Add all available fs.viewfs.overload.scheme.target.<scheme>.impl classes in 
> core-default.xml by default.
> ---
>
> Key: HDFS-15394
> URL: https://issues.apache.org/jira/browse/HDFS-15394
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: configuration, viewfs, viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> This proposes to add all available 
> fs.viewfs.overload.scheme.target.<scheme>.impl classes in core-default.xml, 
> so that users need not configure them.
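> For example, with these defaults in place a lookup like the following would 
> resolve without any user configuration (assuming the hdfs scheme maps to 
> DistributedFileSystem, as in the stock defaults):
> {code:java}
> Configuration conf = new Configuration();
> // Expected to return org.apache.hadoop.hdfs.DistributedFileSystem from
> // core-default.xml, with no user-side setting required.
> String impl = conf.get("fs.viewfs.overload.scheme.target.hdfs.impl");
> {code}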



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15389) DFSAdmin should close filesystem and dfsadmin -setBalancerBandwidth should work with ViewFSOverloadScheme

2020-06-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127241#comment-17127241
 ] 

Hudson commented on HDFS-15389:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18334 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18334/])
HDFS-15389. DFSAdmin should close filesystem and dfsadmin (github: rev 
cc671b16f7b0b7c1ed7b41b96171653dc43cf670)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java


> DFSAdmin should close filesystem and dfsadmin -setBalancerBandwidth should 
> work with ViewFSOverloadScheme 
> --
>
> Key: HDFS-15389
> URL: https://issues.apache.org/jira/browse/HDFS-15389
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15389-01.patch
>
>
> Two issues here:
> First, prior to HDFS-15321, when DFSAdmin was closed, the FileSystem 
> associated with it was closed as part of the close method. But post 
> HDFS-15321, the {{FileSystem}} isn't stored as part of {{FsShell}}, hence 
> during close, the FileSystem still stays around and isn't closed.
> * This is the reason for the failure of TestDFSHAAdmin
> Second: {{DfsAdmin -setBalancerBandwidth}} doesn't work with 
> {{ViewFSOverloadScheme}} since setBalancerBandwidth calls {{getFS()}} 
> rather than {{getDFS()}}, which resolves the scheme in {{HDFS-15321}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15330) Document the ViewFSOverloadScheme details in ViewFS guide

2020-06-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127001#comment-17127001
 ] 

Hudson commented on HDFS-15330:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18332 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18332/])
HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. (github: 
rev 76fa0222f0d2e2d92b4a1eedba8b3e38002e8c23)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
* (edit) hadoop-project/src/site/site.xml
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/ViewFSOverloadScheme.png
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md


> Document the ViewFSOverloadScheme details in ViewFS guide
> -
>
> Key: HDFS-15330
> URL: https://issues.apache.org/jira/browse/HDFS-15330
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> This Jira tracks the documentation of the ViewFSOverloadScheme usage guide.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15359) EC: Allow closing a file with committed blocks

2020-06-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17126794#comment-17126794
 ] 

Hudson commented on HDFS-15359:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18331 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18331/])
HDFS-15359. EC: Allow closing a file with committed blocks. Contributed 
(ayushsaxena: rev 2326123705445dee534ac2c298038831b5d04a0a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> EC: Allow closing a file with committed blocks
> --
>
> Key: HDFS-15359
> URL: https://issues.apache.org/jira/browse/HDFS-15359
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15359-01.patch, HDFS-15359-02.patch, 
> HDFS-15359-03.patch, HDFS-15359-04.patch, HDFS-15359-05.patch
>
>
> Presently, {{dfs.namenode.file.close.num-committed-allowed}} is ignored in 
> the case of EC blocks. But under heavy load, IBRs from the Datanode may get 
> delayed and cause the file write to fail. So, we can allow EC files to close 
> with blocks in the committed state, as replicated (REP) files already can.
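> For reference, a sketch of the setting involved (the value is illustrative):
> {code:java}
> Configuration conf = new Configuration();
> // Allow a file to close while its last block(s) are still COMMITTED,
> // i.e. not yet fully reported by datanodes; with this change the same
> // allowance applies to EC files.
> conf.setInt("dfs.namenode.file.close.num-committed-allowed", 1);
> {code}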



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15386) ReplicaNotFoundException keeps happening in DN after removing multiple DN's data directories

2020-06-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17126630#comment-17126630
 ] 

Hudson commented on HDFS-15386:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18329 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18329/])
HDFS-15386 ReplicaNotFoundException keeps happening in DN after removing 
(github: rev 545a0a147c5256c44911ba57b4898e01d786d836)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> ReplicaNotFoundException keeps happening in DN after removing multiple DN's 
> data directories
> 
>
> Key: HDFS-15386
> URL: https://issues.apache.org/jira/browse/HDFS-15386
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> When removing volumes, we need to invalidate all the blocks in those volumes. 
> In the following code (FsDatasetImpl), we keep the blocks that will be 
> invalidated in the *blkToInvalidate* map. However, as the key of the map is *bpid* 
> (Block Pool ID), each entry is overwritten by subsequently removed volumes. As a 
> result, the map will hold only the blocks of the last volume being removed, 
> and only those are invalidated:
> {code:java}
> for (String bpid : volumeMap.getBlockPoolList()) {
>   List<ReplicaInfo> blocks = new ArrayList<>();
>   for (Iterator<ReplicaInfo> it =
> volumeMap.replicas(bpid).iterator(); it.hasNext();) {
> ReplicaInfo block = it.next();
> final StorageLocation blockStorageLocation =
> block.getVolume().getStorageLocation();
> LOG.trace("checking for block " + block.getBlockId() +
> " with storageLocation " + blockStorageLocation);
> if (blockStorageLocation.equals(sdLocation)) {
>   blocks.add(block);
>   it.remove();
> }
>   }
>   blkToInvalidate.put(bpid, blocks);
> }
> {code}
> [https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]
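
A minimal sketch of the fix direction implied by the description, reusing the names from the snippet above (an illustration, not the committed patch): merge each volume's list into the map instead of overwriting it.

{code:java}
// Accumulate blocks per block pool across all removed volumes,
// rather than overwriting the previous volume's list with put().
blkToInvalidate
    .computeIfAbsent(bpid, k -> new ArrayList<ReplicaInfo>())
    .addAll(blocks);
{code}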



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11041) Unable to unregister FsDatasetState MBean if DataNode is shutdown twice

2020-06-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124721#comment-17124721
 ] 

Hudson commented on HDFS-11041:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18321 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18321/])
HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is 
(ayushsaxena: rev e8cb2ae409bc1d62f23efef485d1c6f1ff21e86c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> Unable to unregister FsDatasetState MBean if DataNode is shutdown twice
> ---
>
> Key: HDFS-11041
> URL: https://issues.apache.org/jira/browse/HDFS-11041
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-11041.01.patch, HDFS-11041.02.patch, 
> HDFS-11041.03.patch
>
>
> I saw an error message like the following in some tests:
> {noformat}
> 2016-10-21 04:09:03,900 [main] WARN  util.MBeans 
> (MBeans.java:unregister(114)) - Error unregistering 
> Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc
> javax.management.InstanceNotFoundException: 
> Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
>   at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:112)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.shutdown(FsDatasetImpl.java:2127)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1985)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1962)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1936)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1929)
>   at 
> org.apache.hadoop.hdfs.TestDatanodeReport.testDatanodeReport(TestDatanodeReport.java:144)
> {noformat}
> The test shuts down a datanode and then shuts down the cluster, which shuts down 
> the same datanode twice. Resetting the FsDatasetSpi reference in DataNode to null 
> resolves the issue.
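
A minimal sketch of that idempotent-shutdown idea (field and method names are illustrative, not the committed patch):

{code:java}
// Guard against double shutdown: the second call becomes a no-op,
// so the FSDatasetState MBean is only unregistered once.
synchronized void shutdown() {
  if (data != null) {
    data.shutdown();  // unregisters the MBean, among other cleanup
    data = null;      // a repeated shutdown() now skips the unregister
  }
}
{code}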



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2020-06-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124706#comment-17124706
 ] 

Hudson commented on HDFS-14960:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18320 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18320/])
HDFS-14960. TestBalancerWithNodeGroup should not succeed with (ayushsaxena: rev 
f6453244ab8a676144bb001d497582da284730a1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java


> TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> 
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HDFS-14960.001.patch, HDFS-14960.002.patch, 
> HDFS-14960.003.patch, HDFS-14960.004.patch, HDFS-14960.005.patch, 
> HDFS-14960.006.patch
>
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15321) Make DFSAdmin tool to work with ViewFSOverloadScheme

2020-06-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124189#comment-17124189
 ] 

Hudson commented on HDFS-15321:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18317 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18317/])
HDFS-15321. Make DFSAdmin tool to work with (github: rev 
ed83c865dd0b4e92f3f89f79543acc23792bb69c)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


> Make DFSAdmin tool to work with ViewFSOverloadScheme
> 
>
> Key: HDFS-15321
> URL: https://issues.apache.org/jira/browse/HDFS-15321
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> When we enable ViewFSOverloadScheme and use the hdfs scheme as the overloaded 
> scheme, users work with hdfs URIs. But DFSAdmin expects the impl class 
> to be DistributedFileSystem. If the impl class is ViewFSOverloadScheme, it will 
> fail.
> So, when the impl is ViewFSOverloadScheme, we should get the corresponding child 
> hdfs file system to make DFSAdmin work.
> This Jira makes DFSAdmin work with ViewFSOverloadScheme.
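
For context, a minimal sketch of the overloaded-scheme setup the description assumes (the mount-table name "ns1" and the paths are made-up examples):

{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Overload the hdfs scheme with ViewFileSystemOverloadScheme.
conf.set("fs.hdfs.impl",
    "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
// Illustrative mount point for the overloaded scheme.
conf.set("fs.viewfs.mounttable.ns1.link./data", "hdfs://ns1/data");
{code}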



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10792) RedundantEditLogInputStream should log caught exceptions

2020-05-31 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120519#comment-17120519
 ] 

Hudson commented on HDFS-10792:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18311 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18311/])
HDFS-10792. RedundantEditLogInputStream should log caught exceptions. 
(ayushsaxena: rev ae13a5ccbea10fe86481adbbff574c528e03c7f6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java


> RedundantEditLogInputStream should log caught exceptions
> 
>
> Key: HDFS-10792
> URL: https://issues.apache.org/jira/browse/HDFS-10792
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 3.4.0
>
> Attachments: HDFS-10792.01.patch
>
>
> There are a few places in {{RedundantEditLogInputStream}} where an 
> IOException is caught but never logged. We should improve the logging of 
> these exceptions to help debugging.
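
A minimal sketch of the kind of change proposed, in SLF4J style (the surrounding names and the message text are illustrative):

{code:java}
try {
  streams[curIdx].skipUntil(nextTxId);
} catch (IOException ioe) {
  // Previously swallowed silently; logging it makes failovers
  // between redundant edit log streams debuggable.
  LOG.warn("Got error skipping until txid " + nextTxId + " in stream "
      + streams[curIdx], ioe);
}
{code}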



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15168) ABFS driver enhancement - Allow customizable translation from AAD SPNs and security groups to Linux user and group

2020-05-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119215#comment-17119215
 ] 

Hudson commented on HDFS-15168:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18306 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18306/])
HDFS-15168: ABFS enhancement to translate AAD to Linux identities. (github: rev 
b2200a33a6cbb43998833d902578143f93bb192a)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/IdentityTransformer.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/IdentityTransformerInterface.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/LocalIdentityTransformer.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/TextFileBasedIdentityHandler.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/IdentityHandler.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestTextFileBasedIdentityHandler.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java


> ABFS driver enhancement - Allow customizable translation from AAD SPNs and 
> security groups to Linux user and group
> --
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Karthik Amarnath
>Assignee: Karthik Amarnath
>Priority: Major
>
> The ABFS driver does not support the translation of an AAD service principal (SPN) 
> to a Linux identity, causing metadata operation failures. The Hadoop MapReduce 
> client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
> expects the file owner to be the Linux identity, but the 
> underlying ABFS driver returns the AAD object identity. Hence the ABFS 
> driver needs this enhancement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally

2020-05-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118840#comment-17118840
 ] 

Hudson commented on HDFS-15368:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18305 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18305/])
HDFS-15368. TestBalancerWithHANameNodes#testBalancerWithObserver failed 
(ayushsaxena: rev a838d871a76776016703f6c904fb049be2247626)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java


> TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
> 
>
> Key: HDFS-15368
> URL: https://issues.apache.org/jira/browse/HDFS-15368
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
>  Labels: balancer, test
> Fix For: 3.4.0
>
> Attachments: HDFS-15368.001.patch, HDFS-15368.002.patch, 
> TestBalancerWithHANameNodes.testBalancerObserver.log, 
> TestBalancerWithHANameNodes.testBalancerObserver.log
>
>
> While working on HDFS-13183, I found that 
> TestBalancerWithHANameNodes#testBalancerWithObserver fails occasionally 
> because of the following code segment. Consider a cluster with 1 ANN + 1 SBN + 2 ONNs: 
> when getBlocks is invoked with the Observer Read feature enabled, the request can 
> go to either of the two ObserverNNs, based on my observation. So verifying only the 
> first ObserverNN and checking the invocation count of #getBlocks is not reliable.
> {code:java}
>   for (int i = 0; i < cluster.getNumNameNodes(); i++) {
> // First observer node is at idx 2, or 3 if 2 has been shut down
> // It should get both getBlocks calls, all other NNs should see 0 
> calls
> int expectedObserverIdx = withObserverFailure ? 3 : 2;
> int expectedCount = (i == expectedObserverIdx) ? 2 : 0;
> verify(namesystemSpies.get(i), times(expectedCount))
> .getBlocks(any(), anyLong(), anyLong());
>   }
> {code}
> cc [~xkrogen], [~weichiu]. I am not very familiar with the Observer Read feature; 
> would you like to give some suggestions? 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load

2020-05-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118421#comment-17118421
 ] 

Hudson commented on HDFS-13183:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18304 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18304/])
HDFS-13183. Addendum: Standby NameNode process getBlocks request to 
(ayushsaxena: rev 9b38be43c6323077a7be14e1295ad484c4038372)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java


> Standby NameNode process getBlocks request to reduce Active load
> 
>
> Key: HDFS-13183
> URL: https://issues.apache.org/jira/browse/HDFS-13183
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer  mover, namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-13183-trunk.001.patch, HDFS-13183-trunk.002.patch, 
> HDFS-13183-trunk.003.patch, HDFS-13183.004.patch, HDFS-13183.005.patch, 
> HDFS-13183.006.patch, HDFS-13183.007.patch, HDFS-13183.addendum.patch, 
> HDFS-13183.addendum.patch
>
>
> The performance of the Active NameNode can be impacted when the {{Balancer}} requests 
> #getBlocks, since querying the blocks of overly full DNs is currently extremely 
> inefficient. The main reason is that {{NameNodeRpcServer#getBlocks}} 
> holds the read lock for a long time. In the extreme case, all handlers of the Active 
> NameNode RPC server are occupied by one reader 
> ({{NameNodeRpcServer#getBlocks}}) and other write operation calls, so the Active 
> NameNode enters a state of false death for seconds or even minutes.
> Similar performance concerns about the Balancer have been reported in HDFS-9412, 
> HDFS-7967, etc.
> If the Standby NameNode can shoulder the heavy #getBlocks burden, it can speed up 
> the progress of balancing and reduce the performance impact on the Active NameNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15362) FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all distinct blocks

2020-05-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117995#comment-17117995
 ] 

Hudson commented on HDFS-15362:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18300 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18300/])
HDFS-15362. FileWithSnapshotFeature#updateQuotaAndCollectBlocks should 
(inigoiri: rev 2148a8fe645333444c4e8110bb56acf0fb8e41b4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java


> FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all 
> distinct blocks
> --
>
> Key: HDFS-15362
> URL: https://issues.apache.org/jira/browse/HDFS-15362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15362.001.patch, HDFS-15362.002.patch
>
>
> FileWithSnapshotFeature#updateQuotaAndCollectBlocks uses a list to collect 
> blocks:
> {code:java}
>  List<BlockInfo> allBlocks = new ArrayList<>();
>  if (file.getBlocks() != null) {
>    allBlocks.addAll(Arrays.asList(file.getBlocks()));
>  }{code}
> INodeFile#storagespaceConsumedContiguous collects all distinct blocks with a set:
> {code:java}
> // Collect all distinct blocks
>  Set<BlockInfo> allBlocks = new HashSet<>(Arrays.asList(getBlocks()));
>  DiffList<FileDiff> diffs = sf.getDiffs().asList();
>  for(FileDiff diff : diffs) {
>    BlockInfo[] diffBlocks = diff.getBlocks();
>    if (diffBlocks != null) {
>      allBlocks.addAll(Arrays.asList(diffBlocks));
>    }
>  }{code}
> but when updating the reclaim context we subtract both of these, so a wrong quota 
> value can be updated:
> {code:java}
> QuotaCounts current = file.storagespaceConsumed(bsp);
> reclaimContext.quotaDelta().add(oldCounts.subtract(current)); {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15373) Fix number of threads in IPCLoggerChannel#createParallelExecutor

2020-05-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116640#comment-17116640
 ] 

Hudson commented on HDFS-15373:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18295 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18295/])
HDFS-15373. Fix number of threads in (ayushsaxena: rev 
6c9f75cf16b4a321a3b6965b76c53033843ce178)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java


> Fix number of threads in IPCLoggerChannel#createParallelExecutor
> 
>
> Key: HDFS-15373
> URL: https://issues.apache.org/jira/browse/HDFS-15373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15373-01.patch, HDFS-15373-02.patch
>
>
> The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
> right now; make it fixed.
> Presently the {{corePoolSize}} is set to 1 and {{maximumPoolSize}} is set to 
> {{numThreads}}, but since the queue size is {{Integer.MAX_VALUE}}, the queue 
> never tends to get full, and the thread count is always confined to 1 irrespective of 
> {{numThreads}}.
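
For readers unfamiliar with this ThreadPoolExecutor quirk, a minimal standalone sketch (illustrative, not the patch itself): the executor only creates threads beyond {{corePoolSize}} when the work queue rejects an offer, which an effectively unbounded queue never does.

{code:java}
import java.util.concurrent.*;

// corePoolSize=1, maximumPoolSize=numThreads, unbounded queue:
// new threads are only spawned when the queue is full, which never
// happens with Integer.MAX_VALUE capacity, so the pool stays at one
// thread no matter how many tasks are submitted.
int numThreads = 8;  // illustrative
ExecutorService executor = new ThreadPoolExecutor(
    1, numThreads, 60L, TimeUnit.SECONDS,
    new LinkedBlockingQueue<Runnable>());
{code}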



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15369) Refactor method VolumeScanner#runLoop()

2020-05-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17115293#comment-17115293
 ] 

Hudson commented on HDFS-15369:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18294 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18294/])
HDFS-15369. Refactor method VolumeScanner#runLoop(). Contributed by Yang 
(ayushsaxena: rev f43a152b9729323e290908fbd4f188f6034efb3f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Refactor method VolumeScanner#runLoop() 
> 
>
> Key: HDFS-15369
> URL: https://issues.apache.org/jira/browse/HDFS-15369
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15369.001.patch
>
>
> After HDFS-15207 the method VolumeScanner#runLoop() is quite long. Separate out a 
> new private method, getNextBlockToScan.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15355) Make the default block storage policy ID configurable

2020-05-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17115289#comment-17115289
 ] 

Hudson commented on HDFS-15355:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18293 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18293/])
HDFS-15355. Make the default block storage policy ID configurable. 
(ayushsaxena: rev f4901d07781faee657f5ac2c605183ef34fe7c1a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java


> Make the default block storage policy ID configurable
> -
>
> Key: HDFS-15355
> URL: https://issues.apache.org/jira/browse/HDFS-15355
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement, namenode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15355.001.patch, HDFS-15355.002.patch, 
> HDFS-15355.003.patch, HDFS-15355.004.patch, HDFS-15355.005.patch, 
> HDFS-15355.006.patch, HDFS-15355.007.patch, HDFS-15355.008.patch, 
> HDFS-15355.009.patch, HDFS-15355.010.patch, HDFS-15355.011.patch, 
> HDFS-15355.012.patch, HDFS-15355.013.patch
>
>
> Make the default block storage policy ID configurable. Sometimes we want to 
> use a different storage policy ID from the startup of the cluster.
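
A minimal sketch of the intent, assuming the key introduced by this patch is {{dfs.storage.default.policy}} (an assumption; verify the exact key name in the committed hdfs-default.xml):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Assumption: the patch adds a key like "dfs.storage.default.policy";
// check the committed hdfs-default.xml for the exact name and values.
Configuration conf = new Configuration();
conf.set("dfs.storage.default.policy", "COLD");  // instead of the HOT default
{code}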



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12288) Fix DataNode's xceiver count calculation

2020-05-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114899#comment-17114899
 ] 

Hudson commented on HDFS-12288:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18292 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18292/])
HDFS-12288. Fix DataNode's xceiver count calculation. Contributed by (inigoiri: 
rev 6e04b00df1bf4f0a45571c9fc4361e4e8a05f7ee)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferKeepalive.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeCapacityReport.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java


> Fix DataNode's xceiver count calculation
> 
>
> Key: HDFS-12288
> URL: https://issues.apache.org/jira/browse/HDFS-12288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-12288.001.patch, HDFS-12288.002.patch, 
> HDFS-12288.003.patch, HDFS-12288.004.patch, HDFS-12288.005.patch, 
> HDFS-12288.006.patch, HDFS-12288.007.patch, HDFS-12288.008.patch
>
>
> The problem with the ThreadGroup.activeCount() method is that it is 
> only a very rough estimate, and in reality returns the total number of 
> threads in the thread group as opposed to the threads actually running.
> In some DNs, we saw it return ~50 for a long time, even though the 
> actual number of DataXceiver threads was next to none.
> This is a big issue, as we use the xceiverCount to make decisions on the NN 
> when choosing a replication source DN or returning DNs to clients for R/W.
> The plan is to reuse the DataNodeMetrics.dataNodeActiveXceiversCount value, 
> which only accounts for the number of DataXceiver threads currently 
> running and thus represents the load on the DN much better.
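
A minimal sketch of that counting approach: an explicit counter incremented and decremented around each xceiver's lifetime, instead of ThreadGroup.activeCount() (names are illustrative):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Count only xceivers that are actually running, not every thread
// that happens to live in the thread group.
private final AtomicInteger activeXceivers = new AtomicInteger();

void runXceiver(Runnable work) {
  activeXceivers.incrementAndGet();
  try {
    work.run();
  } finally {
    activeXceivers.decrementAndGet();  // always decrement, even on error
  }
}

int getXceiverCount() {
  return activeXceivers.get();
}
{code}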



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15093) RENAME.TO_TRASH is ignored When RENAME.OVERWRITE is specified

2020-05-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114766#comment-17114766
 ] 

Hudson commented on HDFS-15093:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18291 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18291/])
HDFS-15093. RENAME.TO_TRASH is ignored When RENAME.OVERWRITE is (ayushsaxena: 
rev e0ae232f669b2e2a6654cfacff22a090c462effc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRename.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java


> RENAME.TO_TRASH is ignored When RENAME.OVERWRITE is specified
> -
>
> Key: HDFS-15093
> URL: https://issues.apache.org/jira/browse/HDFS-15093
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15093-01.patch, HDFS-15093-02.patch, 
> HDFS-15093-03.patch, HDFS-15093-04.patch
>
>
> When the Rename.OVERWRITE flag is specified, the Rename.TO_TRASH option gets 
> silently ignored.
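
For reference, a minimal sketch of a call that exercises both flags together (the paths are illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

// Before this fix, TO_TRASH was silently dropped whenever OVERWRITE
// was also present among the rename options.
Configuration conf = new Configuration();
FileContext fc = FileContext.getFileContext(conf);
fc.rename(new Path("/src/file"), new Path("/dst/file"),
    Options.Rename.OVERWRITE, Options.Rename.TO_TRASH);
{code}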



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP

2020-05-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114757#comment-17114757
 ] 

Hudson commented on HDFS-15288:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18290 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18290/])
HDFS-15288. Add Available Space Rack Fault Tolerant BPP. Contributed by 
(ayushsaxena: rev f99fcb26ab9153ac281fa95b97696387a9f3995c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestAvailableSpaceBlockPlacementPolicy.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestAvailableSpaceRackFaultTolerantBPP.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/AvailableSpaceRackFaultTolerantBlockPlacementPolicy.java


> Add Available Space Rack Fault Tolerant BPP
> ---
>
> Key: HDFS-15288
> URL: https://issues.apache.org/jira/browse/HDFS-15288
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, 
> HDFS-15288-03.patch
>
>
> The present {{AvailableSpaceBlockPlacementPolicy}} extends the default block 
> placement policy, which makes it apt for replicated files, but it is not very 
> efficient for EC files, which by default use 
> {{BlockPlacementPolicyRackFaultTolerant}}. So the proposal is to add a new BPP with 
> an optimization similar to ASBPP while keeping the spread of blocks across the 
> maximum number of racks, i.e. as in the rack-fault-tolerant BPP.
> This could extend {{BlockPlacementPolicyRackFaultTolerant}} rather than 
> {{BlockPlacementPolicyDefault}} (as ASBPP does) and keep the other optimization 
> logic the same as ASBPP.
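
A minimal sketch of the proposed policy's shape (simplified; the committed class carries the actual ASBPP-style selection logic):

{code:java}
// Extend the rack-fault-tolerant policy (the EC default) instead of
// BlockPlacementPolicyDefault, and bias node choice toward nodes with
// more available space, as ASBPP does for replicated files.
public class AvailableSpaceRackFaultTolerantBlockPlacementPolicy
    extends BlockPlacementPolicyRackFaultTolerant {
  // Override the random node selection: of two candidate nodes, prefer
  // the one with more free space, with a configurable probability.
}
{code}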



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15363) BlockPlacementPolicyWithNodeGroup should validate if it is initialized by NetworkTopologyWithNodeGroup

2020-05-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17114619#comment-17114619
 ] 

Hudson commented on HDFS-15363:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18289 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18289/])
HDFS-15363. BlockPlacementPolicyWithNodeGroup should validate if it is 
(tasanuma: rev 4d22d1c58f0eb093775f0fe4f39ef4be639ad752)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java


> BlockPlacementPolicyWithNodeGroup should validate if it is initialized by 
> NetworkTopologyWithNodeGroup
> --
>
> Key: HDFS-15363
> URL: https://issues.apache.org/jira/browse/HDFS-15363
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15363-testrepro.patch, HDFS-15363.001.patch
>
>
> BlockPlacementPolicyWithNodeGroup type-casts the initialized clusterMap:
> {code:java}
> NetworkTopologyWithNodeGroup clusterMapNodeGroup =
> (NetworkTopologyWithNodeGroup) clusterMap {code}
> If clusterMap is an instance of DFSNetworkTopology, we get a ClassCastException.
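
A minimal sketch of the validation this Jira asks for (illustrative, not the committed patch):

{code:java}
// Fail fast with a clear message instead of a ClassCastException later.
if (!(clusterMap instanceof NetworkTopologyWithNodeGroup)) {
  throw new IllegalArgumentException(
      "BlockPlacementPolicyWithNodeGroup requires "
          + "NetworkTopologyWithNodeGroup, but got "
          + clusterMap.getClass().getName());
}
{code}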



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15322) Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris schemes are same.

2020-05-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113745#comment-17113745
 ] 

Hudson commented on HDFS-15322:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18287 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18287/])
HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and (github: 
rev 4734c77b4b64b7c6432da4cc32881aba85f94ea1)
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/NflyFSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java


> Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris 
> schemes are same.
> 
>
> Key: HDFS-15322
> URL: https://issues.apache.org/jira/browse/HDFS-15322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, nflyFs, viewfs, viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> Currently an Nfly mount link will not work when we use ViewFSOverloadScheme, 
> because when the configured scheme is hdfs and the target URIs' scheme is also 
> hdfs, it faces a looping issue similar to the one we discussed in the design. We 
> need to use FsGetter to handle the looping. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13639) SlotReleaser is not fast enough

2020-05-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113526#comment-17113526
 ] 

Hudson commented on HDFS-13639:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18285 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18285/])
HDFS-13639. SlotReleaser is not fast enough (#1885) (github: rev 
be374faf429d28561dd9c582f5c55451213d89a4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DfsClientShmManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ShortCircuitRegistry.java


> SlotReleaser is not fast enough
> ---
>
> Key: HDFS-13639
> URL: https://issues.apache.org/jira/browse/HDFS-13639
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.4.0, 2.6.0, 3.0.2
> Environment: 1. YCSB:
>  recordcount=20
>  fieldcount=1
>  fieldlength=1000
>  operationcount=1000
>  
>  workload=com.yahoo.ycsb.workloads.CoreWorkload
>  
>  table=ycsb-test
>  columnfamily=C
>  readproportion=1
>  updateproportion=0
>  insertproportion=0
>  scanproportion=0
>  
>  maxscanlength=0
>  requestdistribution=zipfian
>  
>  # default 
>  readallfields=true
>  writeallfields=true
>  scanlengthdistribution=constant
> 2. datanode:
> -Xmx2048m -Xms2048m -Xmn1024m -XX:MaxDirectMemorySize=1024m 
> -XX:MaxPermSize=256m -Xloggc:$run_dir/stdout/datanode_gc_${start_time}.log 
> -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=$log_dir -XX:+PrintGCApplicationStoppedTime 
> -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled 
> -XX:+CMSClassUnloadingEnabled -XX:CMSMaxAbortablePrecleanTime=1 
> -XX:+CMSScavengeBeforeRemark -XX:+PrintPromotionFailure 
> -XX:+CMSConcurrentMTEnabled -XX:+ExplicitGCInvokesConcurrent 
> -XX:+SafepointTimeout -XX:MonitorBound=16384 -XX:-UseBiasedLocking 
> -verbose:gc -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps
> 3. regionserver:
> -Xmx10g -Xms10g -XX:MaxDirectMemorySize=10g 
> -XX:MaxGCPauseMillis=150 -XX:MaxTenuringThreshold=2 
> -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=5 
> -Xloggc:$run_dir/stdout/regionserver_gc_${start_time}.log -Xss256k 
> -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$log_dir -verbose:gc 
> -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime 
> -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy 
> -XX:+PrintTenuringDistribution -XX:+PrintSafepointStatistics 
> -XX:PrintSafepointStatisticsCount=1 -XX:PrintFLSStatistics=1 
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m 
> -XX:+SafepointTimeout -XX:MonitorBound=16384 -XX:-UseBiasedLocking 
> -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=65 
> -XX:+ParallelRefProcEnabled -XX:ConcGCThreads=4 -XX:ParallelGCThreads=16 
> -XX:G1HeapRegionSize=32m -XX:G1MixedGCCountTarget=64 
> -XX:G1OldCSetRegionThresholdPercent=5
> block cache is disabled:
>  hbase.bucketcache.size
>  0.9
>  
>Reporter: Gang Xie
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-13639-2.4.diff, HDFS-13639.001.patch, 
> HDFS-13639.002.patch, ShortCircuitCache_new_slotReleaser.diff, 
> perf_after_improve_SlotReleaser.png, perf_before_improve_SlotReleaser.png
>
>
> When testing the performance of HDFS ShortCircuit Read with YCSB, we 
> found that the SlotReleaser of the ShortCircuitCache has a performance issue. 
> The problem is that the QPS of slot releasing can only reach ~1000 
> while the QPS of slot allocating is ~3000. This means that the replica 
> info on the datanode cannot be released in time, which causes a lot of GCs and 
> finally full GCs.
>  
> The flame graph shows that the SlotReleaser spends a lot of time doing domain 
> socket connecting and throwing/catching exceptions when closing the domain 
> socket and its streams. It doesn't make any sense to do the connecting and 
> closing each time. Each time we connect to the domain socket, the Datanode 
> allocates a new thread to free the slot. There is a lot of initialization 
> work, and it's costly. We need to reuse the domain socket. 
>  
> After switching to reusing the domain socket (see the attached diff), we get a 
> great improvement (see the perf):
>  # without reusing the 

[jira] [Commented] (HDFS-15353) Use sudo instead of su to allow nologin user for secure DataNode

2020-05-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112699#comment-17112699
 ] 

Hudson commented on HDFS-15353:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18283 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18283/])
HDFS-15353. Use sudo instead of su to allow nologin user for secure (github: 
rev 1a3c6bb33b615242506a0313a24527ca51a3d665)
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> Use sudo instead of su to allow nologin user for secure DataNode
> 
>
> Key: HDFS-15353
> URL: https://issues.apache.org/jira/browse/HDFS-15353
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, security
>Reporter: Akira Ajisaka
>Assignee: Kei Kori
>Priority: Major
> Fix For: 3.4.0
>
>
> When launching a secure DataNode, the su command in hadoop-functions.sh fails if 
> the login shell of the secure user (hdfs) is /sbin/nologin. Can we use the sudo 
> command instead of su to fix this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15340) RBF: Implement BalanceProcedureScheduler basic framework

2020-05-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111703#comment-17111703
 ] 

Hudson commented on HDFS-15340:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18278 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18278/])
HDFS-15340. RBF: Implement BalanceProcedureScheduler basic framework. (yqlin: 
rev 1983eea62def58fb769f44c1d41dc29690274809)
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJournalInfoHDFS.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/RetryProcedure.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/WaitProcedure.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJournal.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/MultiPhaseProcedure.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceProcedureScheduler.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceJob.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceProcedure.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/RecordProcedure.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/BalanceProcedureConfigKeys.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/procedure/package-info.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/UnrecoverableProcedure.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/procedure/TestBalanceProcedureScheduler.java


> RBF: Implement BalanceProcedureScheduler basic framework
> 
>
> Key: HDFS-15340
> URL: https://issues.apache.org/jira/browse/HDFS-15340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15340.001.patch, HDFS-15340.002.patch, 
> HDFS-15340.003.patch, HDFS-15340.004.patch, HDFS-15340.005.patch, 
> HDFS-15340.006.patch, HDFS-15340.007.patch, HDFS-15340.008.patch
>
>
> The patch in HDFS-15294 is too big to review, so we split it into 2 patches. This 
> is the first one. Details can be found in HDFS-15294.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15293) Relax the condition for accepting a fsimage when receiving a checkpoint

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110550#comment-17110550
 ] 

Hudson commented on HDFS-15293:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18272 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18272/])
HDFS-15293. Relax the condition for accepting a fsimage when receiving a 
(vagarychen: rev 7bb902bc0d0c62d63a8960db444de3abb0a6ad22)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java


> Relax the condition for accepting a fsimage when receiving a checkpoint 
> 
>
> Key: HDFS-15293
> URL: https://issues.apache.org/jira/browse/HDFS-15293
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Critical
>  Labels: multi-sbnn, release-blocker
> Attachments: HDFS-15293.001.patch, HDFS-15293.002.patch
>
>
> HDFS-12979 introduced logic whereby, if the ANN sees consecutive fsimage uploads 
> from a Standby with a small delta compared to the previous fsimage, the ANN 
> rejects the image. This is to avoid overly frequent fsimage uploads when 
> there are multiple Standby nodes. However, this check can be too stringent.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14999) Avoid Potential Infinite Loop in DFSNetworkTopology

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110463#comment-17110463
 ] 

Hudson commented on HDFS-14999:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18271 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18271/])
HDFS-14999. Avoid Potential Infinite Loop in DFSNetworkTopology. (ayushsaxena: 
rev c84e6beada4e604175f7f138c9878a29665a8c47)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java


> Avoid Potential Infinite Loop in DFSNetworkTopology
> ---
>
> Key: HDFS-14999
> URL: https://issues.apache.org/jira/browse/HDFS-14999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-14999-01.patch
>
>
> {code:java}
> do {
>   chosen = chooseRandomWithStorageTypeAndExcludeRoot(root, excludeRoot,
>   type);
>   if (excludedNodes == null || !excludedNodes.contains(chosen)) {
> break;
>   } else {
> LOG.debug("Node {} is excluded, continuing.", chosen);
>   }
> } while (true);
> {code}
> Observed this loop getting stuck as part of testing HDFS-14913.
> There should be some exit condition or a maximum number of retries here.
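
A minimal sketch of the bounded-retry shape suggested above, reusing the names from the snippet (the retry limit is an illustrative choice):

{code:java}
// Bound the retries so an exhaustive exclude set cannot spin forever.
final int maxRetries = 3;  // illustrative limit
Node chosen = null;
for (int i = 0; i < maxRetries; i++) {
  Node candidate = chooseRandomWithStorageTypeAndExcludeRoot(
      root, excludeRoot, type);
  if (excludedNodes == null || !excludedNodes.contains(candidate)) {
    chosen = candidate;
    break;
  }
  LOG.debug("Node {} is excluded, continuing.", candidate);
}
// chosen == null here means every attempt hit the exclude set.
{code}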



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15202) HDFS-client: boost ShortCircuit Cache

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110449#comment-17110449
 ] 

Hudson commented on HDFS-15202:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18270 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18270/])
Revert "HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016)" 
(weichiu: rev 4525292d41482330a86f1cc3935e072f9f67c308)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016) (weichiu: rev 
2abcf7762ae74b936e1cedb60d5d2b4cc4ee86ea)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java


> HDFS-client: boost ShortCircuit Cache
> -
>
> Key: HDFS-15202
> URL: https://issues.apache.org/jira/browse/HDFS-15202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
> Environment: 4 nodes E5-2698 v4 @ 2.20GHz, 700 Gb Mem.
> 8 RegionServers (2 by host)
> 8 tables by 64 regions by 1.88 Gb data in each = 900 Gb total
> Random read in 800 threads via YCSB and a little bit updates (10% of reads)
>Reporter: Danil Lipovoy
>Assignee: Danil Lipovoy
>Priority: Minor
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15202-Addendum-01.patch, HDFS_CPU_full_cycle.png, 
> cpu_SSC.png, cpu_SSC2.png, hdfs_cpu.png, hdfs_reads.png, hdfs_scc_3_test.png, 
> hdfs_scc_test_full-cycle.png, locks.png, requests_SSC.png
>
>
> I want to propose a way to improve the reading performance of the HDFS client. The 
> idea: create a few instances of the ShortCircuit cache instead of one. 
> The key points:
> 1. Create an array of caches (set by 
> clientShortCircuitNum=*dfs.client.short.circuit.num*, see the pull 
> requests below):
> {code:java}
> private ClientContext(String name, DfsClientConf conf, Configuration config) {
> ...
> shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
> for (int i = 0; i < this.clientShortCircuitNum; i++) {
>   this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
> }
> {code}
> 2. Then divide blocks among the caches:
> {code:java}
>   public ShortCircuitCache getShortCircuitCache(long idx) {
> return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
>   }
> {code}
> 3. And how to call it:
> {code:java}
> ShortCircuitCache cache = 
> clientContext.getShortCircuitCache(block.getBlockId());
> {code}
> The last digit of the block ID is evenly distributed from 0 to 9, which is why all 
> caches fill at approximately the same rate.
> This is good for performance. The attachments below show a load test reading 
> HDFS via HBase with clientShortCircuitNum = 1 vs 3. We can see that 
> performance grows by ~30%, with CPU usage up about 15%. 
> Hope it is interesting for someone.
> Ready to explain some unobvious things.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To 

[jira] [Commented] (HDFS-15207) VolumeScanner skip to scan blocks accessed during recent scan period

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110393#comment-17110393
 ] 

Hudson commented on HDFS-15207:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18269 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18269/])
HDFS-15207. VolumeScanner skip to scan blocks accessed during recent (weichiu: 
rev 50caba1a92cb36ce78307d47ed7624ce216562fc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java


> VolumeScanner skip to scan blocks accessed during recent scan period
> 
>
> Key: HDFS-15207
> URL: https://issues.apache.org/jira/browse/HDFS-15207
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15207.002.patch, HDFS-15207.003.patch, 
> HDFS-15207.004.patch, HDFS-15207.005.patch, HDFS-15207.006.patch, 
> HDFS-15207.007.patch, HDFS-15207.008.patch, HDFS-15207.patch, HDFS-15207.patch
>
>
> Check the access time of the block file to avoid scanning recently accessed 
> blocks, reducing disk IO.
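
A minimal java.nio sketch of the access-time check described (illustrative; note that atime is only meaningful if the underlying filesystem tracks it):

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.attribute.BasicFileAttributes;

// Skip a block whose file was accessed within the current scan period.
boolean recentlyAccessed(File blockFile, long scanPeriodMs)
    throws IOException {
  BasicFileAttributes attrs = Files.readAttributes(
      blockFile.toPath(), BasicFileAttributes.class);
  long lastAccessMs = attrs.lastAccessTime().toMillis();
  return System.currentTimeMillis() - lastAccessMs < scanPeriodMs;
}
{code}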



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110327#comment-17110327
 ] 

Hudson commented on HDFS-13183:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18268 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18268/])
HDFS-13183. Standby NameNode process getBlocks request to reduce Active 
(weichiu: rev a3f44dacc1fa19acc4eefd1e2505e54f8629e603)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java


> Standby NameNode process getBlocks request to reduce Active load
> 
>
> Key: HDFS-13183
> URL: https://issues.apache.org/jira/browse/HDFS-13183
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer  mover, namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-13183-trunk.001.patch, HDFS-13183-trunk.002.patch, 
> HDFS-13183-trunk.003.patch, HDFS-13183.004.patch, HDFS-13183.005.patch, 
> HDFS-13183.006.patch, HDFS-13183.007.patch
>
>
> The performance of the Active NameNode can degrade when the {{Balancer}} 
> requests #getBlocks, since querying the blocks of overly full DNs is 
> currently extremely inefficient. The main reason is that 
> {{NameNodeRpcServer#getBlocks}} holds the read lock for a long time. In the 
> extreme case, all handlers of the Active NameNode RPC server are occupied by 
> one {{NameNodeRpcServer#getBlocks}} reader plus other write operation calls, 
> so the Active NameNode appears unresponsive for seconds or even minutes.
> Similar performance concerns about the Balancer were reported by HDFS-9412, 
> HDFS-7967, etc.
> If the Standby NameNode can shoulder the heavy #getBlocks burden, it could 
> speed up balancing and reduce the performance impact on the Active NameNode.
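
A minimal sketch of the idea, assuming NameNode HA states can be queried through org.apache.hadoop.ha.HAServiceProtocol; the chooser class is a hypothetical illustration, not the committed NameNodeConnector change:

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

public class GetBlocksTargetChooser {
  /**
   * Prefer a STANDBY NameNode for the read-only #getBlocks call so the
   * ACTIVE's read lock is not held on behalf of the Balancer.
   * Assumes a non-empty list of NameNode proxies.
   */
  public static HAServiceProtocol choose(List<HAServiceProtocol> namenodes)
      throws IOException {
    for (HAServiceProtocol nn : namenodes) {
      if (nn.getServiceStatus().getState() == HAServiceState.STANDBY) {
        return nn;
      }
    }
    // No standby available: fall back to the first proxy (typically ACTIVE).
    return namenodes.get(0);
  }
}
{code}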



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15202) HDFS-client: boost ShortCircuit Cache

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110319#comment-17110319
 ] 

Hudson commented on HDFS-15202:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18267 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18267/])
HDFS-15202 Boost short circuit cache (rebase PR-1884) (#2016) (github: rev 
86e6aa8eec538e142044e2b6415ec1caff5e9cbd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java


> HDFS-client: boost ShortCircuit Cache
> -
>
> Key: HDFS-15202
> URL: https://issues.apache.org/jira/browse/HDFS-15202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
> Environment: 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB Mem.
> 8 RegionServers (2 per host)
> 8 tables × 64 regions × 1.88 GB data each = 900 GB total
> Random reads in 800 threads via YCSB plus some updates (10% of reads)
>Reporter: Danil Lipovoy
>Assignee: Danil Lipovoy
>Priority: Minor
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS_CPU_full_cycle.png, cpu_SSC.png, cpu_SSC2.png, 
> hdfs_cpu.png, hdfs_reads.png, hdfs_scc_3_test.png, 
> hdfs_scc_test_full-cycle.png, locks.png, requests_SSC.png
>
>
> I want to propose a way to improve the read performance of the HDFS client. 
> The idea: create a few ShortCircuitCache instances instead of one.
> The key points:
> 1. Create an array of caches (sized by 
> clientShortCircuitNum=*dfs.client.short.circuit.num*, see the pull 
> requests below):
> {code:java}
> private ClientContext(String name, DfsClientConf conf, Configuration config) {
> ...
>   // One cache per slot; blocks are spread across them to reduce contention.
>   shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
>   for (int i = 0; i < this.clientShortCircuitNum; i++) {
>     this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
>   }
> }
> {code}
> 2. Then divide blocks among the caches by block ID:
> {code:java}
>   public ShortCircuitCache getShortCircuitCache(long idx) {
>     return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
>   }
> {code}
> 3. And how to call it:
> {code:java}
> ShortCircuitCache cache =
>     clientContext.getShortCircuitCache(block.getBlockId());
> {code}
> The last digit of a block ID is evenly distributed from 0 to 9, so all the 
> caches fill to approximately the same level.
> This is good for performance. The attached load test reads HDFS via HBase 
> with clientShortCircuitNum = 1 vs 3: throughput grows by ~30% while CPU 
> usage increases by about 15%.
> Hope this is interesting for someone.
> Ready to explain some unobvious things.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15082) RBF: Check each component length of destination path when add/update mount entry

2020-05-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109498#comment-17109498
 ] 

Hudson commented on HDFS-15082:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18263 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18263/])
HDFS-15082. RBF: Check each component length of destination path when 
(ayushsaxena: rev a3809d202300ce39c75e909fbc4640635dc334bc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml


> RBF: Check each component length of destination path when add/update mount 
> entry
> 
>
> Key: HDFS-15082
> URL: https://issues.apache.org/jira/browse/HDFS-15082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15082.001.patch, HDFS-15082.002.patch, 
> HDFS-15082.003.patch, HDFS-15082.004.patch, HDFS-15082.005.patch
>
>
> When adding or updating a mount entry, a component of the destination path 
> could exceed the filesystem's path component length limit (see the NameNode's 
> `dfs.namenode.fs-limits.max-component-length`). So we should check the length 
> of each component of the destination path when adding or updating a mount 
> entry on the Router side.
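
A sketch of what such a Router-side check could look like; the class and method names are hypothetical, and the limit is counted in UTF-8 bytes to mirror the NameNode's semantics:

{code:java}
import java.nio.charset.StandardCharsets;

public final class MountPathValidator {
  private MountPathValidator() {}

  /**
   * Rejects a destination path if any "/"-separated component exceeds the
   * limit, mirroring the NameNode's dfs.namenode.fs-limits.max-component-length
   * check (which counts UTF-8 bytes).
   */
  public static void checkComponentLengths(String dest, int maxComponentLength) {
    for (String component : dest.split("/")) {
      int length = component.getBytes(StandardCharsets.UTF_8).length;
      if (length > maxComponentLength) {
        throw new IllegalArgumentException("Component '" + component
            + "' of destination path exceeds the limit of "
            + maxComponentLength + " bytes");
      }
    }
  }
}
{code}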



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15358) RBF: Unify router datanode UI with namenode datanode UI

2020-05-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109235#comment-17109235
 ] 

Hudson commented on HDFS-15358:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18262 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18262/])
HDFS-15358. RBF: Unify router datanode UI with namenode datanode UI. 
(ayushsaxena: rev 6e416a83d1e674ecd018d1db74a2d88e738deb40)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> RBF: Unify router datanode UI with namenode datanode UI
> ---
>
> Key: HDFS-15358
> URL: https://issues.apache.org/jira/browse/HDFS-15358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15358-01.patch, HDFS-15358-02.patch, 
> RBF-After-01.png, RBF-After-02.png, RBF-After-03.png, RBF-Before.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15356) Unify configuration `dfs.ha.allow.stale.reads` to DFSConfigKeys

2020-05-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109006#comment-17109006
 ] 

Hudson commented on HDFS-15356:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18261 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18261/])
HDFS-15356. Unify configuration `dfs.ha.allow.stale.reads` to (ayushsaxena: rev 
178336f8a8bb291eb355bede729082f2f0382216)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> Unify configuration `dfs.ha.allow.stale.reads` to DFSConfigKeys
> ---
>
> Key: HDFS-15356
> URL: https://issues.apache.org/jira/browse/HDFS-15356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15356.001.patch, HDFS-15356.002.patch, 
> HDFS-15356.003.patch
>
>
> Define the configuration key `dfs.ha.allow.stale.reads` in DFSConfigKeys, 
> and document its default value and description in hdfs-default.xml.
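
A sketch of what the unification amounts to; the constant names follow the DFSConfigKeys convention, but the default of false is an assumption here, with hdfs-default.xml carrying the matching description:

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class StaleReadsConfig {
  // Sketch of the unified constants; the default value is assumed, not verified.
  public static final String DFS_HA_ALLOW_STALE_READS_KEY =
      "dfs.ha.allow.stale.reads";
  public static final boolean DFS_HA_ALLOW_STALE_READS_DEFAULT = false;

  private StaleReadsConfig() {}

  /** Call sites then read the key through the shared constants. */
  public static boolean allowStaleReads(Configuration conf) {
    return conf.getBoolean(
        DFS_HA_ALLOW_STALE_READS_KEY, DFS_HA_ALLOW_STALE_READS_DEFAULT);
  }
}
{code}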



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15306) Make mount-table to read from central place ( Let's say from HDFS)

2020-05-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107809#comment-17107809
 ] 

Hudson commented on HDFS-15306:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18260 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18260/])
HDFS-15306. Make mount-table to read from central place ( Let's say from 
(github: rev ac4a2e11d98827c7926a34cda27aa7bcfd3f36c1)
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/MountTableConfigLoader.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeLocalFileSystem.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFSOverloadSchemeWithMountTableConfigInHDFS.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestHCFSMountTableConfigLoader.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFSOverloadSchemeCentralMountTableConfig.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/package-info.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java


> Make mount-table to read from central place ( Let's say from HDFS)
> --
>
> Key: HDFS-15306
> URL: https://issues.apache.org/jira/browse/HDFS-15306
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: configuration, hadoop-client
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> ViewFsOverloadScheme should be able to read mount-table.xml configuration 
> from remote servers.
>  Below are the options discussed in the design doc:
>  # XInclude and HTTP Server (including WebHDFS)
>  # Hadoop Compatible FS (*HCFS*)
>  a) Keep the mount-table in a Hadoop compatible FS
>  b) Read the mount-table from a Hadoop compatible FS using XInclude
> We prefer to have 1 and 2a. For 1 we don't need to modify any code, so this 
> Jira can cover 2a.
>  
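
For option 2a, a minimal sketch of loading mount-table.xml from any Hadoop-compatible filesystem using the standard Configuration.addResource(InputStream) and FileSystem.open APIs; the loader class name is illustrative:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CentralMountTableLoader {
  /**
   * Reads mount-table.xml from a central HCFS location (e.g. an HDFS path)
   * and merges its properties into the given Configuration.
   */
  public static void load(Configuration conf, String mountTableUri)
      throws IOException {
    Path path = new Path(mountTableUri);
    FileSystem fs = path.getFileSystem(conf);
    // addResource(InputStream) parses the stream as a Hadoop config XML file.
    conf.addResource(fs.open(path));
  }
}
{code}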



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15344) DataNode#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442

2020-05-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106586#comment-17106586
 ] 

Hudson commented on HDFS-15344:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18252 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18252/])
HDFS-15344. DataNode#checkSuperuserPrivilege should use UGI#getGroups (github: 
rev 3cacf1ce565ce151524d4f61ab7c01a718eb534d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DataNode#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442
> 
>
> Key: HDFS-15344
> URL: https://issues.apache.org/jira/browse/HDFS-15344
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-13442 added UGI#getGroups to avoid list->array->list conversions. This 
> ticket is opened to change DataNode#checkSuperuserPrivilege to use 
> UGI#getGroups. 
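
For context, UGI#getGroups returns a List<String> directly, so a membership test needs no array round-trip. A hedged sketch of a checkSuperuserPrivilege-style check, with the superuser group name passed in as a placeholder:

{code:java}
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

public class SuperuserCheck {
  /**
   * Sketch of a checkSuperuserPrivilege-style check; superGroup stands in
   * for the configured superuser group.
   */
  public static void checkSuperuserPrivilege(UserGroupInformation ugi,
      String superGroup) throws AccessControlException {
    // Before HADOOP-13442: iterate ugi.getGroupNames() (a String[]).
    // After: getGroups() returns List<String>, no list->array->list hop.
    if (!ugi.getGroups().contains(superGroup)) {
      throw new AccessControlException(
          "Access denied for user " + ugi.getShortUserName()
          + ". Superuser privilege is required");
    }
  }
}
{code}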



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15316) Deletion failure should not remove directory from snapshottables

2020-05-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106173#comment-17106173
 ] 

Hudson commented on HDFS-15316:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18250 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18250/])
HDFS-15316. Deletion failure should not remove directory from (surendralilhore: 
rev 743c2e9071f4a73e0196ad4ca005b767758642b9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeleteRace.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java


> Deletion failure should not remove directory from snapshottables
> 
>
> Key: HDFS-15316
> URL: https://issues.apache.org/jira/browse/HDFS-15316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15316.001.patch, HDFS-15316.002.patch
>
>
> If deleting a directory does not succeed, we still remove the directory from 
> the snapshottables list. This makes the system inconsistent: we can still 
> create snapshots, but snapshot diff throws "Directory is not snapshottable".
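
The fix is essentially a matter of ordering: drop the snapshottable entry only after the delete has succeeded. A sketch with hypothetical interfaces, not the actual FSDirDeleteOp code:

{code:java}
public class DeleteWithSnapshottableCleanup {
  interface Namespace {
    boolean delete(String dir); // returns true only if deletion succeeded
    void removeFromSnapshottables(String dir);
  }

  /** Only drop the snapshottable entry once the delete has succeeded. */
  public static boolean delete(Namespace ns, String dir) {
    boolean deleted = ns.delete(dir);
    if (deleted) {
      ns.removeFromSnapshottables(dir);
    }
    return deleted;
  }
}
{code}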



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15300) RBF: updateActiveNamenode() is invalid when RPC address is IP

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105585#comment-17105585
 ] 

Hudson commented on HDFS-15300:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18242 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18242/])
HDFS-15300. RBF: updateActiveNamenode() is invalid when RPC address is 
(ayushsaxena: rev 936bf09c3745cfec26fa9cfa0562f88b1f8be133)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java


> RBF: updateActiveNamenode() is invalid when RPC address is IP
> -
>
> Key: HDFS-15300
> URL: https://issues.apache.org/jira/browse/HDFS-15300
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15300-001.patch, HDFS-15300-002.patch
>
>
> ActiveNamenodeResolver#updateActiveNamenode is ineffective when the RPC 
> address is given as ip:port rather than hostname:port.
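
One way the mismatch can be made robust, sketched with standard java.net APIs as an illustration of the problem rather than the committed NetUtils change: resolving both sides before comparing makes ip:port and hostname:port forms compare equal.

{code:java}
import java.net.InetSocketAddress;

public class RpcAddressMatcher {
  /**
   * Resolves both addresses so that "10.0.0.1:8020" and "nn1.example.com:8020"
   * compare equal when they point at the same host.
   */
  public static boolean sameAddress(String a, String b) {
    return resolve(a).equals(resolve(b));
  }

  private static InetSocketAddress resolve(String hostPort) {
    int colon = hostPort.lastIndexOf(':');
    String host = hostPort.substring(0, colon);
    int port = Integer.parseInt(hostPort.substring(colon + 1));
    // Triggers DNS resolution; equality then compares the resolved IPs.
    return new InetSocketAddress(host, port);
  }
}
{code}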



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15345) RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105547#comment-17105547
 ] 

Hudson commented on HDFS-15345:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18240 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18240/])
HDFS-15345. RouterPermissionChecker#checkSuperuserPrivilege should use (github: 
rev 047d8879e7a1bf4dbf6b99815a78b384cd5d514c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterPermissionChecker.java


> RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups 
> after HADOOP-13442
> 
>
> Key: HDFS-15345
> URL: https://issues.apache.org/jira/browse/HDFS-15345
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-13442 added UGI#getGroups to avoid list->array->list conversions. This 
> ticket is opened to change  RouterPermissionChecker#checkSuperuserPrivilege 
> to use UGI#getGroups. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


