[GitHub] [hadoop] dhirajh opened a new pull request #1964: HDFS-15281: Make sure ZKFC uses dfs.namenode.rpc-address to bind to host address

2020-04-18 Thread GitBox
dhirajh opened a new pull request #1964: HDFS-15281: Make sure ZKFC uses 
dfs.namenode.rpc-address to bind to host address
URL: https://github.com/apache/hadoop/pull/1964
 
 
   Details of the issue are here: 
https://issues.apache.org/jira/browse/HDFS-15281
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086690#comment-17086690
 ] 

Hadoop QA commented on HADOOP-16959:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
48s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
54s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} branch-3.3 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} branch/hadoop-project no findbugs output file 
(findbugsXml.xml) {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} 
branch/hadoop-cloud-storage-project/hadoop-cloud-storage no findbugs output 
file (findbugsXml.xml) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-cloud-storage-project/hadoop-cos in branch-3.3 
has 5 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
14s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 14s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} hadoop-project has no data from findbugs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-cloud-storage-project/hadoop-cos generated 0 
new + 1 unchanged - 4 fixed = 1 total (was 5) {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} hadoop-cloud-storage-project/hadoop-cloud-storage has 
no data from findbugs {color} |

[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-18 Thread YangY (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086669#comment-17086669
 ] 

YangY commented on HADOOP-16959:


Copy the dependencies of the Hadoop-cos into the ${project.build}/lib 
directory, and output the dependency list to the 
${project.build}/hadoop-cloud-storage-deps directory.
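
A minimal sketch of how that could be wired up with the Maven Dependency
Plugin (the `copy-dependencies` and `list` goals are real plugin goals, but
the execution ids, phase, and output paths below are illustrative assumptions,
not the patch's actual configuration):

```xml
<!-- Hypothetical pom.xml fragment: copy runtime deps into target/lib and
     write the dependency list for hadoop-cloud-storage packaging. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-deps</id>
      <phase>package</phase>
      <goals><goal>copy-dependencies</goal></goals>
      <configuration>
        <outputDirectory>${project.build.directory}/lib</outputDirectory>
      </configuration>
    </execution>
    <execution>
      <id>list-deps</id>
      <phase>package</phase>
      <goals><goal>list</goal></goals>
      <configuration>
        <outputFile>${project.build.directory}/hadoop-cloud-storage-deps/dependencies.txt</outputFile>
      </configuration>
    </execution>
  </executions>
</plugin>
```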

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-18 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16959:
---
Attachment: HADOOP-16959-branch-3.3.005.patch

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.






[GitHub] [hadoop] goiri merged pull request #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-18 Thread GitBox
goiri merged pull request #1954: HDFS-15217 Add more information to longest 
write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954
 
 
   





[GitHub] [hadoop] goiri commented on issue #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-18 Thread GitBox
goiri commented on issue #1954: HDFS-15217 Add more information to longest 
write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#issuecomment-615947142
 
 
   Thanks @brfrn169 for the benchmark, I think this is safe enough.
   We've also been on this for a while so I guess nobody else has concerns.
   I'll go ahead and merge this.





[jira] [Commented] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086635#comment-17086635
 ] 

Hadoop QA commented on HADOOP-17000:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
10s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16898/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17000 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13000378/HADOOP-17000.01.patch 
|
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 62ad199d9bbb 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5576915 |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16898/testReport/ |
| Max. process+thread count | 3233 (vs. ulimit of 5500) |
| 

[jira] [Commented] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086584#comment-17086584
 ] 

Hadoop QA commented on HADOOP-17000:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m 10s{color} 
| {color:red} Unprocessed flag(s): --jenkins --skip-dir {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16897/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17000 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13000378/HADOOP-17000.01.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16897/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> [Test] Use Yetus 0.12.0 in precommit job
> 
>
> Key: HADOOP-17000
> URL: https://issues.apache.org/jira/browse/HADOOP-17000
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-17000.01.patch
>
>







[jira] [Updated] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17000:
---
Status: Patch Available  (was: Open)

Test patch to kick hadoop precommit job.

> [Test] Use Yetus 0.12.0 in precommit job
> 
>
> Key: HADOOP-17000
> URL: https://issues.apache.org/jira/browse/HADOOP-17000
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-17000.01.patch
>
>







[jira] [Updated] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17000:
---
Attachment: HADOOP-17000.01.patch

> [Test] Use Yetus 0.12.0 in precommit job
> 
>
> Key: HADOOP-17000
> URL: https://issues.apache.org/jira/browse/HADOOP-17000
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-17000.01.patch
>
>







[jira] [Created] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-18 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17000:
--

 Summary: [Test] Use Yetus 0.12.0 in precommit job
 Key: HADOOP-17000
 URL: https://issues.apache.org/jira/browse/HADOOP-17000
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka









[jira] [Updated] (HADOOP-16944) Use Yetus 0.12.0 in GitHub PR

2020-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16944:
---
Summary: Use Yetus 0.12.0 in GitHub PR  (was: Use Yetus 0.12.0-SNAPSHOT for 
precommit jobs)

> Use Yetus 0.12.0 in GitHub PR
> -
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, ubuntu 18.04 brings maven 3.6.0 by default and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957 and upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> if released) will fix the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org






[jira] [Commented] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-18 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086573#comment-17086573
 ] 

Akira Ajisaka commented on HADOOP-16944:


I'll cherry-pick this to the lower branches after branch-3.3.0 is cut.

> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, ubuntu 18.04 brings maven 3.6.0 by default and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957 and upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> if released) will fix the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org






[jira] [Commented] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086566#comment-17086566
 ] 

Hudson commented on HADOOP-16944:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18162 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18162/])
HADOOP-16944. Use Yetus 0.12.0 in GitHub PR (#1917) (github: rev 
5576915236aba172cb5ab49b43111661590058af)
* (edit) Jenkinsfile


> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, ubuntu 18.04 brings maven 3.6.0 by default and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957 and upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> if released) will fix the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org






[jira] [Commented] (HADOOP-16971) testFileContextResolveAfs creates dangling link and fails for subsequent runs

2020-04-18 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086564#comment-17086564
 ] 

Ctest commented on HADOOP-16971:


Yes, I pushed a patch that ensures the symlink is deleted before the original 
file. That way, when the test finishes, both the file and the symlink are 
deleted successfully and the second run is not affected.
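
The underlying behavior is easy to reproduce with java.nio.file directly:
Files.exists follows links by default, so a dangling link looks nonexistent
even though the link entry is still on disk. A minimal sketch on a POSIX
system (not the test's actual code; the file names merely echo the report):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;

public class DanglingLinkDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("symlink-demo");
        Path target = dir.resolve("TestFileContextResolveAfs1");
        Path link = dir.resolve("TestFileContextResolveAfs2");
        Files.createFile(target);
        Files.createSymbolicLink(link, target);

        // Deleting the target first leaves the link dangling.
        Files.delete(target);
        // Files.exists follows the link, so the dangling link "does not exist"...
        System.out.println(Files.exists(link));                            // false
        // ...even though the link entry itself is still present:
        System.out.println(Files.exists(link, LinkOption.NOFOLLOW_LINKS)); // true

        // Deleting the link before (or instead of) the target avoids the problem;
        // Files.delete operates on the link itself, not its target.
        Files.delete(link);
        Files.deleteIfExists(target);
        Files.delete(dir);
    }
}
```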

> testFileContextResolveAfs creates dangling link and fails for subsequent runs
> -
>
> Key: HADOOP-16971
> URL: https://issues.apache.org/jira/browse/HADOOP-16971
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, fs, symlink, test
> Attachments: HADOOP-16971.000.patch
>
>
> In the test testFileContextResolveAfs, the symlink TestFileContextResolveAfs2 
> (linked to TestFileContextResolveAfs1) cannot be deleted when the test 
> finishes.
> This is because TestFileContextResolveAfs1 was always deleted before 
> TestFileContextResolveAfs2 when they were both passed into 
> FileSystem#deleteOnExit. This caused TestFileContextResolveAfs2 to become a 
> dangling link, which FileSystem in Hadoop currently cannot delete. (This is 
> because Files#exists will return false for dangling links.)
> As a result, the test `testFileContextResolveAfs` only passes on the first 
> run; subsequent runs fail by throwing the following exception: 
> {code:java}
> fs.FileUtil (FileUtil.java:symLink(821)) - Command 'ln -s 
> mypath/TestFileContextResolveAfs1 mypath/TestFileContextResolveAfs2' failed 1 
> with: ln: mypath/TestFileContextResolveAfs2: File exists
> java.io.IOException: Error 1 creating symlink 
> file:mypath/TestFileContextResolveAfs2 to mypath/TestFileContextResolveAfs1
> {code}






[jira] [Commented] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-18 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086552#comment-17086552
 ] 

Akira Ajisaka commented on HADOOP-16944:


Merged the PR into trunk. Thanks [~ayushtkn] for the review.

> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, ubuntu 18.04 brings maven 3.6.0 by default and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957 and upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> if released) will fix the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org






[jira] [Updated] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16944:
---
Fix Version/s: 3.4.0

> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, ubuntu 18.04 brings maven 3.6.0 by default and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957 and upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> if released) will fix the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org






[GitHub] [hadoop] aajisaka merged pull request #1917: HADOOP-16944. Use Yetus 0.12.0 in GitHub PR

2020-04-18 Thread GitBox
aajisaka merged pull request #1917: HADOOP-16944. Use Yetus 0.12.0 in GitHub PR
URL: https://github.com/apache/hadoop/pull/1917
 
 
   





[GitHub] [hadoop] brfrn169 edited a comment on issue #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-18 Thread GitBox
brfrn169 edited a comment on issue #1954: HDFS-15217 Add more information to 
longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#issuecomment-615893744
 
 
   I did microbenchmarking with JMH. The benchmark code is as follows:
   https://gist.github.com/brfrn169/6a57175d934734b2a2c36652925ffdf6
   
   The results are as follows:
   
   Before applying the patch:
   ```
   Benchmark                        Mode  Cnt    Score   Error  Units
   FSNamesystemBenchmark.benchmark  thrpt   25  567.924 ± 3.184  ops/s
   ```
   
   After applying the patch:
   ```
   Benchmark                        Mode  Cnt    Score    Error  Units
   FSNamesystemBenchmark.benchmark  thrpt   25  573.884 ± 11.171  ops/s
   ```
   
   It looks like there isn't any performance regression after applying the 
patch.
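
One way to read those two results: the ±Error columns are JMH confidence
intervals, and the intervals overlap, so the difference is within measurement
noise. A quick check of that reasoning (interval overlap is a rough heuristic,
not JMH's own statistical comparison; the numbers are taken from the tables
above):

```java
public class ThroughputCompare {
    // True when the two score ± error intervals overlap, i.e. the
    // benchmark does not show a clear difference between the runs.
    static boolean intervalsOverlap(double score1, double err1,
                                    double score2, double err2) {
        double lo1 = score1 - err1, hi1 = score1 + err1;
        double lo2 = score2 - err2, hi2 = score2 + err2;
        return lo1 <= hi2 && lo2 <= hi1;
    }

    public static void main(String[] args) {
        // Before: 567.924 ± 3.184 ops/s; after: 573.884 ± 11.171 ops/s.
        System.out.println(intervalsOverlap(567.924, 3.184, 573.884, 11.171));
    }
}
```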





[GitHub] [hadoop] brfrn169 commented on issue #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-18 Thread GitBox
brfrn169 commented on issue #1954: HDFS-15217 Add more information to longest 
write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#issuecomment-615893744
 
 
   I did microbenchmarking with JMH. The code is as follows:
   https://gist.github.com/brfrn169/6a57175d934734b2a2c36652925ffdf6
   
   The results are as follows:
   
   Before applying the patch:
   ```
   Benchmark                        Mode  Cnt    Score   Error  Units
   FSNamesystemBenchmark.benchmark  thrpt   25  567.924 ± 3.184  ops/s
   ```
   
   After applying the patch:
   ```
   Benchmark                        Mode  Cnt    Score    Error  Units
   FSNamesystemBenchmark.benchmark  thrpt   25  573.884 ± 11.171  ops/s
   ```
   
   It looks like there isn't any performance regression after applying the 
patch.





[jira] [Created] (HADOOP-16999) ABFS: Reuser DSAS fetched in ABFS Input and Output stream

2020-04-18 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-16999:
--

 Summary: ABFS: Reuser DSAS fetched in ABFS Input and Output stream
 Key: HADOOP-16999
 URL: https://issues.apache.org/jira/browse/HADOOP-16999
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2.1
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan


This Jira will track the update where ABFS input and output streams can re-use 
the fetched D-SAS token. If the SAS is within 1 minute of expiry, ABFS will 
request a new SAS. When the stream is closed, the SAS will be released.
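As a hedged sketch of that refresh policy (the class name, token values, and 
timings below are illustrative assumptions, not the ABFS implementation):

```java
import java.time.Duration;
import java.time.Instant;

public class SasCacheDemo {
  // Refresh the cached token once it is within one minute of expiry,
  // which is the policy described in the issue.
  static final Duration REFRESH_WINDOW = Duration.ofMinutes(1);

  static String cachedToken = "sas-1";
  static Instant cachedExpiry = Instant.now().plusSeconds(30); // almost expired

  static String getToken() {
    if (!Instant.now().plus(REFRESH_WINDOW).isBefore(cachedExpiry)) {
      cachedToken = "sas-2"; // stands in for fetching a fresh SAS
      cachedExpiry = Instant.now().plusSeconds(3600);
    }
    return cachedToken;
  }

  public static void main(String[] args) {
    System.out.println(getToken()); // refreshed: old token was 30s from expiry
    System.out.println(getToken()); // reused: new token is still fresh
  }
}
```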



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16999) ABFS: Reuse DSAS fetched in ABFS Input and Output stream

2020-04-18 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-16999:
---
Summary: ABFS: Reuse DSAS fetched in ABFS Input and Output stream  (was: 
ABFS: Reuser DSAS fetched in ABFS Input and Output stream)

> ABFS: Reuse DSAS fetched in ABFS Input and Output stream
> 
>
> Key: HADOOP-16999
> URL: https://issues.apache.org/jira/browse/HADOOP-16999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> This Jira will track the update where ABFS input and output streams can 
> re-use the fetched D-SAS token. If the SAS is within 1 minute of expiry, ABFS 
> will request a new SAS. When the stream is closed, the SAS will be released. 






[jira] [Commented] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2020-04-18 Thread Luca Toscano (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086475#comment-17086475
 ] 

Luca Toscano commented on HADOOP-16647:
---

[~iwasakims] thanks a lot for the commit, I am wondering if this change could 
also be backported to 2.10.x. 

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Rakesh Radhakrishnan
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HADOOP-16647-00.patch, HADOOP-16647-01.patch, 
> HADOOP-16647-02.patch
>
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installation relies on the OpenSSL package provided by Linux 
> distros, but it's not clear to me if Linux distros are going support 
> 1.1.0/1.0.2 beyond this date.
> We should make sure Hadoop works with OpenSSL 1.1.1, as well as document the 
> openssl version supported. File this jira to test/document/fix bugs.






[jira] [Deleted] (HADOOP-16996) ---

2020-04-18 Thread Gavin McDonald (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gavin McDonald deleted HADOOP-16996:



> ---
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Maziar Mirzazad
>Priority: Minor
>
> --
>  






[jira] [Commented] (HADOOP-16971) testFileContextResolveAfs creates dangling link and fails for subsequent runs

2020-04-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086402#comment-17086402
 ] 

Ayush Saxena commented on HADOOP-16971:
---

bq.  but it accidentally made the symlink into a dangling link
Shouldn't we fix this then and correct the symlink?

> testFileContextResolveAfs creates dangling link and fails for subsequent runs
> -
>
> Key: HADOOP-16971
> URL: https://issues.apache.org/jira/browse/HADOOP-16971
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, fs, symlink, test
> Attachments: HADOOP-16971.000.patch
>
>
> In the test testFileContextResolveAfs, the symlink TestFileContextResolveAfs2 
> (linked to TestFileContextResolveAfs1) cannot be deleted when the test 
> finishes.
> This is because TestFileContextResolveAfs1 was always deleted before 
> TestFileContextResolveAfs2 when they were both passed into 
> FileSystem#deleteOnExit. This caused TestFileContextResolveAfs2 to become a 
> dangling link, which FileSystem in Hadoop currently cannot delete. (This is 
> because Files#exists will return false for dangling links.)
> As a result, the test `testFileContextResolveAfs` only passes for the first 
> run; later runs of this test fail by throwing the following exception: 
> {code:java}
> fs.FileUtil (FileUtil.java:symLink(821)) - Command 'ln -s 
> mypath/TestFileContextResolveAfs1 mypath/TestFileContextResolveAfs2' failed 1 
> with: ln: mypath/TestFileContextResolveAfs2: File exists
> java.io.IOException: Error 1 creating symlink 
> file:mypath/TestFileContextResolveAfs2 to mypath/TestFileContextResolveAfs1
> {code}
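The dangling-link behavior described above is easy to reproduce with plain 
java.nio (a self-contained sketch reusing the test's file names for 
illustration only; creating symlinks may require extra privileges on Windows):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;

public class DanglingLinkDemo {
  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("afs");
    Path target = dir.resolve("TestFileContextResolveAfs1");
    Path link = dir.resolve("TestFileContextResolveAfs2");

    Files.createFile(target);
    Files.createSymbolicLink(link, target);
    Files.delete(target); // the link is now dangling

    // Files#exists follows the link by default, so a dangling link looks
    // nonexistent and a delete-if-exists style cleanup silently skips it.
    System.out.println(Files.exists(link));                            // false
    System.out.println(Files.exists(link, LinkOption.NOFOLLOW_LINKS)); // true

    Files.delete(link); // deleting by the link path itself still works
    Files.delete(dir);
  }
}
```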






[jira] [Commented] (HADOOP-16977) in javaApi, UGI params should be overridden through FileSystem conf

2020-04-18 Thread Hongbing Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086367#comment-17086367
 ] 

Hongbing Wang commented on HADOOP-16977:


Thanks [~ste...@apache.org] for your detailed answers and suggestions. I think 
it should be fixed because our internal version uses `hadoop.user.name` for 
authentication, but that value cannot be passed to distcp.execute(). We could 
use `System.setProperty(k,v)` to solve the problem, but system properties take 
effect globally. For now, I will try to work around it in our business code. 
Thank you once again!

> in javaApi, UGI params should be overridden through FileSystem conf
> --
>
> Key: HADOOP-16977
> URL: https://issues.apache.org/jira/browse/HADOOP-16977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Hongbing Wang
>Priority: Major
> Attachments: HADOOP-16977.001.patch, HADOOP-16977.002.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
> load its configuration from the configuration files, as shown below:
> {code:java}
> private static void ensureInitialized() {
>   if (conf == null) {
> synchronized(UserGroupInformation.class) {
>   if (conf == null) { // someone might have beat us
> initialize(new Configuration(), false);
>   }
> }
>   }
> }{code}
> As a result, if a FileSystem is created through FileSystem#get or 
> FileSystem#newInstance with a conf, any conf values that differ from the 
> configuration files will not take effect in UserGroupInformation. E.g.:
> {code:java}
> Configuration conf = new Configuration();
> conf.set("k1","v1");
> conf.set("k2","v2");
> FileSystem fs = FileSystem.get(uri, conf);{code}
> "k1" or "k2" will not work in UserGroupInformation.
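The pattern can be sketched in a self-contained form (the class and values 
below are illustrative stand-ins, not Hadoop code): whichever call triggers the 
lazy initialization first captures the file-based defaults, and a conf supplied 
later is never routed to it.

```java
public class LazyInitDemo {
  // Mimics UserGroupInformation#ensureInitialized: the first caller wins,
  // and configuration supplied later is silently ignored.
  static class Settings {
    private static volatile String conf;

    static void ensureInitialized() {
      if (conf == null) {
        synchronized (Settings.class) {
          if (conf == null) { // someone might have beat us
            conf = "defaults-from-files"; // stands in for new Configuration()
          }
        }
      }
    }

    static String get() {
      ensureInitialized();
      return conf;
    }
  }

  public static void main(String[] args) {
    // The client "passes" its own conf to another API, but nothing routes
    // it into Settings, so every later read sees the captured defaults.
    String clientConf = "k1=v1,k2=v2";
    System.out.println(Settings.get());
    System.out.println(clientConf.equals(Settings.get()));
  }
}
```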






[jira] [Created] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get aborted

2020-04-18 Thread Anoop Sam John (Jira)
Anoop Sam John created HADOOP-16998:
---

 Summary: WASB : NativeAzureFsOutputStream#close() throwing 
java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get 
aborted
 Key: HADOOP-16998
 URL: https://issues.apache.org/jira/browse/HADOOP-16998
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Anoop Sam John


During HFile creation, when close() is called on the OutputStream at the end, 
there is some pending data still to be flushed. When this flush happens, an 
Exception is thrown back from Storage. The Azure-storage SDK layer will throw 
back an IOE. (Even if a StorageException is thrown from Storage, the SDK 
converts it to an IOE.) But at HBase, we end up getting an 
IllegalArgumentException, which causes the RS to get aborted. If we got back 
an IOE instead, the flush would be retried rather than aborting the RS.
The reason is this:
NativeAzureFsOutputStream uses the Azure-storage SDK's BlobOutputStreamInternal, 
but the BlobOutputStreamInternal is wrapped within a SyncableDataOutputStream, 
which is a FilterOutputStream. During the close op, NativeAzureFsOutputStream 
calls close on the SyncableDataOutputStream, which uses the following method 
from FilterOutputStream:
{code}
public void close() throws IOException {
  try (OutputStream ostream = out) {
    flush();
  }
}
{code}
Here the flush call causes an IOE to be thrown. The try-with-resources block 
then issues a close call on ostream (which is an instance of 
BlobOutputStreamInternal). When BlobOutputStreamInternal#close() is called, if 
any exception has already occurred on that Stream, it will throw back the same 
Exception:
{code}
public synchronized void close() throws IOException {
  try {
    // if the user has already closed the stream, this will throw a
    // STREAM_CLOSED exception
    // if an exception was thrown by any thread in the threadExecutor,
    // realize it now
    this.checkStreamState();
    ...
}

private void checkStreamState() throws IOException {
  if (this.lastError != null) {
    throw this.lastError;
  }
}
{code}
So here both the try block and the implicit close throw Exceptions, and Java 
uses Throwable#addSuppressed(). Within this method, if both Exceptions are the 
same object, it throws back an IllegalArgumentException:
{code}
public final synchronized void addSuppressed(Throwable exception) {
  if (exception == this)
    throw new IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception);
  ...
}
{code}
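The chain above can be reproduced without Azure at all. In this self-contained 
sketch, FailingStream is an illustrative stand-in for BlobOutputStreamInternal 
(not SDK code): the body's flush() and the implicit close() throw the same 
exception object, and the caller observes IllegalArgumentException:

```java
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {
  // Stand-in for BlobOutputStreamInternal: remembers the first error and
  // rethrows the very same object from close(), like checkStreamState().
  static class FailingStream extends OutputStream {
    private IOException lastError;

    @Override
    public void write(int b) { /* no-op */ }

    @Override
    public void flush() throws IOException {
      lastError = new IOException("flush failed");
      throw lastError;
    }

    @Override
    public void close() throws IOException {
      if (lastError != null) {
        throw lastError; // same object that flush() threw
      }
    }
  }

  public static void main(String[] args) {
    try {
      // Same shape as FilterOutputStream#close(): flush() inside
      // try-with-resources; close() runs implicitly afterwards.
      try (OutputStream ostream = new FailingStream()) {
        ostream.flush();
      }
    } catch (Throwable t) {
      // addSuppressed(t) with t == primary exception is self-suppression,
      // so the caller observes IllegalArgumentException, not the IOE.
      System.out.println(t.getClass().getSimpleName());
    }
  }
}
```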







[jira] [Commented] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats

2020-04-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086346#comment-17086346
 ] 

Hadoop QA commented on HADOOP-13435:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-13435 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13435 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875965/HADOOP-13435.004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16896/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add thread local mechanism for aggregating file system storage stats
> 
>
> Key: HADOOP-13435
> URL: https://issues.apache.org/jira/browse/HADOOP-13435
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Priority: Major
> Attachments: HADOOP-13435.000.patch, HADOOP-13435.001.patch, 
> HADOOP-13435.002.patch, HADOOP-13435.003.patch, HADOOP-13435.004.patch
>
>
> As discussed in [HADOOP-13032], this is to add thread local mechanism for 
> aggregating file system storage stats. This class will also be used in 
> [HADOOP-13031], which is to separate the distance-oriented rack-aware read 
> bytes logic from {{FileSystemStorageStatistics}} to new 
> DFSRackAwareStorageStatistics as it's DFS-specific. After this patch, the 
> {{FileSystemStorageStatistics}} can live without the to-be-removed 
> {{FileSystem$Statistics}} implementation.
> A unit test should also be added.
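One common shape for such thread-local aggregation is 
java.util.concurrent.atomic.LongAdder, which keeps per-thread cells internally 
and sums them on read. The sketch below is illustrative only, not the proposed 
patch:

```java
import java.util.concurrent.atomic.LongAdder;

public class ThreadLocalStatsDemo {
  // A per-filesystem byte counter: writers update their own cell with
  // almost no contention; readers pay the cost of summing on demand.
  static final LongAdder bytesRead = new LongAdder();

  public static void main(String[] args) throws InterruptedException {
    Thread[] readers = new Thread[4];
    for (int i = 0; i < readers.length; i++) {
      readers[i] = new Thread(() -> {
        for (int j = 0; j < 1000; j++) {
          bytesRead.add(10); // each simulated read accounts 10 bytes
        }
      });
      readers[i].start();
    }
    for (Thread t : readers) {
      t.join();
    }
    System.out.println(bytesRead.sum()); // 4 threads * 1000 reads * 10 bytes
  }
}
```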






[jira] [Assigned] (HADOOP-13031) Rack-aware read bytes stats should be managed by HDFS specific StorageStatistics

2020-04-18 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13031:
--

Target Version/s:   (was: 3.4.0)
Assignee: (was: Mingliang Liu)

After HADOOP-13032 gets in, let's revisit this.

> Rack-aware read bytes stats should be managed by HDFS specific 
> StorageStatistics
> 
>
> Key: HADOOP-13031
> URL: https://issues.apache.org/jira/browse/HADOOP-13031
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Priority: Major
>
> [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. 
> This jira is to refactor the code that maintains rack-aware read metrics to 
> use the newly added StorageStatistics. Specially,
> # Rack-aware read bytes metrics is mostly specific to HDFS. For example, 
> local file system doesn't need that. We consider to move it from base 
> FileSystemStorageStatistics to a dedicated HDFS specific StorageStatistics 
> sub-class.
> # We would have to develop an optimized thread-local mechanism to do this, to 
> avoid causing a performance regression in HDFS stream performance.
> Optionally, it would be better to simply move this to HDFS's existing 
> per-stream {{ReadStatistics}} for now. As [HDFS-9579] states, ReadStatistics 
> metrics are only accessible via {{DFSClient}} or {{DFSInputStream}}. Not 
> something that application framework such as MR and Tez can get to.






[jira] [Assigned] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats

2020-04-18 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13435:
--

 Target Version/s:   (was: 3.4.0)
Affects Version/s: (was: 2.9.0)
 Assignee: (was: Mingliang Liu)

> Add thread local mechanism for aggregating file system storage stats
> 
>
> Key: HADOOP-13435
> URL: https://issues.apache.org/jira/browse/HADOOP-13435
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Priority: Major
> Attachments: HADOOP-13435.000.patch, HADOOP-13435.001.patch, 
> HADOOP-13435.002.patch, HADOOP-13435.003.patch, HADOOP-13435.004.patch
>
>
> As discussed in [HADOOP-13032], this is to add thread local mechanism for 
> aggregating file system storage stats. This class will also be used in 
> [HADOOP-13031], which is to separate the distance-oriented rack-aware read 
> bytes logic from {{FileSystemStorageStatistics}} to new 
> DFSRackAwareStorageStatistics as it's DFS-specific. After this patch, the 
> {{FileSystemStorageStatistics}} can live without the to-be-removed 
> {{FileSystem$Statistics}} implementation.
> A unit test should also be added.






[jira] [Assigned] (HADOOP-13032) Refactor FileSystem$Statistics to use StorageStatistics

2020-04-18 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13032:
--

Assignee: (was: Mingliang Liu)

Un-assigning since I'm not working on this now. In the next major release, we 
can discuss this again. Or, if someone gets it done in a compatible way, we can 
push it to 3.4.

> Refactor FileSystem$Statistics to use StorageStatistics
> ---
>
> Key: HADOOP-13032
> URL: https://issues.apache.org/jira/browse/HADOOP-13032
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Priority: Major
> Attachments: HADOOP-13032.000.patch, HADOOP-13032.001.patch, 
> HADOOP-13032.002.patch, HADOOP-13032.003.patch, HADOOP-13032.004.patch, 
> HADOOP-13032.005.patch
>
>
> [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. 
> This jira is to track the effort of moving the {{Statistics}} class out of 
> {{FileSystem}}, and make it use that new interface.
> We should keep the thread local implementation. Benefits are:
> # they could be used in both {{FileContext}} and {{FileSystem}}
> # unified stats data structure
> # shorter source code
> Please note this will be a backwards-incompatible change.






[GitHub] [hadoop] liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally retain directory markers

2020-04-18 Thread GitBox
liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally 
retain directory markers
URL: https://github.com/apache/hadoop/pull/1861#discussion_r410651379
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirectoryPolicyImpl.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+
+import java.util.Locale;
+import java.util.function.Predicate;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+
+import static 
org.apache.hadoop.fs.s3a.Constants.DEFAULT_DIRECTORY_MARKER_POLICY;
+import static org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY;
+import static 
org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY_AUTHORITATIVE;
+import static 
org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY_DELETE;
+import static org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY_KEEP;
+
+/**
+ * Implementation of directory policy.
+ */
+public final class DirectoryPolicyImpl
+implements DirectoryPolicy {
+
+  public static final String UNKNOWN_MARKER_POLICY = "Unknown value of "
+  + DIRECTORY_MARKER_POLICY + ": ";
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+  DirectoryPolicyImpl.class);
+
+  private final MarkerPolicy markerPolicy;
+
+  private final Predicate<Path> authoritativenes;
+
+  public DirectoryPolicyImpl(
+  final Configuration conf,
+  final Predicate<Path> authoritativenes) {
+this.authoritativenes = authoritativenes;
+String option = conf.getTrimmed(DIRECTORY_MARKER_POLICY,
+DEFAULT_DIRECTORY_MARKER_POLICY);
+MarkerPolicy p;
+switch (option.toLowerCase(Locale.ENGLISH)) {
+
+case DIRECTORY_MARKER_POLICY_KEEP:
+  p = MarkerPolicy.Keep;
+  LOG.info("Directory markers will be deleted");
+  break;
+case DIRECTORY_MARKER_POLICY_AUTHORITATIVE:
+  p = MarkerPolicy.Authoritative;
+  LOG.info("Directory markers will be deleted on authoritative"
+  + " paths");
+  break;
+case DIRECTORY_MARKER_POLICY_DELETE:
+  p = MarkerPolicy.Delete;
+  break;
+default:
+  throw new IllegalArgumentException(UNKNOWN_MARKER_POLICY + option);
+}
+this.markerPolicy = p;
+  }
+
+  @Override
+  public boolean keepDirectoryMarkers(final Path path) {
+switch (markerPolicy) {
+case Keep:
+  return true;
+case Authoritative:
+  return authoritativenes.test(path);
+case Delete:
+default:   // which cannot happen
 
 Review comment:
   Throw an unchecked exception so a new policy (in the future) will fail in an 
obvious way?





[GitHub] [hadoop] liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally retain directory markers

2020-04-18 Thread GitBox
liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally 
retain directory markers
URL: https://github.com/apache/hadoop/pull/1861#discussion_r410645571
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -1265,18 +1282,30 @@ public WriteOperationHelper getWriteOperationHelper() {
* is not a directory.
*/
   @Override
-  public FSDataOutputStream createNonRecursive(Path path,
+  public FSDataOutputStream createNonRecursive(Path p,
   FsPermission permission,
   EnumSet<CreateFlag> flags,
   int bufferSize,
   short replication,
   long blockSize,
   Progressable progress) throws IOException {
 entryPoint(INVOCATION_CREATE_NON_RECURSIVE);
+final Path path = makeQualified(p);
 Path parent = path.getParent();
-if (parent != null) {
-  // expect this to raise an exception if there is no parent
-  if (!getFileStatus(parent).isDirectory()) {
+// expect this to raise an exception if there is no parent dir
+if (parent != null && !parent.isRoot()) {
+  S3AFileStatus status;
+  try {
+// optimize for the directory existing: Call list first
+status = innerGetFileStatus(parent, false,
+StatusProbeEnum.DIRECTORIES);
+  } catch (FileNotFoundException e) {
+// no dir, fall back to looking for a file
+// (failure condition if true)
+status = innerGetFileStatus(parent, false,
+StatusProbeEnum.HEAD_ONLY);
 
 Review comment:
   I know they are the same, but is `StatusProbeEnum.FILE` a bit better here, 
since the status is for HEADing a file?





[GitHub] [hadoop] liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally retain directory markers

2020-04-18 Thread GitBox
liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally 
retain directory markers
URL: https://github.com/apache/hadoop/pull/1861#discussion_r410640648
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StatusProbeEnum.java
 ##
 @@ -33,28 +33,25 @@
   /** LIST under the path. */
   List;
 
-  /** All probes. */
-  public static final Set<StatusProbeEnum> ALL = EnumSet.allOf(
-  StatusProbeEnum.class);
-
-  /** Skip the HEAD and only look for directories. */
-  public static final Set<StatusProbeEnum> DIRECTORIES =
-  EnumSet.of(DirMarker, List);
-
-  /** We only want the HEAD or dir marker. */
-  public static final Set<StatusProbeEnum> HEAD_OR_DIR_MARKER =
-  EnumSet.of(Head, DirMarker);
+  /** Look for files and directories. */
+  public static final Set<StatusProbeEnum> ALL =
+  EnumSet.of(Head, List);
 
   /** We only want the HEAD. */
   public static final Set<StatusProbeEnum> HEAD_ONLY =
   EnumSet.of(Head);
 
-  /** We only want the dir marker. */
-  public static final Set<StatusProbeEnum> DIR_MARKER_ONLY =
-  EnumSet.of(DirMarker);
-
-  /** We only want the dir marker. */
+  /** List operation only. */
   public static final Set<StatusProbeEnum> LIST_ONLY =
   EnumSet.of(List);
 
+  /** Look for files and directories. */
+  public static final Set<StatusProbeEnum> FILE =
+  HEAD_ONLY;
+
+  /** Skip the HEAD and only look for directories. */
+  public static final Set<StatusProbeEnum> DIRECTORIES =
+  LIST_ONLY;
+
+
 
 Review comment:
   Do we still need `DirMarker` in the `StatusProbeEnum`?





[GitHub] [hadoop] liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally retain directory markers

2020-04-18 Thread GitBox
liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally 
retain directory markers
URL: https://github.com/apache/hadoop/pull/1861#discussion_r410635387
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -445,6 +456,12 @@ public void initialize(URI name, Configuration 
originalConf)
 DEFAULT_S3GUARD_DISABLED_WARN_LEVEL);
 S3Guard.logS3GuardDisabled(LOG, warnLevel, bucket);
   }
+  // directory policy, which will look at authoritative paths
+  // if needed
+  directoryPolicy = new DirectoryPolicyImpl(conf,
+  this::allowAuthoritative);
+  LOG.debug("Directory marker retention policy is {}",
 
 Review comment:
   This can be `info` level since this is new and clients may want to see the 
message.





[GitHub] [hadoop] liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally retain directory markers

2020-04-18 Thread GitBox
liuml07 commented on a change in pull request #1861: HADOOP-13230. Optionally 
retain directory markers
URL: https://github.com/apache/hadoop/pull/1861#discussion_r410650058
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -1411,10 +1440,13 @@ public boolean rename(Path src, Path dst) throws 
IOException {
   LOG.debug("rename: destination path {} not found", dst);
   // Parent must exist
   Path parent = dst.getParent();
-  if (!pathToKey(parent).isEmpty()) {
+  if (!pathToKey(parent).isEmpty()
+  && !parent.equals(src.getParent()) ) {
 try {
-  S3AFileStatus dstParentStatus = innerGetFileStatus(dst.getParent(),
-  false, StatusProbeEnum.ALL);
+  // only look against S3 for directories; saves
+  // a HEAD request on all normal operations.
+  S3AFileStatus dstParentStatus = innerGetFileStatus(parent,
+  false, StatusProbeEnum.DIRECTORIES);
 
 Review comment:
   We do not need to fall back to `StatusProbeEnum.FILE` because we know this 
`innerGetFileStatus` should not throw `FileNotFoundException` - at least dst 
should show up in the listing. Is this correct?

