[jira] [Commented] (HADOOP-16971) testFileContextResolveAfs failed to delete created symlink and pollute subsequent test run.

2020-04-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081594#comment-17081594
 ] 

Hadoop QA commented on HADOOP-16971:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d |
| JIRA Issue | HADOOP-16971 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12999667/HADOOP-16971.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 32d0bb4c9e88 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 275c478 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16876/testReport/ |
| Max. process+thread count | 2371 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16876/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> testFileContextResolveAfs failed to delete created symlink and pollute 
> subsequent test run.
> 

[jira] [Updated] (HADOOP-16971) testFileContextResolveAfs failed to delete created symlink and pollute subsequent test run.

2020-04-11 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HADOOP-16971:
---
Attachment: HADOOP-16971.000.patch
Status: Patch Available  (was: Open)

> testFileContextResolveAfs failed to delete created symlink and pollute 
> subsequent test run.
> ---
>
> Key: HADOOP-16971
> URL: https://issues.apache.org/jira/browse/HADOOP-16971
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix
> Attachments: HADOOP-16971.000.patch
>
>
> In the test `testFileContextResolveAfs`, the symlink 
> `TestFileContextResolveAfs2` (linked to `TestFileContextResolveAfs1`) is not 
> deleted as intended in the first run, so the test fails in the second run.
> The reason is that the test uses org.apache.hadoop.fs.FileSystem to delete 
> the symlink, which
> 1. does not support symlinks.
> 2. deletes `TestFileContextResolveAfs1` before `TestFileContextResolveAfs2` 
> regardless of the order in which the two paths are passed to `deleteOnExit`, 
> because the paths to be deleted are kept sorted in the `deleteOnExit` 
> TreeSet.
> Once `TestFileContextResolveAfs1` has been deleted, 
> `TestFileContextResolveAfs2` becomes an orphan symlink that 
> org.apache.hadoop.fs.FileSystem treats as a non-existent path, so its 
> deletion cannot be completed.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16971) testFileContextResolveAfs failed to delete created symlink and pollute subsequent test run.

2020-04-11 Thread Ctest (Jira)
Ctest created HADOOP-16971:
--

 Summary: testFileContextResolveAfs failed to delete created 
symlink and pollute subsequent test run.
 Key: HADOOP-16971
 URL: https://issues.apache.org/jira/browse/HADOOP-16971
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, test
Affects Versions: 3.2.1, 3.4.0
Reporter: Ctest
 Attachments: HADOOP-16971.000.patch

In the test `testFileContextResolveAfs`, the symlink 
`TestFileContextResolveAfs2` (linked to `TestFileContextResolveAfs1`) is not 
deleted as intended in the first run, so the test fails in the second run.


The reason is that the test uses org.apache.hadoop.fs.FileSystem to delete the 
symlink, which

1. does not support symlinks.
2. deletes `TestFileContextResolveAfs1` before `TestFileContextResolveAfs2` 
regardless of the order in which the two paths are passed to `deleteOnExit`, 
because the paths to be deleted are kept sorted in the `deleteOnExit` TreeSet.


Once `TestFileContextResolveAfs1` has been deleted, 
`TestFileContextResolveAfs2` becomes an orphan symlink that 
org.apache.hadoop.fs.FileSystem treats as a non-existent path, so its deletion 
cannot be completed.
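
A minimal sketch of a fix along these lines (an assumption, not the attached 
patch): delete the symlink through org.apache.hadoop.fs.FileContext, which 
understands symlinks, and remove the link before its target so no orphan is 
left behind.

{code:java}
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

FileContext fc = FileContext.getLocalFSFileContext();
Path target = new Path("/tmp/TestFileContextResolveAfs1"); // hypothetical paths
Path link = new Path("/tmp/TestFileContextResolveAfs2");
fc.createSymlink(target, link, false);
// ... test body ...
// Delete the link first: FileContext removes the symlink itself rather than
// resolving it, so the target still exists when the link is removed.
fc.delete(link, false);
fc.delete(target, false);
{code}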

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL

2020-04-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081558#comment-17081558
 ] 

Hadoop QA commented on HADOOP-16958:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 20m  2s{color} 
| {color:red} root generated 1 new + 1870 unchanged - 0 fixed = 1871 total (was 
1870) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 7 unchanged - 0 fixed = 10 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d |
| JIRA Issue | HADOOP-16958 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12999661/HADOOP-16958.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f7872d407911 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 275c478 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16874/artifact/out/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16874/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16874/testReport/ |
| Max. process+thread count | 2570 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Commented] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused failures in subsequent run

2020-04-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081554#comment-17081554
 ] 

Hadoop QA commented on HADOOP-16967:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
0s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d |
| JIRA Issue | HADOOP-16967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12999662/HADOOP-16967.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 119834b14268 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 275c478 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16875/testReport/ |
| Max. process+thread count | 2918 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16875/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused 
> failures in subsequent run 
> 

[GitHub] [hadoop] hadoop-yetus commented on issue #1953: HADOOP-16528. Update document for web authentication kerberos principal configuration.

2020-04-11 Thread GitBox
hadoop-yetus commented on issue #1953: HADOOP-16528. Update document for web 
authentication kerberos principal configuration.
URL: https://github.com/apache/hadoop/pull/1953#issuecomment-612499088
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 48s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 33s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m  5s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 42s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 17s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  8s |  the patch passed  |
   | -1 :x: |  javac  |  16m  8s |  root generated 6 new + 1864 unchanged - 0 
fixed = 1870 total (was 1864)  |
   | -0 :warning: |  checkstyle  |   3m  0s |  root: The patch generated 2 new 
+ 996 unchanged - 2 fixed = 998 total (was 998)  |
   | +1 :green_heart: |  mvnsite  |   3m 42s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 36s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   6m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 36s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  92m 40s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m 44s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 231m 13s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1953 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 3f0cb70f6a48 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 275c478 |
   | Default Java | 1.8.0_242 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/2/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/2/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/2/testReport/ |
   | Max. process+thread count | 4639 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact 

[jira] [Commented] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused failures in subsequent run

2020-04-11 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081535#comment-17081535
 ] 

Ctest commented on HADOOP-16967:


Thx! 002.patch submitted. 

> TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused 
> failures in subsequent run 
> ---
>
> Key: HADOOP-16967
> URL: https://issues.apache.org/jira/browse/HADOOP-16967
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, test
> Attachments: HADOOP-16967.000.patch, HADOOP-16967.001.patch, 
> HADOOP-16967.002.patch
>
>
> The test expects an IOException when creating a writer for the file 
> `target/test/data/recursiveCreateDir/file` with `createParent=false`, and it 
> expects the writer to be created successfully when `createParent=true` 
> (`createParent` means "create the parent directory if it does not exist").
> The test passes on the first run but fails on the second, because it does 
> not clean up the parent directory created during the first run.
> The parent directory `recursiveCreateDir` is created but never deleted 
> before the test finishes. When the test runs again, it still assumes 
> `recursiveCreateDir` does not exist and expects an IOException from creating 
> a writer with `createParent=false`; that IOException never comes, because 
> `recursiveCreateDir` was already created during the first run.
> {code:java}
> @SuppressWarnings("deprecation")
> @Test
> public void testRecursiveSeqFileCreate() throws IOException {
>   FileSystem fs = FileSystem.getLocal(conf);
>   Path name = new Path(new Path(GenericTestUtils.getTempPath(
>       "recursiveCreateDir")), "file"); // FILE SUCCESSFULLY CREATED HERE
>   boolean createParent = false;
>
>   try {
>     SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>         RandomDatum.class, 512, (short) 1, 4096, createParent,
>         CompressionType.NONE, null, new Metadata());
>     fail("Expected an IOException due to missing parent");
>   } catch (IOException ioe) {
>     // Expected
>   }
>
>   createParent = true;
>   SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>       RandomDatum.class, 512, (short) 1, 4096, createParent,
>       CompressionType.NONE, null, new Metadata());
>   // should succeed, fails if exception thrown
> }
> {code}
> Suggested patch:
>  
> {code:java}
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> index 044824356ed..1aff2936264 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> @@ -649,8 +649,9 @@ public void testCreateWriterOnExistingFile() throws 
> IOException {
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
> - Path name = new Path(new Path(GenericTestUtils.getTempPath(
> - "recursiveCreateDir")), "file");
> + Path parentDir = new Path(GenericTestUtils.getTempPath(
> + "recursiveCreateDir"));
> + Path name = new Path(parentDir, "file");
>  boolean createParent = false;
>  
>  try {
> @@ -667,6 +668,9 @@ public void testRecursiveSeqFileCreate() throws 
> IOException {
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
> +
> + fs.deleteOnExit(parentDir);
> + fs.close();
>  }
>  
>  @Test{code}
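>
> A note on the sketch above: FileSystem#deleteOnExit only queues the path; 
> the queued paths are actually removed by processDeleteOnExit(), which 
> close() invokes. A minimal illustration, reusing the patch's fs and 
> parentDir:
> {code:java}
> fs.deleteOnExit(parentDir); // only queues the path; nothing is deleted yet
> fs.close(); // runs processDeleteOnExit(), which removes parentDir
> {code}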



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused failures in subsequent run

2020-04-11 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HADOOP-16967:
---
Attachment: HADOOP-16967.002.patch

> TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused 
> failures in subsequent run 
> ---
>
> Key: HADOOP-16967
> URL: https://issues.apache.org/jira/browse/HADOOP-16967
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, test
> Attachments: HADOOP-16967.000.patch, HADOOP-16967.001.patch, 
> HADOOP-16967.002.patch
>
>
> The test expects an IOException when creating a writer for the file 
> `target/test/data/recursiveCreateDir/file` with `createParent=false`, and it 
> expects the writer to be created successfully when `createParent=true` 
> (`createParent` means "create the parent directory if it does not exist").
> The test passes on the first run but fails on the second, because it does 
> not clean up the parent directory created during the first run.
> The parent directory `recursiveCreateDir` is created but never deleted 
> before the test finishes. When the test runs again, it still assumes 
> `recursiveCreateDir` does not exist and expects an IOException from creating 
> a writer with `createParent=false`; that IOException never comes, because 
> `recursiveCreateDir` was already created during the first run.
> {code:java}
> @SuppressWarnings("deprecation")
> @Test
> public void testRecursiveSeqFileCreate() throws IOException {
>   FileSystem fs = FileSystem.getLocal(conf);
>   Path name = new Path(new Path(GenericTestUtils.getTempPath(
>       "recursiveCreateDir")), "file"); // FILE SUCCESSFULLY CREATED HERE
>   boolean createParent = false;
>
>   try {
>     SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>         RandomDatum.class, 512, (short) 1, 4096, createParent,
>         CompressionType.NONE, null, new Metadata());
>     fail("Expected an IOException due to missing parent");
>   } catch (IOException ioe) {
>     // Expected
>   }
>
>   createParent = true;
>   SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>       RandomDatum.class, 512, (short) 1, 4096, createParent,
>       CompressionType.NONE, null, new Metadata());
>   // should succeed, fails if exception thrown
> }
> {code}
> Suggested patch:
>  
> {code:java}
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> index 044824356ed..1aff2936264 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> @@ -649,8 +649,9 @@ public void testCreateWriterOnExistingFile() throws 
> IOException {
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
> - Path name = new Path(new Path(GenericTestUtils.getTempPath(
> - "recursiveCreateDir")), "file");
> + Path parentDir = new Path(GenericTestUtils.getTempPath(
> + "recursiveCreateDir"));
> + Path name = new Path(parentDir, "file");
>  boolean createParent = false;
>  
>  try {
> @@ -667,6 +668,9 @@ public void testRecursiveSeqFileCreate() throws 
> IOException {
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
> +
> + fs.deleteOnExit(parentDir);
> + fs.close();
>  }
>  
>  @Test{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL

2020-04-11 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081533#comment-17081533
 ] 

Ctest commented on HADOOP-16958:


Thanks! I just submitted the 003.patch. 

> NullPointerException(NPE) when hadoop.security.authorization is enabled but 
> the input PolicyProvider for ZKFCRpcServer is NULL
> --
>
> Key: HADOOP-16958
> URL: https://issues.apache.org/jira/browse/HADOOP-16958
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, ha
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
> Attachments: HADOOP-16958.000.patch, HADOOP-16958.001.patch, 
> HADOOP-16958.002.patch, HADOOP-16958.003.patch
>
>
> During initialization, if the config hadoop.security.authorization is 
> enabled, ZKFCRpcServer refreshes the service authorization ACL for the 
> service it handles by calling refreshServiceAcl with the input 
> PolicyProvider and Configuration.
> {code:java}
> ZKFCRpcServer(Configuration conf,
>     InetSocketAddress bindAddr,
>     ZKFailoverController zkfc,
>     PolicyProvider policy) throws IOException {
>   this.server = ...
>
>   // set service-level authorization security policy
>   if (conf.getBoolean(
>       CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
>     server.refreshServiceAcl(conf, policy);
>   }
> }{code}
> refreshServiceAcl calls 
> ServiceAuthorizationManager#refreshWithLoadedConfiguration, which fetches 
> the services directly via provider.getServices(). When the provider is 
> NULL, this throws an NPE without an informative message. In addition, the 
> default value of the config `hadoop.security.authorization.policyprovider` 
> (which controls the PolicyProvider here) is NULL, and the only caller of 
> the ZKFCRpcServer constructor obtains the provider through the abstract 
> method getPolicyProvider, which does not enforce that the PolicyProvider 
> is non-NULL.
> The suggestion is to add either a guard check or exception handling with 
> an informative log message in ZKFCRpcServer to handle a NULL input 
> PolicyProvider.
>  
> I am very happy to provide a patch for it if the issue is confirmed :)
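>
> A minimal sketch of such a guard (an assumption, not one of the attached 
> patches):
> {code:java}
> // set service-level authorization security policy
> if (conf.getBoolean(
>     CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
>   if (policy == null) {
>     throw new HadoopIllegalArgumentException(
>         "hadoop.security.authorization is enabled but the PolicyProvider "
>             + "for ZKFCRpcServer is null");
>   }
>   server.refreshServiceAcl(conf, policy);
> }
> {code}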



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL

2020-04-11 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HADOOP-16958:
---
Attachment: HADOOP-16958.003.patch

> NullPointerException(NPE) when hadoop.security.authorization is enabled but 
> the input PolicyProvider for ZKFCRpcServer is NULL
> --
>
> Key: HADOOP-16958
> URL: https://issues.apache.org/jira/browse/HADOOP-16958
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, ha
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
> Attachments: HADOOP-16958.000.patch, HADOOP-16958.001.patch, 
> HADOOP-16958.002.patch, HADOOP-16958.003.patch
>
>
> During initialization, if the config hadoop.security.authorization is 
> enabled, ZKFCRpcServer refreshes the service authorization ACL for the 
> service it handles by calling refreshServiceAcl with the input 
> PolicyProvider and Configuration.
> {code:java}
> ZKFCRpcServer(Configuration conf,
>     InetSocketAddress bindAddr,
>     ZKFailoverController zkfc,
>     PolicyProvider policy) throws IOException {
>   this.server = ...
>
>   // set service-level authorization security policy
>   if (conf.getBoolean(
>       CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
>     server.refreshServiceAcl(conf, policy);
>   }
> }{code}
> refreshServiceAcl calls 
> ServiceAuthorizationManager#refreshWithLoadedConfiguration, which fetches 
> the services directly via provider.getServices(). When the provider is 
> NULL, this throws an NPE without an informative message. In addition, the 
> default value of the config `hadoop.security.authorization.policyprovider` 
> (which controls the PolicyProvider here) is NULL, and the only caller of 
> the ZKFCRpcServer constructor obtains the provider through the abstract 
> method getPolicyProvider, which does not enforce that the PolicyProvider 
> is non-NULL.
> The suggestion is to add either a guard check or exception handling with 
> an informative log message in ZKFCRpcServer to handle a NULL input 
> PolicyProvider.
>  
> I am very happy to provide a patch for it if the issue is confirmed :)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-11 Thread GitBox
goiri commented on a change in pull request #1952: HDFS-1820. FTPFileSystem 
attempts to close the outputstream even when it is not initialised.
URL: https://github.com/apache/hadoop/pull/1952#discussion_r407100202
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java
 ##
 @@ -37,9 +54,71 @@
  */
 public class TestFTPFileSystem {
 
+  private TestFtpServer server;
+
   @Rule
  public Timeout testTimeout = new Timeout(180000);
 
+  @Before
+  public void setUp() throws Exception {
+server = new TestFtpServer(GenericTestUtils.getTestDir().toPath()).start();
+  }
+
+  @After
+  @SuppressWarnings("ResultOfMethodCallIgnored")
+  public void tearDown() throws Exception {
+server.stop();
+Files.walk(server.getFtpRoot())
+.sorted(Comparator.reverseOrder())
+.map(java.nio.file.Path::toFile)
+.forEach(File::delete);
+  }
+
+  @Test
+  public void testCreateWithWritePermissions() throws Exception {
+BaseUser user = server.addUser("test", "password", new WritePermission());
+Configuration configuration = new Configuration();
+configuration.set("fs.defaultFS", "ftp:///;);
+configuration.set("fs.ftp.host", "localhost");
+configuration.setInt("fs.ftp.host.port", server.getPort());
+configuration.set("fs.ftp.user.localhost", user.getName());
+configuration.set("fs.ftp.password.localhost", user.getPassword());
+configuration.set("fs.ftp.impl.disable.cache", "true");
+
+FileSystem fs = FileSystem.get(configuration);
+byte[] bytesExpected = "hello world".getBytes(StandardCharsets.UTF_8);
+try (FSDataOutputStream outputStream = fs.create(new Path("test1.txt"))) {
+  outputStream.write(bytesExpected);
+}
+try (FSDataInputStream input = fs.open(new Path("test1.txt"))) {
+  assertThat(bytesExpected, equalTo(IOUtils.readFullyToByteArray(input)));
+}
+  }
+
+  @Test
+  public void testCreateWithoutWritePermissions() throws Exception {
+BaseUser user = server.addUser("test", "password");
+Configuration configuration = new Configuration();
+configuration.set("fs.defaultFS", "ftp:///;);
+configuration.set("fs.ftp.host", "localhost");
+configuration.setInt("fs.ftp.host.port", server.getPort());
+configuration.set("fs.ftp.user.localhost", user.getName());
+configuration.set("fs.ftp.password.localhost", user.getPassword());
+configuration.set("fs.ftp.impl.disable.cache", "true");
+
+FileSystem fs = FileSystem.get(configuration);
+byte[] bytesExpected = "hello world".getBytes(StandardCharsets.UTF_8);
+
+try (FSDataOutputStream outputStream = fs.create(new Path("test1.txt"))) {
+  outputStream.write(bytesExpected);
 
 Review comment:
   We should use LambdaTestUtils#intercept and make sure that we fail after 
write().
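
   A sketch of that suggestion (assuming the test's existing fs and 
bytesExpected, plus org.apache.hadoop.test.LambdaTestUtils):

       // Expect the IOException to surface at write() time for a
       // create without write permission on the FTP server.
       LambdaTestUtils.intercept(IOException.class, () -> {
         try (FSDataOutputStream out = fs.create(new Path("test1.txt"))) {
           out.write(bytesExpected);
         }
       });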


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-11 Thread GitBox
goiri commented on a change in pull request #1952: HDFS-1820. FTPFileSystem 
attempts to close the outputstream even when it is not initialised.
URL: https://github.com/apache/hadoop/pull/1952#discussion_r407100428
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFtpServer.java
 ##
 @@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ftp;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.Arrays;
+
+import org.apache.ftpserver.FtpServer;
+import org.apache.ftpserver.FtpServerFactory;
+import org.apache.ftpserver.ftplet.Authority;
+import org.apache.ftpserver.ftplet.FtpException;
+import org.apache.ftpserver.ftplet.UserManager;
+import org.apache.ftpserver.impl.DefaultFtpServer;
+import org.apache.ftpserver.listener.Listener;
+import org.apache.ftpserver.listener.ListenerFactory;
+import org.apache.ftpserver.usermanager.PropertiesUserManagerFactory;
+import org.apache.ftpserver.usermanager.impl.BaseUser;
+
+/**
+ * Helper class that manages a local FTP server
+ * for unit test purposes only.
+ */
+public class TestFtpServer {
 
 Review comment:
   For a class starting with Test in this folder, I would expect it to have a 
@Test in it.
   TBH, I don't know what better name to give it, FtpTestServer?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-11 Thread GitBox
goiri commented on a change in pull request #1952: HDFS-1820. FTPFileSystem 
attempts to close the outputstream even when it is not initialised.
URL: https://github.com/apache/hadoop/pull/1952#discussion_r407100095
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java
 ##
 @@ -37,9 +54,71 @@
  */
 public class TestFTPFileSystem {
 
+  private TestFtpServer server;
+
   @Rule
  public Timeout testTimeout = new Timeout(180000);
 
+  @Before
+  public void setUp() throws Exception {
+server = new TestFtpServer(GenericTestUtils.getTestDir().toPath()).start();
+  }
+
+  @After
+  @SuppressWarnings("ResultOfMethodCallIgnored")
+  public void tearDown() throws Exception {
+server.stop();
+Files.walk(server.getFtpRoot())
+.sorted(Comparator.reverseOrder())
+.map(java.nio.file.Path::toFile)
+.forEach(File::delete);
+  }
+
+  @Test
+  public void testCreateWithWritePermissions() throws Exception {
+BaseUser user = server.addUser("test", "password", new WritePermission());
+Configuration configuration = new Configuration();
+configuration.set("fs.defaultFS", "ftp:///;);
+configuration.set("fs.ftp.host", "localhost");
+configuration.setInt("fs.ftp.host.port", server.getPort());
+configuration.set("fs.ftp.user.localhost", user.getName());
+configuration.set("fs.ftp.password.localhost", user.getPassword());
+configuration.set("fs.ftp.impl.disable.cache", "true");
 
 Review comment:
   Can we use setBoolean()?
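
   For reference, a sketch of that change:

       configuration.setBoolean("fs.ftp.impl.disable.cache", true);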


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-11 Thread GitBox
goiri commented on a change in pull request #1954: HDFS-15217 Add more 
information to longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#discussion_r407099001
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
 ##
 @@ -176,13 +181,23 @@ public void readUnlock(String opName) {
 final long readLockIntervalMs =
 TimeUnit.NANOSECONDS.toMillis(readLockIntervalNanos);
 if (needReport && readLockIntervalMs >= this.readLockReportingThresholdMs) 
{
-  LockHeldInfo localLockHeldInfo;
+  String lockReportInfo = null;
   do {
-localLockHeldInfo = longestReadLockHeldInfo.get();
-  } while (localLockHeldInfo.getIntervalMs() - readLockIntervalMs < 0 &&
-  !longestReadLockHeldInfo.compareAndSet(localLockHeldInfo,
-  new LockHeldInfo(currentTimeMs, readLockIntervalMs,
-  StringUtils.getStackTrace(Thread.currentThread()))));
+LockHeldInfo localLockHeldInfo = longestReadLockHeldInfo.get();
+if (localLockHeldInfo.getIntervalMs() <= readLockIntervalMs) {
+  if (lockReportInfo == null) {
+lockReportInfo = lockReportInfoSupplier != null ? " (" +
+lockReportInfoSupplier.get() + ")" : "";
+  }
+  if (longestReadLockHeldInfo.compareAndSet(localLockHeldInfo,
+new LockHeldInfo(currentTimeMs, readLockIntervalMs,
 
 Review comment:
   Can we extract some of this? At least the stack trace.
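
   For example, a hypothetical extraction:

       final String stackTrace = StringUtils.getStackTrace(Thread.currentThread());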


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-11 Thread GitBox
goiri commented on a change in pull request #1954: HDFS-15217 Add more 
information to longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#discussion_r407099593
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
 ##
 @@ -159,10 +160,14 @@ public void readLockInterruptibly() throws 
InterruptedException {
   }
 
   public void readUnlock() {
-readUnlock(OP_NAME_OTHER);
+readUnlock(OP_NAME_OTHER, null);
   }
 
   public void readUnlock(String opName) {
+readUnlock(opName, null);
+  }
+
+  public void readUnlock(String opName, Supplier<String> lockReportInfoSupplier) {
 
 Review comment:
   I'm having a hard time finding who uses this Supplier. Where are we using 
this?
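
   For context, a hypothetical call site (not from this PR) would pass the 
report info lazily:

       fsLock.readUnlock("getBlockLocations", () -> "src=" + srcPath);

   The Supplier is evaluated only when a long-held-lock report is actually 
generated, so callers avoid building the string on every unlock.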


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-11 Thread GitBox
goiri commented on a change in pull request #1954: HDFS-15217 Add more 
information to longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#discussion_r407098949
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
 ##
 @@ -176,13 +181,23 @@ public void readUnlock(String opName) {
 final long readLockIntervalMs =
 TimeUnit.NANOSECONDS.toMillis(readLockIntervalNanos);
 if (needReport && readLockIntervalMs >= this.readLockReportingThresholdMs) 
{
-  LockHeldInfo localLockHeldInfo;
+  String lockReportInfo = null;
   do {
-localLockHeldInfo = longestReadLockHeldInfo.get();
-  } while (localLockHeldInfo.getIntervalMs() - readLockIntervalMs < 0 &&
-  !longestReadLockHeldInfo.compareAndSet(localLockHeldInfo,
-  new LockHeldInfo(currentTimeMs, readLockIntervalMs,
-  StringUtils.getStackTrace(Thread.currentThread()))));
+LockHeldInfo localLockHeldInfo = longestReadLockHeldInfo.get();
+if (localLockHeldInfo.getIntervalMs() <= readLockIntervalMs) {
+  if (lockReportInfo == null) {
+lockReportInfo = lockReportInfoSupplier != null ? " (" +
+lockReportInfoSupplier.get() + ")" : "";
+  }
+  if (longestReadLockHeldInfo.compareAndSet(localLockHeldInfo,
+new LockHeldInfo(currentTimeMs, readLockIntervalMs,
+  StringUtils.getStackTrace(Thread.currentThread()), opName, 
lockReportInfo))) {
+break;
+  }
+} else {
+  break;
+}
+  } while (true);
 
 Review comment:
   As we are touching this, can we make this into a regular:
   boolean done = false;
   while (!done) {
     ...
     } else {
       done = true;
     }
   }
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-11 Thread GitBox
goiri commented on a change in pull request #1954: HDFS-15217 Add more 
information to longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#discussion_r407099108
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
 ##
 @@ -253,10 +294,12 @@ public void writeUnlock(String opName, boolean 
suppressWriteLockReport) {
 LogAction logAction = LogThrottlingHelper.DO_NOT_LOG;
 if (needReport &&
 writeLockIntervalMs >= this.writeLockReportingThresholdMs) {
-  if (longestWriteLockHeldInfo.getIntervalMs() < writeLockIntervalMs) {
+  if (longestWriteLockHeldInfo.getIntervalMs() <= writeLockIntervalMs) {
+String lockReportInfo = lockReportInfoSupplier != null ? " (" +
+lockReportInfoSupplier.get() + ")" : "";
 longestWriteLockHeldInfo =
 new LockHeldInfo(currentTimeMs, writeLockIntervalMs,
-StringUtils.getStackTrace(Thread.currentThread()));
+StringUtils.getStackTrace(Thread.currentThread()), opName, 
lockReportInfo);
 
 Review comment:
   Extract the trace at least.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-11 Thread GitBox
goiri commented on a change in pull request #1954: HDFS-15217 Add more 
information to longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#discussion_r407098774
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
 ##
 @@ -106,8 +107,8 @@ public Long initialValue() {
* lock was held since the last report.
*/
  private final AtomicReference<LockHeldInfo> longestReadLockHeldInfo =
-  new AtomicReference<>(new LockHeldInfo(0, 0, null));
-  private LockHeldInfo longestWriteLockHeldInfo = new LockHeldInfo(0, 0, null);
+  new AtomicReference<>(new LockHeldInfo(0, 0, null, null, null));
+  private LockHeldInfo longestWriteLockHeldInfo = new LockHeldInfo(0, 0, null, 
null, null);
 
 Review comment:
   Can we keep the old constructor with the old parameters, delegating the 
nulls to the new one, instead of changing every call site?
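
   For instance, a sketch of the suggested delegating constructor:

       LockHeldInfo(long startTimeMs, long intervalMs, String stackTrace) {
         this(startTimeMs, intervalMs, stackTrace, null, null);
       }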


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused failures in subsequent run

2020-04-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081520#comment-17081520
 ] 

Hadoop QA commented on HADOOP-16967:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
34s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d |
| JIRA Issue | HADOOP-16967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12999649/HADOOP-16967.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8fa67c5873a4 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 275c478 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16873/testReport/ |
| Max. process+thread count | 1375 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16873/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused 
> failures in subsequent run 
> 

[jira] [Comment Edited] (HADOOP-15836) Review of AccessControlList

2020-04-11 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080711#comment-17080711
 ] 

Brahma Reddy Battula edited comment on HADOOP-15836 at 4/11/20, 6:39 PM:
-

Removed the fix version as this was reverted.


was (Author: brahmareddy):
Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a 
blocker.

> Review of AccessControlList
> ---
>
> Key: HADOOP-15836
> URL: https://issues.apache.org/jira/browse/HADOOP-15836
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-15836.1.patch, HADOOP-15836.2.patch, 
> assertEqualACLStrings.patch
>
>
> * Improve unit tests (expected / actual were backwards)
> * Unit tests expected elements to be in order, but the class's returned 
> Collections were unordered
> * Formatting cleanup
> * Removed superfluous white space
> * Removed use of LinkedList
> * Removed superfluous code
> * Use {{unmodifiable}} Collections where the JavaDoc states that the caller 
> must not manipulate the data structure



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15836) Review of AccessControlList

2020-04-11 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-15836:
--
Fix Version/s: (was: 3.3.0)

> Review of AccessControlList
> ---
>
> Key: HADOOP-15836
> URL: https://issues.apache.org/jira/browse/HADOOP-15836
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-15836.1.patch, HADOOP-15836.2.patch, 
> assertEqualACLStrings.patch
>
>
> * Improve unit tests (expected / actual were backwards)
> * Unit tests expected elements to be in order, but the class's returned 
> Collections were unordered
> * Formatting cleanup
> * Removed superfluous white space
> * Removed use of LinkedList
> * Removed superfluous code
> * Use {{unmodifiable}} Collections where the JavaDoc states that the caller 
> must not manipulate the data structure



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16672) Missing null check for UserGroupInformation during IOStream setup

2020-04-11 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-16672:
--
Fix Version/s: (was: 3.3.0)

> Missing null check for UserGroupInformation during IOStream setup
> --
>
> Key: HADOOP-16672
> URL: https://issues.apache.org/jira/browse/HADOOP-16672
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Viraj Jasani
>Priority: Major
>
> While setting up IOStreams, we might end up with an NPE if the 
> UserGroupInformation returned from the getTicket() call is null. Similar to 
> other operations, we should add a null check before the ticket.doAs() call.
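
A minimal sketch of the suggested guard, for illustration only (the helper and 
its surroundings are assumed here, not quoted from the Hadoop IPC code):

{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;

// Sketch only: fail fast with an informative IOException when the ticket is
// null, instead of hitting an NPE inside ticket.doAs() during stream setup.
final class StreamSetupHelper {
  static <T> T runAsTicket(UserGroupInformation ticket,
      PrivilegedExceptionAction<T> action) throws IOException {
    if (ticket == null) {
      throw new IOException(
          "UserGroupInformation is null; cannot set up IO streams");
    }
    try {
      return ticket.doAs(action);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("Interrupted during IO stream setup", e);
    }
  }
}
{code}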



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-11 Thread GitBox
hadoop-yetus commented on issue #1954: HDFS-15217 Add more information to 
longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954#issuecomment-612467222
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 29s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 44s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 81 new + 164 unchanged - 1 fixed = 245 total (was 165)  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 33s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed  |
   | -1 :x: |  findbugs  |   3m  5s |  hadoop-hdfs-project/hadoop-hdfs 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 110m 34s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 183m  0s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Possible null pointer dereference of effectiveDirective in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheDirective(CacheDirectiveInfo,
 EnumSet, boolean)  Dereferenced at FSNamesystem.java:effectiveDirective in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheDirective(CacheDirectiveInfo,
 EnumSet, boolean)  Dereferenced at FSNamesystem.java:[line 7436] |
   |  |  Possible null pointer dereference of ret in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean)  Dereferenced at FSNamesystem.java:ret in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean)  Dereferenced at FSNamesystem.java:[line 3207] |
   |  |  Possible null pointer dereference of res in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean, Options$Rename[])  Dereferenced at FSNamesystem.java:res in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean, Options$Rename[])  Dereferenced at FSNamesystem.java:[line 3242] |
   | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1954/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1954 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 652450b0e918 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 275c478 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1954/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1954/1/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1954/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1954/1/testReport/ |
   | Max. process+thread count | 2879 (vs. ulimit of 5500) |
   | 

[jira] [Commented] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused failures in subsequent run

2020-04-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081379#comment-17081379
 ] 

Ayush Saxena commented on HADOOP-16967:
---

Thanx for the update.

{code:java}
+
+fs.close();
{code}
This should also be in {{finally}}, I think?

Apart from that, LGTM.
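
For illustration, a sketch of the arrangement being suggested, based on the 
quoted test: the cleanup moves into a {{finally}} block so it runs even when an 
assertion fails after the file is created ({{deleteOnExit}} paths are processed 
when {{fs.close()}} is called):

{code:java}
@SuppressWarnings("deprecation")
@Test
public void testRecursiveSeqFileCreate() throws IOException {
  FileSystem fs = FileSystem.getLocal(conf);
  Path parentDir = new Path(GenericTestUtils.getTempPath("recursiveCreateDir"));
  Path name = new Path(parentDir, "file");
  try {
    boolean createParent = false;
    try {
      SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
          RandomDatum.class, 512, (short) 1, 4096, createParent,
          CompressionType.NONE, null, new Metadata());
      fail("Expected an IOException due to missing parent");
    } catch (IOException ioe) {
      // Expected: the parent directory is missing and createParent is false.
    }
    createParent = true;
    SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
        RandomDatum.class, 512, (short) 1, 4096, createParent,
        CompressionType.NONE, null, new Metadata());
  } finally {
    // Runs even if an assertion above fails, so the next run starts clean.
    fs.deleteOnExit(parentDir);
    fs.close();
  }
}
{code}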


> TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused 
> failures in subsequent run 
> ---
>
> Key: HADOOP-16967
> URL: https://issues.apache.org/jira/browse/HADOOP-16967
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, test
> Attachments: HADOOP-16967.000.patch, HADOOP-16967.001.patch
>
>
> The test expects an IOException when creating a writer for file 
> `target/test/data/recursiveCreateDir/file` with `createParent=false`. And it 
> expects to create the writer successfully when `createParent=True`. 
> `createParent` means `create parent directory if non-existent`.
> The test will pass if it is run for the first time, but it will fail for the 
> second run. This is because the test did not clean the parent directory 
> created during the first run.
> The parent directory `recursiveCreateDir` was created, but it was not deleted 
> before the test finished. So, when the test was run again, it still treated 
> the parent directory `recursiveCreateDir` as non-existent and expected an 
> IOException from creating a writer with `createParent=false`. Then the test 
> did not get the expected IOException because `recursiveCreateDir` has been 
> created in the first test run.
> {code:java}
> @SuppressWarnings("deprecation")
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
>  Path name = new Path(new Path(GenericTestUtils.getTempPath(
>  "recursiveCreateDir")), "file"); // FILE SUCCESSULLY CREATED HERE
>  boolean createParent = false;
> try {
>  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  fail("Expected an IOException due to missing parent");
>  } catch (IOException ioe) {
>  // Expected
>  }
> createParent = true;
>  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
>  }
> {code}
> Suggested patch:
>  
> {code:java}
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> index 044824356ed..1aff2936264 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> @@ -649,8 +649,9 @@ public void testCreateWriterOnExistingFile() throws 
> IOException {
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
> - Path name = new Path(new Path(GenericTestUtils.getTempPath(
> - "recursiveCreateDir")), "file");
> + Path parentDir = new Path(GenericTestUtils.getTempPath(
> + "recursiveCreateDir"));
> + Path name = new Path(parentDir, "file");
>  boolean createParent = false;
>  
>  try {
> @@ -667,6 +668,9 @@ public void testRecursiveSeqFileCreate() throws 
> IOException {
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
> +
> + fs.deleteOnExit(parentDir);
> + fs.close();
>  }
>  
>  @Test{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL

2020-04-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081377#comment-17081377
 ] 

Ayush Saxena commented on HADOOP-16958:
---

You can move it back to the end, as it was.
Additionally, I don't think there is a need to add 
{{HadoopIllegalArgumentException}} to the method signature.
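
For reference, {{HadoopIllegalArgumentException}} extends 
{{IllegalArgumentException}}, which is unchecked, so the guard can throw it 
without any change to the signature. A minimal sketch (names below are 
illustrative only):

{code:java}
import org.apache.hadoop.HadoopIllegalArgumentException;

final class PolicyPreconditions {
  // Unchecked exceptions need no throws clause, so the ZKFCRpcServer
  // constructor can keep "throws IOException" only.
  static void checkPolicy(boolean authorizationEnabled, Object policy) {
    if (authorizationEnabled && policy == null) {
      throw new HadoopIllegalArgumentException(
          "hadoop.security.authorization is true but the policy provider is null");
    }
  }
}
{code}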

> NullPointerException(NPE) when hadoop.security.authorization is enabled but 
> the input PolicyProvider for ZKFCRpcServer is NULL
> --
>
> Key: HADOOP-16958
> URL: https://issues.apache.org/jira/browse/HADOOP-16958
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, ha
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
> Attachments: HADOOP-16958.000.patch, HADOOP-16958.001.patch, 
> HADOOP-16958.002.patch
>
>
> During initialization, ZKFCRpcServer refreshes the service authorization ACL 
> for the service handled by this server if config 
> hadoop.security.authorization is enabled, by calling refreshServiceAcl with 
> the input PolicyProvider and Configuration.
> {code:java}
> ZKFCRpcServer(Configuration conf,
>  InetSocketAddress bindAddr,
>  ZKFailoverController zkfc,
>  PolicyProvider policy) throws IOException {
>  this.server = ...
>  
>  // set service-level authorization security policy
>  if (conf.getBoolean(
>  CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
>  server.refreshServiceAcl(conf, policy);
>  }
> }{code}
> refreshServiceAcl calls 
> ServiceAuthorizationManager#refreshWithLoadedConfiguration which directly 
> gets services from the provider with provider.getServices(). When the 
> provider is NULL, the code throws NPE without an informative message. In 
> addition, the default value of config 
> `hadoop.security.authorization.policyprovider` (which controls PolicyProvider 
> here) is NULL and the only usage of ZKFCRpcServer initializer provides only 
> an abstract method getPolicyProvider which does not enforce that 
> PolicyProvider should not be NULL.
> The suggestion here is to either add a guard check or exception handling with 
> an informative logging message on ZKFCRpcServer to handle input 
> PolicyProvider being NULL.
>  
> I am very happy to provide a patch for it if the issue is confirmed :)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused failures in subsequent run

2020-04-11 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081375#comment-17081375
 ] 

Ctest commented on HADOOP-16967:


Thank you for your comment! Submitted a new patch :)

> TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused 
> failures in subsequent run 
> ---
>
> Key: HADOOP-16967
> URL: https://issues.apache.org/jira/browse/HADOOP-16967
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, test
> Attachments: HADOOP-16967.000.patch, HADOOP-16967.001.patch
>
>
> The test expects an IOException when creating a writer for file 
> `target/test/data/recursiveCreateDir/file` with `createParent=false`. And it 
> expects to create the writer successfully when `createParent=True`. 
> `createParent` means `create parent directory if non-existent`.
> The test will pass if it is run for the first time, but it will fail for the 
> second run. This is because the test did not clean the parent directory 
> created during the first run.
> The parent directory `recursiveCreateDir` was created, but it was not deleted 
> before the test finished. So, when the test was run again, it still treated 
> the parent directory `recursiveCreateDir` as non-existent and expected an 
> IOException from creating a writer with `createParent=false`. Then the test 
> did not get the expected IOException because `recursiveCreateDir` has been 
> created in the first test run.
> {code:java}
> @SuppressWarnings("deprecation")
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
>  Path name = new Path(new Path(GenericTestUtils.getTempPath(
>  "recursiveCreateDir")), "file"); // FILE SUCCESSULLY CREATED HERE
>  boolean createParent = false;
> try {
>  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  fail("Expected an IOException due to missing parent");
>  } catch (IOException ioe) {
>  // Expected
>  }
> createParent = true;
>  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
>  }
> {code}
> Suggested patch:
>  
> {code:java}
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> index 044824356ed..1aff2936264 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> @@ -649,8 +649,9 @@ public void testCreateWriterOnExistingFile() throws 
> IOException {
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
> - Path name = new Path(new Path(GenericTestUtils.getTempPath(
> - "recursiveCreateDir")), "file");
> + Path parentDir = new Path(GenericTestUtils.getTempPath(
> + "recursiveCreateDir"));
> + Path name = new Path(parentDir, "file");
>  boolean createParent = false;
>  
>  try {
> @@ -667,6 +668,9 @@ public void testRecursiveSeqFileCreate() throws 
> IOException {
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
> +
> + fs.deleteOnExit(parentDir);
> + fs.close();
>  }
>  
>  @Test{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused failures in subsequent run

2020-04-11 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HADOOP-16967:
---
Attachment: HADOOP-16967.001.patch

> TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused 
> failures in subsequent run 
> ---
>
> Key: HADOOP-16967
> URL: https://issues.apache.org/jira/browse/HADOOP-16967
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, test
> Attachments: HADOOP-16967.000.patch, HADOOP-16967.001.patch
>
>
> The test expects an IOException when creating a writer for file 
> `target/test/data/recursiveCreateDir/file` with `createParent=false`. And it 
> expects to create the writer successfully when `createParent=True`. 
> `createParent` means `create parent directory if non-existent`.
> The test will pass if it is run for the first time, but it will fail for the 
> second run. This is because the test did not clean the parent directory 
> created during the first run.
> The parent directory `recursiveCreateDir` was created, but it was not deleted 
> before the test finished. So, when the test was run again, it still treated 
> the parent directory `recursiveCreateDir` as non-existent and expected an 
> IOException from creating a writer with `createParent=false`. Then the test 
> did not get the expected IOException because `recursiveCreateDir` has been 
> created in the first test run.
> {code:java}
> @SuppressWarnings("deprecation")
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
>  Path name = new Path(new Path(GenericTestUtils.getTempPath(
>  "recursiveCreateDir")), "file"); // FILE SUCCESSULLY CREATED HERE
>  boolean createParent = false;
> try {
>  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  fail("Expected an IOException due to missing parent");
>  } catch (IOException ioe) {
>  // Expected
>  }
> createParent = true;
>  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
>  }
> {code}
> Suggested patch:
>  
> {code:java}
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> index 044824356ed..1aff2936264 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> @@ -649,8 +649,9 @@ public void testCreateWriterOnExistingFile() throws 
> IOException {
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
> - Path name = new Path(new Path(GenericTestUtils.getTempPath(
> - "recursiveCreateDir")), "file");
> + Path parentDir = new Path(GenericTestUtils.getTempPath(
> + "recursiveCreateDir"));
> + Path name = new Path(parentDir, "file");
>  boolean createParent = false;
>  
>  try {
> @@ -667,6 +668,9 @@ public void testRecursiveSeqFileCreate() throws 
> IOException {
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
> +
> + fs.deleteOnExit(parentDir);
> + fs.close();
>  }
>  
>  @Test{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL

2020-04-11 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081370#comment-17081370
 ] 

Ctest commented on HADOOP-16958:


The tests failed because I moved the policy==null check to the start of 
ZKFCRpcServer and did not check whether HADOOP_SECURITY_AUTHORIZATION is true. 
Policy should not be null if HADOOP_SECURITY_AUTHORIZATION is true.

There is a check for HADOOP_SECURITY_AUTHORIZATION==true at the end of 
ZKFCRpcServer but not at the start. Should I still move the policy==null check 
back to the end? Or add an additional check for 
HADOOP_SECURITY_AUTHORIZATION==true and policy==null at the start, like 
below?

 
{code:java}
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
index 86dd91ee142..6c49d70a1d7 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
@@ -20,6 +20,7 @@
 import java.io.IOException;
 import java.net.InetSocketAddress;
 
+import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -46,7 +47,17 @@
   ZKFCRpcServer(Configuration conf,
   InetSocketAddress bindAddr,
   ZKFailoverController zkfc,
-  PolicyProvider policy) throws IOException {
+  PolicyProvider policy) throws IOException, 
HadoopIllegalArgumentException {
+boolean securityAuthorizationEnabled = conf.getBoolean(
+CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION,
+false);
+if (securityAuthorizationEnabled && policy == null) {
+  throw new HadoopIllegalArgumentException(
+  CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION
+  + "is configured to true but service-level"
+  + "authorization security policy is null.");
+}
+
 this.zkfc = zkfc;
 
 RPC.setProtocolEngine(conf, ZKFCProtocolPB.class,
@@ -61,8 +72,7 @@
 .setVerbose(false).build();
 
 // set service-level authorization security policy
-if (conf.getBoolean(
-CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
+if (securityAuthorizationEnabled) {
   server.refreshServiceAcl(conf, policy);
 }

{code}

> NullPointerException(NPE) when hadoop.security.authorization is enabled but 
> the input PolicyProvider for ZKFCRpcServer is NULL
> --
>
> Key: HADOOP-16958
> URL: https://issues.apache.org/jira/browse/HADOOP-16958
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, ha
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
> Attachments: HADOOP-16958.000.patch, HADOOP-16958.001.patch, 
> HADOOP-16958.002.patch
>
>
> During initialization, ZKFCRpcServer refreshes the service authorization ACL 
> for the service handled by this server if config 
> hadoop.security.authorization is enabled, by calling refreshServiceAcl with 
> the input PolicyProvider and Configuration.
> {code:java}
> ZKFCRpcServer(Configuration conf,
>  InetSocketAddress bindAddr,
>  ZKFailoverController zkfc,
>  PolicyProvider policy) throws IOException {
>  this.server = ...
>  
>  // set service-level authorization security policy
>  if (conf.getBoolean(
>  CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
>  server.refreshServiceAcl(conf, policy);
>  }
> }{code}
> refreshServiceAcl calls 
> ServiceAuthorizationManager#refreshWithLoadedConfiguration which directly 
> gets services from the provider with provider.getServices(). When the 
> provider is NULL, the code throws NPE without an informative message. In 
> addition, the default value of config 
> `hadoop.security.authorization.policyprovider` (which controls PolicyProvider 
> here) is NULL and the only usage of ZKFCRpcServer initializer provides only 
> an abstract method getPolicyProvider which does not enforce that 
> PolicyProvider should not be NULL.
> The suggestion here is to either add a guard check or exception handling with 
> an informative logging message on ZKFCRpcServer to handle input 
> PolicyProvider being NULL.
>  
> I am very happy to provide a patch for it if the issue is confirmed :)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: 

[jira] [Comment Edited] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL

2020-04-11 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081370#comment-17081370
 ] 

Ctest edited comment on HADOOP-16958 at 4/11/20, 4:38 PM:
--

The tests failed because I moved the policy==null check to the start of 
ZKFCRpcServer and did not check whether HADOOP_SECURITY_AUTHORIZATION is true. 
Policy should not be null if HADOOP_SECURITY_AUTHORIZATION is true.

There is a check for HADOOP_SECURITY_AUTHORIZATION==true at the end of 
ZKFCRpcServer but not at the start. Should I still move the policy==null check 
back to the end? Or add an additional check for 
HADOOP_SECURITY_AUTHORIZATION==true and policy==null at the start, like 
below?

 
{code:java}
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java

   ZKFCRpcServer(Configuration conf,
   InetSocketAddress bindAddr,
   ZKFailoverController zkfc,
-  PolicyProvider policy) throws IOException {
+  PolicyProvider policy) throws IOException, 
HadoopIllegalArgumentException {
+boolean securityAuthorizationEnabled = conf.getBoolean(
+CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION,
+false);
+if (securityAuthorizationEnabled && policy == null) {
+  throw new HadoopIllegalArgumentException(
+  CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION
+  + "is configured to true but service-level"
+  + "authorization security policy is null.");
+}
+
 this.zkfc = zkfc;
 
 RPC.setProtocolEngine(conf, ZKFCProtocolPB.class,
@@ -61,8 +72,7 @@
 .setVerbose(false).build();
 
 // set service-level authorization security policy
-if (conf.getBoolean(
-CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
+if (securityAuthorizationEnabled) {
   server.refreshServiceAcl(conf, policy);
 }

{code}


was (Author: ctest.team):
The tests failed because I moved the policy==null check to the start of 
ZKFCRpcServer and did not check whether HADOOP_SECURITY_AUTHORIZATION is true. 
Policy should not be null if HADOOP_SECURITY_AUTHORIZATION is true.

There is a check for HADOOP_SECURITY_AUTHORIZATION==true at the end of 
ZKFCRpcServer but not at the start. Should I still move the policy==null check 
back to the end? Or add an additional check for 
HADOOP_SECURITY_AUTHORIZATION==true and policy==null at the start, like 
below?

 
{code:java}
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
index 86dd91ee142..6c49d70a1d7 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
@@ -20,6 +20,7 @@
 import java.io.IOException;
 import java.net.InetSocketAddress;
 
+import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -46,7 +47,17 @@
   ZKFCRpcServer(Configuration conf,
   InetSocketAddress bindAddr,
   ZKFailoverController zkfc,
-  PolicyProvider policy) throws IOException {
+  PolicyProvider policy) throws IOException, 
HadoopIllegalArgumentException {
+boolean securityAuthorizationEnabled = conf.getBoolean(
+CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION,
+false);
+if (securityAuthorizationEnabled && policy == null) {
+  throw new HadoopIllegalArgumentException(
+  CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION
+  + "is configured to true but service-level"
+  + "authorization security policy is null.");
+}
+
 this.zkfc = zkfc;
 
 RPC.setProtocolEngine(conf, ZKFCProtocolPB.class,
@@ -61,8 +72,7 @@
 .setVerbose(false).build();
 
 // set service-level authorization security policy
-if (conf.getBoolean(
-CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
+if (securityAuthorizationEnabled) {
   server.refreshServiceAcl(conf, policy);
 }

{code}

> NullPointerException(NPE) when hadoop.security.authorization is enabled but 
> the input PolicyProvider for ZKFCRpcServer is NULL
> --
>
> Key: HADOOP-16958
> URL: https://issues.apache.org/jira/browse/HADOOP-16958
> Project: 

[GitHub] [hadoop] hadoop-yetus commented on issue #1953: HADOOP-16528. Update document for web authentication kerberos principal configuration.

2020-04-11 Thread GitBox
hadoop-yetus commented on issue #1953: HADOOP-16528. Update document for web 
authentication kerberos principal configuration.
URL: https://github.com/apache/hadoop/pull/1953#issuecomment-612460433
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 51s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 38s |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m  3s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m  3s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 26s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 18s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 15s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 26s |  the patch passed  |
   | -1 :x: |  javac  |  17m 26s |  root generated 6 new + 1864 unchanged - 0 
fixed = 1870 total (was 1864)  |
   | -0 :warning: |  checkstyle  |   3m  2s |  root: The patch generated 2 new 
+ 996 unchanged - 2 fixed = 998 total (was 998)  |
   | +1 :green_heart: |  mvnsite  |   3m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 44s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   6m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 12s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 111m 35s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 43s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 253m 34s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestRaceWhenRelogin |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1953 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux cbb5210d954d 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 275c478 |
   | Default Java | 1.8.0_242 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/1/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/1/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/1/testReport/ |
   | Max. process+thread count | 3070 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1953/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the 

[jira] [Commented] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused failures in subsequent run

2020-04-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081365#comment-17081365
 ] 

Ayush Saxena commented on HADOOP-16967:
---

Thanks [~ctest.team] for the report and fix. I am able to repro the said issue. 
Minor comment:
{code:java}
+fs.deleteOnExit(parentDir);
+fs.close();
   }
{code}
The delete part should be in a {{finally}} block, so that if the test fails 
after the file is created but before the delete is reached, the file still gets deleted.

> TestSequenceFile#testRecursiveSeqFileCreate failed to clean its data caused 
> failures in subsequent run 
> ---
>
> Key: HADOOP-16967
> URL: https://issues.apache.org/jira/browse/HADOOP-16967
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, test
> Attachments: HADOOP-16967.000.patch
>
>
> The test expects an IOException when creating a writer for file 
> `target/test/data/recursiveCreateDir/file` with `createParent=false`. And it 
> expects to create the writer successfully when `createParent=True`. 
> `createParent` means `create parent directory if non-existent`.
> The test will pass if it is run for the first time, but it will fail for the 
> second run. This is because the test did not clean the parent directory 
> created during the first run.
> The parent directory `recursiveCreateDir` was created, but it was not deleted 
> before the test finished. So, when the test was run again, it still treated 
> the parent directory `recursiveCreateDir` as non-existent and expected an 
> IOException from creating a writer with `createParent=false`. Then the test 
> did not get the expected IOException because `recursiveCreateDir` has been 
> created in the first test run.
> {code:java}
> @SuppressWarnings("deprecation")
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
>  Path name = new Path(new Path(GenericTestUtils.getTempPath(
>  "recursiveCreateDir")), "file"); // FILE SUCCESSULLY CREATED HERE
>  boolean createParent = false;
> try {
>  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  fail("Expected an IOException due to missing parent");
>  } catch (IOException ioe) {
>  // Expected
>  }
> createParent = true;
>  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
>  }
> {code}
> Suggested patch:
>  
> {code:java}
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> index 044824356ed..1aff2936264 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> @@ -649,8 +649,9 @@ public void testCreateWriterOnExistingFile() throws 
> IOException {
>  @Test
>  public void testRecursiveSeqFileCreate() throws IOException {
>  FileSystem fs = FileSystem.getLocal(conf);
> - Path name = new Path(new Path(GenericTestUtils.getTempPath(
> - "recursiveCreateDir")), "file");
> + Path parentDir = new Path(GenericTestUtils.getTempPath(
> + "recursiveCreateDir"));
> + Path name = new Path(parentDir, "file");
>  boolean createParent = false;
>  
>  try {
> @@ -667,6 +668,9 @@ public void testRecursiveSeqFileCreate() throws 
> IOException {
>  RandomDatum.class, 512, (short) 1, 4096, createParent,
>  CompressionType.NONE, null, new Metadata());
>  // should succeed, fails if exception thrown
> +
> + fs.deleteOnExit(parentDir);
> + fs.close();
>  }
>  
>  @Test{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL

2020-04-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081346#comment-17081346
 ] 

Ayush Saxena commented on HADOOP-16958:
---

The test failures seem to be related. Please check once.

> NullPointerException(NPE) when hadoop.security.authorization is enabled but 
> the input PolicyProvider for ZKFCRpcServer is NULL
> --
>
> Key: HADOOP-16958
> URL: https://issues.apache.org/jira/browse/HADOOP-16958
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, ha
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
> Attachments: HADOOP-16958.000.patch, HADOOP-16958.001.patch, 
> HADOOP-16958.002.patch
>
>
> During initialization, ZKFCRpcServer refreshes the service authorization ACL 
> for the service handled by this server if config 
> hadoop.security.authorization is enabled, by calling refreshServiceAcl with 
> the input PolicyProvider and Configuration.
> {code:java}
> ZKFCRpcServer(Configuration conf,
>  InetSocketAddress bindAddr,
>  ZKFailoverController zkfc,
>  PolicyProvider policy) throws IOException {
>  this.server = ...
>  
>  // set service-level authorization security policy
>  if (conf.getBoolean(
>  CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
>  server.refreshServiceAcl(conf, policy);
>  }
> }{code}
> refreshServiceAcl calls 
> ServiceAuthorizationManager#refreshWithLoadedConfiguration which directly 
> gets services from the provider with provider.getServices(). When the 
> provider is NULL, the code throws NPE without an informative message. In 
> addition, the default value of config 
> `hadoop.security.authorization.policyprovider` (which controls PolicyProvider 
> here) is NULL and the only usage of ZKFCRpcServer initializer provides only 
> an abstract method getPolicyProvider which does not enforce that 
> PolicyProvider should not be NULL.
> The suggestion here is to either add a guard check or exception handling with 
> an informative logging message on ZKFCRpcServer to handle input 
> PolicyProvider being NULL.
>  
> I am very happy to provide a patch for it if the issue is confirmed :)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16528) Update document for web authentication kerberos principal configuration

2020-04-11 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16528:
--
Status: Open  (was: Patch Available)

> Update document for web authentication kerberos principal configuration
> ---
>
> Key: HADOOP-16528
> URL: https://issues.apache.org/jira/browse/HADOOP-16528
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Chen Zhang
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The config \{{dfs.web.authentication.kerberos.principal}} is no longer used 
> after HADOOP-16354, but the WebHDFS documentation has not been updated; 
> hdfs-default.xml should be updated as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brfrn169 opened a new pull request #1954: HDFS-15217 Add more information to longest write/read lock held log

2020-04-11 Thread GitBox
brfrn169 opened a new pull request #1954: HDFS-15217 Add more information to 
longest write/read lock held log
URL: https://github.com/apache/hadoop/pull/1954
 
 
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16528) Update document for web authentication kerberos principal configuration

2020-04-11 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16528:
--
Status: Patch Available  (was: Open)

> Update document for web authentication kerberos principal configuration
> ---
>
> Key: HADOOP-16528
> URL: https://issues.apache.org/jira/browse/HADOOP-16528
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Chen Zhang
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The config \{{dfs.web.authentication.kerberos.principal}} is no longer used 
> after HADOOP-16354, but the WebHDFS documentation has not been updated; 
> hdfs-default.xml should be updated as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims opened a new pull request #1953: HADOOP-16528. Update document for web authentication kerberos principal configuration.

2020-04-11 Thread GitBox
iwasakims opened a new pull request #1953: HADOOP-16528. Update document for 
web authentication kerberos principal configuration.
URL: https://github.com/apache/hadoop/pull/1953
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16970) Supporting the new credentials provider in Hadoop-cos

2020-04-11 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16970:
---
Description: 
This task aims to support the following credentials providers in Hadoop-cos:
 * SessionCredentialsProvider
 * InstanceCredentialsProvider

  was:
This task aims to support three credentials providers in Hadoop-cos:
 * SessionCredentialsProvider
 * CVMInstanceCredentialsProvider
 * CPMInstanceCredentialsProvider


> Supporting the new credentials provider in Hadoop-cos
> -
>
> Key: HADOOP-16970
> URL: https://issues.apache.org/jira/browse/HADOOP-16970
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
>
> This task aims to support the following credentials providers in Hadoop-cos:
>  * SessionCredentialsProvider
>  * InstanceCredentialsProvider



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16970) Supporting the new credentials provider in Hadoop-cos

2020-04-11 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16970:
---
Summary: Supporting the new credentials provider in Hadoop-cos  (was: 
Supporting the new Credentials Provider in Hadoop-cos)

> Supporting the new credentials provider in Hadoop-cos
> -
>
> Key: HADOOP-16970
> URL: https://issues.apache.org/jira/browse/HADOOP-16970
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
>
> This task aims to support three credentials providers in Hadoop-cos:
>  * SessionCredentialsProvider
>  * CVMInstanceCredentialsProvider
>  * CPMInstanceCredentialsProvider



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16970) Supporting the new Credentials Provider in Hadoop-cos

2020-04-11 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16970:
---
Summary: Supporting the new Credentials Provider in Hadoop-cos  (was: 
Support the new Credentials Provider in Hadoop-cos)

> Supporting the new Credentials Provider in Hadoop-cos
> -
>
> Key: HADOOP-16970
> URL: https://issues.apache.org/jira/browse/HADOOP-16970
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
>
> This task aims to support three credentials providers in Hadoop-cos:
>  * SessionCredentialsProvider
>  * CVMInstanceCredentialsProvider
>  * CPMInstanceCredentialsProvider



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16969) Fixing the issue that the single file upload can not be retried in Hadoop-cos

2020-04-11 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16969:
---
Summary: Fixing the issue that the single file upload can not be retried in 
Hadoop-cos  (was: Fix the issue that the single file upload can not be retried 
in Hadoop-cos)

> Fixing the issue that the single file upload can not be retried in Hadoop-cos
> -
>
> Key: HADOOP-16969
> URL: https://issues.apache.org/jira/browse/HADOOP-16969
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: YangY
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16970) Support the new Credentials Provider in Hadoop-cos

2020-04-11 Thread YangY (Jira)
YangY created HADOOP-16970:
--

 Summary: Support the new Credentials Provider in Hadoop-cos
 Key: HADOOP-16970
 URL: https://issues.apache.org/jira/browse/HADOOP-16970
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/cos
Reporter: YangY
Assignee: YangY


This task aims to support three credentials providers in Hadoop-cos:
 * SessionCredentialsProvider
 * CVMInstanceCredentialsProvider
 * CPMInstanceCredentialsProvider



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16955) Umbrella Jira for improving the Hadoop-cos support in Hadoop

2020-04-11 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16955:
---
Description: 
This Umbrella Jira focuses on fixing some known bugs and adding some important 
features.

 

bugfix:
 # resolve the dependency conflict;
 # fix the upload buffer failing to be returned when some exceptions occur;
 # fix the issue that the single file upload can not be retried;
 # fix the bug of checking whether a file exists by frequently listing the file.

features:
 # support SessionCredentialsProvider and InstanceCredentialsProvider, which 
allow users to specify the credentials in the URI or obtain them from the CVM 
(Tencent Cloud Virtual Machine) bound to the CAM role that can access the COS 
bucket;
 # support server-side encryption based on SSE-COS and SSE-C;
 # support the HTTP proxy settings;
 # support the storage class settings;
 # support the CRC64 checksum.

  was:
This Jira focuses on fixing some known bugs and adding some important features.

 

bugfix:
 # resolve the dependency conflict;
 # fix the upload buffer returning failed when some exceptions occur;
 # fix the issue that the single file upload can not be retried;
 # fix the bug of checking if a file exists through listing the file frequently.

features:
 # support SessionCredentialsProvider and InstanceCredentialsProvider, which 
allows users to specify the credentials in URI or get it from the CVM (Tencent 
Cloud Virtual Machine) bound to the CAM role that can access the COS bucket;
 # support the server encryption  based on SSE-COS and SSE-C;
 # support the HTTP proxy settings;
 # support the storage class settings;
 # support the CRC64 checksum.


> Umbrella Jira for improving the Hadoop-cos support in Hadoop
> 
>
> Key: HADOOP-16955
> URL: https://issues.apache.org/jira/browse/HADOOP-16955
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16955-branch-3.3.001.patch
>
>   Original Estimate: 48h
>  Time Spent: 4h
>  Remaining Estimate: 44h
>
> This Umbrella Jira focuses on fixing some known bugs and adding some 
> important features.
>  
> bugfix:
>  # resolve the dependency conflict;
>  # fix the failure to return the upload buffer when exceptions occur;
>  # fix the issue that a single file upload cannot be retried;
>  # fix the bug of checking whether a file exists by frequently listing it.
> features:
>  # support SessionCredentialsProvider and InstanceCredentialsProvider, which 
> allow users to specify the credentials in the URI or obtain them from a CVM 
> (Tencent Cloud Virtual Machine) bound to a CAM role that can access the COS 
> bucket;
>  # support server-side encryption based on SSE-COS and SSE-C;
>  # support the HTTP proxy settings;
>  # support the storage class settings;
>  # support the CRC64 checksum.
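
For orientation only, a hedged sketch of how features 2 through 5 in the list 
above might surface as fs.cosn.* settings. Every property name below is an 
assumption following the fs.cosn.* naming convention, not a confirmed key 
from this umbrella:

import org.apache.hadoop.conf.Configuration;

public class CosnFeatureSettings {
  public static Configuration sketch() {
    Configuration conf = new Configuration();
    // Server-side encryption (assumed key; SSE-COS variant shown).
    conf.set("fs.cosn.server-side-encryption.algorithm", "SSE-COS");
    // HTTP proxy settings (assumed keys).
    conf.set("fs.cosn.http.proxy.ip", "10.0.0.1");
    conf.set("fs.cosn.http.proxy.port", "8080");
    // Storage class (assumed key; STANDARD_IA is one COS storage class).
    conf.set("fs.cosn.storage.class", "STANDARD_IA");
    // CRC64 checksum toggle (assumed key).
    conf.setBoolean("fs.cosn.crc64.checksum.enabled", true);
    return conf;
  }
}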






[jira] [Created] (HADOOP-16969) Fix the issue that the single file upload can not be retried in Hadoop-cos

2020-04-11 Thread YangY (Jira)
YangY created HADOOP-16969:
--

 Summary: Fix the issue that the single file upload can not be 
retried in Hadoop-cos
 Key: HADOOP-16969
 URL: https://issues.apache.org/jira/browse/HADOOP-16969
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: YangY
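
A minimal sketch of the retry pattern the summary implies, assuming the 
upload stream can be re-opened for each attempt (class and method names are 
hypothetical, not the actual hadoop-cos fix):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

class RetryableSingleFileUpload {
  private static final int MAX_RETRIES = 3;

  // Re-open the file on every attempt: retrying with a half-consumed
  // InputStream is a common reason an upload "cannot be retried".
  void upload(File file) throws IOException {
    IOException last = null;
    for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
      try (InputStream in = new FileInputStream(file)) {
        putObject(in, file.length()); // hypothetical COS client call
        return;
      } catch (IOException e) {
        last = e;
      }
    }
    throw last;
  }

  private void putObject(InputStream in, long length) throws IOException {
    // ... hand the stream to the COS SDK here ...
  }
}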









[jira] [Created] (HADOOP-16968) Optimizing the upload buffer in Hadoop-cos

2020-04-11 Thread YangY (Jira)
YangY created HADOOP-16968:
--

 Summary: Optimizing the upload buffer in Hadoop-cos
 Key: HADOOP-16968
 URL: https://issues.apache.org/jira/browse/HADOOP-16968
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/cos
Reporter: YangY
Assignee: YangY


This task focuses on fixing the bug where returning an upload buffer fails 
when exceptions occur.

 

In addition, optimized upload buffer management will be provided.
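
A minimal sketch of the failure mode and fix being described, assuming a 
pooled upload buffer that must be handed back on every code path (all names 
are hypothetical):

import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical pool: buffers must be returned even when an upload throws,
// otherwise the pool drains and later uploads block or fail.
class UploadBufferPool {
  private final BlockingQueue<ByteBuffer> pool;

  UploadBufferPool(int count, int size) {
    pool = new ArrayBlockingQueue<>(count);
    for (int i = 0; i < count; i++) {
      pool.add(ByteBuffer.allocateDirect(size));
    }
  }

  ByteBuffer borrow() throws InterruptedException {
    return pool.take();
  }

  void giveBack(ByteBuffer buffer) {
    buffer.clear();
    pool.add(buffer);
  }
}

class Uploader {
  private final UploadBufferPool pool =
      new UploadBufferPool(8, 4 * 1024 * 1024);

  void uploadPart(byte[] data) throws Exception {
    ByteBuffer buffer = pool.borrow();
    try {
      buffer.put(data);
      // ... send the buffer to COS here ...
    } finally {
      // The essence of the fix: return the buffer on all paths.
      pool.giveBack(buffer);
    }
  }
}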

 






[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-11 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16959:
---
Summary: Resolve hadoop-cos dependency conflict  (was: resolve hadoop-cos 
dependency conflict)

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
>
> There are some dependency conflicts between Hadoop-common and Hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.






[jira] [Updated] (HADOOP-16955) Umbrella Jira for improving the Hadoop-cos support in Hadoop

2020-04-11 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16955:
---
Summary: Umbrella Jira for improving the Hadoop-cos support in Hadoop  
(was: Hadoop-cos: Fix some bugs and add some important features)

> Umbrella Jira for improving the Hadoop-cos support in Hadoop
> 
>
> Key: HADOOP-16955
> URL: https://issues.apache.org/jira/browse/HADOOP-16955
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16955-branch-3.3.001.patch
>
>   Original Estimate: 48h
>  Time Spent: 4h
>  Remaining Estimate: 44h
>
> This Jira focuses on fixing some known bugs and adding some important 
> features.
>  
> bugfix:
>  # resolve the dependency conflict;
>  # fix the failure to return the upload buffer when exceptions occur;
>  # fix the issue that a single file upload cannot be retried;
>  # fix the bug of checking whether a file exists by frequently listing it.
> features:
>  # support SessionCredentialsProvider and InstanceCredentialsProvider, which 
> allow users to specify the credentials in the URI or obtain them from a CVM 
> (Tencent Cloud Virtual Machine) bound to a CAM role that can access the COS 
> bucket;
>  # support server-side encryption based on SSE-COS and SSE-C;
>  # support the HTTP proxy settings;
>  # support the storage class settings;
>  # support the CRC64 checksum.






[GitHub] [hadoop] mukund-thakur commented on issue #1943: Hadoop 16465 listlocatedstatus optimisation

2020-04-11 Thread GitBox
mukund-thakur commented on issue #1943: Hadoop 16465 listlocatedstatus 
optimisation
URL: https://github.com/apache/hadoop/pull/1943#issuecomment-612370398
 
 
   Failures are happening for the guarded bucket. 
   This is a parameterised test which runs for both raw and guarded FS. If the 
guard settings are not enabled properly, the tests actually skip rather than 
fail. So I am not sure what I am missing here :( 
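
A hedged sketch of the raw/guarded pattern described in the comment, showing 
how Assume turns a missing guard configuration into a skipped test rather 
than a failure (JUnit 4 style; the class and the configuration check are 
illustrative, not the actual S3A suite):

import static org.junit.Assume.assumeTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

// The same test body runs twice, once raw and once guarded; the guarded
// variant skips (not fails) when the metadata store is not configured.
@RunWith(Parameterized.class)
public class ListLocatedStatusTest {

  @Parameterized.Parameters(name = "guarded={0}")
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][] {{false}, {true}});
  }

  private final boolean guarded;

  public ListLocatedStatusTest(boolean guarded) {
    this.guarded = guarded;
  }

  @Test
  public void testListLocatedStatus() {
    if (guarded) {
      // Assume() turns a missing guard configuration into a skip.
      assumeTrue("metadata store not configured", isGuardConfigured());
    }
    // ... exercise listLocatedStatus() against the bucket ...
  }

  private boolean isGuardConfigured() {
    return System.getProperty("test.guard.enabled") != null;
  }
}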





[jira] [Commented] (HADOOP-14794) Standalone MiniKdc server

2020-04-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081180#comment-17081180
 ] 

Hadoop QA commented on HADOOP-14794:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}551m 25s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
25s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}712m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.yarn.applications.distributedshell.TestDistributedShell |
|   | hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
|   | hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d |
| JIRA Issue | HADOOP-14794 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887428/HADOOP-14794.003.patch
 |
| Optional Tests |  dupname  asflicense  shellcheck  shelldocs  compile  javac  
javadoc  mvninstall  mvnsite  unit  shadedclient  xml  |
| uname | Linux ac8766e63b2d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool |