[GitHub] [hadoop] hadoop-yetus commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer 
Interface
URL: https://github.com/apache/hadoop/pull/1842#issuecomment-588688524
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 21s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   | -0 :warning: |  patch  |   1m  7s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 21s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 15s |  hadoop-tools/hadoop-azure: The 
patch generated 37 new + 8 unchanged - 1 fixed = 45 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 13s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 30s |   |
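   (Reading the checkstyle delta above: "was 9" is the pre-patch issue count;
   8 of those remain, 1 was fixed, and the patch introduces 37 new ones, so
   the post-patch total is 37 + 8 = 45.)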
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b9d878c0655c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/8/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/8/testReport/ |
   | Max. process+thread count | 305 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16864) ABFS: Test code with Delegation SAS generation logic

2020-02-19 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan resolved HADOOP-16864.
----------------------------------------
Release Note: Container SAS, delegation SAS (DSAS), and directory SAS are 
handled alike in the ABFS driver. The tests for HADOOP-16730 include a sample 
reference SASTokenProvider for container SAS. Resolving this as a duplicate of 
HADOOP-16730.
  Resolution: Duplicate

> ABFS: Test code with Delegation SAS generation logic
> ----------------------------------------------------
>
> Key: HADOOP-16864
> URL: https://issues.apache.org/jira/browse/HADOOP-16864
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.1
>
>
> Add sample delegation SAS token generation code in test framework for 
> reference for any authorizer adopters of SAS authentication.
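
For context, a minimal sketch of the shape such a sample provider could take. 
This is an illustration only: the interface and method signatures below are 
assumptions inferred from the HADOOP-16730 discussion, not code quoted from 
the patch.

{noformat}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

// Hypothetical test-only provider; all names here are illustrative.
public class FixedContainerSASTokenProvider implements SASTokenProvider {

  private String containerSas;

  @Override
  public void initialize(Configuration conf, String accountName)
      throws IOException {
    // A real provider would generate or fetch the SAS. A test stub can
    // simply read a pre-generated container SAS from configuration.
    containerSas = conf.get("fs.azure.test.fixed.sas.token");
  }

  @Override
  public String getSASToken(String account, String fileSystem, String path,
      String operation) throws IOException {
    // A container SAS covers the whole container, so the same token is
    // returned regardless of path or operation.
    return containerSas;
  }
}
{noformat}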



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1855: HADOOP-16869. Upgrade findbugs-maven-plugin to 3.0.5 to fix mvn findbugs:findbugs failure

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1855: HADOOP-16869. Upgrade 
findbugs-maven-plugin to 3.0.5 to fix mvn findbugs:findbugs failure
URL: https://github.com/apache/hadoop/pull/1855#issuecomment-588596493
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   2m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  54m 22s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  shadedclient  |   5m 22s |  patch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 12s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  67m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1855/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1855 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux b16ec2e4e5b0 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1855/1/testReport/ |
   | Max. process+thread count | 327 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1855/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16869:
-----------------------------------
Component/s: build

> mvn findbugs:findbugs fails
> ---------------------------
>
> Key: HADOOP-16869
> URL: https://issues.apache.org/jira/browse/HADOOP-16869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> mvn findbugs:findbugs is failing:
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter 
> pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with 
> value '${plugin.artifacts}' of type 
> java.util.Collections.UnmodifiableRandomAccessList to property of type 
> java.util.ArrayList -> [Help 1]
>  {noformat}
> We have to update the version of findbugs-maven-plugin.
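
The error itself explains the cause: recent Maven versions inject 
${plugin.artifacts} as an unmodifiable list, which plugin version 3.0.0 tries 
to bind to an ArrayList-typed parameter. Per the title of PR #1855, the fix is 
a version bump in the hadoop-project POM; a sketch of the relevant fragment 
(the exact declaration site in the file may differ):

{noformat}
<!-- hadoop-project/pom.xml (fragment) -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <version>3.0.5</version>
</plugin>
{noformat}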






[GitHub] [hadoop] aajisaka opened a new pull request #1855: HADOOP-16869. Upgrade findbugs-maven-plugin to 3.0.5 to fix mvn findbugs:findbugs failure

2020-02-19 Thread GitBox
aajisaka opened a new pull request #1855: HADOOP-16869. Upgrade 
findbugs-maven-plugin to 3.0.5 to fix mvn findbugs:findbugs failure
URL: https://github.com/apache/hadoop/pull/1855
 
 
   JIRA: https://issues.apache.org/jira/browse/HADOOP-16869





[jira] [Updated] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16869:
-----------------------------------
Status: Patch Available  (was: Open)

> mvn findbugs:findbugs fails
> ---------------------------
>
> Key: HADOOP-16869
> URL: https://issues.apache.org/jira/browse/HADOOP-16869
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> mvn findbugs:findbugs is failing:
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter 
> pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with 
> value '${plugin.artifacts}' of type 
> java.util.Collections.UnmodifiableRandomAccessList to property of type 
> java.util.ArrayList -> [Help 1]
>  {noformat}
> We have to update the version of findbugs-maven-plugin.






[jira] [Assigned] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16869:
--------------------------------------

Assignee: Akira Ajisaka

> mvn findbugs:findbugs fails
> ---------------------------
>
> Key: HADOOP-16869
> URL: https://issues.apache.org/jira/browse/HADOOP-16869
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> mvn findbugs:findbugs is failing:
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter 
> pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with 
> value '${plugin.artifacts}' of type 
> java.util.Collections.UnmodifiableRandomAccessList to property of type 
> java.util.ArrayList -> [Help 1]
>  {noformat}
> We have to update the version of findbugs-maven-plugin.






[GitHub] [hadoop] hadoop-yetus commented on issue #1854: Hadoop 16864: TEST - NOT FOR CHECKIN

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1854: Hadoop 16864: TEST - NOT FOR CHECKIN
URL: https://github.com/apache/hadoop/pull/1854#issuecomment-588580290
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
11 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   5m 53s |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 44s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |   1m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  5s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 36s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 34s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javac  |   0m 34s |  hadoop-azure in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 22s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 35s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | -1 :x: |  findbugs  |   0m 25s |  hadoop-azure in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 24s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 29s |  The patch generated 3 ASF License 
warnings.  |
   |  |   |  34m 48s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1854 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a9f2cceb38a1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/branch-mvninstall-root.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1854/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 419 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer 
Interface
URL: https://github.com/apache/hadoop/pull/1842#issuecomment-588572835
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 19s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m  0s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 28s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 52s |  trunk passed  |
   | -0 :warning: |  patch  |   1m  9s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-azure: The 
patch generated 36 new + 8 unchanged - 1 fixed = 44 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 22s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 18s |  hadoop-azure in the patch passed.  
|
   | -1 :x: |  asflicense  |   0m 32s |  The patch generated 1 ASF License 
warnings.  |
   |  |   |  63m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 47f03bdcfcbe 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 321 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] snvijaya commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-19 Thread GitBox
snvijaya commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#issuecomment-588568589
 
 
   @steveloughran  @ThomasMarquardt - Have updated the review based on the 
recent suggestions from Thomas. Test results are as pasted above. Please review.





[GitHub] [hadoop] snvijaya commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-19 Thread GitBox
snvijaya commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#issuecomment-588568283
 
 
   With a namespace-enabled account (East US 2):
   `[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0`
   `[WARNING] Tests run: 407, Failures: 0, Errors: 0, Skipped: 32`
   `[WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 24`
   
   Without a namespace-enabled account (East US 2):
   `[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0`
   `[WARNING] Tests run: 407, Failures: 0, Errors: 0, Skipped: 232`
   `[WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 24`
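   (For reference: result lines like these typically come from running the 
hadoop-azure integration suite, e.g. `mvn -T 1C clean verify` from 
hadoop-tools/hadoop-azure against a configured test account; the exact 
invocation may vary.)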





[GitHub] [hadoop] hadoop-yetus commented on issue #1854: Hadoop 16864: TEST - NOT FOR CHECKIN

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1854: Hadoop 16864: TEST - NOT FOR CHECKIN
URL: https://github.com/apache/hadoop/pull/1854#issuecomment-588568391
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
11 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   3m 24s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 40s |  hadoop-azure in trunk failed.  |
   | -0 :warning: |  checkstyle  |   0m 27s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 26s |  hadoop-azure in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   1m 27s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-azure in trunk failed.  |
   | +0 :ok: |  spotbugs  |   2m 21s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 26s |  hadoop-azure in trunk failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 12s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 21s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javac  |   0m 21s |  hadoop-azure in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 25s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 27s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |   0m 27s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed  |
   | -1 :x: |  findbugs  |   0m 35s |  hadoop-azure in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 38s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 37s |  The patch generated 4 ASF License 
warnings.  |
   |  |   |  14m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1854 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cf56cde4cb3c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/branch-compile-hadoop-tools_hadoop-azure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1854/out/maven-branch-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/branch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1854/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1854/1/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | unit | 

[GitHub] [hadoop] snvijaya opened a new pull request #1854: Hadoop 16864: TEST - NOT FOR CHECKIN

2020-02-19 Thread GitBox
snvijaya opened a new pull request #1854: Hadoop 16864: TEST - NOT FOR CHECKIN
URL: https://github.com/apache/hadoop/pull/1854
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move et

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-588542045
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 42s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed  |
   | -1 :x: |  javac  |   1m  3s |  hadoop-hdfs-project_hadoop-hdfs generated 6 
new + 580 unchanged - 0 fixed = 586 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 44s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 54 new + 245 unchanged - 0 fixed = 299 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-hdfs-project_hadoop-hdfs generated 
1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   3m 27s |  hadoop-hdfs-project/hadoop-hdfs 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  29m  1s |  hadoop-hdfs in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 40s |  ASF License check generated no 
output?  |
   |  |   | 103m 21s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  
org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext
 defines equals and uses Object.hashCode()  At 
INodeAttributeProvider.java:Object.hashCode()  At 
INodeAttributeProvider.java:[lines 211-217] |
   | Failed junit tests | hadoop.hdfs.TestDFSClientFailover |
   |   | hadoop.hdfs.TestWriteRead |
   |   | hadoop.hdfs.client.impl.TestBlockReaderFactory |
   |   | hadoop.hdfs.TestEncryptionZones |
   
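   On the FindBugs item above (equals defined while hashCode() is inherited 
from Object): the standard remedy is to override hashCode() over the same 
fields that equals() compares. A generic sketch with hypothetical field names, 
not the actual AuthorizationContext fields:

```java
import java.util.Objects;

final class Context {
  private final String user;
  private final String operation;

  Context(String user, String operation) {
    this.user = user;
    this.operation = operation;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof Context)) {
      return false;
    }
    Context other = (Context) o;
    return Objects.equals(user, other.user)
        && Objects.equals(operation, other.operation);
  }

  @Override
  public int hashCode() {
    // Hash the same fields equals() compares; otherwise equal objects
    // can land in different hash buckets.
    return Objects.hash(user, operation);
  }
}
```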
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ce99c1da1660 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/testReport/ |
   | Max. process+thread count | 2609 (vs. ulimit of 5500) |
   | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move et

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-588509498
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 45s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 50s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 25s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 23s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  the patch passed  |
   | -1 :x: |  javac  |   1m 14s |  hadoop-hdfs-project_hadoop-hdfs generated 6 
new + 580 unchanged - 0 fixed = 586 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 45s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 54 new + 245 unchanged - 0 fixed = 299 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-hdfs-project_hadoop-hdfs generated 
1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   4m 40s |  hadoop-hdfs-project/hadoop-hdfs 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 144m 22s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 37s |  The patch generated 1 ASF License 
warnings.  |
   |  |   | 221m  3s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  
org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext
 defines equals and uses Object.hashCode()  At 
INodeAttributeProvider.java:Object.hashCode()  At 
INodeAttributeProvider.java:[lines 211-217] |
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
   |   | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands |
   |   | hadoop.hdfs.server.namenode.TestLargeDirectoryDelete |
   |   | hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
   |   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.server.namenode.TestCacheDirectives |
   |   | hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy |
   |   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
   |   | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.server.namenode.TestEditLogAutoroll |
   |   | hadoop.hdfs.server.namenode.TestFSImage |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 366e61eb090f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3f1aad0 |
   | Default Java | 1.8.0_242 |
   | javac | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics 
API + S3A implementation
URL: https://github.com/apache/hadoop/pull/1820#issuecomment-588482038
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 37s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 58s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 23s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 57s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 17s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 29s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 11s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 51s |  root: The patch generated 97 new 
+ 95 unchanged - 19 fixed = 192 total (was 114)  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 24s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 54s |  hadoop-common-project_hadoop-common 
generated 87 new + 101 unchanged - 0 fixed = 188 total (was 101)  |
   | -1 :x: |  findbugs  |   2m 18s |  hadoop-common-project/hadoop-common 
generated 29 new + 0 unchanged - 0 fixed = 29 total (was 0)  |
   | -1 :x: |  findbugs  |   1m 16s |  hadoop-tools/hadoop-aws generated 14 new 
+ 0 unchanged - 0 fixed = 14 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 32s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 25s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 131m 46s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
42] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
45] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
48] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
51] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
54] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
57] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
60] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
63] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
66] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
69] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
72] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
75] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
81] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
78] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
84] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
87] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
90] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
93] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
96] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
99] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
102] |
   |  |  

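   On the run of "Unread public/protected field" items above: FindBugs flags 
fields that are declared but never read in the analyzed code. For a pure 
name-constants holder, the usual shape that avoids the warning is static final 
constants in a non-instantiable class. A generic sketch; the class and 
constant names are hypothetical, not taken from the patch:

```java
// Hypothetical constants holder, illustrating the pattern only.
public final class StreamStatisticNames {
  private StreamStatisticNames() {
    // constants only; never instantiated
  }

  /** Bytes read from the stream. */
  public static final String STREAM_BYTES_READ = "stream_bytes_read";

  /** Number of seek() operations. */
  public static final String STREAM_SEEK_OPERATIONS = "stream_seek_operations";
}
```
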
[GitHub] [hadoop] hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics 
API + S3A implementation
URL: https://github.com/apache/hadoop/pull/1820#issuecomment-588473900
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m  3s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 41s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 35s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 13s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 36s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 19s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |  15m 37s |  root in the patch failed.  |
   | -1 :x: |  javac  |  15m 37s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   2m 42s |  root: The patch generated 96 new 
+ 95 unchanged - 19 fixed = 191 total (was 114)  |
   | -1 :x: |  mvnsite  |   0m 40s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 10s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m  4s |  hadoop-common-project_hadoop-common 
generated 87 new + 101 unchanged - 0 fixed = 188 total (was 101)  |
   | -1 :x: |  findbugs  |   2m 16s |  hadoop-common-project/hadoop-common 
generated 29 new + 0 unchanged - 0 fixed = 29 total (was 0)  |
   | -1 :x: |  findbugs  |   0m 38s |  hadoop-aws in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 27s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |   0m 39s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 119m 33s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
42] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
45] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
48] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
51] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
54] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
57] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
60] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
63] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
66] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
69] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
72] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
75] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
81] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
78] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
84] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
87] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
90] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
93] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
96] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
99] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
102] |
   |  |  Unread public/protected field:At 

[GitHub] [hadoop] hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-588444854
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
7 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 46s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  4s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s |  hadoop-tools/hadoop-aws: The 
patch generated 1 new + 30 unchanged - 0 fixed = 31 total (was 30)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 35s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  66m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux 9de8b593adf5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/8/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/8/testReport/ |
   | Max. process+thread count | 342 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] mukund-thakur edited a comment on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-19 Thread GitBox
mukund-thakur edited a comment on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-588400930
 
 
   > Two tests are failing. Will debug. Also will rebase from trunk and fix 
merge conflicts.
   
   Fixed the merge conflicts. The above tests are succeeding now. 
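   
   For anyone trying the patch, a minimal sketch of enabling the skip (the 
`fs.s3a.bucket.probe` key and the meaning of `0` are assumptions for 
illustration, not confirmed here):
   
   ```java
   // Hypothetical: disable the bucket existence probe during S3A init.
   Configuration conf = new Configuration();
   conf.setInt("fs.s3a.bucket.probe", 0); // assumed: 0 = skip verifyBuckets
   FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
   ```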


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-19 Thread GitBox
mukund-thakur commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-588400930
 
 
   > Two tests are failing. Will debug. Also will rebase from trunk and fix 
merge conflicts.
   Fixed the merge conflicts. The above tests are succeeding now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmd

2020-02-19 Thread GitBox
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r381478385
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##
 @@ -68,6 +277,8 @@ public abstract void checkPermission(String fsOwner, String 
supergroup,
 boolean ignoreEmptyDir)
 throws AccessControlException;
 
+void checkPermissionWithContext(AuthorizationContext authzContext)
 
 Review comment:
   done.
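   
   As an aside for readers of the diff above, here is a minimal usage sketch 
of the new builder (the `enforcer` variable and all argument values are 
hypothetical placeholders, not part of the patch):
   
   ```java
   // Hypothetical caller: build the context, then hand it to an
   // AccessControlEnforcer implementation for the permission check.
   INodeAttributeProvider.AuthorizationContext ctx =
       new INodeAttributeProvider.AuthorizationContext.Builder()
           .fsOwner(fsOwner)
           .supergroup(supergroup)
           .callerUgi(callerUgi)
           .operationName("mkdir")
           .build();
   enforcer.checkPermissionWithContext(ctx);
   ```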


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmd

2020-02-19 Thread GitBox
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r381478346
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##
 @@ -17,19 +17,227 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.ipc.CallerContext;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public abstract class INodeAttributeProvider {
 
+  public static class AuthorizationContext {
+public String fsOwner;
+public String supergroup;
+public UserGroupInformation callerUgi;
+public INodeAttributes[] inodeAttrs;
+public INode[] inodes;
+public byte[][] pathByNameArr;
+public int snapshotId;
+public String path;
+public int ancestorIndex;
+public boolean doCheckOwner;
+public FsAction ancestorAccess;
+public FsAction parentAccess;
+public FsAction access;
+public FsAction subAccess;
+public boolean ignoreEmptyDir;
+public String operationName;
+public CallerContext callerContext;
+
+public static class Builder {
+  public String fsOwner;
+  public String supergroup;
+  public UserGroupInformation callerUgi;
+  public INodeAttributes[] inodeAttrs;
+  public INode[] inodes;
+  public byte[][] pathByNameArr;
+  public int snapshotId;
+  public String path;
+  public int ancestorIndex;
+  public boolean doCheckOwner;
+  public FsAction ancestorAccess;
+  public FsAction parentAccess;
+  public FsAction access;
+  public FsAction subAccess;
+  public boolean ignoreEmptyDir;
+  public String operationName;
+  public CallerContext callerContext;
+
+  public AuthorizationContext build() {
+return new AuthorizationContext(this);
+  }
+
+  public Builder fsOwner(String val) {
+this.fsOwner = val;
+return this;
+  }
+
+  public Builder supergroup(String val) {
+this.supergroup = val;
+return this;
+  }
+
+  public Builder callerUgi(UserGroupInformation val) {
+this.callerUgi = val;
+return this;
+  }
+
+  public Builder inodeAttrs(INodeAttributes[] val) {
+this.inodeAttrs = val;
+return this;
+  }
+
+  public Builder inodes(INode[] val) {
+this.inodes = val;
+return this;
+  }
+
+  public Builder pathByNameArr(byte[][] val) {
+this.pathByNameArr = val;
+return this;
+  }
+
+  public Builder snapshotId(int val) {
+this.snapshotId = val;
+return this;
+  }
+
+  public Builder path(String val) {
+this.path = val;
+return this;
+  }
+
+  public Builder ancestorIndex(int val) {
+this.ancestorIndex = val;
+return this;
+  }
+
+  public Builder doCheckOwner(boolean val) {
+this.doCheckOwner = val;
+return this;
+  }
+
+  public Builder ancestorAccess(FsAction val) {
+this.ancestorAccess = val;
+return this;
+  }
+
+  public Builder parentAccess(FsAction val) {
+this.parentAccess = val;
+return this;
+  }
+
+  public Builder access(FsAction val) {
+this.access = val;
+return this;
+  }
+
+  public Builder subAccess(FsAction val) {
+this.subAccess = val;
+return this;
+  }
+
+  public Builder ignoreEmptyDir(boolean val) {
+this.ignoreEmptyDir = val;
+return this;
+  }
+
+  public Builder operationName(String val) {
+this.operationName = val;
+return this;
+  }
+
+  public Builder callerContext(CallerContext val) {
+this.callerContext = val;
+return this;
+  }
+}
+
+public AuthorizationContext(
+String fsOwner,
+String supergroup,
+UserGroupInformation callerUgi,
+INodeAttributes[] inodeAttrs,
+INode[] inodes,
+byte[][] pathByNameArr,
+int snapshotId,
+String path,
+int ancestorIndex,
+boolean doCheckOwner,
+FsAction ancestorAccess,
+FsAction parentAccess,
+FsAction access,
+FsAction subAccess,
+boolean ignoreEmptyDir) {
+  this.fsOwner = fsOwner;
+  this.supergroup = 

[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmd

2020-02-19 Thread GitBox
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r381477751
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##
 @@ -17,19 +17,227 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.ipc.CallerContext;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public abstract class INodeAttributeProvider {
 
+  public static class AuthorizationContext {
+public String fsOwner;
+public String supergroup;
+public UserGroupInformation callerUgi;
+public INodeAttributes[] inodeAttrs;
+public INode[] inodes;
+public byte[][] pathByNameArr;
+public int snapshotId;
+public String path;
+public int ancestorIndex;
+public boolean doCheckOwner;
+public FsAction ancestorAccess;
+public FsAction parentAccess;
+public FsAction access;
+public FsAction subAccess;
+public boolean ignoreEmptyDir;
+public String operationName;
+public CallerContext callerContext;
+
+public static class Builder {
+  public String fsOwner;
+  public String supergroup;
+  public UserGroupInformation callerUgi;
+  public INodeAttributes[] inodeAttrs;
+  public INode[] inodes;
+  public byte[][] pathByNameArr;
+  public int snapshotId;
+  public String path;
+  public int ancestorIndex;
+  public boolean doCheckOwner;
+  public FsAction ancestorAccess;
+  public FsAction parentAccess;
+  public FsAction access;
+  public FsAction subAccess;
+  public boolean ignoreEmptyDir;
+  public String operationName;
+  public CallerContext callerContext;
+
+  public AuthorizationContext build() {
+return new AuthorizationContext(this);
+  }
+
+  public Builder fsOwner(String val) {
+this.fsOwner = val;
+return this;
+  }
+
+  public Builder supergroup(String val) {
+this.supergroup = val;
+return this;
+  }
+
+  public Builder callerUgi(UserGroupInformation val) {
+this.callerUgi = val;
+return this;
+  }
+
+  public Builder inodeAttrs(INodeAttributes[] val) {
+this.inodeAttrs = val;
+return this;
+  }
+
+  public Builder inodes(INode[] val) {
+this.inodes = val;
+return this;
+  }
+
+  public Builder pathByNameArr(byte[][] val) {
+this.pathByNameArr = val;
+return this;
+  }
+
+  public Builder snapshotId(int val) {
+this.snapshotId = val;
+return this;
+  }
+
+  public Builder path(String val) {
+this.path = val;
+return this;
+  }
+
+  public Builder ancestorIndex(int val) {
+this.ancestorIndex = val;
+return this;
+  }
+
+  public Builder doCheckOwner(boolean val) {
+this.doCheckOwner = val;
+return this;
+  }
+
+  public Builder ancestorAccess(FsAction val) {
+this.ancestorAccess = val;
+return this;
+  }
+
+  public Builder parentAccess(FsAction val) {
+this.parentAccess = val;
+return this;
+  }
+
+  public Builder access(FsAction val) {
+this.access = val;
+return this;
+  }
+
+  public Builder subAccess(FsAction val) {
+this.subAccess = val;
+return this;
+  }
+
+  public Builder ignoreEmptyDir(boolean val) {
+this.ignoreEmptyDir = val;
+return this;
+  }
+
+  public Builder operationName(String val) {
+this.operationName = val;
+return this;
+  }
+
+  public Builder callerContext(CallerContext val) {
+this.callerContext = val;
+return this;
+  }
+}
+
+public AuthorizationContext(
+String fsOwner,
+String supergroup,
+UserGroupInformation callerUgi,
+INodeAttributes[] inodeAttrs,
+INode[] inodes,
+byte[][] pathByNameArr,
+int snapshotId,
+String path,
+int ancestorIndex,
+boolean doCheckOwner,
+FsAction ancestorAccess,
+FsAction parentAccess,
+FsAction access,
+FsAction subAccess,
+boolean ignoreEmptyDir) {
+  this.fsOwner = fsOwner;
+  this.supergroup = 

[jira] [Commented] (HADOOP-16868) ipc.Server readAndProcess threw NullPointerException

2020-02-19 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040332#comment-17040332
 ] 

Tsz-wo Sze commented on HADOOP-16868:
-

[~weichiu], thanks a lot for the quick review and commit.

> ipc.Server readAndProcess threw NullPointerException
> 
>
> Key: HADOOP-16868
> URL: https://issues.apache.org/jira/browse/HADOOP-16868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: c16868_20200218.patch
>
>
> {code}
> 2020-01-18 10:19:02,109 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client xx.xx.xx.xx threw exception 
> [java.lang.NullPointerException]
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1676)
>   at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:935)
>   at 
> org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:791)
>   at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:762)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2020-02-19 Thread Luca Toscano (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040320#comment-17040320
 ] 

Luca Toscano commented on HADOOP-16647:
---

Should 
[https://github.com/apache/hadoop/commit/138c1ed5660f713d24bfebc44ea1846f76c00cb9]
 also be considered for backport to branch-2 (I suppose this means Hadoop 
2.x)? I am currently working on BIGTOP-3308 to fix Debian 9's openssl 1.1.0 
compatibility with BigTop 1.4 (Hadoop 2.8.5), and IIUC the aforementioned 
commit is essential to avoid using functions deprecated in openssl 1.1.0. 
What do you think?

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL package provided by Linux 
> distros, but it's not clear to me if Linux distros are going to support 
> 1.1.0/1.0.2 beyond these dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, as well as document the 
> openssl versions supported. Filing this jira to test/document/fix bugs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-16874) For AzureNativeFS, when BlockCompaction is enabled, FileSystem.create(path).close() would throw exception.

2020-02-19 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang moved HDFS-15183 to HADOOP-16874:
-

  Component/s: (was: fs/azure)
   fs/azure
  Key: HADOOP-16874  (was: HDFS-15183)
Affects Version/s: (was: 3.2.1)
   (was: 2.9.2)
   2.9.2
   3.2.1
  Project: Hadoop Common  (was: Hadoop HDFS)

> For AzureNativeFS, when BlockCompaction is enabled, 
> FileSystem.create(path).close() would throw exception.
> --
>
> Key: HADOOP-16874
> URL: https://issues.apache.org/jira/browse/HADOOP-16874
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.1, 2.9.2
> Environment: macOS Mojave 10.14.6
>  
>Reporter: Xiaolei Liu
>Priority: Minor
>
> For AzureNativeFS, when BlockCompaction is enabled, 
> FileSystem.create(path).close() throws a "blob does not exist" exception.
> Block Compaction setting: fs.azure.block.blob.with.compaction.dir
> The exception is thrown from close(); it happens when nothing was written. 
> When any content is actually written to the file, the same close() call 
> does not trigger the exception. 
> When BlockCompaction is not enabled, this issue does not happen. 
> Call Stack:
> org.apache.hadoop.fs.azure.AzureException: Source blob 
> _$azuretmpfolder$/956457df-4a3e-4285-bc68-29f68b9b36c4test1911.log does not 
> exist.
> org.apache.hadoop.fs.azure.AzureException: Source blob 
> _$azuretmpfolder$/956457df-4a3e-4285-bc68-29f68b9b36c4test1911.log does not 
> exist. 
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2648)
>  
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2608)
>  
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.restoreKey(NativeAzureFileSystem.java:1199)
>  
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.close(NativeAzureFileSystem.java:1068)
>  
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
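
A minimal repro sketch assembled from the description above; the wasb URI, 
account, container, and compaction directory are placeholders:

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateCloseRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Enable block compaction for paths under /compaction (placeholder dir).
    conf.set("fs.azure.block.blob.with.compaction.dir", "/compaction");
    FileSystem fs = FileSystem.get(
        URI.create("wasb://container@account.blob.core.windows.net/"), conf);
    // Create and close without writing anything: per the report, close()
    // throws the "Source blob ... does not exist" AzureException above.
    fs.create(new Path("/compaction/test1911.log")).close();
  }
}
{code}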



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2020-02-19 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040264#comment-17040264
 ] 

Wei-Chiu Chuang commented on HADOOP-16206:
--

Unless there's a big push for this, it is likely going to happen after Hadoop 
3.3.0 (3.4.0, maybe?).

2.7 is dead. 2.8 is pretty much dead by now (I sent an EOL discussion thread 
on the dev mailing lists). Given that this is an incompatible change, I 
suspect we won't cherry-pick it to lower branches at all.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-19 Thread GitBox
mukund-thakur commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-588323865
 
 
   Two tests are failing. Will debug. Also will rebase from trunk and fix merge 
conflicts. 
   
   > [INFO] Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
   [ERROR] Tests run: 9, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 
48.592 s <<< FAILURE! - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
   [ERROR] 
testListEmptyRootDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 15.053 s  <<< FAILURE!
   java.lang.AssertionError: 
   Expected no results from listLocatedStatus(/), but got 1 elements:
   S3ALocatedFileStatus{path=s3a://mthakur-data/fork-0002; isDirectory=true; 
modification_time=1582129763970; access_time=0; owner=mthakur; group=mthakur; 
permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false}[eTag='', versionId='']
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.assertNoElements(AbstractContractRootDirectoryTest.java:218)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testListEmptyRootDirectory(AbstractContractRootDirectoryTest.java:202)
at 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir.testListEmptyRootDirectory(ITestS3AContractRootDir.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   [ERROR] 
testSimpleRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 0.819 s  <<< FAILURE!
   java.lang.AssertionError: 
   listStatus(/) vs listLocatedStatus(/) with 
   listStatus =S3AFileStatus{path=s3a://mthakur-data/test; isDirectory=true; 
modification_time=0; access_time=0; owner=mthakur; group=mthakur; 
permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false} isEmptyDirectory=FALSE eTag=null versionId=null 
listLocatedStatus = S3ALocatedFileStatus{path=s3a://mthakur-data/fork-0002; 
isDirectory=true; modification_time=1582129765040; access_time=0; 
owner=mthakur; group=mthakur; permission=rwxrwxrwx; isSymlink=false; 
hasAcl=false; isEncrypted=true; isErasureCoded=false}[eTag='', versionId='']
   S3ALocatedFileStatus{path=s3a://mthakur-data/test; isDirectory=true; 
modification_time=1582129765040; access_time=0; owner=mthakur; group=mthakur; 
permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false}[eTag='', versionId=''] expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testSimpleRootListing(AbstractContractRootDirectoryTest.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 

[GitHub] [hadoop] kihwal commented on a change in pull request #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-02-19 Thread GitBox
kihwal commented on a change in pull request #1758: HDFS-15052. WebHDFS 
getTrashRoot leads to OOM due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758#discussion_r381361709
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
 ##
 @@ -1345,11 +1348,21 @@ protected Response get(
 }
   }
 
-  private static String getTrashRoot(String fullPath,
-  Configuration conf) throws IOException {
-FileSystem fs = FileSystem.get(conf != null ? conf : new Configuration());
-return fs.getTrashRoot(
-new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath();
+  private String getTrashRoot(String fullPath) throws IOException {
+String user = UserGroupInformation.getCurrentUser().getShortUserName();
+org.apache.hadoop.fs.Path path = new org.apache.hadoop.fs.Path(fullPath);
+String parentSrc = path.isRoot() ?
+path.toUri().getPath() : path.getParent().toUri().getPath();
+EncryptionZone ez = getRpcClientProtocol().getEZForPath(parentSrc);
+org.apache.hadoop.fs.Path trashRoot;
+if (ez != null) {
+  trashRoot = new org.apache.hadoop.fs.Path(
+  new org.apache.hadoop.fs.Path(ez.getPath(), TRASH_PREFIX), user);
+} else {
+  trashRoot = new org.apache.hadoop.fs.Path(
+  new org.apache.hadoop.fs.Path(USER_HOME_PREFIX, user), TRASH_PREFIX);
+}
+return trashRoot.toUri().getPath();
 
 Review comment:
   I am fine with an interim solution to fix the immediate issues, but the use 
of `Path` in the NN is far from ideal. It would be best if this is addressed 
now rather than later.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-02-19 Thread GitBox
sodonnel commented on a change in pull request #1758: HDFS-15052. WebHDFS 
getTrashRoot leads to OOM due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758#discussion_r381356059
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
 ##
 @@ -1345,11 +1348,21 @@ protected Response get(
 }
   }
 
-  private static String getTrashRoot(String fullPath,
-  Configuration conf) throws IOException {
-FileSystem fs = FileSystem.get(conf != null ? conf : new Configuration());
-return fs.getTrashRoot(
-new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath();
+  private String getTrashRoot(String fullPath) throws IOException {
+String user = UserGroupInformation.getCurrentUser().getShortUserName();
+org.apache.hadoop.fs.Path path = new org.apache.hadoop.fs.Path(fullPath);
+String parentSrc = path.isRoot() ?
+path.toUri().getPath() : path.getParent().toUri().getPath();
+EncryptionZone ez = getRpcClientProtocol().getEZForPath(parentSrc);
+org.apache.hadoop.fs.Path trashRoot;
+if (ez != null) {
+  trashRoot = new org.apache.hadoop.fs.Path(
+  new org.apache.hadoop.fs.Path(ez.getPath(), TRASH_PREFIX), user);
+} else {
+  trashRoot = new org.apache.hadoop.fs.Path(
+  new org.apache.hadoop.fs.Path(USER_HOME_PREFIX, user), TRASH_PREFIX);
+}
+return trashRoot.toUri().getPath();
 
 Review comment:
   Given the current implementation, we need to fix the memory leak and the 
fact that it does not work with security enabled, and probably backport that 
across the active branches. The memory leak especially is a big problem.
   
   Would it make sense to adopt this patch as it stands, push it to all the 
branches, and then open another Jira to implement the namenode RPC suggestion 
on trunk as a new feature?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2020-02-19 Thread Sourabh Sarvotham Parkala (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040136#comment-17040136
 ] 

Sourabh Sarvotham Parkala edited comment on HADOOP-16206 at 2/19/20 3:02 PM:
-

Requesting the actual release date for this update. Also, I wanted to check 
whether there is a plan to backport the log4j migration to the 2.7.x line as 
well.


was (Author: sourabhsparkala):
Request for the actual release date with this update. Also, wanted to check if 
there is a plan to downport the log4j migration till 2.7.x version as well?

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2020-02-19 Thread Sourabh Sarvotham Parkala (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040136#comment-17040136
 ] 

Sourabh Sarvotham Parkala commented on HADOOP-16206:


Requesting the actual release date for this update. Also, I wanted to check 
whether there is a plan to downport the log4j migration to the 2.7.x line as 
well.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] kihwal commented on a change in pull request #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-02-19 Thread GitBox
kihwal commented on a change in pull request #1758: HDFS-15052. WebHDFS 
getTrashRoot leads to OOM due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758#discussion_r381340045
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
 ##
 @@ -1345,11 +1348,21 @@ protected Response get(
 }
   }
 
-  private static String getTrashRoot(String fullPath,
-  Configuration conf) throws IOException {
-FileSystem fs = FileSystem.get(conf != null ? conf : new Configuration());
-return fs.getTrashRoot(
-new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath();
+  private String getTrashRoot(String fullPath) throws IOException {
+String user = UserGroupInformation.getCurrentUser().getShortUserName();
+org.apache.hadoop.fs.Path path = new org.apache.hadoop.fs.Path(fullPath);
+String parentSrc = path.isRoot() ?
+path.toUri().getPath() : path.getParent().toUri().getPath();
+EncryptionZone ez = getRpcClientProtocol().getEZForPath(parentSrc);
+org.apache.hadoop.fs.Path trashRoot;
+if (ez != null) {
+  trashRoot = new org.apache.hadoop.fs.Path(
+  new org.apache.hadoop.fs.Path(ez.getPath(), TRASH_PREFIX), user);
+} else {
+  trashRoot = new org.apache.hadoop.fs.Path(
+  new org.apache.hadoop.fs.Path(USER_HOME_PREFIX, user), TRASH_PREFIX);
+}
+return trashRoot.toUri().getPath();
 
 Review comment:
   Discussed with @daryn-sharp on this. New features in the future might 
require additional logic in determining the trash root, so it is better done 
on the server side. The ideal solution would be to add a `getTrashRoot()` 
namenode RPC method; `DistributedFileSystem` would then have fallback logic 
for compatibility with older servers, and `NamenodeWebHdfsMethods` would 
simply call this RPC method.
   
   Also, use of `Path` on the NN is highly discouraged: `Path` is 
super-expensive and performs normalization that no other operation performs. 
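   
   A minimal sketch of that fallback idea (the RPC does not exist yet, so the 
method and field names here are hypothetical, and this is not the exact 
`FileSystem` signature):
   
   ```java
   // Hypothetical DistributedFileSystem method: prefer a new server-side
   // getTrashRoot RPC; fall back to the existing client-side computation
   // when the NameNode does not support the call yet.
   public Path getTrashRoot(Path path) throws IOException {
     try {
       return new Path(dfsClient.getTrashRoot(path.toUri().getPath()));
     } catch (RpcNoSuchMethodException e) {
       // Older NameNode: keep the current client-side behavior.
       return super.getTrashRoot(path);
     }
   }
   ```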


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13980) S3Guard CLI: Add fsck check and fix commands

2020-02-19 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-13980:

Summary: S3Guard CLI: Add fsck check and fix commands  (was: S3Guard CLI: 
Add fsck check command)

> S3Guard CLI: Add fsck check and fix commands
> 
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1851: HADOOP-16858. S3Guard fsck: Add option 
to remove orphaned entries
URL: https://github.com/apache/hadoop/pull/1851#issuecomment-588239207
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m  6s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 20s |  hadoop-tools/hadoop-aws: The 
patch generated 3 new + 23 unchanged - 0 fixed = 26 total (was 23)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 23s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 26s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  60m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1851 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 47abdffba1da 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb3f3cc |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/2/testReport/ |
   | Max. process+thread count | 419 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1839: HADOOP-16848. Refactoring: initial layering

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1839: HADOOP-16848. Refactoring: initial 
layering
URL: https://github.com/apache/hadoop/pull/1839#issuecomment-588225353
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  7s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 20s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m  2s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 37s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 20s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 31s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 16s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 27s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |  15m 31s |  root in the patch failed.  |
   | -1 :x: |  javac  |  15m 31s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   2m 40s |  root: The patch generated 43 new 
+ 31 unchanged - 0 fixed = 74 total (was 31)  |
   | -1 :x: |  mvnsite  |   0m 49s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-tools_hadoop-aws generated 1 new + 
4 unchanged - 0 fixed = 5 total (was 4)  |
   | -1 :x: |  findbugs  |   0m 35s |  hadoop-aws in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 46s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |   0m 35s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 120m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1839 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 404ab7a7b7be 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb3f3cc |
   | Default Java | 1.8.0_242 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/diff-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/testReport/ |
   | Max. process+thread count | 3206 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an 

[GitHub] [hadoop] bgaborg commented on issue #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries

2020-02-19 Thread GitBox
bgaborg commented on issue #1851: HADOOP-16858. S3Guard fsck: Add option to 
remove orphaned entries
URL: https://github.com/apache/hadoop/pull/1851#issuecomment-588209054
 
 
   Added a test and removed the violation-fix method implementation from the 
subclasses.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16872) Performance improvement when distcp files in large dir with -direct option

2020-02-19 Thread liuxiaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuxiaolong updated HADOOP-16872:
-
Description: 
We use distcp with the -direct option to copy a file between two large 
directories. We found it cost a few minutes. If we launch too many distcp 
jobs at the same time, NameNode performance degrades seriously.

hadoop -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log  hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||  hdfs://cluster1:8020/source/ ||100k+ files||
||Target dir||hdfs://cluster2:8020/target/ ||100k+  files||

 

Checking the code in CopyCommitter.java, we find that the function 
deleteAttemptTempFiles() calls targetFS.globStatus(new 
Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
"*"));

This wastes a lot of time when running distcp between two large dirs. When we 
use distcp with the -direct option, it writes directly to the target file 
without generating a '.distcp.tmp' temp file. So I think the code needs a 
guard before calling deleteAttemptTempFiles(): if distcp runs with the 
-direct option, do nothing and return immediately.

 

  was:
We use distcp with -direct option to copy a file between two large directories. 
We found it costed a few minutes. If we launch too much distcp jobs at the same 
time, NameNode  performance degradation is serious.

hadoop -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log  hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||  hdfs://cluster1:8020/source/ ||100k+ files||
||Target dir||hdfs://cluster2:8020/target/ ||100k+  files||

 

Check code in CopyCommitter.java, we find in function

deleteAttemptTempFiles() has a code targetFS.globStatus(new 
Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
"*")); 

It will waste a lot of time when distcp between two large dirs. When we use 
distcp with -direct option,  it will direct write to the target file without 
generate a  '.distcp.tmp'  temp file. So, i think this code need add a judgment 
in function deleteAttemptTempFiles, if distcp with -direct option, do nothing , 
directly return .  

 


> Performance improvement when distcp files in large dir with -direct option
> --
>
> Key: HADOOP-16872
> URL: https://issues.apache.org/jira/browse/HADOOP-16872
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liuxiaolong
>Priority: Major
> Attachments: HADOOP-16872.001.patch, optimise after.png, optimise 
> before.png
>
>
> We use distcp with the -direct option to copy a file between two large 
> directories. We found it cost a few minutes. If we launch too many distcp 
> jobs at the same time, NameNode performance degrades seriously.
> hadoop -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
> hdfs://cluster1:8020/source/100.log  hdfs://cluster2:8020/target/100.jpg
> || ||Dir path||Count||
> ||Source dir||  hdfs://cluster1:8020/source/ ||100k+ files||
> ||Target dir||hdfs://cluster2:8020/target/ ||100k+  files||
>  
> Checking the code in CopyCommitter.java, we find that the function 
> deleteAttemptTempFiles() calls targetFS.globStatus(new 
> Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
> "*")); 
> This wastes a lot of time when running distcp between two large dirs. When 
> we use distcp with the -direct option, it writes directly to the target 
> file without generating a '.distcp.tmp' temp file. So I think the code 
> needs a guard before calling deleteAttemptTempFiles(): if distcp runs with 
> the -direct option, do nothing and return immediately.
>  
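
A minimal sketch of the proposed guard; the method shape and the 
"distcp.direct.write" configuration key are assumptions for illustration, not 
the actual CopyCommitter code:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DirectWriteGuardSketch {
  static void deleteAttemptTempFiles(Configuration conf, FileSystem targetFS,
      Path targetWorkPath, String jobId) throws IOException {
    // With -direct, distcp writes straight to the target and never creates
    // ".distcp.tmp." files, so the expensive globStatus scan can be skipped.
    if (conf.getBoolean("distcp.direct.write", false)) {
      return;
    }
    FileStatus[] tempFiles = targetFS.globStatus(new Path(targetWorkPath,
        ".distcp.tmp." + jobId.replaceAll("job", "attempt") + "*"));
    if (tempFiles == null) {
      return;
    }
    for (FileStatus file : tempFiles) {
      targetFS.delete(file.getPath(), false);
    }
  }
}
{code}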



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1839: HADOOP-16848. Refactoring: initial layering

2020-02-19 Thread GitBox
hadoop-yetus removed a comment on issue #1839: HADOOP-16848. Refactoring: 
initial layering
URL: https://github.com/apache/hadoop/pull/1839#issuecomment-583900610
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 26s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 28s |  root in trunk failed.  |
   | -0 :warning: |  checkstyle  |   2m 58s |  The patch fails to run 
checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 59s |  hadoop-aws in trunk failed.  |
   | -1 :x: |  shadedclient  |  10m 39s |  branch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  6s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 54s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 29s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |  16m 23s |  root in the patch failed.  |
   | -1 :x: |  javac  |  16m 23s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   2m 37s |  root: The patch generated 36 new 
+ 0 unchanged - 0 fixed = 36 total (was 0)  |
   | -1 :x: |  mvnsite  |   0m 47s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 19s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed  |
   | -1 :x: |  findbugs  |   0m 47s |  hadoop-aws in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 37s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   0m 47s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  78m 12s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.viewfs.TestViewFsTrash |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1839 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e7211e3ac278 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6191d4b |
   | Default Java | 1.8.0_242 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/branch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1839/out/maven-branch-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/branch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/diff-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 

[jira] [Commented] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-19 Thread Tsuyoshi Ozawa (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039896#comment-17039896
 ] 

Tsuyoshi Ozawa commented on HADOOP-16869:
-

Marking this as a blocking issue of HADOOP-16866 because the upgrade task 
should be done from a clean state.

> mvn findbugs:findbugs fails
> ---
>
> Key: HADOOP-16869
> URL: https://issues.apache.org/jira/browse/HADOOP-16869
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> mvn findbugs:findbugs is failing:
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter 
> pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with 
> value '${plugin.artifacts}' of type 
> java.util.Collections.UnmodifiableRandomAccessList to property of type 
> java.util.ArrayList -> [Help 1]
>  {noformat}
> We have to update the version of findbugs-maven-plugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1853: HADOOP-16873 - Upgrade to Apache ZooKeeper 3.5.7

2020-02-19 Thread GitBox
hadoop-yetus commented on issue #1853: HADOOP-16873 - Upgrade to Apache 
ZooKeeper 3.5.7
URL: https://github.com/apache/hadoop/pull/1853#issuecomment-588155756
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  2s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 12s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 16s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  52m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1853/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1853 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 9f20f9f95ccc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb3f3cc |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1853/1/testReport/ |
   | Max. process+thread count | 423 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1853/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16873) Upgrade to Apache ZooKeeper 3.5.7

2020-02-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Kalmár updated HADOOP-16873:

Status: Patch Available  (was: Open)

> Upgrade to Apache ZooKeeper 3.5.7
> -
>
> Key: HADOOP-16873
> URL: https://issues.apache.org/jira/browse/HADOOP-16873
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Norbert Kalmár
>Assignee: Norbert Kalmár
>Priority: Major
>
> Apache ZooKeeper 3.5.7 has been released. It contains some important fixes, 
> including a third-party CVE fix and fixes for possible split-brain and data 
> loss in some very rare but plausible scenarios.
> The Curator team has tested the release and found it compatible with Curator 
> 4.2.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16873) Upgrade to Apache ZooKeeper 3.5.7

2020-02-19 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039853#comment-17039853
 ] 

Norbert Kalmár commented on HADOOP-16873:
-

release notes:
https://zookeeper.apache.org/doc/r3.5.7/releasenotes.html

> Upgrade to Apache ZooKeeper 3.5.7
> -
>
> Key: HADOOP-16873
> URL: https://issues.apache.org/jira/browse/HADOOP-16873
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Norbert Kalmár
>Assignee: Norbert Kalmár
>Priority: Major
>
> Apache ZooKeeper 3.5.7 has been released. It contains some important fixes, 
> including a third-party CVE fix and fixes for possible split-brain and data 
> loss in some very rare but plausible scenarios.
> The Curator team has tested the release and found it compatible with Curator 
> 4.2.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nkalmar opened a new pull request #1853: HADOOP-16873 - Upgrade to Apache ZooKeeper 3.5.7

2020-02-19 Thread GitBox
nkalmar opened a new pull request #1853: HADOOP-16873 - Upgrade to Apache 
ZooKeeper 3.5.7
URL: https://github.com/apache/hadoop/pull/1853
 
 
   Apache ZooKeeper 3.5.7 has been released. It contains some important 
fixes, including a third-party CVE fix and fixes for possible split-brain and 
data loss in some very rare but plausible scenarios.
   The Curator team has tested the release and found it compatible with 
Curator 4.2.0.
   
   Change-Id: Ia7dae5e7e3ee5e7f7f92734e74722da7fedaa063
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16873) Upgrade to Apache ZooKeeper 3.5.7

2020-02-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Kalmár updated HADOOP-16873:

Description: 
Apache ZooKeeper 3.5.7 has been released. It contains some important fixes, 
including a third-party CVE fix and fixes for possible split-brain and data 
loss in some very rare but plausible scenarios.
The Curator team has tested the release and found it compatible with Curator 
4.2.0.

> Upgrade to Apache ZooKeeper 3.5.7
> -
>
> Key: HADOOP-16873
> URL: https://issues.apache.org/jira/browse/HADOOP-16873
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Norbert Kalmár
>Assignee: Norbert Kalmár
>Priority: Major
>
> Apache ZooKeeper 3.5.7 has been released. It contains some important fixes, 
> including a third-party CVE fix and fixes for possible split-brain and data 
> loss in some very rare but plausible scenarios.
> The Curator team has tested the release and found it compatible with Curator 
> 4.2.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16873) Upgrade to Apache ZooKeeper 3.5.7

2020-02-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Kalmár reassigned HADOOP-16873:
---

Assignee: Norbert Kalmár

> Upgrade to Apache ZooKeeper 3.5.7
> -
>
> Key: HADOOP-16873
> URL: https://issues.apache.org/jira/browse/HADOOP-16873
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Norbert Kalmár
>Assignee: Norbert Kalmár
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16873) Upgrade to Apache ZooKeeper 3.5.7

2020-02-19 Thread Jira
Norbert Kalmár created HADOOP-16873:
---

 Summary: Upgrade to Apache ZooKeeper 3.5.7
 Key: HADOOP-16873
 URL: https://issues.apache.org/jira/browse/HADOOP-16873
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Norbert Kalmár






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16872) Performance improvement when distcp files in large dir with -direct option

2020-02-19 Thread liuxiaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuxiaolong updated HADOOP-16872:
-
Attachment: optimise before.png

> Performance improvement when distcp files in large dir with -direct option
> --
>
> Key: HADOOP-16872
> URL: https://issues.apache.org/jira/browse/HADOOP-16872
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liuxiaolong
>Priority: Major
> Attachments: HADOOP-16872.001.patch, optimise after.png, optimise 
> before.png
>
>
> We use distcp with the -direct option to copy a file between two large 
> directories. We found it took a few minutes. If we launch too many distcp 
> jobs at the same time, NameNode performance degrades severely.
> hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
> hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
> || ||Dir path||Count||
> ||Source dir||hdfs://cluster1:8020/source/||100k+ files||
> ||Target dir||hdfs://cluster2:8020/target/||100k+ files||
>
> Checking the code in CopyCommitter.java, we find that 
> deleteAttemptTempFiles() contains the call targetFS.globStatus(new 
> Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
> "*"));
> This wastes a lot of time when distcp runs between two large directories. 
> With the -direct option, distcp writes directly to the target file and never 
> generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() 
> needs a guard: if distcp was run with -direct, do nothing and return 
> immediately.
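
A minimal sketch of the guard the description proposes, assuming the committer 
can read the job Configuration and that the -direct option is recorded under a 
DistCp configuration key; the key name, class name, and method signature below 
are illustrative, not copied from the actual patch:

{noformat}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class AttemptTempFileCleaner {

  // Hypothetical stand-in for the DistCp constant naming the -direct option.
  static final String CONF_LABEL_DIRECT_WRITE = "distcp.direct.write";

  void deleteAttemptTempFiles(Configuration conf, FileSystem targetFS,
      Path targetWorkPath, String jobId) throws IOException {
    // With -direct, mappers write straight to the target paths and never
    // create ".distcp.tmp.*" files, so the glob below would only burden
    // the NameNode: skip it entirely.
    if (conf.getBoolean(CONF_LABEL_DIRECT_WRITE, false)) {
      return;
    }

    // The expensive call from the description: one glob over the whole
    // target working directory (100k+ entries here) per job.
    FileStatus[] tempFiles = targetFS.globStatus(new Path(targetWorkPath,
        ".distcp.tmp." + jobId.replaceAll("job", "attempt") + "*"));
    if (tempFiles != null) {
      for (FileStatus file : tempFiles) {
        targetFS.delete(file.getPath(), false);
      }
    }
  }
}
{noformat}

globStatus() may return null when nothing matches, hence the null check before 
deleting; the early return on the -direct flag is the entire optimisation.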



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16872) Performance improvement when distcp files in large dir with -direct option

2020-02-19 Thread liuxiaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuxiaolong updated HADOOP-16872:
-
Attachment: optimise after.png

> Performance improvement when distcp files in large dir with -direct option
> --
>
> Key: HADOOP-16872
> URL: https://issues.apache.org/jira/browse/HADOOP-16872
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liuxiaolong
>Priority: Major
> Attachments: HADOOP-16872.001.patch, optimise after.png, optimise 
> before.png
>
>
> We use distcp with the -direct option to copy a file between two large 
> directories. We found it took a few minutes. If we launch too many distcp 
> jobs at the same time, NameNode performance degrades severely.
> hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
> hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
> || ||Dir path||Count||
> ||Source dir||hdfs://cluster1:8020/source/||100k+ files||
> ||Target dir||hdfs://cluster2:8020/target/||100k+ files||
>
> Checking the code in CopyCommitter.java, we find that 
> deleteAttemptTempFiles() contains the call targetFS.globStatus(new 
> Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
> "*"));
> This wastes a lot of time when distcp runs between two large directories. 
> With the -direct option, distcp writes directly to the target file and never 
> generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() 
> needs a guard: if distcp was run with -direct, do nothing and return 
> immediately.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-19 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039800#comment-17039800
 ] 

Masatake Iwasaki commented on HADOOP-16869:
---

I can reproduce the issue with Maven 3.6.0; there is no problem on 3.5.2. 
MNG-6625 says that this is a bug in findbugs-maven-plugin.

> mvn findbugs:findbugs fails
> ---
>
> Key: HADOOP-16869
> URL: https://issues.apache.org/jira/browse/HADOOP-16869
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> mvn findbugs:findbugs is failing:
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter 
> pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with 
> value '${plugin.artifacts}' of type 
> java.util.Collections.UnmodifiableRandomAccessList to property of type 
> java.util.ArrayList -> [Help 1]
>  {noformat}
> We have to update the version of findbugs-maven-plugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16872) Performance improvement when distcp files in large dir with -direct option

2020-02-19 Thread liuxiaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuxiaolong updated HADOOP-16872:
-
Attachment: HADOOP-16872.001.patch

> Performance improvement when distcp files in large dir with -direct option
> --
>
> Key: HADOOP-16872
> URL: https://issues.apache.org/jira/browse/HADOOP-16872
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liuxiaolong
>Priority: Major
> Attachments: HADOOP-16872.001.patch
>
>
> We use distcp with the -direct option to copy a file between two large 
> directories. We found it took a few minutes. If we launch too many distcp 
> jobs at the same time, NameNode performance degrades severely.
> hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
> hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
> || ||Dir path||Count||
> ||Source dir||hdfs://cluster1:8020/source/||100k+ files||
> ||Target dir||hdfs://cluster2:8020/target/||100k+ files||
>
> Checking the code in CopyCommitter.java, we find that 
> deleteAttemptTempFiles() contains the call targetFS.globStatus(new 
> Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
> "*"));
> This wastes a lot of time when distcp runs between two large directories. 
> With the -direct option, distcp writes directly to the target file and never 
> generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() 
> needs a guard: if distcp was run with -direct, do nothing and return 
> immediately.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16872) Performance improvement when distcp files in large dir with -direct option

2020-02-19 Thread liuxiaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuxiaolong updated HADOOP-16872:
-
Description: 
We use distcp with the -direct option to copy a file between two large 
directories. We found it took a few minutes. If we launch too many distcp 
jobs at the same time, NameNode performance degrades severely.

hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||hdfs://cluster1:8020/source/||100k+ files||
||Target dir||hdfs://cluster2:8020/target/||100k+ files||

Checking the code in CopyCommitter.java, we find that deleteAttemptTempFiles() 
contains the call targetFS.globStatus(new Path(targetWorkPath, ".distcp.tmp." + 
jobId.replaceAll("job","attempt") + "*"));

This wastes a lot of time when distcp runs between two large directories. With 
the -direct option, distcp writes directly to the target file and never 
generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() needs 
a guard: if distcp was run with -direct, do nothing and return immediately.

 

  was:
We use distcp with the -direct option to copy a file between two large 
directories. We found it took a few minutes. If we launch too many distcp 
jobs at the same time, NameNode performance degrades severely.

hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||hdfs://cluster1:8020/source/||100k+ files||
||Target dir||hdfs://cluster2:8020/target/||100k+ files||

Checking the code in CopyCommitter.java, we find that deleteAttemptTempFiles() 
contains the call targetFS.globStatus(new Path(targetWorkPath, ".distcp.tmp." + 
jobId.replaceAll("job","attempt") + "*"));

This wastes a lot of time when distcp runs between two large directories. With 
the -direct option, distcp writes directly to the target file and never 
generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() needs 
a guard: if distcp was run with -direct, do nothing and return immediately.

 


> Performance improvement when distcp files in large dir with -direct option
> --
>
> Key: HADOOP-16872
> URL: https://issues.apache.org/jira/browse/HADOOP-16872
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liuxiaolong
>Priority: Major
>
> We use distcp with the -direct option to copy a file between two large 
> directories. We found it took a few minutes. If we launch too many distcp 
> jobs at the same time, NameNode performance degrades severely.
> hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
> hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
> || ||Dir path||Count||
> ||Source dir||hdfs://cluster1:8020/source/||100k+ files||
> ||Target dir||hdfs://cluster2:8020/target/||100k+ files||
>
> Checking the code in CopyCommitter.java, we find that 
> deleteAttemptTempFiles() contains the call targetFS.globStatus(new 
> Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
> "*"));
> This wastes a lot of time when distcp runs between two large directories. 
> With the -direct option, distcp writes directly to the target file and never 
> generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() 
> needs a guard: if distcp was run with -direct, do nothing and return 
> immediately.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16872) Performance improvement when distcp files in large dir with -direct option

2020-02-19 Thread liuxiaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuxiaolong updated HADOOP-16872:
-
Description: 
We use distcp with the -direct option to copy a file between two large 
directories. We found it took a few minutes. If we launch too many distcp 
jobs at the same time, NameNode performance degrades severely.

hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||hdfs://cluster1:8020/source/||100k+ files||
||Target dir||hdfs://cluster2:8020/target/||100k+ files||

Checking the code in CopyCommitter.java, we find that deleteAttemptTempFiles() 
contains the call targetFS.globStatus(new Path(targetWorkPath, ".distcp.tmp." + 
jobId.replaceAll("job","attempt") + "*"));

This wastes a lot of time when distcp runs between two large directories. With 
the -direct option, distcp writes directly to the target file and never 
generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() needs 
a guard: if distcp was run with -direct, do nothing and return immediately.

 

  was:
We use distcp with the -direct option to copy a file between two large 
directories. We found it took a few minutes. If we launch too many distcp 
jobs at the same time, NameNode performance degrades severely.

hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||hdfs://cluster1:8020/source/||100k+ files||
||Target dir||hdfs://cluster2:8020/target/||100k+ files||

Checking the code in CopyCommitter.java, we find that deleteAttemptTempFiles() 
contains the call targetFS.globStatus(new Path(targetWorkPath, ".distcp.tmp." + 
jobId.replaceAll("job","attempt") + "*"));

This wastes a lot of time when distcp runs between two large directories. With 
the -direct option, distcp writes directly to the target file and never 
generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() needs 
a guard: if distcp was run with -direct, do nothing and return immediately.

 


> Performance improvement when distcp files in large dir with -direct option
> --
>
> Key: HADOOP-16872
> URL: https://issues.apache.org/jira/browse/HADOOP-16872
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liuxiaolong
>Priority: Major
>
> We use distcp with the -direct option to copy a file between two large 
> directories. We found it took a few minutes. If we launch too many distcp 
> jobs at the same time, NameNode performance degrades severely.
> hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
> hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
> || ||Dir path||Count||
> ||Source dir||hdfs://cluster1:8020/source/||100k+ files||
> ||Target dir||hdfs://cluster2:8020/target/||100k+ files||
>
> Checking the code in CopyCommitter.java, we find that 
> deleteAttemptTempFiles() contains the call targetFS.globStatus(new 
> Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
> "*"));
> This wastes a lot of time when distcp runs between two large directories. 
> With the -direct option, distcp writes directly to the target file and never 
> generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() 
> needs a guard: if distcp was run with -direct, do nothing and return 
> immediately.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16872) Performance improvement when distcp files in large dir with -direct option

2020-02-19 Thread liuxiaolong (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuxiaolong updated HADOOP-16872:
-
Description: 
We use distcp with the -direct option to copy a file between two large 
directories. We found it took a few minutes. If we launch too many distcp 
jobs at the same time, NameNode performance degrades severely.

hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||hdfs://cluster1:8020/source/||100k+ files||
||Target dir||hdfs://cluster2:8020/target/||100k+ files||

Checking the code in CopyCommitter.java, we find that deleteAttemptTempFiles() 
contains the call targetFS.globStatus(new Path(targetWorkPath, ".distcp.tmp." + 
jobId.replaceAll("job","attempt") + "*"));

This wastes a lot of time when distcp runs between two large directories. With 
the -direct option, distcp writes directly to the target file and never 
generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() needs 
a guard: if distcp was run with -direct, do nothing and return immediately.

 

  was:
We use distcp with the -direct option to copy a file between two large 
directories. We found it took a few minutes. If we launch too many distcp 
jobs at the same time, NameNode performance degrades severely.

hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||hdfs://cluster1:8020/source/||100k+ files||
||Target dir||hdfs://cluster2:8020/target/||100k+ files||

Checking the code in CopyCommitter.java, we find that deleteAttemptTempFiles() 
contains the call targetFS.globStatus(new Path(targetWorkPath, ".distcp.tmp." + 
jobId.replaceAll("job","attempt") + "*"));

This wastes a lot of time when distcp runs between two large directories. With 
the -direct option, distcp writes directly to the target file and never 
generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() needs 
a guard: if distcp was run with -direct, do nothing and return immediately.

 


> Performance improvement when distcp files in large dir with -direct option
> --
>
> Key: HADOOP-16872
> URL: https://issues.apache.org/jira/browse/HADOOP-16872
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liuxiaolong
>Priority: Major
>
> We use distcp with the -direct option to copy a file between two large 
> directories. We found it took a few minutes. If we launch too many distcp 
> jobs at the same time, NameNode performance degrades severely.
> hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
> hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
> || ||Dir path||Count||
> ||Source dir||hdfs://cluster1:8020/source/||100k+ files||
> ||Target dir||hdfs://cluster2:8020/target/||100k+ files||
>
> Checking the code in CopyCommitter.java, we find that 
> deleteAttemptTempFiles() contains the call targetFS.globStatus(new 
> Path(targetWorkPath, ".distcp.tmp." + jobId.replaceAll("job","attempt") + 
> "*"));
> This wastes a lot of time when distcp runs between two large directories. 
> With the -direct option, distcp writes directly to the target file and never 
> generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() 
> needs a guard: if distcp was run with -direct, do nothing and return 
> immediately.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16872) Performance improvement when distcp files in large dir with -direct option

2020-02-19 Thread liuxiaolong (Jira)
liuxiaolong created HADOOP-16872:


 Summary: Performance improvement when distcp files in large dir 
with -direct option
 Key: HADOOP-16872
 URL: https://issues.apache.org/jira/browse/HADOOP-16872
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: liuxiaolong


We use distcp with the -direct option to copy a file between two large 
directories. We found it took a few minutes. If we launch too many distcp 
jobs at the same time, NameNode performance degrades severely.

hadoop distcp -direct -skipcrccheck -update -prbugaxt -i -numListstatusThreads 1 
hdfs://cluster1:8020/source/100.log hdfs://cluster2:8020/target/100.jpg
|| ||Dir path||Count||
||Source dir||hdfs://cluster1:8020/source/||100k+ files||
||Target dir||hdfs://cluster2:8020/target/||100k+ files||

Checking the code in CopyCommitter.java, we find that deleteAttemptTempFiles() 
contains the call targetFS.globStatus(new Path(targetWorkPath, ".distcp.tmp." + 
jobId.replaceAll("job","attempt") + "*"));

This wastes a lot of time when distcp runs between two large directories. With 
the -direct option, distcp writes directly to the target file and never 
generates a '.distcp.tmp' temp file, so I think deleteAttemptTempFiles() needs 
a guard: if distcp was run with -direct, do nothing and return immediately.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org