[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=507528&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507528
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 04/Nov/20 07:46
Start Date: 04/Nov/20 07:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2432:
URL: https://github.com/apache/hadoop/pull/2432#issuecomment-721568332


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  49m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  15m 28s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 25s | 
[/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 55s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | -1 :x: |  javac  |   0m 29s | 
[/diff-compile-javac-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/diff-compile-javac-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt)
 |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 with 
JDK Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 generated 15 new + 0 unchanged - 0 
fixed = 15 total (was 0)  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 7 new + 4 unchanged - 0 
fixed = 11 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 33s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 114m 11s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   

[GitHub] [hadoop] hadoop-yetus commented on pull request #2432: HADOOP-17311. ABFS: Read small files completely

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2432:
URL: https://github.com/apache/hadoop/pull/2432#issuecomment-721568332


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  49m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  15m 28s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 25s | 
[/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 55s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | -1 :x: |  javac  |   0m 29s | 
[/diff-compile-javac-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/diff-compile-javac-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt)
 |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 with 
JDK Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 generated 15 new + 0 unchanged - 0 
fixed = 15 total (was 0)  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 7 new + 4 unchanged - 0 
fixed = 11 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 33s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2432/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 114m 11s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   
   
   | Subsystem | 

[jira] [Updated] (HADOOP-17344) Harmonize guava version and shade guava in yarn-csi

2020-11-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17344:
---
Target Version/s: 3.4.0
  Status: Patch Available  (was: Open)

> Harmonize guava version and shade guava in yarn-csi
> ---
>
> Key: HADOOP-17344
> URL: https://issues.apache.org/jira/browse/HADOOP-17344
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> yarn-csi defines a separate guava version 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml#L30].
>  
>  
> We should harmonize the guava version (pull it from hadoop-project/pom.xml) 
> and use the shaded guava classes. 
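
A minimal sketch of the problem described above (the version string `x.y.z` is a placeholder, not the value actually pinned in yarn-csi's pom): a module-local `guava.version` property overrides the parent, so harmonizing means deleting it and inheriting the version from hadoop-project/pom.xml.

```shell
# Hypothetical pom fragment with a module-local guava pin.
# A property like this shadows the one in hadoop-project/pom.xml;
# harmonizing the version means removing it so the parent value is inherited.
cat > /tmp/module-pom-snippet.xml <<'EOF'
<properties>
  <guava.version>x.y.z</guava.version>
</properties>
EOF
# Spot the local pin:
grep -o '<guava.version>[^<]*' /tmp/module-pom-snippet.xml
# prints: <guava.version>x.y.z
```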



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17344) Harmonize guava version and shade guava in yarn-csi

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17344:

Labels: pull-request-available  (was: )

> Harmonize guava version and shade guava in yarn-csi
> ---
>
> Key: HADOOP-17344
> URL: https://issues.apache.org/jira/browse/HADOOP-17344
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> yarn-csi defines a separate guava version 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml#L30].
>  
>  
> We should harmonize the guava version (pull it from hadoop-project/pom.xml) 
> and use the shaded guava classes. 






[jira] [Work logged] (HADOOP-17344) Harmonize guava version and shade guava in yarn-csi

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17344?focusedWorklogId=507525&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507525
 ]

ASF GitHub Bot logged work on HADOOP-17344:
---

Author: ASF GitHub Bot
Created on: 04/Nov/20 07:31
Start Date: 04/Nov/20 07:31
Worklog Time Spent: 10m 
  Work Description: aajisaka opened a new pull request #2434:
URL: https://github.com/apache/hadoop/pull/2434


   JIRA: https://issues.apache.org/jira/browse/HADOOP-17344
   
   - Use hadoop-shaded-guava
   - Remove skip replacer setting from yarn-csi pom.xml
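
The "Use hadoop-shaded-guava" step amounts to rewriting guava imports to the relocated package. A hedged one-liner sketch, assuming the standard hadoop-thirdparty relocation prefix `org.apache.hadoop.thirdparty` (not shown in this thread):

```shell
# Rewrite a direct guava import to the shaded package provided by
# hadoop-shaded-guava (prefix assumed from the hadoop-thirdparty convention).
echo 'import com.google.common.base.Preconditions;' \
  | sed 's/com\.google\.common/org.apache.hadoop.thirdparty.com.google.common/'
# prints: import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
```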



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 507525)
Remaining Estimate: 0h
Time Spent: 10m

> Harmonize guava version and shade guava in yarn-csi
> ---
>
> Key: HADOOP-17344
> URL: https://issues.apache.org/jira/browse/HADOOP-17344
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Akira Ajisaka
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> yarn-csi defines a separate guava version 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml#L30].
>  
>  
> We should harmonize the guava version (pull it from hadoop-project/pom.xml) 
> and use the shaded guava classes. 






[GitHub] [hadoop] aajisaka opened a new pull request #2434: HADOOP-17344. Harmonize guava version and shade guava in yarn-csi.

2020-11-03 Thread GitBox


aajisaka opened a new pull request #2434:
URL: https://github.com/apache/hadoop/pull/2434


   JIRA: https://issues.apache.org/jira/browse/HADOOP-17344
   
   - Use hadoop-shaded-guava
   - Remove skip replacer setting from yarn-csi pom.xml









[GitHub] [hadoop] hadoop-yetus commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-721554987


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  18m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   1m 21s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 53s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  20m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  18m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   3m  0s |  |  root: The patch generated 
0 new + 734 unchanged - 1 fixed = 734 total (was 735)  |
   | +1 :green_heart: |  mvnsite  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   7m 12s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  10m 17s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/20/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 111m 14s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/20/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   9m 31s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 331m 35s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestLdapGroupsMapping |
   |   | hadoop.crypto.key.TestKeyProviderFactory |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2377 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 759092337d3e 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d07dc7afb4a |
   | Default Java | 

[jira] [Assigned] (HADOOP-17344) Harmonize guava version and shade guava in yarn-csi

2020-11-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17344:
--

Assignee: Akira Ajisaka

> Harmonize guava version and shade guava in yarn-csi
> ---
>
> Key: HADOOP-17344
> URL: https://issues.apache.org/jira/browse/HADOOP-17344
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Akira Ajisaka
>Priority: Major
>
> yarn-csi defines a separate guava version 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml#L30].
>  
>  
> We should harmonize the guava version (pull it from hadoop-project/pom.xml) 
> and use the shaded guava classes. 






[jira] [Updated] (HADOOP-17352) Update PATCH_NAMING_RULE in the personality file

2020-11-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17352:
---
Target Version/s: 3.1.0, 3.3.1, 3.4.0, 2.10.2, 3.2.3

> Update PATCH_NAMING_RULE in the personality file
> 
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.
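
The fix described above is a one-line change to the variable; a sketch of the updated line (the cwiki URL is taken from the issue text):

```shell
# Updated value for dev-support/bin/hadoop.sh: point the patch-naming-rule
# link at the current cwiki page instead of the retired wiki.apache.org one.
PATCH_NAMING_RULE="https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute"
echo "${PATCH_NAMING_RULE}"
```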






[jira] [Updated] (HADOOP-17352) Update PATCH_NAMING_RULE in the personality file

2020-11-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17352:
---
Status: Patch Available  (was: Open)

> Update PATCH_NAMING_RULE in the personality file
> 
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[jira] [Updated] (HADOOP-17349) hadoop: mvn site tests should enable shelldocs and releasedocs

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-17349:
--
Reporter: Unassigned Developer  (was: Allen Wittenauer)

> hadoop: mvn site tests should enable shelldocs and releasedocs
> --
>
> Key: HADOOP-17349
> URL: https://issues.apache.org/jira/browse/HADOOP-17349
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Unassigned Developer
>Priority: Trivial
>
> It would be good if, at least under qbt, mvn site also ran against the 
> shelldocs and releasedocs.






[jira] [Commented] (HADOOP-17352) Update PATCH_NAMING_RULE in the personality file

2020-11-03 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225890#comment-17225890
 ] 

Akira Ajisaka commented on HADOOP-17352:


Thanks [~aw] for moving the jira. Yes, after HADOOP-17205, this is a Hadoop 
issue rather than a Yetus one.

> Update PATCH_NAMING_RULE in the personality file
> 
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[jira] [Work logged] (HADOOP-17352) Update PATCH_NAMING_RULE in the personality file

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?focusedWorklogId=507513&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507513
 ]

ASF GitHub Bot logged work on HADOOP-17352:
---

Author: ASF GitHub Bot
Created on: 04/Nov/20 06:32
Start Date: 04/Nov/20 06:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2433:
URL: https://github.com/apache/hadoop/pull/2433#issuecomment-721539606


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2433/1/console in 
case of problems.
   





Issue Time Tracking
---

Worklog Id: (was: 507513)
Time Spent: 20m  (was: 10m)

> Update PATCH_NAMING_RULE in the personality file
> 
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[GitHub] [hadoop] hadoop-yetus commented on pull request #2433: HADOOP-17352. Update PATCH_NAMING_RULE in the personality file.

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2433:
URL: https://github.com/apache/hadoop/pull/2433#issuecomment-721539606


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2433/1/console in 
case of problems.
   









[jira] [Updated] (HADOOP-17352) Update PATCH_NAMING_RULE in the personality file

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17352:

Labels: newbie pull-request-available  (was: newbie)

> Update PATCH_NAMING_RULE in the personality file
> 
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[jira] [Work logged] (HADOOP-17352) Update PATCH_NAMING_RULE in the personality file

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?focusedWorklogId=507512&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507512
 ]

ASF GitHub Bot logged work on HADOOP-17352:
---

Author: ASF GitHub Bot
Created on: 04/Nov/20 06:30
Start Date: 04/Nov/20 06:30
Worklog Time Spent: 10m 
  Work Description: aajisaka opened a new pull request #2433:
URL: https://github.com/apache/hadoop/pull/2433


   JIRA: https://issues.apache.org/jira/browse/HADOOP-17352





Issue Time Tracking
---

Worklog Id: (was: 507512)
Remaining Estimate: 0h
Time Spent: 10m

> Update PATCH_NAMING_RULE in the personality file
> 
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[GitHub] [hadoop] aajisaka opened a new pull request #2433: HADOOP-17352. Update PATCH_NAMING_RULE in the personality file.

2020-11-03 Thread GitBox


aajisaka opened a new pull request #2433:
URL: https://github.com/apache/hadoop/pull/2433


   JIRA: https://issues.apache.org/jira/browse/HADOOP-17352









[jira] [Assigned] (HADOOP-17352) Update PATCH_NAMING_RULE in the personality file

2020-11-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17352:
--

Assignee: Akira Ajisaka

> Update PATCH_NAMING_RULE in the personality file
> 
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[jira] [Moved] (HADOOP-17355) hadoop personality test of shaded artifacts should link to output in report table

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved YETUS-550 to HADOOP-17355:
-

  Component/s: (was: Precommit)
  Key: HADOOP-17355  (was: YETUS-550)
Affects Version/s: (was: 0.6.0)
   Issue Type: Improvement  (was: Bug)
  Project: Hadoop Common  (was: Yetus)

> hadoop personality test of shaded artifacts should link to output in report 
> table
> -
>
> Key: HADOOP-17355
> URL: https://issues.apache.org/jira/browse/HADOOP-17355
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Priority: Major
>
> feedback from HADOOP-13917
> bq. One nice-to-have enhancement would be to link to the 
> patch-shadedclient.txt file in the Report/Notes field, otherwise people have 
> to dig it out of the Jenkins artifacts.






[jira] [Updated] (HADOOP-17352) Update PATCH_NAMING_RULE in the personality file

2020-11-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17352:
---
Summary: Update PATCH_NAMING_RULE in the personality file  (was: Update 
PATCH_NAMING_RULE in hadoop personality)

> Update PATCH_NAMING_RULE in the personality file
> 
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[jira] [Updated] (HADOOP-17352) Update PATCH_NAMING_RULE in hadoop personality

2020-11-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17352:
---
Component/s: (was: precommit)
 build
Description: 
{noformat:title=./dev-support/bin/hadoop.sh}
PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
{noformat}
https://wiki.apache.org/hadoop/HowToContribute was moved to 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
update the personality.

  was:
{noformat}
PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
{noformat}
https://wiki.apache.org/hadoop/HowToContribute was moved to 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
update the personality.

 Issue Type: Bug  (was: Improvement)

> Update PATCH_NAMING_RULE in hadoop personality
> --
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> {noformat:title=./dev-support/bin/hadoop.sh}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[jira] [Updated] (HADOOP-17353) hadoop personality: yarn-ui should be conditional

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-17353:
--
Reporter: Unassigned Developer  (was: Allen Wittenauer)

> hadoop personality: yarn-ui should be conditional
> -
>
> Key: HADOOP-17353
> URL: https://issues.apache.org/jira/browse/HADOOP-17353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Unassigned Developer
>Priority: Major
>
> Given how much stuff -Pyarn-ui downloads, we should make it conditional to 
> cut down on testing time.






[jira] [Assigned] (HADOOP-17353) hadoop personality: yarn-ui should be conditional

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-17353:
-

Assignee: (was: Allen Wittenauer)

> hadoop personality: yarn-ui should be conditional
> -
>
> Key: HADOOP-17353
> URL: https://issues.apache.org/jira/browse/HADOOP-17353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Allen Wittenauer
>Priority: Major
>
> Given how much stuff -Pyarn-ui downloads, we should make it conditional to 
> cut down on testing time.






[jira] [Updated] (HADOOP-17351) hadoop: shaded client test is too aggressive

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-17351:
--
Reporter: Unassigned Developer  (was: Allen Wittenauer)

> hadoop: shaded client test is too aggressive
> 
>
> Key: HADOOP-17351
> URL: https://issues.apache.org/jira/browse/HADOOP-17351
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Unassigned Developer
>Priority: Major
>
> See for example YARN-8726: modifying javascript should not trigger the shaded 
> client test.






[jira] [Updated] (HADOOP-17348) hadoop: add a test for -Pdist

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-17348:
--
Reporter: Unassigned Developer  (was: Allen Wittenauer)

> hadoop: add a test for -Pdist
> -
>
> Key: HADOOP-17348
> URL: https://issues.apache.org/jira/browse/HADOOP-17348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: precommit
>Reporter: Unassigned Developer
>Priority: Minor
>







[jira] [Updated] (HADOOP-17350) hadoop: flag native changes w/no Dockerfile change

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-17350:
--
Reporter: Unassigned Developer  (was: Allen Wittenauer)

> hadoop: flag native changes w/no Dockerfile change
> --
>
> Key: HADOOP-17350
> URL: https://issues.apache.org/jira/browse/HADOOP-17350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Unassigned Developer
>Priority: Minor
>
> Issues like HADOOP-13578 are a textbook example of where a large number of 
> issue followers and reviewers (some of whom are experienced PMC members) 
> completely missed the fact that very little of the added native code is 
> actually being compiled, tested, or even in the release because the 
> Dockerfile wasn't modified to include new prerequisites. We should probably 
> enhance the hadoop personality to -1 such patches.






[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=507504&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507504
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 04/Nov/20 05:51
Start Date: 04/Nov/20 05:51
Worklog Time Spent: 10m 
  Work Description: bilaharith opened a new pull request #2432:
URL: https://github.com/apache/hadoop/pull/2432


   Files that are of size smaller than the read buffer size can be considered 
as small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
   
   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 462, Failures: 0, Errors: 0, Skipped: 64
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 462, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 462, Failures: 0, Errors: 0, Skipped: 245
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16





Issue Time Tracking
---

Worklog Id: (was: 507504)
Time Spent: 2h 10m  (was: 2h)

> ABFS: Logs should redact SAS signature
> --
>
> Key: HADOOP-17311
> URL: https://issues.apache.org/jira/browse/HADOOP-17311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Signature part of the SAS should be redacted for security purposes.
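Editor's note: a minimal, hypothetical sketch of the redaction the issue asks for (class and method names are invented here, not the actual ABFS implementation): mask the value of the `sig` query parameter of a SAS URL before the URL reaches a log line, since `sig` carries the HMAC signature.

```java
// Hypothetical sketch: redact the SAS "sig" query parameter before logging.
// Not the real AbfsInputStream/AbfsClient code; names are illustrative only.
public final class SasRedactor {

    // Replaces the value of the sig parameter with a fixed mask so the
    // signature never appears in log output; other parameters are untouched.
    static String redactSignature(String url) {
        return url.replaceAll("([?&]sig=)[^&]*", "$1XXXX");
    }

    public static void main(String[] args) {
        String url = "https://account.dfs.core.windows.net/container/file"
                + "?sv=2020-02-10&sr=b&sig=SECRETSIGNATURE&sp=r";
        // the sig value is masked, the rest of the query string survives
        System.out.println(redactSignature(url));
    }
}
```

A URL-aware parser would be more robust than a regex in production, but the sketch shows the shape of the fix: redact at the single choke point where URLs are formatted for logs.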






[jira] [Comment Edited] (HADOOP-17354) Move Jenkinsfile outside of the root directory

2020-11-03 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225877#comment-17225877
 ] 

Akira Ajisaka edited comment on HADOOP-17354 at 11/4/20, 5:59 AM:
--

I think the Jenkinsfile can be moved under ./dev-support.

After this issue is resolved, we need to update the config of 
hadoop-multibranch job to specify the new location of the Jenkinsfile.


was (Author: ajisakaa):
I think the Jenkins can be moved under ./dev-support.

> Move Jenkinsfile outside of the root directory
> --
>
> Key: HADOOP-17354
> URL: https://issues.apache.org/jira/browse/HADOOP-17354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> The Jenkinsfile is placed under the project root directory, so when the 
> Jenkinsfile is changed, all the Hadoop unit tests will run and it wastes a 
> lot of time and resources. Let's move the file outside of the root directory.






[jira] [Commented] (HADOOP-17354) Move Jenkinsfile outside of the root directory

2020-11-03 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225877#comment-17225877
 ] 

Akira Ajisaka commented on HADOOP-17354:


I think the Jenkinsfile can be moved under ./dev-support.

> Move Jenkinsfile outside of the root directory
> --
>
> Key: HADOOP-17354
> URL: https://issues.apache.org/jira/browse/HADOOP-17354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> The Jenkinsfile is placed under the project root directory, so when the 
> Jenkinsfile is changed, all the Hadoop unit tests will run and it wastes a 
> lot of time and resources. Let's move the file outside of the root directory.






[jira] [Created] (HADOOP-17354) Move Jenkinsfile outside of the root directory

2020-11-03 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17354:
--

 Summary: Move Jenkinsfile outside of the root directory
 Key: HADOOP-17354
 URL: https://issues.apache.org/jira/browse/HADOOP-17354
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


The Jenkinsfile is placed under the project root directory, so when the 
Jenkinsfile is changed, all the Hadoop unit tests will run and it wastes a lot 
of time and resources. Let's move the file outside of the root directory.






[jira] [Updated] (HADOOP-17354) Move Jenkinsfile outside of the root directory

2020-11-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17354:
---
Target Version/s: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3

> Move Jenkinsfile outside of the root directory
> --
>
> Key: HADOOP-17354
> URL: https://issues.apache.org/jira/browse/HADOOP-17354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> The Jenkinsfile is placed under the project root directory, so when the 
> Jenkinsfile is changed, all the Hadoop unit tests will run and it wastes a 
> lot of time and resources. Let's move the file outside of the root directory.






[jira] [Moved] (HADOOP-17352) Update PATCH_NAMING_RULE in hadoop personality

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved YETUS-694 to HADOOP-17352:
-

  Component/s: (was: Precommit)
   precommit
  Key: HADOOP-17352  (was: YETUS-694)
Affects Version/s: (was: 0.8.0)
  Project: Hadoop Common  (was: Yetus)

> Update PATCH_NAMING_RULE in hadoop personality
> --
>
> Key: HADOOP-17352
> URL: https://issues.apache.org/jira/browse/HADOOP-17352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> PATCH_NAMING_RULE="https://wiki.apache.org/hadoop/HowToContribute"
> {noformat}
> https://wiki.apache.org/hadoop/HowToContribute was moved to 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute. Let's 
> update the personality.






[jira] [Moved] (HADOOP-17350) hadoop: flag native changes w/no Dockerfile change

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved YETUS-519 to HADOOP-17350:
-

Component/s: (was: Precommit)
 precommit
Key: HADOOP-17350  (was: YETUS-519)
Project: Hadoop Common  (was: Yetus)

> hadoop: flag native changes w/no Dockerfile change
> --
>
> Key: HADOOP-17350
> URL: https://issues.apache.org/jira/browse/HADOOP-17350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Allen Wittenauer
>Priority: Minor
>
> Issues like HADOOP-13578 are a textbook example of where a large number of 
> issue followers and reviewers (some of whom are experienced PMC members) 
> completely missed the fact that very little of the added native code is 
> actually being compiled, tested, or even in the release because the 
> Dockerfile wasn't modified to include new prerequisites. We should probably 
> enhance the hadoop personality to -1 such patches.






[jira] [Moved] (HADOOP-17349) hadoop: mvn site tests should enable shelldocs and releasedocs

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved YETUS-453 to HADOOP-17349:
-

Component/s: (was: Precommit)
 precommit
Key: HADOOP-17349  (was: YETUS-453)
Project: Hadoop Common  (was: Yetus)

> hadoop: mvn site tests should enable shelldocs and releasedocs
> --
>
> Key: HADOOP-17349
> URL: https://issues.apache.org/jira/browse/HADOOP-17349
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Allen Wittenauer
>Priority: Trivial
>
> It would be good if, at least under qbt, mvn site also ran against the 
> shelldocs and releasedocs.






[jira] [Assigned] (HADOOP-17347) ABFS: Read small files completely

2020-11-03 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-17347:
-

Assignee: Bilahari T H

> ABFS: Read small files completely
> -
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>
> Files that are of size smaller than the read buffer size can be considered as 
> small files. In case of such files it would be better to read the full file 
> into the AbfsInputStream buffer.
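Editor's note: a minimal sketch of the optimization described above, under assumed names (this is not the real AbfsInputStream): when the file's content length fits in the read buffer, perform one full remote read so subsequent seeks and reads are served from memory.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of "read small files completely": names and structure
// are illustrative, not the actual hadoop-azure implementation.
public final class SmallFileReadSketch {

    // If the whole file fits in the buffer, read it fully in one pass.
    static byte[] readFully(InputStream remote, long contentLength,
                            int bufferSize) throws IOException {
        if (contentLength > bufferSize) {
            // large files would keep the normal chunked read path
            throw new UnsupportedOperationException("not a small file");
        }
        byte[] buffer = new byte[(int) contentLength];
        int off = 0;
        while (off < buffer.length) {
            int n = remote.read(buffer, off, buffer.length - off);
            if (n < 0) {
                throw new IOException("unexpected end of stream");
            }
            off += n;
        }
        return buffer;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello abfs".getBytes();
        // a 10-byte "file" against a 4096-byte buffer: read it all at once
        byte[] out = readFully(new ByteArrayInputStream(data), data.length, 4096);
        System.out.println(new String(out));
    }
}
```

The trade-off is one larger remote request up front in exchange for zero further round trips on later reads of the same small file.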






[jira] [Moved] (HADOOP-17351) hadoop: shaded client test is too aggressive

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved YETUS-689 to HADOOP-17351:
-

Component/s: (was: Precommit)
 precommit
Key: HADOOP-17351  (was: YETUS-689)
Project: Hadoop Common  (was: Yetus)

> hadoop: shaded client test is too aggressive
> 
>
> Key: HADOOP-17351
> URL: https://issues.apache.org/jira/browse/HADOOP-17351
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Allen Wittenauer
>Priority: Major
>
> See for example YARN-8726: modifying javascript should not trigger the shaded 
> client test.






[jira] [Moved] (HADOOP-17348) hadoop: add a test for -Pdist

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved YETUS-266 to HADOOP-17348:
-

Component/s: (was: Precommit)
 precommit
Key: HADOOP-17348  (was: YETUS-266)
 Issue Type: Bug  (was: New Feature)
Project: Hadoop Common  (was: Yetus)

> hadoop: add a test for -Pdist
> -
>
> Key: HADOOP-17348
> URL: https://issues.apache.org/jira/browse/HADOOP-17348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: precommit
>Reporter: Allen Wittenauer
>Priority: Minor
>







[jira] [Moved] (HADOOP-17353) hadoop personality: yarn-ui should be conditional

2020-11-03 Thread Allen Wittenauer (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved YETUS-548 to HADOOP-17353:
-

  Component/s: (was: Precommit)
   precommit
  Key: HADOOP-17353  (was: YETUS-548)
Affects Version/s: (was: 0.6.0)
  Project: Hadoop Common  (was: Yetus)

> hadoop personality: yarn-ui should be conditional
> -
>
> Key: HADOOP-17353
> URL: https://issues.apache.org/jira/browse/HADOOP-17353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: precommit
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
>
> Given how much stuff -Pyarn-ui downloads, we should make it conditional to 
> cut down on testing time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17347) ABFS: Read small files completely

2020-11-03 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-17347:
-

 Summary: ABFS: Read small files completely
 Key: HADOOP-17347
 URL: https://issues.apache.org/jira/browse/HADOOP-17347
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Bilahari T H


Files that are of size smaller than the read buffer size can be considered as 
small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.






[GitHub] [hadoop] bilaharith opened a new pull request #2432: HADOOP-17311. ABFS: Read small files completely

2020-11-03 Thread GitBox


bilaharith opened a new pull request #2432:
URL: https://github.com/apache/hadoop/pull/2432


   Files that are of size smaller than the read buffer size can be considered 
as small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
   
   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 462, Failures: 0, Errors: 0, Skipped: 64
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 462, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 462, Failures: 0, Errors: 0, Skipped: 245
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16









[jira] [Comment Edited] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System

2020-11-03 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225844#comment-17225844
 ] 

Brahma Reddy Battula edited comment on HADOOP-16492 at 11/4/20, 4:53 AM:
-

[~zhongjun] thanks for continuous effort.
{quote}In the 2nd page of the attached Difference Between OBSA and S3A.pdf, we 
list some advantages of OBSA over S3A on append, rename, and list features.
{quote} *  Looks interesting.

  
{quote}private static final AtomicInteger POOLNUMBER = new AtomicInteger(1);
{quote} * Use case sensitive for variables.

  
 * why the loglevel is debug here, let it be WARN or ERROR..?

 
{code:java}
// code placeholder
private synchronized OBSDataBlocks.DataBlock createBlockIfNeeded()
    throws IOException {
  if (activeBlock == null) {
    blockCount++;
    if (blockCount >= OBSConstants.MAX_MULTIPART_COUNT) {
      LOG.debug(
          "Number of partitions in stream exceeds limit for OBS: "
              + OBSConstants.MAX_MULTIPART_COUNT
              + " write may fail.");
    }
    activeBlock = blockFactory.create(blockCount, this.blockSize);
  }
  return activeBlock;
}

{code}
 *  Looks so much code is used from the S3, can you try to extend the existing 
S3 code..?

*The new features append,rename..Still I am checking.*

 


was (Author: brahmareddy):
[~zhongjun] thanks for continuous effort.
{quote}In the 2nd page of the attached Difference Between OBSA and S3A.pdf, we 
list some advantages of OBSA over S3A on append, rename, and list features.
{quote} * Looks interesting.

  
{quote}private static final AtomicInteger POOLNUMBER = new AtomicInteger(1);
{quote} * Use case sensitive for variables.

  
 * why the loglevel is debug here, let it be WARN or ERROR..?

 
{code:java}
// code placeholder
private synchronized OBSDataBlocks.DataBlock createBlockIfNeeded()
    throws IOException {
  if (activeBlock == null) {
    blockCount++;
    if (blockCount >= OBSConstants.MAX_MULTIPART_COUNT) {
      LOG.debug(
          "Number of partitions in stream exceeds limit for OBS: "
              + OBSConstants.MAX_MULTIPART_COUNT
              + " write may fail.");
    }
    activeBlock = blockFactory.create(blockCount, this.blockSize);
  }
  return activeBlock;
}

{code}
 *  Looks so much code is used from the S3, can you try to extend the existing 
S3 code..?

The new features append,rename..Still I am checking.

 

 

 

 

 

> Support HuaweiCloud Object Storage as a Hadoop Backend File System
> --
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.4.0
>Reporter: zhongjun
>Priority: Major
> Attachments: Difference Between OBSA and S3A.pdf, 
> HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, 
> HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, 
> HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, 
> HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, 
> HADOOP-16492.013.patch, HADOOP-16492.014.patch, HADOOP-16492.015.patch, 
> HADOOP-16492.016.patch, OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf
>
>
> Added support for HuaweiCloud OBS 
> ([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop file system, 
> just like what we do before for S3, ADLS, OSS, etc. With simple 
> configuration, Hadoop applications can read/write data from OBS without any 
> code change.






[jira] [Comment Edited] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System

2020-11-03 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225844#comment-17225844
 ] 

Brahma Reddy Battula edited comment on HADOOP-16492 at 11/4/20, 4:50 AM:
-

[~zhongjun] thanks for the continued effort.
{quote}In the 2nd page of the attached Difference Between OBSA and S3A.pdf, we 
list some advantages of OBSA over S3A on append, rename, and list features.
{quote} * Looks interesting.

{quote}private static final AtomicInteger POOLNUMBER = new AtomicInteger(1);
{quote} * Use the standard casing convention for variable names.

 
 * Why is the log level DEBUG here? Should it be WARN or ERROR?

 
{code:java}
private synchronized OBSDataBlocks.DataBlock createBlockIfNeeded()
    throws IOException {
  if (activeBlock == null) {
    blockCount++;
    if (blockCount >= OBSConstants.MAX_MULTIPART_COUNT) {
      LOG.debug(
          "Number of partitions in stream exceeds limit for OBS: "
              + OBSConstants.MAX_MULTIPART_COUNT
              + " write may fail.");
    }
    activeBlock = blockFactory.create(blockCount, this.blockSize);
  }
  return activeBlock;
}

{code}
 *  Looks like a lot of this code is duplicated from S3A; can you try to 
extend the existing S3A code instead?

I am still reviewing the new features (append, rename).
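To illustrate the log-level suggestion above, a minimal hedged sketch (not the actual OBS patch; the `BlockLimitCheck` class, the hard-coded limit, and the returned message string are hypothetical stand-ins for `OBSConstants.MAX_MULTIPART_COUNT` and a real `LOG.warn(...)` call):

```java
// Hedged illustration only: models the review suggestion to surface the
// multipart-count overflow at WARN level instead of DEBUG.
class BlockLimitCheck {
    // Hypothetical stand-in for OBSConstants.MAX_MULTIPART_COUNT.
    static final int OBS_MAX_MULTIPART_COUNT = 10000;

    // Returns the WARN message when the limit is reached, null otherwise;
    // a real implementation would call LOG.warn(...) at this point.
    static String checkBlockCount(int blockCount) {
        if (blockCount >= OBS_MAX_MULTIPART_COUNT) {
            return "WARN: Number of partitions in stream exceeds limit for OBS: "
                + OBS_MAX_MULTIPART_COUNT + "; write may fail.";
        }
        return null;
    }
}
```

A WARN-level message makes the failure mode visible in default log configurations, which is the point of the review comment.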


was (Author: brahmareddy):
[~zhongjun] thanks for the continued effort.
{quote}In the 2nd page of the attached Difference Between OBSA and S3A.pdf, we 
list some advantages of OBSA over S3A on append, rename, and list features.
{quote}
Looks interesting.
{quote}private static final AtomicInteger POOLNUMBER = new AtomicInteger(1);
{quote}
Use the standard casing convention for variable names.

 

Why is the log level DEBUG here? Should it be WARN or ERROR?

 
{code:java}
private synchronized OBSDataBlocks.DataBlock createBlockIfNeeded()
    throws IOException {
  if (activeBlock == null) {
    blockCount++;
    if (blockCount >= OBSConstants.MAX_MULTIPART_COUNT) {
      LOG.debug(
          "Number of partitions in stream exceeds limit for OBS: "
              + OBSConstants.MAX_MULTIPART_COUNT
              + " write may fail.");
    }
    activeBlock = blockFactory.create(blockCount, this.blockSize);
  }
  return activeBlock;
}

{code}
 

Looks like a lot of this code is duplicated from S3A; can you try to extend 
the existing S3A code instead?
> Support HuaweiCloud Object Storage as a Hadoop Backend File System
> --
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.4.0
>Reporter: zhongjun
>Priority: Major
> Attachments: Difference Between OBSA and S3A.pdf, 
> HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, 
> HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, 
> HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, 
> HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, 
> HADOOP-16492.013.patch, HADOOP-16492.014.patch, HADOOP-16492.015.patch, 
> HADOOP-16492.016.patch, OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf
>
>
> Added support for HuaweiCloud OBS 
> ([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop file system, 
> just like what we do before for S3, ADLS, OSS, etc. With simple 
> configuration, Hadoop applications can read/write data from OBS without any 
> code change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[GitHub] [hadoop] ayushtkn commented on pull request #2424: HDFS-15643. EC: Fix checksum computation in case of native encoders.

2020-11-03 Thread GitBox


ayushtkn commented on pull request #2424:
URL: https://github.com/apache/hadoop/pull/2424#issuecomment-721510244


   Thanks @aajisaka and @amahussein for the reviews!
   
   @amahussein it would be good to check the tests that failed again after 
recently being fixed. I will try to figure it out from the Jenkins report; 
could you also take a look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System

2020-11-03 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225844#comment-17225844
 ] 

Brahma Reddy Battula commented on HADOOP-16492:
---

[~zhongjun] thanks for the continued effort.
{quote}In the 2nd page of the attached Difference Between OBSA and S3A.pdf, we 
list some advantages of OBSA over S3A on append, rename, and list features.
{quote}
Looks interesting.
{quote}private static final AtomicInteger POOLNUMBER = new AtomicInteger(1);
{quote}
Use the standard casing convention for variable names.

 

Why is the log level DEBUG here? Should it be WARN or ERROR?

 
{code:java}
private synchronized OBSDataBlocks.DataBlock createBlockIfNeeded()
    throws IOException {
  if (activeBlock == null) {
    blockCount++;
    if (blockCount >= OBSConstants.MAX_MULTIPART_COUNT) {
      LOG.debug(
          "Number of partitions in stream exceeds limit for OBS: "
              + OBSConstants.MAX_MULTIPART_COUNT
              + " write may fail.");
    }
    activeBlock = blockFactory.create(blockCount, this.blockSize);
  }
  return activeBlock;
}

{code}
 

Looks like a lot of this code is duplicated from S3A; can you try to extend 
the existing S3A code instead?
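On the variable-naming point above, a minimal sketch (the `PoolNaming` class is hypothetical, not from the patch) of the conventional Java constant style, `POOL_NUMBER` with underscores between words rather than the run-together `POOLNUMBER`:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hedged illustration of the naming comment: Java constants are
// conventionally written UPPER_SNAKE_CASE (POOL_NUMBER), not POOLNUMBER.
class PoolNaming {
    // Hypothetical counterpart of the quoted field.
    private static final AtomicInteger POOL_NUMBER = new AtomicInteger(1);

    // Each call returns the next pool id, starting from 1.
    static int nextPoolId() {
        return POOL_NUMBER.getAndIncrement();
    }
}
```

The underscore-separated form makes the word boundary visible, which is what the reviewer appears to be asking for.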

> Support HuaweiCloud Object Storage as a Hadoop Backend File System
> --
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.4.0
>Reporter: zhongjun
>Priority: Major
> Attachments: Difference Between OBSA and S3A.pdf, 
> HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, 
> HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, 
> HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, 
> HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, 
> HADOOP-16492.013.patch, HADOOP-16492.014.patch, HADOOP-16492.015.patch, 
> HADOOP-16492.016.patch, OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf
>
>
> Added support for HuaweiCloud OBS 
> ([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop file system, 
> just like what we do before for S3, ADLS, OSS, etc. With simple 
> configuration, Hadoop applications can read/write data from OBS without any 
> code change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn merged pull request #2424: HDFS-15643. EC: Fix checksum computation in case of native encoders.

2020-11-03 Thread GitBox


ayushtkn merged pull request #2424:
URL: https://github.com/apache/hadoop/pull/2424


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-721490455


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   3m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   3m 10s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 40s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 47s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  cc  |   4m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  cc  |   3m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 55s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m  3s | 
[/diff-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/16/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 698 unchanged - 0 fixed = 
699 total (was 698)  |
   | +1 :green_heart: |  mvnsite  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   5m 54s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 121m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/16/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 46s | 
[/patch-asflicense-problems.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/16/artifact/out/patch-asflicense-problems.txt)
 |  The patch generated 20 ASF License warnings.  |
   |  |   | 244m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestBlocksScheduledCounter |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestWriteConfigurationToDFS |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.TestLocalDFS |
   |   | hadoop.hdfs.TestHDFSServerPorts |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
   |   | 

[jira] [Updated] (HADOOP-17343) Upgrade aws-java-sdk to 1.11.892

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17343:

Labels: pull-request-available  (was: )

> Upgrade aws-java-sdk to 1.11.892
> 
>
> Key: HADOOP-17343
> URL: https://issues.apache.org/jira/browse/HADOOP-17343
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17343) Upgrade aws-java-sdk to 1.11.892

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17343?focusedWorklogId=507469=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507469
 ]

ASF GitHub Bot logged work on HADOOP-17343:
---

Author: ASF GitHub Bot
Created on: 04/Nov/20 03:13
Start Date: 04/Nov/20 03:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2429:
URL: https://github.com/apache/hadoop/pull/2429#issuecomment-721487909


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |  25m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 44s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  20m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  17m 23s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  20m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 18s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 321m 54s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2429/2/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 44s |  |  ASF License check generated no 
output?  |
   |  |   | 596m 59s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.security.TestLdapGroupsMapping |
   |   | hadoop.yarn.client.api.impl.TestAMRMClient |
   |   | hadoop.yarn.server.nodemanager.TestNodeManagerShutdown |
   |   | hadoop.yarn.server.nodemanager.TestNodeManagerResync |
   |   | hadoop.yarn.server.nodemanager.TestDeletionService |
   |   | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
   |   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
   |   | 
hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor |
   |   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
|
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2429: HADOOP-17343. Upgrade aws-java-sdk to 1.11.892

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2429:
URL: https://github.com/apache/hadoop/pull/2429#issuecomment-721487909


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |  25m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 44s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  20m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  17m 23s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  20m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 18s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 321m 54s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2429/2/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 44s |  |  ASF License check generated no 
output?  |
   |  |   | 596m 59s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.security.TestLdapGroupsMapping |
   |   | hadoop.yarn.client.api.impl.TestAMRMClient |
   |   | hadoop.yarn.server.nodemanager.TestNodeManagerShutdown |
   |   | hadoop.yarn.server.nodemanager.TestNodeManagerResync |
   |   | hadoop.yarn.server.nodemanager.TestDeletionService |
   |   | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
   |   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
   |   | 
hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor |
   |   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
|
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2429/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2429 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml |
   | uname | Linux b6e1678215a7 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |

[jira] [Work logged] (HADOOP-17346) Fair call queue is defeated by abusive service principals

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17346?focusedWorklogId=507464=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507464
 ]

ASF GitHub Bot logged work on HADOOP-17346:
---

Author: ASF GitHub Bot
Created on: 04/Nov/20 02:58
Start Date: 04/Nov/20 02:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2431:
URL: https://github.com/apache/hadoop/pull/2431#issuecomment-721484027


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  28m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   2m 23s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 20s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  18m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 486 unchanged 
- 1 fixed = 486 total (was 487)  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   2m 49s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  10m 30s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2431/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 202m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.crypto.key.TestKeyProviderFactory |
   |   | hadoop.security.TestLdapGroupsMapping |
   |   | hadoop.ipc.TestDecayRpcScheduler |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2431/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2431 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 801db6ba89b3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d07dc7afb4a |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2431: HADOOP-17346. Fair call queue is defeated by abusive service principals

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2431:
URL: https://github.com/apache/hadoop/pull/2431#issuecomment-721484027


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  28m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   2m 23s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 20s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  18m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 486 unchanged 
- 1 fixed = 486 total (was 487)  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   2m 49s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  10m 30s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2431/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 202m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.crypto.key.TestKeyProviderFactory |
   |   | hadoop.security.TestLdapGroupsMapping |
   |   | hadoop.ipc.TestDecayRpcScheduler |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2431/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2431 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 801db6ba89b3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d07dc7afb4a |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2431/1/testReport/ |
   | Max. process+thread count | 1605 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2431/1/console |
 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2430: HDFS-15562: StandbyCheckpointer will do checkpoint repeatedly while connecting observer/active namenode failed

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2430:
URL: https://github.com/apache/hadoop/pull/2430#issuecomment-721477331


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 10s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   3m 11s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  9s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 47s | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2430/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 35 unchanged - 
0 fixed = 37 total (was 35)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   3m 15s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 115m 45s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2430/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 211m 34s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.fs.viewfs.TestViewFileSystemLinkFallback |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestStripedFileAppend |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2430/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2430 |
   | JIRA Issue | HDFS-15562 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ee06256c70cf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d07dc7afb4a |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 

[jira] [Updated] (HADOOP-17333) MetricsRecordFiltered error

2020-11-03 Thread minchengbo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

minchengbo updated HADOOP-17333:

Priority: Major  (was: Minor)

> MetricsRecordFiltered error
> ---
>
> Key: HADOOP-17333
> URL: https://issues.apache.org/jira/browse/HADOOP-17333
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: minchengbo
>Priority: Major
>
>  Got a sink exception when setting 
> datanode.sink.ganglia.metric.filter.exclude=metricssystem in 
> hadoop-metrics2.properties:
> java.lang.ClassCastException: 
> org.apache.hadoop.metrics2.impl.MetricsRecordFiltered$1 cannot be cast to 
> java.util.Collection
>  at 
> org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30.putMetrics(GangliaSink30.java:165)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
>  at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:135)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:89)
> //
> This test case reproduces the exception:
>   public static void main(String[] args) {
>     List<AbstractMetric> metricsd = new LinkedList<>();
>     MetricsInfo info = MsInfo.ProcessName;
>     long timestamp = System.currentTimeMillis();
>     List<MetricsTag> tags = new LinkedList<>();
>     MetricsRecordImpl recordimp =
>         new MetricsRecordImpl(info, timestamp, tags, metricsd);
>     MetricsFilter filter = new RegexFilter();
>     MetricsRecordFiltered recordfilter =
>         new MetricsRecordFiltered(recordimp, filter);
>     SubsetConfiguration conf = new SubsetConfiguration(
>         new PropertyListConfiguration(), "test");
>     conf.addProperty(AbstractGangliaSink.SUPPORT_SPARSE_METRICS_PROPERTY, true);
>     GangliaSink30 ganliasink = new GangliaSink30();
>     ganliasink.init(conf);
>     ganliasink.putMetrics(recordfilter);
>   }
> ///
> The root cause is that metrics() returns a lazy Iterable in 
> MetricsRecordFiltered.java:
>   @Override public Iterable<AbstractMetric> metrics() {
>     return new Iterable<AbstractMetric>() {
>       final Iterator<AbstractMetric> it = delegate.metrics().iterator();
>       @Override public Iterator<AbstractMetric> iterator() {
>         return new AbstractIterator<AbstractMetric>() {
>           @Override public AbstractMetric computeNext() {
>             while (it.hasNext()) {
>               AbstractMetric next = it.next();
>               if (filter.accepts(next.name())) {
>                 return next;
>               }
>             }
>             return endOfData();
>           }
>         };
>       }
>     };
>   }
> but GangliaSink30.java (line 164) casts it to a Collection:
> Collection<AbstractMetric> metrics =
>     (Collection<AbstractMetric>) record.metrics();
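
A defensive fix on the sink side can be sketched as follows. This is an illustrative sketch only, not the actual Hadoop patch; the class and helper names are invented here. The idea is to materialize the Iterable into a Collection instead of casting blindly:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class IterableToCollection {

    // Copies any Iterable<T> into a Collection<T>. Safe regardless of the
    // concrete type behind MetricsRecord.metrics(), unlike a blind cast.
    @SuppressWarnings("unchecked")
    static <T> Collection<T> toCollection(Iterable<T> iterable) {
        if (iterable instanceof Collection) {
            return (Collection<T>) iterable; // already a Collection: no copy
        }
        Collection<T> copy = new ArrayList<>();
        for (T item : iterable) {
            copy.add(item);
        }
        return copy;
    }

    public static void main(String[] args) {
        // A lazy Iterable (like the one MetricsRecordFiltered.metrics()
        // returns) is not a Collection, so the cast in GangliaSink30 throws;
        // copying it first does not.
        Iterable<String> lazy = () -> List.of("a", "b").iterator();
        System.out.println(toCollection(lazy).size()); // prints 2
    }
}
```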



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17333) MetricsRecordFiltered error

2020-11-03 Thread minchengbo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

minchengbo updated HADOOP-17333:

Affects Version/s: 3.3.0

> MetricsRecordFiltered error
> ---
>
> Key: HADOOP-17333
> URL: https://issues.apache.org/jira/browse/HADOOP-17333
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: minchengbo
>Priority: Minor
>
>  Got a sink exception when setting 
> datanode.sink.ganglia.metric.filter.exclude=metricssystem in 
> hadoop-metrics2.properties:
> java.lang.ClassCastException: 
> org.apache.hadoop.metrics2.impl.MetricsRecordFiltered$1 cannot be cast to 
> java.util.Collection
>  at 
> org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30.putMetrics(GangliaSink30.java:165)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
>  at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:135)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:89)
> //
> This test case reproduces the exception:
>   public static void main(String[] args) {
>     List<AbstractMetric> metricsd = new LinkedList<>();
>     MetricsInfo info = MsInfo.ProcessName;
>     long timestamp = System.currentTimeMillis();
>     List<MetricsTag> tags = new LinkedList<>();
>     MetricsRecordImpl recordimp =
>         new MetricsRecordImpl(info, timestamp, tags, metricsd);
>     MetricsFilter filter = new RegexFilter();
>     MetricsRecordFiltered recordfilter =
>         new MetricsRecordFiltered(recordimp, filter);
>     SubsetConfiguration conf = new SubsetConfiguration(
>         new PropertyListConfiguration(), "test");
>     conf.addProperty(AbstractGangliaSink.SUPPORT_SPARSE_METRICS_PROPERTY, true);
>     GangliaSink30 ganliasink = new GangliaSink30();
>     ganliasink.init(conf);
>     ganliasink.putMetrics(recordfilter);
>   }
> ///
> The root cause is that metrics() returns a lazy Iterable in 
> MetricsRecordFiltered.java:
>   @Override public Iterable<AbstractMetric> metrics() {
>     return new Iterable<AbstractMetric>() {
>       final Iterator<AbstractMetric> it = delegate.metrics().iterator();
>       @Override public Iterator<AbstractMetric> iterator() {
>         return new AbstractIterator<AbstractMetric>() {
>           @Override public AbstractMetric computeNext() {
>             while (it.hasNext()) {
>               AbstractMetric next = it.next();
>               if (filter.accepts(next.name())) {
>                 return next;
>               }
>             }
>             return endOfData();
>           }
>         };
>       }
>     };
>   }
> but GangliaSink30.java (line 164) casts it to a Collection:
> Collection<AbstractMetric> metrics =
>     (Collection<AbstractMetric>) record.metrics();



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17346) Fair call queue is defeated by abusive service principals

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17346?focusedWorklogId=507383=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507383
 ]

ASF GitHub Bot logged work on HADOOP-17346:
---

Author: ASF GitHub Bot
Created on: 03/Nov/20 23:35
Start Date: 03/Nov/20 23:35
Worklog Time Spent: 10m 
  Work Description: amahussein opened a new pull request #2431:
URL: https://github.com/apache/hadoop/pull/2431


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 507383)
Remaining Estimate: 0h
Time Spent: 10m

> Fair call queue is defeated by abusive service principals
> -
>
> Key: HADOOP-17346
> URL: https://issues.apache.org/jira/browse/HADOOP-17346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, ipc
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [~daryn] reported that the FCQ prioritizes based on the full Kerberos 
> principal (i.e. "user/host@realm") rather than the short name (i.e. "user") 
> to prevent service principals like the DNs and NMs from being de-prioritized, 
> since service principals are expected to be well behaved. Notably, the DNs 
> contribute a significant but important load, so the intent is not to 
> de-prioritize all DNs because their sum total load is high relative to users.
> This has the unfortunate side effect of allowing misbehaving, non-critical 
> service principals to abuse the FCQ. The gstorm/* principals are a prime 
> example. Each server is spamming opens as fast as possible, which ensures 
> that none of the gstorm servers can be de-prioritized because each principal 
> is a fraction of the total load from all principals.
> The secondary and more devastating problem is that other abusive non-service 
> principals cannot be effectively de-prioritized. The sum total of all gstorm 
> load prevents other principals from surpassing the priority thresholds. 
> Principals stay in the highest-priority queues, which allows the abusive 
> principals to overflow the entire call queue for extended periods of time. 
> Notably, it prevents the FCQ from moderating the heavy create loads from 
> p_gup @ DB, which cause significant performance degradation.
> Prioritization should be based on the short name, with configurable 
> exemptions for services like the DN/NM.
> [~daryn] suggested a solution that we applied on our clusters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17346) Fair call queue is defeated by abusive service principals

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17346:

Labels: pull-request-available  (was: )

> Fair call queue is defeated by abusive service principals
> -
>
> Key: HADOOP-17346
> URL: https://issues.apache.org/jira/browse/HADOOP-17346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, ipc
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [~daryn] reported that the FCQ prioritizes based on the full Kerberos 
> principal (i.e. "user/host@realm") rather than the short name (i.e. "user") 
> to prevent service principals like the DNs and NMs from being de-prioritized, 
> since service principals are expected to be well behaved. Notably, the DNs 
> contribute a significant but important load, so the intent is not to 
> de-prioritize all DNs because their sum total load is high relative to users.
> This has the unfortunate side effect of allowing misbehaving, non-critical 
> service principals to abuse the FCQ. The gstorm/* principals are a prime 
> example. Each server is spamming opens as fast as possible, which ensures 
> that none of the gstorm servers can be de-prioritized because each principal 
> is a fraction of the total load from all principals.
> The secondary and more devastating problem is that other abusive non-service 
> principals cannot be effectively de-prioritized. The sum total of all gstorm 
> load prevents other principals from surpassing the priority thresholds. 
> Principals stay in the highest-priority queues, which allows the abusive 
> principals to overflow the entire call queue for extended periods of time. 
> Notably, it prevents the FCQ from moderating the heavy create loads from 
> p_gup @ DB, which cause significant performance degradation.
> Prioritization should be based on the short name, with configurable 
> exemptions for services like the DN/NM.
> [~daryn] suggested a solution that we applied on our clusters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein opened a new pull request #2431: HADOOP-17346. Fair call queue is defeated by abusive service principals

2020-11-03 Thread GitBox


amahussein opened a new pull request #2431:
URL: https://github.com/apache/hadoop/pull/2431


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17346) Fair call queue is defeated by abusive service principals

2020-11-03 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17346:
--

 Summary: Fair call queue is defeated by abusive service principals
 Key: HADOOP-17346
 URL: https://issues.apache.org/jira/browse/HADOOP-17346
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, ipc
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


[~daryn] reported that the FCQ prioritizes based on the full Kerberos 
principal (i.e. "user/host@realm") rather than the short name (i.e. "user") to 
prevent service principals like the DNs and NMs from being de-prioritized, 
since service principals are expected to be well behaved. Notably, the DNs 
contribute a significant but important load, so the intent is not to 
de-prioritize all DNs because their sum total load is high relative to users.

This has the unfortunate side effect of allowing misbehaving, non-critical 
service principals to abuse the FCQ. The gstorm/* principals are a prime 
example. Each server is spamming opens as fast as possible, which ensures that 
none of the gstorm servers can be de-prioritized because each principal is a 
fraction of the total load from all principals.

The secondary and more devastating problem is that other abusive non-service 
principals cannot be effectively de-prioritized. The sum total of all gstorm 
load prevents other principals from surpassing the priority thresholds. 
Principals stay in the highest-priority queues, which allows the abusive 
principals to overflow the entire call queue for extended periods of time. 
Notably, it prevents the FCQ from moderating the heavy create loads from p_gup @ 
DB, which cause significant performance degradation.

Prioritization should be based on the short name, with configurable exemptions 
for services like the DN/NM.

[~daryn] suggested a solution that we applied on our clusters.
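
The short-name scheme described above can be sketched as follows. This is an assumption-laden illustration, not the patch from this JIRA; the class name and exemption list are invented, and a real deployment would read the exemptions from configuration:

```java
import java.util.Set;

public class CallerKey {

    // Hypothetical exemption list; the actual property name and values are
    // not defined by the JIRA text above.
    private static final Set<String> EXEMPT_SHORT_NAMES = Set.of("dn", "nm");

    // "user/host@REALM" -> "user". Exempt service principals keep the full
    // principal so each host is still accounted for individually.
    static String schedulingKey(String principal) {
        int slash = principal.indexOf('/');
        int at = principal.indexOf('@');
        int end = slash >= 0 ? slash : (at >= 0 ? at : principal.length());
        String shortName = principal.substring(0, end);
        return EXEMPT_SHORT_NAMES.contains(shortName) ? principal : shortName;
    }

    public static void main(String[] args) {
        // All gstorm hosts now share one scheduling identity, so their
        // combined load can be de-prioritized by the fair call queue.
        System.out.println(schedulingKey("gstorm/host1@EXAMPLE.COM")); // gstorm
        System.out.println(schedulingKey("dn/host1@EXAMPLE.COM")); // dn/host1@EXAMPLE.COM
    }
}
```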



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17346) Fair call queue is defeated by abusive service principals

2020-11-03 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17346 started by Ahmed Hussein.
--
> Fair call queue is defeated by abusive service principals
> -
>
> Key: HADOOP-17346
> URL: https://issues.apache.org/jira/browse/HADOOP-17346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, ipc
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> [~daryn] reported that the FCQ prioritizes based on the full Kerberos 
> principal (i.e. "user/host@realm") rather than the short name (i.e. "user") 
> to prevent service principals like the DNs and NMs from being de-prioritized, 
> since service principals are expected to be well behaved. Notably, the DNs 
> contribute a significant but important load, so the intent is not to 
> de-prioritize all DNs because their sum total load is high relative to users.
> This has the unfortunate side effect of allowing misbehaving, non-critical 
> service principals to abuse the FCQ. The gstorm/* principals are a prime 
> example. Each server is spamming opens as fast as possible, which ensures 
> that none of the gstorm servers can be de-prioritized because each principal 
> is a fraction of the total load from all principals.
> The secondary and more devastating problem is that other abusive non-service 
> principals cannot be effectively de-prioritized. The sum total of all gstorm 
> load prevents other principals from surpassing the priority thresholds. 
> Principals stay in the highest-priority queues, which allows the abusive 
> principals to overflow the entire call queue for extended periods of time. 
> Notably, it prevents the FCQ from moderating the heavy create loads from 
> p_gup @ DB, which cause significant performance degradation.
> Prioritization should be based on the short name, with configurable 
> exemptions for services like the DN/NM.
> [~daryn] suggested a solution that we applied on our clusters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aihuaxu opened a new pull request #2430: HDFS-15562: StandbyCheckpointer will do checkpoint repeatedly while connecting observer/active namenode failed

2020-11-03 Thread GitBox


aihuaxu opened a new pull request #2430:
URL: https://github.com/apache/hadoop/pull/2430


   Standby namenode does the checkpoint and uploads the image to the other 
active/observer namenodes. If the other namenodes, e.g. observers, are down for 
maintenance, uploading currently fails and immediately retriggers the 
checkpoint and image upload, causing unnecessary network traffic. This patch 
logs a message when uploading fails and continues with the regular checkpoint 
schedule.
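
The scheduling change can be illustrated with a small sketch. This is an assumed shape only, not the actual StandbyCheckpointer code; all names here are invented:

```java
public class CheckpointSchedule {

    // Old behavior: an upload failure reset the schedule so a new checkpoint
    // fired immediately. New behavior: the failure is only logged and the
    // next checkpoint waits for the regular period.
    static long nextCheckpointMs(long lastCheckpointMs, long periodMs,
                                 boolean uploadFailed, boolean retryImmediately) {
        if (uploadFailed && retryImmediately) {
            return lastCheckpointMs; // old: re-run right away
        }
        return lastCheckpointMs + periodMs; // new: stick to the schedule
    }

    public static void main(String[] args) {
        long last = 1_000L, period = 60_000L;
        // Old behavior re-triggers at once; new behavior waits a full period.
        System.out.println(nextCheckpointMs(last, period, true, true));  // 1000
        System.out.println(nextCheckpointMs(last, period, true, false)); // 61000
    }
}
```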
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-11-03 Thread GitBox


LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r516979069



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MountVolumeInfo.java
##
@@ -0,0 +1,101 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
+
+import java.nio.channels.ClosedChannelException;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * MountVolumeInfo is a wrapper of
+ * detailed volume information for MountVolumeMap.
+ */
+@InterfaceAudience.Private
+class MountVolumeInfo {
+  private ConcurrentMap<StorageType, FsVolumeImpl>
+  storageTypeVolumeMap;
+  private double reservedForArchiveDefault;
+
+  MountVolumeInfo(Configuration conf) {
+storageTypeVolumeMap = new ConcurrentHashMap<>();
+reservedForArchiveDefault = conf.getDouble(
+DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE,
+DFSConfigKeys
+.DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE_DEFAULT);
+if (reservedForArchiveDefault > 1) {
+  FsDatasetImpl.LOG.warn("Value of reserve-for-archival is > 100%." +
+  " Setting it to 100%.");
+  reservedForArchiveDefault = 1;
+}
+  }
+
+  FsVolumeReference getVolumeRef(StorageType storageType) {
+try {
+  FsVolumeImpl volumeImpl = storageTypeVolumeMap
+  .getOrDefault(storageType, null);
+  if (volumeImpl != null) {
+return volumeImpl.obtainReference();
+  }
+} catch (ClosedChannelException e) {
+  FsDatasetImpl.LOG.warn("Volume closed when getting volume" +
+  " by storage type: " + storageType);
+}
+return null;
+  }
+
+  /**
+   * Return configured capacity ratio.
+   * If the volume is the only one on the mount,
+   * return 1 to avoid unnecessary allocation.
+   */
+  double getCapacityRatio(StorageType storageType) {
+if (storageTypeVolumeMap.containsKey(storageType)
+&& storageTypeVolumeMap.size() > 1) {
+  if (storageType == StorageType.ARCHIVE) {
+return reservedForArchiveDefault;
+  } else if (storageType == StorageType.DISK) {
+return 1 - reservedForArchiveDefault;
+  }
+}
+return 1;
+  }
+
+  void addVolume(FsVolumeImpl volume) {
+if (storageTypeVolumeMap.containsKey(volume.getStorageType())) {
+  FsDatasetImpl.LOG.error("Found storage type already exist." +

Review comment:
   Yeah, that makes sense, will add a return value for this function.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17342) Creating a token identifier should not do kerberos name resolution

2020-11-03 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225686#comment-17225686
 ] 

Jim Brennan commented on HADOOP-17342:
--

I believe the failure in TestLdapGroupsMapping is unrelated - there is already 
a Jira to fix that: HADOOP-17340

I don't think we need to add a new test case for this.  Any existing tests that 
use this constructor will test it implicitly.  I think a code review in this 
case would be sufficient.




> Creating a token identifier should not do kerberos name resolution
> --
>
> Key: HADOOP-17342
> URL: https://issues.apache.org/jira/browse/HADOOP-17342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.10.1, 3.4.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-17342.001.patch
>
>
> This problem was found and fixed internally for us by [~daryn].
> Creating a token identifier tries to do auth_to_local short username 
> translation. The authentication process creates a blank token identifier for 
> deserializing the wire format. Attempting to resolve an empty username is 
> useless work.
> Discovered the issue during fair call queue backoff testing. The readers are 
> unnecessarily slowed down by this bug.
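The fix described above can be illustrated with a simplified sketch (the class and method names here are illustrative stand-ins, not the actual Hadoop token-identifier API): defer the auth_to_local short-name translation until the user is actually requested, so a blank identifier created only for deserializing the wire format never pays the resolution cost.

```java
// Illustrative sketch only: resolveShortName stands in for the
// auth_to_local translation; it is NOT the real Hadoop API.
class TokenIdentifierSketch {
  private final String principal;   // may be empty for a blank identifier
  private String shortName;         // resolved lazily, on demand

  TokenIdentifierSketch(String principal) {
    this.principal = principal;     // no name resolution at construction time
  }

  String getShortUserName() {
    if (shortName == null && !principal.isEmpty()) {
      // The expensive translation runs only when a caller needs it.
      shortName = resolveShortName(principal);
    }
    return shortName;
  }

  private static String resolveShortName(String p) {
    int at = p.indexOf('@');        // trivial stand-in for auth_to_local rules
    return at < 0 ? p : p.substring(0, at);
  }
}
```

With this shape, constructing the blank identifier used during deserialization does no resolution work at all.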



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Jing9 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-11-03 Thread GitBox


Jing9 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r516949905



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MountVolumeInfo.java
##
@@ -0,0 +1,101 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
+
+import java.nio.channels.ClosedChannelException;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * MountVolumeInfo is a wrapper of
+ * detailed volume information for MountVolumeMap.
+ */
+@InterfaceAudience.Private
+class MountVolumeInfo {
+  private ConcurrentMap<StorageType, FsVolumeImpl>
+      storageTypeVolumeMap;
+  private double reservedForArchiveDefault;
+
+  MountVolumeInfo(Configuration conf) {
+    storageTypeVolumeMap = new ConcurrentHashMap<>();
+    reservedForArchiveDefault = conf.getDouble(
+        DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE,
+        DFSConfigKeys
+            .DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE_DEFAULT);
+    if (reservedForArchiveDefault > 1) {
+      FsDatasetImpl.LOG.warn("Value of reserve-for-archival is > 100%." +
+          " Setting it to 100%.");
+      reservedForArchiveDefault = 1;
+    }
+  }
+
+  FsVolumeReference getVolumeRef(StorageType storageType) {
+    try {
+      FsVolumeImpl volumeImpl = storageTypeVolumeMap
+          .getOrDefault(storageType, null);
+      if (volumeImpl != null) {
+        return volumeImpl.obtainReference();
+      }
+    } catch (ClosedChannelException e) {
+      FsDatasetImpl.LOG.warn("Volume closed when getting volume" +
+          " by storage type: " + storageType);
+    }
+    return null;
+  }
+
+  /**
+   * Return configured capacity ratio.
+   * If the volume is the only one on the mount,
+   * return 1 to avoid unnecessary allocation.
+   */
+  double getCapacityRatio(StorageType storageType) {
+    if (storageTypeVolumeMap.containsKey(storageType)
+        && storageTypeVolumeMap.size() > 1) {
+      if (storageType == StorageType.ARCHIVE) {
+        return reservedForArchiveDefault;
+      } else if (storageType == StorageType.DISK) {
+        return 1 - reservedForArchiveDefault;
+      }
+    }
+    return 1;
+  }
+
+  void addVolume(FsVolumeImpl volume) {
+    if (storageTypeVolumeMap.containsKey(volume.getStorageType())) {
+      FsDatasetImpl.LOG.error("Found storage type already exist." +

Review comment:
   What if, in the future, MountVolumeInfo#addVolume is called by code other 
than activateVolume? If an existing storage type is not allowed, we can 
return a boolean to indicate whether the call succeeds.
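A minimal sketch of what the suggested change could look like (the map types here are simplified to strings and `putIfAbsent` is an assumption; the quoted diff uses `containsKey` plus a separate put):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical simplification of MountVolumeInfo#addVolume returning a
// boolean so that future callers can detect a duplicate storage type.
class MountVolumeInfoSketch {
  private final ConcurrentMap<String, String> storageTypeVolumeMap =
      new ConcurrentHashMap<>();

  // Returns true if the volume was registered, false if a volume with the
  // same storage type already exists on this mount.
  boolean addVolume(String storageType, String volumeName) {
    return storageTypeVolumeMap.putIfAbsent(storageType, volumeName) == null;
  }
}
```

Using `putIfAbsent` also makes the check-then-put atomic, which a separate `containsKey` followed by `put` is not under concurrent callers.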








[jira] [Resolved] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-11-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17255.
--
Fix Version/s: 3.2.3
   2.10.2
   3.1.5
   3.4.0
   3.3.1
   Resolution: Fixed

Thanks [~aajisaka]!

> JavaKeyStoreProvider fails to create a new key if the keystore is HDFS
> --
>
> Key: HADOOP-17255
> URL: https://issues.apache.org/jira/browse/HADOOP-17255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The caller of JavaKeyStoreProvider#renameOrFail assumes that it throws 
> FileNotFoundException if the src does not exist. However, 
> JavaKeyStoreProvider#renameOrFail calls the old rename API. In 
> DistributedFileSystem, the old API returns false if the src does not exist.
> That way JavaKeyStoreProvider fails to create a new key if the keystore is 
> HDFS.
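The mismatch described above can be made concrete with a sketch (the method names follow the description; this is not the actual patch, and local `java.io.File` rename stands in for the HDFS rename API): the caller has to translate the boolean-returning rename into the exception it expects.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

class RenameOrFailSketch {
  // Stand-in for the deprecated boolean-returning rename API, which
  // returns false when src does not exist instead of throwing.
  static boolean oldRename(File src, File dst) {
    return src.renameTo(dst);
  }

  // A renameOrFail matching the caller's assumption: surface a missing
  // source as FileNotFoundException rather than a silent false.
  static void renameOrFail(File src, File dst) throws IOException {
    if (!src.exists()) {
      throw new FileNotFoundException("rename source " + src + " not found");
    }
    if (!oldRename(src, dst)) {
      throw new IOException("rename of " + src + " to " + dst + " failed");
    }
  }
}
```

Without the explicit exists-check (or a rename API with proper exception semantics), the false return is swallowed and the key creation fails later in a confusing way.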






[jira] [Commented] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs

2020-11-03 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225644#comment-17225644
 ] 

Chen Liang commented on HADOOP-17336:
-

I have committed v002 patch to branch-2.10. Thanks [~salsally] for the 
contribution and [~shv] for the review!

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch, 
> Hadoop-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10
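The "resilience on double close()" part of the backport follows a general pattern that can be sketched as follows (this is the generic idempotent-close technique, not the actual ABFS/WASB patch; the class name is invented):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative pattern: second and later close() calls become no-ops
// instead of failing, so callers that close twice are harmless.
class IdempotentCloseStream extends OutputStream {
  private final AtomicBoolean closed = new AtomicBoolean(false);
  private final OutputStream inner;

  IdempotentCloseStream(OutputStream inner) {
    this.inner = inner;
  }

  @Override
  public void write(int b) throws IOException {
    if (closed.get()) {
      throw new IOException("stream is closed");
    }
    inner.write(b);
  }

  @Override
  public void close() throws IOException {
    // compareAndSet guarantees the underlying close runs exactly once.
    if (closed.compareAndSet(false, true)) {
      inner.close();
    }
  }
}
```

The atomic flag also makes the guard safe when close() races with itself from multiple threads.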






[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-11-03 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-17336:

Fix Version/s: 2.10.2
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Fix For: 2.10.2
>
> Attachments: Hadoop-17336-branch-2.10.001.patch, 
> Hadoop-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[jira] [Commented] (HADOOP-17342) Creating a token identifier should not do kerberos name resolution

2020-11-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225628#comment-17225628
 ] 

Hadoop QA commented on HADOOP-17342:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} |  | {color:red} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
52s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
35s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
57s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 44s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
18s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} |  | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
43s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
43s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
52s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
52s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 28s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} |  | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m  7s{color} 
| 

[GitHub] [hadoop] jojochuang merged pull request #2291: HADOOP-17255. JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-11-03 Thread GitBox


jojochuang merged pull request #2291:
URL: https://github.com/apache/hadoop/pull/2291


   






[jira] [Work logged] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17255?focusedWorklogId=507289&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507289
 ]

ASF GitHub Bot logged work on HADOOP-17255:
---

Author: ASF GitHub Bot
Created on: 03/Nov/20 19:20
Start Date: 03/Nov/20 19:20
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #2291:
URL: https://github.com/apache/hadoop/pull/2291


   





Issue Time Tracking
---

Worklog Id: (was: 507289)
Time Spent: 1.5h  (was: 1h 20m)

> JavaKeyStoreProvider fails to create a new key if the keystore is HDFS
> --
>
> Key: HADOOP-17255
> URL: https://issues.apache.org/jira/browse/HADOOP-17255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The caller of JavaKeyStoreProvider#renameOrFail assumes that it throws 
> FileNotFoundException if the src does not exist. However, 
> JavaKeyStoreProvider#renameOrFail calls the old rename API. In 
> DistributedFileSystem, the old API returns false if the src does not exist.
> That way JavaKeyStoreProvider fails to create a new key if the keystore is 
> HDFS.






[jira] [Commented] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-11-03 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225623#comment-17225623
 ] 

Wei-Chiu Chuang commented on HADOOP-17255:
--

{quote} * Probably there are many users, so the quality of Ranger KMS is better 
than that of Hadoop KMS.{quote}
That is dubious. As far as I know, RangerKMS is a fork of HadoopKMS and, for 
the most part, they simply port fixes/features from HadoopKMS over there. 
There are also few commits in the Ranger KMS repository.

Therefore, we should still maintain the HadoopKMS code.

> JavaKeyStoreProvider fails to create a new key if the keystore is HDFS
> --
>
> Key: HADOOP-17255
> URL: https://issues.apache.org/jira/browse/HADOOP-17255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The caller of JavaKeyStoreProvider#renameOrFail assumes that it throws 
> FileNotFoundException if the src does not exist. However, 
> JavaKeyStoreProvider#renameOrFail calls the old rename API. In 
> DistributedFileSystem, the old API returns false if the src does not exist.
> That way JavaKeyStoreProvider fails to create a new key if the keystore is 
> HDFS.






[jira] [Commented] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs

2020-11-03 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225612#comment-17225612
 ] 

Konstantin Shvachko commented on HADOOP-17336:
--

+1 the backport looks good to me.

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch, 
> Hadoop-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[jira] [Created] (HADOOP-17345) Work with externally managed user credentials

2020-11-03 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-17345:


 Summary: Work with externally managed user credentials
 Key: HADOOP-17345
 URL: https://issues.apache.org/jira/browse/HADOOP-17345
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Reporter: Wei-Chiu Chuang


We don't have good test coverage for externally managed user credentials. 
It's not clear how someone could ingest Kerberos credentials or delegation 
tokens that are externally managed, and it's not clear which services/file 
system implementations support them.

Filing this Jira to track all relevant fixes/support/tests/docs:
 # what is supported
 # test coverage
 # documentation of best practices (how to do it right)






[jira] [Updated] (HADOOP-17343) Upgrade aws-java-sdk to 1.11.892

2020-11-03 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated HADOOP-17343:
---
Affects Version/s: 3.4.0

> Upgrade aws-java-sdk to 1.11.892
> 
>
> Key: HADOOP-17343
> URL: https://issues.apache.org/jira/browse/HADOOP-17343
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
>







[jira] [Updated] (HADOOP-17344) Harmonize guava version and shade guava in yarn-csi

2020-11-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17344:
-
Summary: Harmonize guava version and shade guava in yarn-csi  (was: 
Harmonize guava version in yarn-csi)

> Harmonize guava version and shade guava in yarn-csi
> ---
>
> Key: HADOOP-17344
> URL: https://issues.apache.org/jira/browse/HADOOP-17344
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> yarn-csi defines a separate guava version 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml#L30].
>  
>  
> We should harmonize the guava version (pull it from hadoop-project/pom.xml) 
> and use the shaded guava classes. 






[jira] [Created] (HADOOP-17344) Harmonize guava version in yarn-csi

2020-11-03 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-17344:


 Summary: Harmonize guava version in yarn-csi
 Key: HADOOP-17344
 URL: https://issues.apache.org/jira/browse/HADOOP-17344
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.4.0
Reporter: Wei-Chiu Chuang


yarn-csi defines a separate guava version 
[https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml#L30].
 

 

We should harmonize the guava version (pull it from hadoop-project/pom.xml) and 
use the shaded guava classes. 






[jira] [Assigned] (HADOOP-17343) Upgrade aws-java-sdk to 1.11.892

2020-11-03 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun reassigned HADOOP-17343:
-

Assignee: Dongjoon Hyun

> Upgrade aws-java-sdk to 1.11.892
> 
>
> Key: HADOOP-17343
> URL: https://issues.apache.org/jira/browse/HADOOP-17343
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
>







[jira] [Created] (HADOOP-17343) Upgrade aws-java-sdk to 1.11.892

2020-11-03 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created HADOOP-17343:
--

 Summary: Upgrade aws-java-sdk to 1.11.892
 Key: HADOOP-17343
 URL: https://issues.apache.org/jira/browse/HADOOP-17343
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.3.1
Reporter: Dongjoon Hyun









[GitHub] [hadoop] dongjoon-hyun opened a new pull request #2429: Upgrade aws-java-sdk to 1.11.892

2020-11-03 Thread GitBox


dongjoon-hyun opened a new pull request #2429:
URL: https://github.com/apache/hadoop/pull/2429


   This PR aims to upgrade `aws-java-sdk` from 1.11.563 (May 30, 2019) to 
1.11.892 (Nov 01, 2020).






[jira] [Updated] (HADOOP-17342) Creating a token identifier should not do kerberos name resolution

2020-11-03 Thread Jim Brennan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated HADOOP-17342:
-
Status: Patch Available  (was: Open)

> Creating a token identifier should not do kerberos name resolution
> --
>
> Key: HADOOP-17342
> URL: https://issues.apache.org/jira/browse/HADOOP-17342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.10.1, 3.4.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-17342.001.patch
>
>
> This problem was found and fixed internally for us by [~daryn].
> Creating a token identifier tries to do auth_to_local short username 
> translation. The authentication process creates a blank token identifier for 
> deserializing the wire format. Attempting to resolve an empty username is 
> useless work.
> Discovered the issue during fair call queue backoff testing. The readers are 
> unnecessarily slowed down by this bug.






[jira] [Updated] (HADOOP-17342) Creating a token identifier should not do kerberos name resolution

2020-11-03 Thread Jim Brennan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated HADOOP-17342:
-
Attachment: HADOOP-17342.001.patch

> Creating a token identifier should not do kerberos name resolution
> --
>
> Key: HADOOP-17342
> URL: https://issues.apache.org/jira/browse/HADOOP-17342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.10.1, 3.4.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-17342.001.patch
>
>
> This problem was found and fixed internally for us by [~daryn].
> Creating a token identifier tries to do auth_to_local short username 
> translation. The authentication process creates a blank token identifier for 
> deserializing the wire format. Attempting to resolve an empty username is 
> useless work.
> Discovered the issue during fair call queue backoff testing. The readers are 
> unnecessarily slowed down by this bug.






[jira] [Assigned] (HADOOP-17342) Creating a token identifier should not do kerberos name resolution

2020-11-03 Thread Jim Brennan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan reassigned HADOOP-17342:


Assignee: Jim Brennan

> Creating a token identifier should not do kerberos name resolution
> --
>
> Key: HADOOP-17342
> URL: https://issues.apache.org/jira/browse/HADOOP-17342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.10.1, 3.4.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> This problem was found and fixed internally for us by [~daryn].
> Creating a token identifier tries to do auth_to_local short username 
> translation. The authentication process creates a blank token identifier for 
> deserializing the wire format. Attempting to resolve an empty username is 
> useless work.
> Discovered the issue during fair call queue backoff testing. The readers are 
> unnecessarily slowed down by this bug.







[jira] [Created] (HADOOP-17342) Creating a token identifier should not do kerberos name resolution

2020-11-03 Thread Jim Brennan (Jira)
Jim Brennan created HADOOP-17342:


 Summary: Creating a token identifier should not do kerberos name 
resolution
 Key: HADOOP-17342
 URL: https://issues.apache.org/jira/browse/HADOOP-17342
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 2.10.1, 3.4.0
Reporter: Jim Brennan


This problem was found and fixed internally for us by [~daryn].

Creating a token identifier tries to do auth_to_local short username 
translation. The authentication process creates a blank token identifier for 
deserializing the wire format. Attempting to resolve an empty username is 
useless work.

Discovered the issue during fair call queue backoff testing. The readers are 
unnecessarily slowed down by this bug.
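The fix described above can be sketched generically: do no name resolution when the identifier is constructed, and translate the principal to a short name only when a caller actually asks for it. This is a minimal, hypothetical sketch, not Hadoop's actual TokenIdentifier/auth_to_local code; toShortName() below is a stand-in for the real rule evaluation.

```java
// Hypothetical sketch: defer short-name resolution out of the constructor.
// toShortName() is a stand-in for Hadoop's auth_to_local rule evaluation.
public class LazyTokenIdentifier {
  private final String principal;
  private String shortName; // resolved on first use, then cached

  public LazyTokenIdentifier(String principal) {
    // No resolution here: a blank identifier created only to
    // deserialize the wire format costs nothing.
    this.principal = principal == null ? "" : principal;
  }

  public String getShortName() {
    if (principal.isEmpty()) {
      return ""; // resolving an empty username is useless work
    }
    if (shortName == null) {
      shortName = toShortName(principal);
    }
    return shortName;
  }

  // Stand-in rule: strip the realm from user@REALM.
  private static String toShortName(String p) {
    int at = p.indexOf('@');
    return at < 0 ? p : p.substring(0, at);
  }
}
```

The point of the pattern is that deserialization paths (such as the RPC readers mentioned above) construct many blank identifiers and never call getShortName(), so they never pay for resolution.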









[GitHub] [hadoop] steveloughran commented on pull request #2422: HADOOP-17311. ABFS: Masking SAS signatures from logs

2020-11-03 Thread GitBox


steveloughran commented on pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#issuecomment-721124815


   where did the findbugs error come from? do we need to fix/roll back that 
change?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=507108&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507108
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 03/Nov/20 14:20
Start Date: 03/Nov/20 14:20
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#issuecomment-721124815


   where did the findbugs error come from? do we need to fix/roll back that 
change?





Issue Time Tracking
---

Worklog Id: (was: 507108)
Time Spent: 2h  (was: 1h 50m)

> ABFS: Logs should redact SAS signature
> --
>
> Key: HADOOP-17311
> URL: https://issues.apache.org/jira/browse/HADOOP-17311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Signature part of the SAS should be redacted for security purposes.







[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=507075&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507075
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 03/Nov/20 14:17
Start Date: 03/Nov/20 14:17
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on a change in pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#discussion_r515748434



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -513,4 +509,45 @@ private void parseListFilesResponse(final InputStream 
stream) throws IOException
   private boolean isNullInputStream(InputStream stream) {
 return stream == null ? true : false;
   }
+
+  @VisibleForTesting
+  public String getSignatureMaskedUrlStr() {
+if (this.maskedUrlStr != null) {
+  return this.maskedUrlStr;
+}
+final String urlStr = url.toString();
+final String qpStr = "sig=";

Review comment:
   Create a private static final String, e.g. private static final String 
SIGNATURE_QUERY_PARAM_KEY = "sig=";

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsIoUtils.java
##
@@ -58,6 +58,9 @@ public static void dumpHeadersToDebugLog(final String origin,
 if (key.contains("Cookie")) {
   values = "*cookie info*";
 }
+if (key.equals("sig")) {

Review comment:
   Is a header called "sig" getting added when SAS is used?

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -513,4 +509,45 @@ private void parseListFilesResponse(final InputStream 
stream) throws IOException
   private boolean isNullInputStream(InputStream stream) {
 return stream == null ? true : false;
   }
+
+  @VisibleForTesting
+  public String getSignatureMaskedUrlStr() {
+if (this.maskedUrlStr != null) {
+  return this.maskedUrlStr;
+}
+final String urlStr = url.toString();
+final String qpStr = "sig=";
+final int qpStrIdx = urlStr.indexOf(qpStr);
+if (qpStrIdx < 0) {
+  return urlStr;
+}
+final StringBuilder sb = new StringBuilder();
+sb.append(urlStr, 0, qpStrIdx);
+sb.append(qpStr);
+sb.append("");
+if (qpStrIdx + qpStr.length() < urlStr.length()) {
+  String urlStrSecondPart = urlStr.substring(qpStrIdx + qpStr.length());
+  int idx = urlStrSecondPart.indexOf("&");
+  if (idx > -1) {
+sb.append(urlStrSecondPart.substring(idx));
+  }

Review comment:
   Using string replace should be easier. 
   
   int sigStartIndex = urlStr.indexOf(SIGNATURE_QUERY_PARAM_KEY);
   if (sigStartIndex == -1) {
 // no signature query param in the url
 return urlStr;
   }
   
   sigStartIndex += SIGNATURE_QUERY_PARAM_KEY.length();
   int sigEndIndex = urlStr.indexOf("&", sigStartIndex);
   String sigValue = (sigEndIndex == -1)
   ? urlStr.substring(sigStartIndex)
   : urlStr.substring(sigStartIndex, sigEndIndex);
   
   return urlStr.replace(sigValue, "");







Issue Time Tracking
---

Worklog Id: (was: 507075)
Time Spent: 1h 50m  (was: 1h 40m)

> ABFS: Logs should redact SAS signature
> --
>
> Key: HADOOP-17311
> URL: https://issues.apache.org/jira/browse/HADOOP-17311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Signature part of the SAS should be redacted for security purposes.
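The review suggestions above converge on locating the sig= query parameter and masking only its value. A standalone sketch of that approach follows; the SasUrlMasker class name and the "XXXX" mask string are assumptions for illustration, not the exact code merged for this issue.

```java
// Sketch of the suggested approach: mask the value of the "sig=" query
// parameter. Class name and the "XXXX" mask are illustrative assumptions.
public final class SasUrlMasker {
  private static final String SIGNATURE_QUERY_PARAM_KEY = "sig=";

  private SasUrlMasker() {
  }

  public static String maskSignature(String urlStr) {
    int sigStart = urlStr.indexOf(SIGNATURE_QUERY_PARAM_KEY);
    if (sigStart == -1) {
      return urlStr; // no signature query param in the url
    }
    sigStart += SIGNATURE_QUERY_PARAM_KEY.length();
    int sigEnd = urlStr.indexOf('&', sigStart);
    // Splice by index: keep everything around the value, mask only the value.
    String tail = sigEnd == -1 ? "" : urlStr.substring(sigEnd);
    return urlStr.substring(0, sigStart) + "XXXX" + tail;
  }
}
```

Splicing by index, rather than calling String.replace on the extracted value, avoids accidentally rewriting an identical substring that happens to appear elsewhere in the URL.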







[GitHub] [hadoop] snvijaya commented on a change in pull request #2422: HADOOP-17311. ABFS: Masking SAS signatures from logs

2020-11-03 Thread GitBox


snvijaya commented on a change in pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#discussion_r515748434



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -513,4 +509,45 @@ private void parseListFilesResponse(final InputStream 
stream) throws IOException
   private boolean isNullInputStream(InputStream stream) {
 return stream == null ? true : false;
   }
+
+  @VisibleForTesting
+  public String getSignatureMaskedUrlStr() {
+if (this.maskedUrlStr != null) {
+  return this.maskedUrlStr;
+}
+final String urlStr = url.toString();
+final String qpStr = "sig=";

Review comment:
   Create a private static final String, e.g. private static final String 
SIGNATURE_QUERY_PARAM_KEY = "sig=";

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsIoUtils.java
##
@@ -58,6 +58,9 @@ public static void dumpHeadersToDebugLog(final String origin,
 if (key.contains("Cookie")) {
   values = "*cookie info*";
 }
+if (key.equals("sig")) {

Review comment:
   Is a header called "sig" getting added when SAS is used?

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -513,4 +509,45 @@ private void parseListFilesResponse(final InputStream 
stream) throws IOException
   private boolean isNullInputStream(InputStream stream) {
 return stream == null ? true : false;
   }
+
+  @VisibleForTesting
+  public String getSignatureMaskedUrlStr() {
+if (this.maskedUrlStr != null) {
+  return this.maskedUrlStr;
+}
+final String urlStr = url.toString();
+final String qpStr = "sig=";
+final int qpStrIdx = urlStr.indexOf(qpStr);
+if (qpStrIdx < 0) {
+  return urlStr;
+}
+final StringBuilder sb = new StringBuilder();
+sb.append(urlStr, 0, qpStrIdx);
+sb.append(qpStr);
+sb.append("");
+if (qpStrIdx + qpStr.length() < urlStr.length()) {
+  String urlStrSecondPart = urlStr.substring(qpStrIdx + qpStr.length());
+  int idx = urlStrSecondPart.indexOf("&");
+  if (idx > -1) {
+sb.append(urlStrSecondPart.substring(idx));
+  }

Review comment:
   Using string replace should be easier. 
   
   int sigStartIndex = urlStr.indexOf(SIGNATURE_QUERY_PARAM_KEY);
   if (sigStartIndex == -1) {
 // no signature query param in the url
 return urlStr;
   }
   
   sigStartIndex += SIGNATURE_QUERY_PARAM_KEY.length();
   int sigEndIndex = urlStr.indexOf("&", sigStartIndex);
   String sigValue = (sigEndIndex == -1)
   ? urlStr.substring(sigStartIndex)
   : urlStr.substring(sigStartIndex, sigEndIndex);
   
   return urlStr.replace(sigValue, "");











[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2399: HADOOP-17318. Support concurrent S3A commit jobs with same app attempt ID.

2020-11-03 Thread GitBox


hadoop-yetus removed a comment on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-719856113













[jira] [Work logged] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?focusedWorklogId=507048&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-507048
 ]

ASF GitHub Bot logged work on HADOOP-17318:
---

Author: ASF GitHub Bot
Created on: 03/Nov/20 14:15
Start Date: 03/Nov/20 14:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-719856113









Issue Time Tracking
---

Worklog Id: (was: 507048)
Time Spent: 2h  (was: 1h 50m)

> S3A committer to support concurrent jobs with same app attempt ID & dest dir
> 
>
> Key: HADOOP-17318
> URL: https://issues.apache.org/jira/browse/HADOOP-17318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Reported failure of magic committer block uploads as pending upload ID is 
> unknown. Likely cause: it's been aborted by another job
> # Make it possible to turn off cleanup of pending uploads in magic committer
> # log more about uploads being deleted in committers
> # and upload ID in the S3aBlockOutputStream errors
> There are other concurrency issues when you look close, see SPARK-33230
> * magic committer uses app attempt ID as path under __magic; if there are 
> duplicate then they will conflict
> * staging committer local temp dir uses app attempt id
> Fix will be to have a job UUID which for spark will be picked up from the 
> SPARK-33230 changes, (option to self-generate in job setup for hadoop 3.3.1+ 
> older spark builds); fall back to app-attempt *unless that fallback has been 
> disabled*
> MR: configure to use app attempt ID
> Spark: configure to fail job setup if app attempt ID is the source of a job 
> uuid
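The proposed selection order (use a supplied job UUID if present, else self-generate for hadoop 3.3.1+, else fall back to the app attempt ID unless that fallback has been disabled) can be sketched as plain decision logic. Names below are hypothetical; this is not the committer's actual code.

```java
import java.util.UUID;

// Hypothetical sketch of the job-UUID selection policy described above.
public final class JobUuidPolicy {
  private JobUuidPolicy() {
  }

  public static String chooseJobUuid(String suppliedUuid,
                                     String appAttemptId,
                                     boolean selfGenerate,
                                     boolean allowAttemptFallback) {
    if (suppliedUuid != null && !suppliedUuid.isEmpty()) {
      return suppliedUuid;                   // e.g. propagated per SPARK-33230
    }
    if (selfGenerate) {
      return UUID.randomUUID().toString();   // unique even with duplicate attempt IDs
    }
    if (allowAttemptFallback) {
      return appAttemptId;                   // legacy behaviour; may collide
    }
    throw new IllegalStateException(
        "No job UUID supplied and app-attempt fallback is disabled");
  }
}
```

With this shape, MR can run with the fallback enabled, while a Spark deployment can disable it so that job setup fails fast instead of risking two jobs sharing one attempt ID and destination directory.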







[GitHub] [hadoop] tangzhankun edited a comment on pull request #2401: YARN-10469. YARN-UI2 The accuracy of the percentage values in the same chart on the YARN 'Cluster OverView' page are inconsistent

2020-11-03 Thread GitBox


tangzhankun edited a comment on pull request #2401:
URL: https://github.com/apache/hadoop/pull/2401#issuecomment-720364635


   Merged it just now.
   @jiwq Thanks for the review. 
   @akiyamaneko Thanks for the contribution!









[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=506990&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506990
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 03/Nov/20 14:08
Start Date: 03/Nov/20 14:08
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on a change in pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417#discussion_r515741344



##
File path: hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
##
@@ -357,6 +357,34 @@ The Huge File tests validate Azure storages's ability to 
handle large files —t
 Tests at this scale are slow: they are best executed from hosts running in
 the cloud infrastructure where the storage endpoint is based.
 
+##No test no review: Run different combinations of tests using the runtests.sh 
script
+
+This is the expected way in which the tests have to be ran before raising a PR.

Review comment:
   Change to:
   To simplify the testing across various authentication and features 
combinations that are mandatory for a PR, script 
`dev-support/testrun-scripts/runTest.sh` should be used. Once the script is 
updated with relevant config settings for various test combinations, it will:
   1. Auto-generate configs specific to each test combination
   2. Run tests for all combinations
   3. Summarize results across all the test combination runs.
   
   As a pre-requisite step, fill config values for test accounts and credentials 
needed for authentication in `src/test/resources/azure-auth-keys.xml.template`  
and rename as `src/test/resources/azure-auth-keys.xml`.
   
   **To add a new test combination**: Mandatory test combinations for PR 
validation are already pre-filled in `dev-support/testrun-scripts/runTest.sh`. 
If a new one needs to be added, add a combination set within 
`dev-support/testrun-scripts/runTest.sh` similar to the ones already defined and
   1. Provide a new combination name
   2. Update properties and values array which need to be effective for the 
test combination
   3. Call generateConfigs
   
   **To run PR validation**: Running command 
   - `runTest.sh -testCombination #combinationname#` or `runTest.sh -tc 
#combinationname#` : will generate configurations effective for that test 
combination and also run the test.
   - `runTest.sh -testCombination all` or `runTest.sh -tc all` : will generate 
configurations for each of the combinations defined and run tests.
   
   **Test logs**: 
   
   **To generate config for use in IDE**: Running command 
   `runTest.sh -generateConfig #combinationname#` or `runTest.sh -gc 
#combinationname#` 
   will update the effective config relevant for the specific test combination. 
Hence the same config files used by the mvn test runs can be used for IDE 
without any manual updates needed within config file.
   
   **Other command line options**: 
   1. Thread count : ABFS mvn tests are run in parallel mode. Tests by default 
are run with 8 thread count. It can be changed by providing -t #ThreadCount#

##
File path: hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
##
@@ -357,6 +357,34 @@ The Huge File tests validate Azure storages's ability to 
handle large files —t
 Tests at this scale are slow: they are best executed from hosts running in
 the cloud infrastructure where the storage endpoint is based.
 
+##No test no review: Run different combinations of tests using the runtests.sh 
script

Review comment:
   Change to 
    Generating test run configurations and test triggers over various 
config combinations

##
File path: hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
##
@@ -357,6 +357,34 @@ The Huge File tests validate Azure storages's ability to 
handle large files —t
 Tests at this scale are slow: they are best executed from hosts running in
 the cloud infrastructure where the storage endpoint is based.
 
+##No test no review: Run different combinations of tests using the runtests.sh 
script
+
+This is the expected way in which the tests have to be ran before raising a PR.
+The script `runtests.sh` contain template for 3 combinations of tests. Ensure
+the auth configs for all the accounts used for testing are provided in
+azure-auth-keys.xml. In case any new flags or properties are introduced
+with the code change, add the combinations with the possible configurations
+into the `runtests.sh`. The thread count can be specified as the command line
+argument for the script. By default the same will be 8. -n option can be
+specified if build is not required prior to the tests.
+
+Adding a combination of tests involves setting the variable combination (ex:
HNS-OAuth) and specifying the specific configurations for the particular
+combination with 2 arrays namely properties and values. Specify the property
+names 

[GitHub] [hadoop] snvijaya commented on a change in pull request #2417: HADOOP-17191. ABFS: Run the tests with various combinations of configurations and publish a consolidated results

2020-11-03 Thread GitBox


snvijaya commented on a change in pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417#discussion_r515741344



##
File path: hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
##
@@ -357,6 +357,34 @@ The Huge File tests validate Azure storages's ability to 
handle large files —t
 Tests at this scale are slow: they are best executed from hosts running in
 the cloud infrastructure where the storage endpoint is based.
 
+##No test no review: Run different combinations of tests using the runtests.sh 
script
+
+This is the expected way in which the tests have to be ran before raising a PR.

Review comment:
   Change to:
   To simplify the testing across various authentication and features 
combinations that are mandatory for a PR, script 
`dev-support/testrun-scripts/runTest.sh` should be used. Once the script is 
updated with relevant config settings for various test combinations, it will:
   1. Auto-generate configs specific to each test combination
   2. Run tests for all combinations
   3. Summarize results across all the test combination runs.
   
   As a pre-requisite step, fill config values for test accounts and credentials 
needed for authentication in `src/test/resources/azure-auth-keys.xml.template`  
and rename as `src/test/resources/azure-auth-keys.xml`.
   
   **To add a new test combination**: Mandatory test combinations for PR 
validation are already pre-filled in `dev-support/testrun-scripts/runTest.sh`. 
If a new one needs to be added, add a combination set within 
`dev-support/testrun-scripts/runTest.sh` similar to the ones already defined and
   1. Provide a new combination name
   2. Update properties and values array which need to be effective for the 
test combination
   3. Call generateConfigs
   
   **To run PR validation**: Running command 
   - `runTest.sh -testCombination #combinationname#` or `runTest.sh -tc 
#combinationname#` : will generate configurations effective for that test 
combination and also run the test.
   - `runTest.sh -testCombination all` or `runTest.sh -tc all` : will generate 
configurations for each of the combinations defined and run tests.
   
   **Test logs**: 
   
   **To generate config for use in IDE**: Running command 
   `runTest.sh -generateConfig #combinationname#` or `runTest.sh -gc 
#combinationname#` 
   will update the effective config relevant for the specific test combination. 
Hence the same config files used by the mvn test runs can be used for IDE 
without any manual updates needed within config file.
   
   **Other command line options**: 
   1. Thread count : ABFS mvn tests are run in parallel mode. Tests by default 
are run with 8 thread count. It can be changed by providing -t #ThreadCount#

##
File path: hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
##
@@ -357,6 +357,34 @@ The Huge File tests validate Azure storages's ability to 
handle large files —t
 Tests at this scale are slow: they are best executed from hosts running in
 the cloud infrastructure where the storage endpoint is based.
 
+##No test no review: Run different combinations of tests using the runtests.sh 
script

Review comment:
   Change to 
    Generating test run configurations and test triggers over various 
config combinations

##
File path: hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
##
@@ -357,6 +357,34 @@ The Huge File tests validate Azure storages's ability to 
handle large files —t
 Tests at this scale are slow: they are best executed from hosts running in
 the cloud infrastructure where the storage endpoint is based.
 
+##No test no review: Run different combinations of tests using the runtests.sh 
script
+
+This is the expected way in which the tests have to be ran before raising a PR.
+The script `runtests.sh` contain template for 3 combinations of tests. Ensure
+the auth configs for all the accounts used for testing are provided in
+azure-auth-keys.xml. In case any new flags or properties are introduced
+with the code change, add the combinations with the possible configurations
+into the `runtests.sh`. The thread count can be specified as the command line
+argument for the script. By default the same will be 8. -n option can be
+specified if build is not required prior to the tests.
+
+Adding a combination of tests involves setting the variable combination (ex:
HNS-OAuth) and specifying the specific configurations for the particular
+combination with 2 arrays namely properties and values. Specify the property
+names within the array properties and corresponding values in the values
+array. The property and value is determined by the array index. The value for
+the property mentioned at index 1 of array properties should be specified at
+index 1 of the array values. Call the function generateconfigs once the 3
+values mentioned are set. Now the script `runtests.sh` is ready to be ran.
+
+Once the tests are completed, logs will be present in 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2417: HADOOP-17191. ABFS: Run the tests with various combinations of configurations and publish a consolidated results

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417#issuecomment-721103960


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 15s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 30s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  74m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2417 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint xml |
   | uname | Linux f8583c3523b7 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e580280a8b0 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/3/testReport/ |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/3/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   









[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=506971&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506971
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 03/Nov/20 14:06
Start Date: 03/Nov/20 14:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417#issuecomment-721103960


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 15s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 30s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  74m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2417 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint xml |
   | uname | Linux f8583c3523b7 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e580280a8b0 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/3/testReport/ |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/3/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506971)
Time Spent: 7h 40m  (was: 7.5h)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and 

[GitHub] [hadoop] Jing9 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-11-03 Thread GitBox


Jing9 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r516356685



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MountVolumeInfo.java
##
@@ -0,0 +1,101 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
+
+import java.nio.channels.ClosedChannelException;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * MountVolumeInfo is a wrapper of
+ * detailed volume information for MountVolumeMap.
+ */
+@InterfaceAudience.Private
+class MountVolumeInfo {
+  private ConcurrentMap<StorageType, FsVolumeImpl>
+      storageTypeVolumeMap;
+  private double reservedForArchiveDefault;
+
+  MountVolumeInfo(Configuration conf) {
+    storageTypeVolumeMap = new ConcurrentHashMap<>();
+    reservedForArchiveDefault = conf.getDouble(
+        DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE,
+        DFSConfigKeys
+            .DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE_DEFAULT);
+    if (reservedForArchiveDefault > 1) {
+      FsDatasetImpl.LOG.warn("Value of reserve-for-archival is > 100%." +
+          " Setting it to 100%.");
+      reservedForArchiveDefault = 1;
+    }
+  }
+
+  FsVolumeReference getVolumeRef(StorageType storageType) {
+    try {
+      FsVolumeImpl volumeImpl = storageTypeVolumeMap
+          .getOrDefault(storageType, null);
+      if (volumeImpl != null) {
+        return volumeImpl.obtainReference();
+      }
+    } catch (ClosedChannelException e) {
+      FsDatasetImpl.LOG.warn("Volume closed when getting volume" +
+          " by storage type: " + storageType);
+    }
+    return null;
+  }
+
+  /**
+   * Return configured capacity ratio.
+   * If the volume is the only one on the mount,
+   * return 1 to avoid unnecessary allocation.

Review comment:
We can add a TODO here explaining that we plan to support different ratios 
per mount point.
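
The clamping behavior in the constructor above can be sketched in isolation. This is only an illustration, not code from the patch: the class and method names below are hypothetical, and it simply shows how a reserve-for-archive fraction (clamped to the range [0, 1], as the patch's warning path does for values above 100%) would split one mount's capacity between ARCHIVE and DISK:

```java
// Hypothetical sketch of the reserve-for-archive capacity split.
public class CapacitySplit {

  // Clamp the configured ratio to [0, 1], mirroring the patch's
  // "Setting it to 100%" warning path for values > 1.
  static double clamp(double ratio) {
    if (ratio > 1) {
      return 1;
    }
    return Math.max(ratio, 0);
  }

  /** Capacity reserved for ARCHIVE on a mount of the given size. */
  static long archiveCapacity(long mountCapacity, double reservedForArchive) {
    return (long) (mountCapacity * clamp(reservedForArchive));
  }

  /** Remaining capacity left for DISK on the same mount. */
  static long diskCapacity(long mountCapacity, double reservedForArchive) {
    return mountCapacity - archiveCapacity(mountCapacity, reservedForArchive);
  }

  public static void main(String[] args) {
    long mount = 1_000_000L;
    System.out.println(archiveCapacity(mount, 0.25)); // 250000
    System.out.println(diskCapacity(mount, 1.5));     // 0: ratio clamped to 1
  }
}
```

A single default ratio applies to every mount here, which is exactly why the review comment suggests a TODO: per-mount ratios would replace the one `reservedForArchive` value with a per-mount lookup.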

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MountVolumeInfo.java
##
@@ -0,0 +1,101 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
+
+import java.nio.channels.ClosedChannelException;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * MountVolumeInfo is a wrapper of
+ * detailed volume information for MountVolumeMap.
+ */
+@InterfaceAudience.Private
+class MountVolumeInfo {
+  private ConcurrentMap<StorageType, FsVolumeImpl>
+      storageTypeVolumeMap;
+  private double reservedForArchiveDefault;
+
+  MountVolumeInfo(Configuration conf) {
+    storageTypeVolumeMap = new ConcurrentHashMap<>();
+    reservedForArchiveDefault = conf.getDouble(
+

[GitHub] [hadoop] hadoop-yetus commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-720987925


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   3m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   3m 11s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 40s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  cc  |   4m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  cc  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 48s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m  4s | 
[/diff-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/15/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 698 unchanged - 0 fixed = 
699 total (was 698)  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   5m 51s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 13s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 112m 21s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 266m 28s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2288 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc buflint bufcompat xml |
   | uname | Linux 6aae26552476 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 

[GitHub] [hadoop] zehaoc2 commented on pull request #2328: HDFS-13009. Allow creation of encryption zone if directory is not empty. Contributed by Rushabh Shah.

2020-11-03 Thread GitBox


zehaoc2 commented on pull request #2328:
URL: https://github.com/apache/hadoop/pull/2328#issuecomment-720746804


   @jojochuang Thanks for commenting. We're working on providing a solution for 
those concerns, and will follow up in the jira once we figure that out. 






-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2399: HADOOP-17318. Support concurrent S3A commit jobs with same app attempt ID.

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-720614677


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  20m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   1m 31s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 14s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  25m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  22m 42s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 23s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 4 new + 48 unchanged - 1 fixed = 52 total (was 
49)  |
   | +1 :green_heart: |  mvnsite  |   2m 56s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/whitespace-eol.txt)
 |  The patch has 5 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  18m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | -1 :x: |  javadoc  |   0m 38s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 
with JDK Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 generated 4 new + 
88 unchanged - 0 fixed = 92 total (was 88)  |
   | +1 :green_heart: |  findbugs  |   4m 18s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  11m  6s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   1m 48s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 224m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestLdapGroupsMapping |
   |   | hadoop.fs.s3a.commit.staging.TestStagingCommitter |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker 

[jira] [Work logged] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-11-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?focusedWorklogId=506927=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506927
 ]

ASF GitHub Bot logged work on HADOOP-17318:
---

Author: ASF GitHub Bot
Created on: 03/Nov/20 14:01
Start Date: 03/Nov/20 14:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-720614677


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  20m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   1m 31s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 14s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  25m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  22m 42s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 23s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 4 new + 48 unchanged - 1 fixed = 52 total (was 
49)  |
   | +1 :green_heart: |  mvnsite  |   2m 56s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/whitespace-eol.txt)
 |  The patch has 5 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  18m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | -1 :x: |  javadoc  |   0m 38s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 
with JDK Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 generated 4 new + 
88 unchanged - 0 fixed = 92 total (was 88)  |
   | +1 :green_heart: |  findbugs  |   4m 18s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  11m  6s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   1m 48s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/4/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  

[GitHub] [hadoop] hadoop-yetus commented on pull request #2225: HDFS-15329. Provide FileContext based ViewFSOverloadScheme implementation

2020-11-03 Thread GitBox


hadoop-yetus commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-720786035


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  4s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  19m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   2m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   3m 38s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m  7s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 34s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/8/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   1m 13s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/8/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   1m 13s | 
[/patch-compile-root-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/8/artifact/out/patch-compile-root-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1. 
 |
   | -1 :x: |  javac  |   1m 13s | 
[/patch-compile-root-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/8/artifact/out/patch-compile-root-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1. 
 |
   | -1 :x: |  compile  |   1m  8s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/8/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.  |
   | -1 :x: |  javac  |   1m  8s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/8/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 16s |  |  root: The patch generated 
0 new + 51 unchanged - 1 fixed = 51 total (was 52)  |
   | -1 :x: |  mvnsite  |   0m 38s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/8/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   1m 26s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/8/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | -1 :x: |  shadedclient  |   0m 54s |  |  patch has errors when building 
and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 57s | 
