[jira] [Updated] (HADOOP-17106) Replace Guava Joiner with Java8 String Join

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17106:
---
Attachment: HADOOP-17106.001.patch
Status: Patch Available  (was: In Progress)

> Replace Guava Joiner with Java8 String Join
> ---
>
> Key: HADOOP-17106
> URL: https://issues.apache.org/jira/browse/HADOOP-17106
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17106.001.patch
>
>
> Replace {{com.google.common.base.Joiner}} with {{String.join}}.
>  
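> A minimal before/after sketch of the intended change (the list and variable
> names below are illustrative only, not taken from the attached patch):
> {code:java}
> import com.google.common.base.Joiner;
> 
> import java.util.Arrays;
> import java.util.List;
> 
> List<String> hosts = Arrays.asList("nn1", "nn2", "jn1");
> 
> // Before (Guava):
> String guavaJoined = Joiner.on(",").join(hosts);
> 
> // After (Java 8):
> String jdkJoined = String.join(",", hosts);
> {code}
> The two forms produce the same output for simple cases; call sites that rely
> on {{Joiner.skipNulls()}} or {{Joiner.useForNull()}} need a stream-based
> equivalent such as {{Collectors.joining()}} with an explicit filter instead.
> 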
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Joiner' in project with mask 
> '*.java'
> Found Occurrences  (103 usages found)
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> SimpleKMSAuditLogger.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.fs  (1 usage found)
> TestPath.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> org.apache.hadoop.fs.s3a  (1 usage found)
> StorageStatisticsTracker.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> org.apache.hadoop.ha  (1 usage found)
> TestHAAdmin.java  (1 usage found)
> 34 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs  (8 usages found)
> DFSClient.java  (1 usage found)
> 196 import com.google.common.base.Joiner;
> DFSTestUtil.java  (1 usage found)
> 76 import com.google.common.base.Joiner;
> DFSUtil.java  (1 usage found)
> 108 import com.google.common.base.Joiner;
> DFSUtilClient.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> HAUtil.java  (1 usage found)
> 59 import com.google.common.base.Joiner;
> MiniDFSCluster.java  (1 usage found)
> 145 import com.google.common.base.Joiner;
> StripedFileTestUtil.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestDFSUpgrade.java  (1 usage found)
> 53 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocol  (1 usage found)
> LayoutFlags.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocolPB  (1 usage found)
> TestPBHelper.java  (1 usage found)
> 118 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal  (1 usage found)
> MiniJournalCluster.java  (1 usage found)
> 43 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
> AsyncLoggerSet.java  (1 usage found)
> 38 import com.google.common.base.Joiner;
> QuorumCall.java  (1 usage found)
> 32 import com.google.common.base.Joiner;
> QuorumException.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> QuorumJournalManager.java  (1 usage found)
> 62 import com.google.common.base.Joiner;
> TestQuorumCall.java  (1 usage found)
> 29 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
> HostSet.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> TestBlockManager.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestBlockReportRateLimiting.java  (1 usage found)
> 24 import com.google.common.base.Joiner;
> TestPendingDataNodeMessages.java  (1 usage found)
> 41 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.common  (1 usage found)
> StorageInfo.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.datanode  (7 usages found)
> BlockPoolManager.java  (1 usage found)
> 32 import com.google.common.base.Joiner;
> BlockRecoveryWorker.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> BPServiceActor.java  (1 usage found)
> 75 import com.google.common.base.Joiner;
> DataNode.java  (1 usage found)
> 226 import com.google.common.base.Joiner;
> ShortCircuitRegistry.java  (1 usage found)
> 49 import com.google.common.base.Joiner;
> TestDataNodeHotSwapVolumes.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> TestRefreshNamenodes.java  (1 usage found)
> 35 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl  (1 usage found)
> 

[jira] [Work started] (HADOOP-17106) Replace Guava Joiner with Java8 String Join

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17106 started by Ahmed Hussein.
--
> Replace Guava Joiner with Java8 String Join
> ---
>
> Key: HADOOP-17106
> URL: https://issues.apache.org/jira/browse/HADOOP-17106
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Replace {{com.google.common.base.Joiner}} with {{String.join}}.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Joiner' in project with mask 
> '*.java'
> Found Occurrences  (103 usages found)
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> SimpleKMSAuditLogger.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.fs  (1 usage found)
> TestPath.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> org.apache.hadoop.fs.s3a  (1 usage found)
> StorageStatisticsTracker.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> org.apache.hadoop.ha  (1 usage found)
> TestHAAdmin.java  (1 usage found)
> 34 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs  (8 usages found)
> DFSClient.java  (1 usage found)
> 196 import com.google.common.base.Joiner;
> DFSTestUtil.java  (1 usage found)
> 76 import com.google.common.base.Joiner;
> DFSUtil.java  (1 usage found)
> 108 import com.google.common.base.Joiner;
> DFSUtilClient.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> HAUtil.java  (1 usage found)
> 59 import com.google.common.base.Joiner;
> MiniDFSCluster.java  (1 usage found)
> 145 import com.google.common.base.Joiner;
> StripedFileTestUtil.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestDFSUpgrade.java  (1 usage found)
> 53 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocol  (1 usage found)
> LayoutFlags.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocolPB  (1 usage found)
> TestPBHelper.java  (1 usage found)
> 118 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal  (1 usage found)
> MiniJournalCluster.java  (1 usage found)
> 43 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
> AsyncLoggerSet.java  (1 usage found)
> 38 import com.google.common.base.Joiner;
> QuorumCall.java  (1 usage found)
> 32 import com.google.common.base.Joiner;
> QuorumException.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> QuorumJournalManager.java  (1 usage found)
> 62 import com.google.common.base.Joiner;
> TestQuorumCall.java  (1 usage found)
> 29 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
> HostSet.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> TestBlockManager.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestBlockReportRateLimiting.java  (1 usage found)
> 24 import com.google.common.base.Joiner;
> TestPendingDataNodeMessages.java  (1 usage found)
> 41 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.common  (1 usage found)
> StorageInfo.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.datanode  (7 usages found)
> BlockPoolManager.java  (1 usage found)
> 32 import com.google.common.base.Joiner;
> BlockRecoveryWorker.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> BPServiceActor.java  (1 usage found)
> 75 import com.google.common.base.Joiner;
> DataNode.java  (1 usage found)
> 226 import com.google.common.base.Joiner;
> ShortCircuitRegistry.java  (1 usage found)
> 49 import com.google.common.base.Joiner;
> TestDataNodeHotSwapVolumes.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> TestRefreshNamenodes.java  (1 usage found)
> 35 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl  (1 usage found)
> FsVolumeImpl.java  (1 usage found)
> 90 import com.google.common.base.Joiner;
> 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2117: YARN-10332: RESOURCE_UPDATE event was repeatedly registered in DECOMM…

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2117:
URL: https://github.com/apache/hadoop/pull/2117#issuecomment-652202882


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 39s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 52s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 46s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-yarn-server-resourcemanager in 
trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 51s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-yarn-server-resourcemanager in the 
patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m 46s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  93m 37s |  hadoop-yarn-server-resourcemanager in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 167m 56s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2117/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2117 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3b748988fe85 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2117/1/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2117/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2117/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2060: HADOOP-17032. Fix getContentSummary in ViewFileSystem to handle multiple children mountpoints pointing to different filesystems

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2060:
URL: https://github.com/apache/hadoop/pull/2060#issuecomment-652184149


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |  21m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 43s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 16s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 37s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 32s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m  9s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  25m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 41s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  22m 41s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 12s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 47s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 36s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 193m  9s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2060 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 23e32fdacb1c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/6/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/6/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/6/testReport/ |
   | Max. process+thread count | 2738 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/6/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#issuecomment-652182483


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m  7s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  8s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  19m 20s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  4s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 20s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-hdfs-rbf in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 29s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 48s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 56s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  23m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  7s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  20m  7s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  2s |  root: The patch generated 2 new 
+ 60 unchanged - 2 fixed = 62 total (was 62)  |
   | -1 :x: |  mvnsite  |   0m 36s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 52s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-hdfs-rbf in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  findbugs  |   0m 38s |  hadoop-hdfs-rbf in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  11m 27s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   0m 38s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 186m 43s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.ipc.TestIPC |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2110/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2110 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 895a36dcf8fc 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2110/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#issuecomment-652173646


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |  28m 59s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 10s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 52s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  18m  8s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 54s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  7s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-hdfs-rbf in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 20s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 29s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 36s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  20m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 57s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  19m 57s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  0s |  root: The patch generated 3 new 
+ 60 unchanged - 2 fixed = 63 total (was 62)  |
   | -1 :x: |  mvnsite  |   0m 42s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-hdfs-rbf in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  findbugs  |   0m 39s |  hadoop-hdfs-rbf in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 55s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |   0m 36s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 212m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2110/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2110 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 8a17ef9069ae 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2110/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 

[jira] [Updated] (HADOOP-17106) Replace Guava Joiner with Java8 String Join

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17106:
---
Description: 
Replace {{com.google.common.base.Joiner}} with {{String.join}}.

 
{code:java}
Targets
Occurrences of 'com.google.common.base.Joiner' in project with mask '*.java'
Found Occurrences  (103 usages found)
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
SimpleKMSAuditLogger.java  (1 usage found)
26 import com.google.common.base.Joiner;
org.apache.hadoop.fs  (1 usage found)
TestPath.java  (1 usage found)
37 import com.google.common.base.Joiner;
org.apache.hadoop.fs.s3a  (1 usage found)
StorageStatisticsTracker.java  (1 usage found)
25 import com.google.common.base.Joiner;
org.apache.hadoop.ha  (1 usage found)
TestHAAdmin.java  (1 usage found)
34 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs  (8 usages found)
DFSClient.java  (1 usage found)
196 import com.google.common.base.Joiner;
DFSTestUtil.java  (1 usage found)
76 import com.google.common.base.Joiner;
DFSUtil.java  (1 usage found)
108 import com.google.common.base.Joiner;
DFSUtilClient.java  (1 usage found)
20 import com.google.common.base.Joiner;
HAUtil.java  (1 usage found)
59 import com.google.common.base.Joiner;
MiniDFSCluster.java  (1 usage found)
145 import com.google.common.base.Joiner;
StripedFileTestUtil.java  (1 usage found)
20 import com.google.common.base.Joiner;
TestDFSUpgrade.java  (1 usage found)
53 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.protocol  (1 usage found)
LayoutFlags.java  (1 usage found)
26 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.protocolPB  (1 usage found)
TestPBHelper.java  (1 usage found)
118 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.qjournal  (1 usage found)
MiniJournalCluster.java  (1 usage found)
43 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
AsyncLoggerSet.java  (1 usage found)
38 import com.google.common.base.Joiner;
QuorumCall.java  (1 usage found)
32 import com.google.common.base.Joiner;
QuorumException.java  (1 usage found)
25 import com.google.common.base.Joiner;
QuorumJournalManager.java  (1 usage found)
62 import com.google.common.base.Joiner;
TestQuorumCall.java  (1 usage found)
29 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
HostSet.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestBlockManager.java  (1 usage found)
20 import com.google.common.base.Joiner;
TestBlockReportRateLimiting.java  (1 usage found)
24 import com.google.common.base.Joiner;
TestPendingDataNodeMessages.java  (1 usage found)
41 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.common  (1 usage found)
StorageInfo.java  (1 usage found)
37 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.datanode  (7 usages found)
BlockPoolManager.java  (1 usage found)
32 import com.google.common.base.Joiner;
BlockRecoveryWorker.java  (1 usage found)
21 import com.google.common.base.Joiner;
BPServiceActor.java  (1 usage found)
75 import com.google.common.base.Joiner;
DataNode.java  (1 usage found)
226 import com.google.common.base.Joiner;
ShortCircuitRegistry.java  (1 usage found)
49 import com.google.common.base.Joiner;
TestDataNodeHotSwapVolumes.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestRefreshNamenodes.java  (1 usage found)
35 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl  (1 usage found)
FsVolumeImpl.java  (1 usage found)
90 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.namenode  (13 usages found)
FileJournalManager.java  (1 usage found)
49 import com.google.common.base.Joiner;
FSDirectory.java  (1 usage found)
24 import com.google.common.base.Joiner;
FSEditLogLoader.java  (1 usage found)
120 import com.google.common.base.Joiner;
FSEditLogOp.java  (1 usage found)
141 import com.google.common.base.Joiner;
FSImage.java  (1 usage found)
78 import com.google.common.base.Joiner;
FSImageTestUtil.java  (1 usage 

[jira] [Created] (HADOOP-17106) Replace Guava Joiner with Java8 String Join

2020-06-30 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17106:
--

 Summary: Replace Guava Joiner with Java8 String Join
 Key: HADOOP-17106
 URL: https://issues.apache.org/jira/browse/HADOOP-17106
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Replace {{com.google.common.base.Joiner}} with {{String.join}}.

 
{code:java}
Targets
Occurrences of 'com.google.common.base.Joiner' in project with mask '*.java'
Found Occurrences  (103 usages found)
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
SimpleKMSAuditLogger.java  (1 usage found)
26 import com.google.common.base.Joiner;
org.apache.hadoop.fs  (1 usage found)
TestPath.java  (1 usage found)
37 import com.google.common.base.Joiner;
org.apache.hadoop.fs.s3a  (1 usage found)
StorageStatisticsTracker.java  (1 usage found)
25 import com.google.common.base.Joiner;
org.apache.hadoop.ha  (1 usage found)
TestHAAdmin.java  (1 usage found)
34 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs  (8 usages found)
DFSClient.java  (1 usage found)
196 import com.google.common.base.Joiner;
DFSTestUtil.java  (1 usage found)
76 import com.google.common.base.Joiner;
DFSUtil.java  (1 usage found)
108 import com.google.common.base.Joiner;
DFSUtilClient.java  (1 usage found)
20 import com.google.common.base.Joiner;
HAUtil.java  (1 usage found)
59 import com.google.common.base.Joiner;
MiniDFSCluster.java  (1 usage found)
145 import com.google.common.base.Joiner;
StripedFileTestUtil.java  (1 usage found)
20 import com.google.common.base.Joiner;
TestDFSUpgrade.java  (1 usage found)
53 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.protocol  (1 usage found)
LayoutFlags.java  (1 usage found)
26 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.protocolPB  (1 usage found)
TestPBHelper.java  (1 usage found)
118 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.qjournal  (1 usage found)
MiniJournalCluster.java  (1 usage found)
43 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
AsyncLoggerSet.java  (1 usage found)
38 import com.google.common.base.Joiner;
QuorumCall.java  (1 usage found)
32 import com.google.common.base.Joiner;
QuorumException.java  (1 usage found)
25 import com.google.common.base.Joiner;
QuorumJournalManager.java  (1 usage found)
62 import com.google.common.base.Joiner;
TestQuorumCall.java  (1 usage found)
29 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
HostSet.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestBlockManager.java  (1 usage found)
20 import com.google.common.base.Joiner;
TestBlockReportRateLimiting.java  (1 usage found)
24 import com.google.common.base.Joiner;
TestPendingDataNodeMessages.java  (1 usage found)
41 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.common  (1 usage found)
StorageInfo.java  (1 usage found)
37 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.datanode  (7 usages found)
BlockPoolManager.java  (1 usage found)
32 import com.google.common.base.Joiner;
BlockRecoveryWorker.java  (1 usage found)
21 import com.google.common.base.Joiner;
BPServiceActor.java  (1 usage found)
75 import com.google.common.base.Joiner;
DataNode.java  (1 usage found)
226 import com.google.common.base.Joiner;
ShortCircuitRegistry.java  (1 usage found)
49 import com.google.common.base.Joiner;
TestDataNodeHotSwapVolumes.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestRefreshNamenodes.java  (1 usage found)
35 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl  (1 usage found)
FsVolumeImpl.java  (1 usage found)
90 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.namenode  (13 usages found)
FileJournalManager.java  (1 usage found)
49 import com.google.common.base.Joiner;
FSDirectory.java  (1 usage found)
24 import com.google.common.base.Joiner;
FSEditLogLoader.java  (1 usage found)
120 import com.google.common.base.Joiner;
FSEditLogOp.java  (1 usage found)
141 import com.google.common.base.Joiner;
FSImage.java  (1 usage found)
78 import com.google.common.base.Joiner;
FSImageTestUtil.java  (1 usage found)
66 import com.google.common.base.Joiner;
NameNode.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestAuditLogAtDebug.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestCheckpoint.java  (1 usage found)
97 import com.google.common.base.Joiner;
TestFileJournalManager.java  (1 usage found)
52 import com.google.common.base.Joiner;
TestNNStorageRetentionFunctional.java  (1 usage found)
39 import com.google.common.base.Joiner;
TestNNStorageRetentionManager.java  (1 usage found)
53 import com.google.common.base.Joiner;
TestProtectedDirectories.java  (1 usage found)
21 import com.google.common.base.Joiner;

[jira] [Work started] (HADOOP-17098) Reduce Guava dependency in Hadoop source code

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17098 started by Ahmed Hussein.
--
> Reduce Guava dependency in Hadoop source code
> -
>
> Key: HADOOP-17098
> URL: https://issues.apache.org/jira/browse/HADOOP-17098
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Relying on the Guava implementation in Hadoop has been painful due to
> compatibility and vulnerability issues.
>  Guava updates tend to break or deprecate APIs, which has made it hard to maintain
> backward compatibility across Hadoop versions and clients/downstreams.
> Since 3.x requires Java 8+, Java 8 features should be preferred over Guava, reducing
> the footprint and making the source code more stable.
> This jira should serve as an umbrella for an incremental effort to reduce
> the usage of Guava in the source code, with subtasks to replace Guava
> classes with Java features.
> Furthermore, it will be good to add a rule in the pre-commit build that warns
> against introducing new Guava usages in certain modules.
> Anyone willing to take part in this code refactoring has to:
>  # Focus on one module at a time in order to reduce conflicts and the
> size of the patch. This will significantly help the reviewers.
>  # Run all the unit tests related to the module affected by the change.
> It is critical to verify that the change does not break the unit tests or
> cause a stable test case to become flaky.
>  
> A list of sub tasks replacing Guava APIs with java8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()     java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()  java.util.Base64
> com.google.common.base.Joiner.on()             java.lang.String#join() or
>                                                java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()           java.util.Optional#of()
> com.google.common.base.Optional#absent()       java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable() java.util.Optional#ofNullable()
> com.google.common.base.Optional                java.util.Optional
> com.google.common.base.Predicate               java.util.function.Predicate
> com.google.common.base.Function                java.util.function.Function
> com.google.common.base.Supplier                java.util.function.Supplier
> {code}
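> 
> As a rough sketch of the mechanical change each sub-task involves (the
> variable names below are made up for the example, not taken from any patch):
> {code:java}
> import java.util.Base64;
> import java.util.Optional;
> import java.util.function.Supplier;
> 
> byte[] payload = "token".getBytes();
> 
> // com.google.common.io.BaseEncoding#base64() -> java.util.Base64
> String encoded = Base64.getEncoder().encodeToString(payload);
> byte[] decoded = Base64.getDecoder().decode(encoded);
> 
> // com.google.common.base.Optional#fromNullable() -> java.util.Optional#ofNullable()
> String configuredHost = null;
> Optional<String> maybeHost = Optional.ofNullable(configuredHost);
> String host = maybeHost.orElse("localhost");  // Guava's or() becomes orElse()
> 
> // com.google.common.base.Supplier -> java.util.function.Supplier
> Supplier<StringBuilder> builderSupplier = StringBuilder::new;
> {code}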
>  
> I also vote for replacing {{Preconditions}} with either a wrapper or
> Apache Commons Lang.
> I believe you guys have dealt with Guava compatibilities in the past and 
> probably have better insights. Any thoughts? [~weichiu], [~gabor.bota], 
> [~ste...@apache.org], [~ayushtkn], [~busbey], [~jeagles], [~kihwal]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17101:
---
Attachment: HADOOP-17101.002.patch

> Replace Guava Function with Java8+ Function
> ---
>
> Key: HADOOP-17101
> URL: https://issues.apache.org/jira/browse/HADOOP-17101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17101.001.patch, HADOOP-17101.002.patch
>
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Function'
> Found Occurrences  (7 usages found)
> hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff  (1 usage found)
> Apache_Hadoop_HDFS_2.6.0.xml  (1 usage found)
> 13603  type="com.google.common.base.Function"
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> HostSet.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.datanode.checker  (1 usage found)
> AbstractFuture.java  (1 usage found)
> 58 * (ListenableFuture, com.google.common.base.Function) 
> Futures.transform}
> org.apache.hadoop.hdfs.server.namenode.ha  (1 usage found)
> HATestUtil.java  (1 usage found)
> 40 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.protocol  (1 usage found)
> RemoteEditLog.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.mapreduce.lib.input  (1 usage found)
> TestFileInputFormat.java  (1 usage found)
> 58 import com.google.common.base.Function;
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb  (1 usage found)
> GetApplicationsRequestPBImpl.java  (1 usage found)
> 38 import com.google.common.base.Function;
> {code}
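> 
> A minimal sketch of what the replacement typically looks like (the names here
> are illustrative, not taken from the attached patches):
> {code:java}
> import java.util.Arrays;
> import java.util.List;
> import java.util.function.Function;
> import java.util.stream.Collectors;
> 
> // java.util.function.Function is a drop-in for most Guava Function call
> // sites; lambdas and method references replace Guava's anonymous classes.
> Function<String, Integer> parsePort = Integer::valueOf;
> 
> List<Integer> ports = Arrays.asList("8020", "9870").stream()
>     .map(parsePort)
>     .collect(Collectors.toList());
> {code}
> Call sites that pass the function to Guava collection utilities such as
> {{Lists.transform}} also need to move to the Stream API, since those
> utilities accept only the Guava type.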



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ye-huanhuan opened a new pull request #2117: YARN-10332: RESOURCE_UPDATE event was repeatedly registered in DECOMM…

2020-06-30 Thread GitBox


ye-huanhuan opened a new pull request #2117:
URL: https://github.com/apache/hadoop/pull/2117


   …ISSIONING state
   
   YARN-10332: RESOURCE_UPDATE event was repeatedly registered in 
DECOMMISSIONING state
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ye-huanhuan closed pull request #2116: YARN-10332: RESOURCE_UPDATE event was repeatedly registered in DECOMM…

2020-06-30 Thread GitBox


ye-huanhuan closed pull request #2116:
URL: https://github.com/apache/hadoop/pull/2116


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17084) Update Dockerfile_aarch64 to use Bionic

2020-06-30 Thread zhaorenhai (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149042#comment-17149042
 ] 

zhaorenhai commented on HADOOP-17084:
-

Thanks, [~ayushtkn], I do not face any issues in building. Just confirming.

> Update Dockerfile_aarch64 to use Bionic
> ---
>
> Key: HADOOP-17084
> URL: https://issues.apache.org/jira/browse/HADOOP-17084
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: RuiChen
>Priority: Major
>
> The Dockerfile for x86 has been updated to use Ubuntu Bionic, JDK 11, and other
> changes; we should update the Dockerfile for aarch64 to follow these changes and
> keep the same behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ye-huanhuan opened a new pull request #2116: YARN-10332: RESOURCE_UPDATE event was repeatedly registered in DECOMM…

2020-06-30 Thread GitBox


ye-huanhuan opened a new pull request #2116:
URL: https://github.com/apache/hadoop/pull/2116


   …ISSIONING state
   
   YARN-10332: RESOURCE_UPDATE event was repeatedly registered in 
DECOMMISSIONING state
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


goiri commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r448069115



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -34,6 +38,8 @@
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.io.Text;
+import static org.apache.hadoop.metrics2.util.Metrics2Util.NameValuePair;

Review comment:
   Not always the case but I'd say this is fine.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2114: Try1

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2114:
URL: https://github.com/apache/hadoop/pull/2114#issuecomment-652133988


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |  20m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m  0s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  hadolint  |   0m  3s |  There were no new hadolint 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 12s |  There were no new shelldocs 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  56m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2114/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2114 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs |
   | uname | Linux abd438c7a21c 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Max. process+thread count | 339 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2114/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] abhishekdas99 commented on pull request #2060: HADOOP-17032. Fix getContentSummary in ViewFileSystem to handle multiple children mountpoints pointing to different filesystems

2020-06-30 Thread GitBox


abhishekdas99 commented on pull request #2060:
URL: https://github.com/apache/hadoop/pull/2060#issuecomment-652126077


   > Thanks, some checkstyle warnings surfaced. Can you check them?
   
   Fixed the checkstyle issues.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #2060: HADOOP-17032. Fix getContentSummary in ViewFileSystem to handle multiple children mountpoints pointing to different filesystems

2020-06-30 Thread GitBox


ayushtkn commented on pull request #2060:
URL: https://github.com/apache/hadoop/pull/2060#issuecomment-652123187


   Thanks, some checkstyle warnings surfaced. Can you check them?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2060: HADOOP-17032. Fix getContentSummary in ViewFileSystem to handle multiple children mountpoints pointing to different filesystems

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2060:
URL: https://github.com/apache/hadoop/pull/2060#issuecomment-652122701


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 42s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 21s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m  6s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 50s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  18m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 12s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 12s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 57s |  
hadoop-common-project/hadoop-common: The patch generated 3 new + 103 unchanged 
- 0 fixed = 106 total (was 103)  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 29s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   2m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 22s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 149m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2060 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 23f2e5bf5406 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/5/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/5/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/5/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/5/testReport/ |
   | Max. process+thread count | 1683 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2060/5/console |
   | versions | git=2.17.1 maven=3.6.0 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2114: Try1

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2114:
URL: https://github.com/apache/hadoop/pull/2114#issuecomment-652118939


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2114/1/console in case 
of problems.
   






[GitHub] [hadoop] ayushtkn opened a new pull request #2115: Try1

2020-06-30 Thread GitBox


ayushtkn opened a new pull request #2115:
URL: https://github.com/apache/hadoop/pull/2115


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   






[GitHub] [hadoop] ayushtkn closed pull request #2114: Try1

2020-06-30 Thread GitBox


ayushtkn closed pull request #2114:
URL: https://github.com/apache/hadoop/pull/2114


   






[GitHub] [hadoop] ayushtkn closed pull request #2115: Try1

2020-06-30 Thread GitBox


ayushtkn closed pull request #2115:
URL: https://github.com/apache/hadoop/pull/2115


   






[GitHub] [hadoop] ayushtkn opened a new pull request #2114: Try1

2020-06-30 Thread GitBox


ayushtkn opened a new pull request #2114:
URL: https://github.com/apache/hadoop/pull/2114


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   






[GitHub] [hadoop] umamaheswararao commented on pull request #2107: HDFS-15430. create should work when parent dir is internalDir and fallback configured.

2020-06-30 Thread GitBox


umamaheswararao commented on pull request #2107:
URL: https://github.com/apache/hadoop/pull/2107#issuecomment-652117137


   Thanks @jojochuang for the review. I have updated the PR to address the 
comments. Let me know if you have any further ones. Thanks






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2107: HDFS-15430. create should work when parent dir is internalDir and fallback configured.

2020-06-30 Thread GitBox


umamaheswararao commented on a change in pull request #2107:
URL: https://github.com/apache/hadoop/pull/2107#discussion_r448050744



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
##
@@ -294,4 +298,155 @@ public void 
testMkdirShouldFailWhenFallbackFSNotAvailable()
 assertTrue(fsTarget.exists(test));
   }
 
+  /**
+   * Tests that the create file should be successful when the parent directory
+   * is same as the existent fallback directory. The new file should be created
+   * in fallback.
+   */
+  @Test
+  public void testCreateFileOnInternalMountDirWithSameDirTreeExistInFallback()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path dir1 = new Path(targetTestRoot,
+"fallbackDir/user1/hive/warehouse/partition-0");
+fsTarget.mkdirs(dir1);
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+AbstractFileSystem vfs =
+AbstractFileSystem.get(viewFsDefaultClusterUri, conf);
+Path p = new Path("/user1/hive/warehouse/test.file");
+Path test = Path.mergePaths(fallbackTarget, p);
+assertFalse(fsTarget.exists(test));
+assertTrue(fsTarget.exists(test.getParent()));
+vfs.create(p, EnumSet.of(CREATE),
+Options.CreateOpts.perms(FsPermission.getDefault()));
+assertTrue(fsTarget.exists(test));
+
+  }
+
+  /**
+   * Tests the making of a new directory which is not matching to any of
+   * internal directory.
+   */
+  @Test
+  public void testCreateNewFileWithOutMatchingToMountDirOrFallbackDirPath()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+fsTarget.mkdirs(fallbackTarget);
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+AbstractFileSystem vfs =
+AbstractFileSystem.get(viewFsDefaultClusterUri, conf);
+Path p = new Path("/user2/test.file");
+Path test = Path.mergePaths(fallbackTarget, p);
+assertFalse(fsTarget.exists(test));
+// user2 does not exist in fallback
+assertFalse(fsTarget.exists(test.getParent()));
+vfs.create(p, EnumSet.of(CREATE),
+Options.CreateOpts.perms(FsPermission.getDefault()),
+Options.CreateOpts.createParent());
+// /user2/test.file should be created in fallback
+assertTrue(fsTarget.exists(test));
+  }
+
+  /**
+   * Tests the making of a new file on root which is not matching to any of
+   * fallback files on root.
+   */
+  @Test
+  public void testCreateFileOnRootWithFallbackEnabled()
+  throws Exception {
+Configuration conf = new Configuration();
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+fsTarget.mkdirs(fallbackTarget);
+
+ConfigUtil.addLink(conf, "/user1/hive/",
+new Path(targetTestRoot.toString()).toUri());
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+AbstractFileSystem vfs =
+AbstractFileSystem.get(viewFsDefaultClusterUri, conf);
+Path p = new Path("/test.file");
+Path test = Path.mergePaths(fallbackTarget, p);
+assertFalse(fsTarget.exists(test));
+vfs.create(p, EnumSet.of(CREATE),
+Options.CreateOpts.perms(FsPermission.getDefault()));
+// /test.file should be created in fallback
+assertTrue(fsTarget.exists(test));
+
+  }
+
+  /**
+   * Tests the create of a file on root where the path is matching to an
+   * existing file on fallback's file on root.
+   */
+  @Test (expected = FileAlreadyExistsException.class)
+  public void testCreateFileOnRootWithFallbackWithFileAlreadyExist()
+  throws Exception {
+Configuration conf = new Configuration();
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+Path testFile = new Path(fallbackTarget, "test.file");
+// pre-creating test file in fallback.
+fsTarget.createNewFile(testFile);
+
+ConfigUtil.addLink(conf, "/user1/hive/",
+new Path(targetTestRoot.toString()).toUri());
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+AbstractFileSystem vfs =
+AbstractFileSystem.get(viewFsDefaultClusterUri, conf);
+  Path p = new Path("/test.file");
+  assertTrue(fsTarget.exists(testFile));
+vfs.create(p, EnumSet.of(CREATE),
+Options.CreateOpts.perms(FsPermission.getDefault()));
+  }
+
+  /**
+   * Tests the creating of a file where the path is same as mount link path.
+   */
+  @Test(expected= FileAlreadyExistsException.class)
+  public void testCreateFileWhereThePathIsSameAsItsMountLinkPath()
+  throws Exception {
+Configuration conf = new Configuration();
+Path 

[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2107: HDFS-15430. create should work when parent dir is internalDir and fallback configured.

2020-06-30 Thread GitBox


umamaheswararao commented on a change in pull request #2107:
URL: https://github.com/apache/hadoop/pull/2107#discussion_r448050802



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
##
@@ -765,4 +766,154 @@ public void 
testMkdirsShouldReturnFalseWhenFallbackFSNotAvailable()
   assertTrue(fsTarget.exists(test));
 }
   }
+
+  /**
+   * Tests that the create file should be successful when the parent directory
+   * is same as the existent fallback directory. The new file should be created
+   * in fallback.
+   */
+  @Test
+  public void testCreateFileOnInternalMountDirWithSameDirTreeExistInFallback()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path dir1 = new Path(targetTestRoot,
+"fallbackDir/user1/hive/warehouse/partition-0");
+fsTarget.mkdirs(dir1);
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+  Path p = new Path("/user1/hive/warehouse/test.file");
+  Path test = Path.mergePaths(fallbackTarget, p);
+  assertFalse(fsTarget.exists(test));
+  assertTrue(fsTarget.exists(test.getParent()));
+  vfs.createNewFile(p);
+  assertTrue(fsTarget.exists(test));
+}
+  }
+
+  /**
+   * Tests the making of a new directory which is not matching to any of
+   * internal directory.
+   */
+  @Test
+  public void testCreateNewFileWithOutMatchingToMountDirOrFallbackDirPath()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+fsTarget.mkdirs(fallbackTarget);
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+  Path p = new Path("/user2/test.file");
+  Path test = Path.mergePaths(fallbackTarget, p);

Review comment:
   Done. Thanks








[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2107: HDFS-15430. create should work when parent dir is internalDir and fallback configured.

2020-06-30 Thread GitBox


umamaheswararao commented on a change in pull request #2107:
URL: https://github.com/apache/hadoop/pull/2107#discussion_r448050609



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
##
@@ -765,4 +766,154 @@ public void 
testMkdirsShouldReturnFalseWhenFallbackFSNotAvailable()
   assertTrue(fsTarget.exists(test));
 }
   }
+
+  /**
+   * Tests that the create file should be successful when the parent directory
+   * is same as the existent fallback directory. The new file should be created
+   * in fallback.
+   */
+  @Test
+  public void testCreateFileOnInternalMountDirWithSameDirTreeExistInFallback()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path dir1 = new Path(targetTestRoot,
+"fallbackDir/user1/hive/warehouse/partition-0");
+fsTarget.mkdirs(dir1);
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");

Review comment:
   Done. Thanks








[GitHub] [hadoop] jojochuang commented on a change in pull request #2107: HDFS-15430. create should work when parent dir is internalDir and fallback configured.

2020-06-30 Thread GitBox


jojochuang commented on a change in pull request #2107:
URL: https://github.com/apache/hadoop/pull/2107#discussion_r448044244



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
##
@@ -765,4 +766,154 @@ public void 
testMkdirsShouldReturnFalseWhenFallbackFSNotAvailable()
   assertTrue(fsTarget.exists(test));
 }
   }
+
+  /**
+   * Tests that the create file should be successful when the parent directory
+   * is same as the existent fallback directory. The new file should be created
+   * in fallback.
+   */
+  @Test
+  public void testCreateFileOnInternalMountDirWithSameDirTreeExistInFallback()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path dir1 = new Path(targetTestRoot,
+"fallbackDir/user1/hive/warehouse/partition-0");
+fsTarget.mkdirs(dir1);
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");

Review comment:
   Maybe easier to read if written as
   `
   
   Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
   Path dir1 = new Path(fallbackTarget,
   "user1/hive/warehouse/partition-0");
   fsTarget.mkdirs(dir1);
   `








[jira] [Commented] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in YARN

2020-06-30 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149015#comment-17149015
 ] 

Ahmed Hussein commented on HADOOP-17100:


* {{TestDistributedShell}} fails on trunk
 * {{TestNameNodeRetryCacheMetrics}} is flaky

 

> Replace Guava Supplier with Java8+ Supplier in YARN
> ---
>
> Key: HADOOP-17100
> URL: https://issues.apache.org/jira/browse/HADOOP-17100
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17100.001.patch, HADOOP-17100.002.patch, 
> HADOOP-17100.003.patch
>
>
> Replace usage of the Guava Supplier in unit tests that call 
> {{GenericTestUtils.waitFor()}} in the YARN subdirectory.
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Supplier' in directory 
> hadoop-yarn-project with mask '*.java'
> Found Occurrences  (23 usages found)
> org.apache.hadoop.yarn.applications.distributedshell  (1 usage found)
> TestDistributedShell.java  (1 usage found)
> 43 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.client  (1 usage found)
> TestRMFailover.java  (1 usage found)
> 64 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.client.api.impl  (1 usage found)
> TestYarnClientWithReservation.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager  (1 usage 
> found)
> TestContainerManager.java  (1 usage found)
> 51 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher  (1 
> usage found)
> TestContainerLaunch.java  (1 usage found)
> 57 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer  (1 
> usage found)
> TestContainerLocalizer.java  (1 usage found)
> 97 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> TestLogAggregationService.java  (1 usage found)
> 150 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor  (1 
> usage found)
> TestContainersMonitor.java  (1 usage found)
> 40 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.logaggregation.tracker  (1 
> usage found)
> TestNMLogAggregationStatusTracker.java  (1 usage found)
> 24 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager  (6 usages found)
> TestApplicationMasterLauncher.java  (1 usage found)
> 95 import com.google.common.base.Supplier;
> TestLeaderElectorService.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRM.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRMHA.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRMRestart.java  (1 usage found)
> 137 import com.google.common.base.Supplier;
> TestWorkPreservingRMRestart.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.recovery  (1 usage found)
> TestZKRMStateStore.java  (1 usage found)
> 75 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity  (1 
> usage found)
> TestCapacityScheduler.java  (1 usage found)
> 192 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair  (1 usage 
> found)
> TestContinuousScheduling.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.security  (2 usages found)
> TestDelegationTokenRenewer.java  (1 usage found)
> 117 import com.google.common.base.Supplier;
> TestRMDelegationTokens.java  (1 usage found)
> 29 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.router.webapp  (1 usage found)
> TestRouterWebServicesREST.java  (1 usage found)
> 135 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.webproxy.amfilter  (1 usage found)
> TestAmFilter.java  (1 usage found)
> 53 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.service  (1 usage found)
> MockServiceAM.java  (1 usage found)
> 21 import 

[jira] [Commented] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in YARN

2020-06-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149013#comment-17149013
 ] 

Hadoop QA commented on HADOOP-17100:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 24 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
5s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m  4s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 26s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
52s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 
19s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
45s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 25s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch passed. 
{color} |
| {color:green}+1{color} | 

[jira] [Updated] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in YARN

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17100:
---
Attachment: HADOOP-17100.003.patch

> Replace Guava Supplier with Java8+ Supplier in YARN
> ---
>
> Key: HADOOP-17100
> URL: https://issues.apache.org/jira/browse/HADOOP-17100
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17100.001.patch, HADOOP-17100.002.patch, 
> HADOOP-17100.003.patch
>
>
> Replace usage of the Guava Supplier in unit tests that call 
> {{GenericTestUtils.waitFor()}} in the YARN subdirectory.
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Supplier' in directory 
> hadoop-yarn-project with mask '*.java'
> Found Occurrences  (23 usages found)
> org.apache.hadoop.yarn.applications.distributedshell  (1 usage found)
> TestDistributedShell.java  (1 usage found)
> 43 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.client  (1 usage found)
> TestRMFailover.java  (1 usage found)
> 64 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.client.api.impl  (1 usage found)
> TestYarnClientWithReservation.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager  (1 usage 
> found)
> TestContainerManager.java  (1 usage found)
> 51 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher  (1 
> usage found)
> TestContainerLaunch.java  (1 usage found)
> 57 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer  (1 
> usage found)
> TestContainerLocalizer.java  (1 usage found)
> 97 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> TestLogAggregationService.java  (1 usage found)
> 150 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor  (1 
> usage found)
> TestContainersMonitor.java  (1 usage found)
> 40 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.logaggregation.tracker  (1 
> usage found)
> TestNMLogAggregationStatusTracker.java  (1 usage found)
> 24 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager  (6 usages found)
> TestApplicationMasterLauncher.java  (1 usage found)
> 95 import com.google.common.base.Supplier;
> TestLeaderElectorService.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRM.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRMHA.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRMRestart.java  (1 usage found)
> 137 import com.google.common.base.Supplier;
> TestWorkPreservingRMRestart.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.recovery  (1 usage found)
> TestZKRMStateStore.java  (1 usage found)
> 75 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity  (1 
> usage found)
> TestCapacityScheduler.java  (1 usage found)
> 192 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair  (1 usage 
> found)
> TestContinuousScheduling.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.security  (2 usages found)
> TestDelegationTokenRenewer.java  (1 usage found)
> 117 import com.google.common.base.Supplier;
> TestRMDelegationTokens.java  (1 usage found)
> 29 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.router.webapp  (1 usage found)
> TestRouterWebServicesREST.java  (1 usage found)
> 135 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.webproxy.amfilter  (1 usage found)
> TestAmFilter.java  (1 usage found)
> 53 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.service  (1 usage found)
> MockServiceAM.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
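
As an illustration of the change described above, the sketch below uses a
hypothetical stand-in for a {{waitFor()}}-style test helper (it is not the
actual {{GenericTestUtils}} implementation). The point is that once the helper
accepts {{java.util.function.Supplier}}, lambda and method-reference call sites
compile unchanged and only the Guava import is dropped.

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;   // replaces com.google.common.base.Supplier

public class WaitForSupplierExample {

  /** Minimal stand-in for a waitFor(...)-style test helper. */
  static void waitFor(Supplier<Boolean> check, long checkEveryMillis,
      long waitForMillis) throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + waitForMillis;
    while (!check.get()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("Condition not met in " + waitForMillis + " ms");
      }
      Thread.sleep(checkEveryMillis);
    }
  }

  public static void main(String[] args) throws Exception {
    AtomicBoolean done = new AtomicBoolean(false);
    new Thread(() -> {
      try {
        Thread.sleep(200);
      } catch (InterruptedException ignored) {
        // sketch only; ignore interrupts here
      }
      done.set(true);
    }).start();

    // The call site is identical whether the parameter type is the Guava or the
    // java.util.function Supplier; only the import in the test class changes.
    waitFor(done::get, 50, 5_000);
    System.out.println("condition met");
  }
}
{code}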


[GitHub] [hadoop] jojochuang commented on a change in pull request #2107: HDFS-15430. create should work when parent dir is internalDir and fallback configured.

2020-06-30 Thread GitBox


jojochuang commented on a change in pull request #2107:
URL: https://github.com/apache/hadoop/pull/2107#discussion_r448036387



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
##
@@ -765,4 +766,154 @@ public void 
testMkdirsShouldReturnFalseWhenFallbackFSNotAvailable()
   assertTrue(fsTarget.exists(test));
 }
   }
+
+  /**
+   * Tests that the create file should be successful when the parent directory
+   * is same as the existent fallback directory. The new file should be created
+   * in fallback.
+   */
+  @Test
+  public void testCreateFileOnInternalMountDirWithSameDirTreeExistInFallback()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path dir1 = new Path(targetTestRoot,
+"fallbackDir/user1/hive/warehouse/partition-0");
+fsTarget.mkdirs(dir1);
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+  Path p = new Path("/user1/hive/warehouse/test.file");
+  Path test = Path.mergePaths(fallbackTarget, p);
+  assertFalse(fsTarget.exists(test));
+  assertTrue(fsTarget.exists(test.getParent()));
+  vfs.createNewFile(p);
+  assertTrue(fsTarget.exists(test));
+}
+  }
+
+  /**
+   * Tests the making of a new directory which is not matching to any of
+   * internal directory.
+   */
+  @Test
+  public void testCreateNewFileWithOutMatchingToMountDirOrFallbackDirPath()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+fsTarget.mkdirs(fallbackTarget);
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+  Path p = new Path("/user2/test.file");
+  Path test = Path.mergePaths(fallbackTarget, p);

Review comment:
   give this variable a more meaningful name?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
##
@@ -294,4 +298,155 @@ public void 
testMkdirShouldFailWhenFallbackFSNotAvailable()
 assertTrue(fsTarget.exists(test));
   }
 
+  /**
+   * Tests that the create file should be successful when the parent directory
+   * is same as the existent fallback directory. The new file should be created
+   * in fallback.
+   */
+  @Test
+  public void testCreateFileOnInternalMountDirWithSameDirTreeExistInFallback()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path dir1 = new Path(targetTestRoot,
+"fallbackDir/user1/hive/warehouse/partition-0");
+fsTarget.mkdirs(dir1);
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+AbstractFileSystem vfs =
+AbstractFileSystem.get(viewFsDefaultClusterUri, conf);
+Path p = new Path("/user1/hive/warehouse/test.file");
+Path test = Path.mergePaths(fallbackTarget, p);
+assertFalse(fsTarget.exists(test));
+assertTrue(fsTarget.exists(test.getParent()));
+vfs.create(p, EnumSet.of(CREATE),
+Options.CreateOpts.perms(FsPermission.getDefault()));
+assertTrue(fsTarget.exists(test));
+
+  }
+
+  /**
+   * Tests the making of a new directory which is not matching to any of
+   * internal directory.
+   */
+  @Test
+  public void testCreateNewFileWithOutMatchingToMountDirOrFallbackDirPath()
+  throws Exception {
+Configuration conf = new Configuration();
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+fsTarget.mkdirs(fallbackTarget);
+ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+AbstractFileSystem vfs =
+AbstractFileSystem.get(viewFsDefaultClusterUri, conf);
+Path p = new Path("/user2/test.file");
+Path test = Path.mergePaths(fallbackTarget, p);
+assertFalse(fsTarget.exists(test));
+// user2 does not exist in fallback
+assertFalse(fsTarget.exists(test.getParent()));
+vfs.create(p, EnumSet.of(CREATE),
+Options.CreateOpts.perms(FsPermission.getDefault()),
+Options.CreateOpts.createParent());
+// /user2/test.file should be created in fallback
+assertTrue(fsTarget.exists(test));
+  }
+
+  /**
+   * Tests the making of a new file on root which is 

[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-06-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149004#comment-17149004
 ] 

Hadoop QA commented on HADOOP-17101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
54s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} branch/hadoop-build-tools no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  1s{color} | {color:orange} root: The patch generated 2 new + 67 unchanged - 
3 fixed = 69 total (was 70) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
33s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
58s{color} | {color:blue} hadoop-build-tools has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 49s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
0s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}306m 

[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r448036448



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +732,41 @@ public TokenIdent decodeTokenIdentifier(Token 
token) throws IOExcept
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List getTopTokenRealOwners(int n) {

Review comment:
   I am fine with it. We also need to add an initialization step to make 
sure this structure has the initial information from currentTokens.
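
   A minimal sketch of what such an initialization step could look like, assuming
a per-owner counter is kept next to the token store. All names here (ownerCounts,
realOwnerOf, the token map) are placeholders for illustration, not the actual
AbstractDelegationTokenSecretManager fields:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical bootstrap for the per-owner counter discussed above: seed it
// from the tokens that already exist, then let create/delete keep it current.
class OwnerCountBootstrap<TokenIdent> {

  private final Map<String, Long> ownerCounts = new ConcurrentHashMap<>();

  void initFromCurrentTokens(Map<TokenIdent, ?> currentTokens,
      Function<TokenIdent, String> realOwnerOf) {
    for (TokenIdent id : currentTokens.keySet()) {
      // merge() adds the owner with count 1 or increments an existing count.
      ownerCounts.merge(realOwnerOf.apply(id), 1L, Long::sum);
    }
  }

  Map<String, Long> snapshot() {
    return new ConcurrentHashMap<>(ownerCounts);
  }
}
{code}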








[GitHub] [hadoop] hadoop-yetus commented on pull request #2083: HADOOP-17077. S3A delegation token binding to support secondary binding list

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2083:
URL: https://github.com/apache/hadoop/pull/2083#issuecomment-652089254


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  18m  2s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  3s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 13s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  16m 49s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 46s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 21s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 23s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 43s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 55s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  18m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 41s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  16m 41s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 42s |  root: The patch generated 12 new 
+ 31 unchanged - 3 fixed = 43 total (was 34)  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 4 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  14m  7s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 50s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   4m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  93m 38s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 48s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 269m 34s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFSImageWithAcl |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2083 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 83e9172cbc90 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | 

[GitHub] [hadoop] sunchao commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


sunchao commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r448022518



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +732,41 @@ public TokenIdent decodeTokenIdentifier(Token 
token) throws IOExcept
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List getTopTokenRealOwners(int n) {
+Map tokenOwnerMap = new HashMap<>();
+for (TokenIdent id : currentTokens.keySet()) {
+  String realUser;
+  if (id.getRealUser() != null && !id.getRealUser().toString().isEmpty()) {
+realUser = id.getRealUser().toString();
+  } else {
+// if there is no real user -> this is a non proxy user
+// the user itself is the real owner
+realUser = id.getUser().getUserName();
+  }
+  tokenOwnerMap.put(realUser, tokenOwnerMap.getOrDefault(realUser, 0)+1);
+}
+n = Math.min(n, tokenOwnerMap.size());
+if (n == 0) {
+  return new LinkedList<>();
+}
+
+TopN topN = new TopN(n);
+for (Map.Entry entry : tokenOwnerMap.entrySet()) {
+  topN.offer(new NameValuePair(
+  entry.getKey(), entry.getValue()));
+}
+
+List list = new LinkedList<>();

Review comment:
   Reverse shouldn't need extra space - it uses two indexes from the beginning 
and end of the array and swaps elements. I don't see a real difference between 
the two for the reverse.
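
   To make that concrete, a tiny stand-alone sketch of the two-index, in-place
reversal (not code from the patch); it is essentially what
java.util.Collections.reverse(List) does for random-access lists:

{code:java}
import java.util.Arrays;
import java.util.List;

class ReverseInPlace {

  // Walk in from both ends and swap; no second list is allocated.
  static <T> void reverse(List<T> list) {
    for (int i = 0, j = list.size() - 1; i < j; i++, j--) {
      T tmp = list.get(i);
      list.set(i, list.get(j));
      list.set(j, tmp);
    }
  }

  public static void main(String[] args) {
    List<String> owners = Arrays.asList("alice", "bob", "carol");
    reverse(owners);
    System.out.println(owners); // prints [carol, bob, alice]
  }
}
{code}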

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +732,41 @@ public TokenIdent decodeTokenIdentifier(Token 
token) throws IOExcept
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List getTopTokenRealOwners(int n) {

Review comment:
   Can we update the `TopN` queue when creating/deleting tokens? We would just 
be paying an extra constant cost on each update, which I think is fine. Even 
though it is using a concurrent hash map, I'm not sure how big the performance 
impact will be if one thread is iterating over the key set while other threads 
are updating the map.
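
   A hedged sketch of that incremental approach, with all names hypothetical
(the onTokenCreated/onTokenRemoved hooks stand in for wherever the secret
manager adds and removes tokens; they are not the real
AbstractDelegationTokenSecretManager methods): each create/delete is an O(1)
counter update, and only the final top-n selection walks the whole map.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TopOwnerTracker {

  // realOwner -> number of live tokens; safe for concurrent readers/writers.
  private final Map<String, Long> ownerCounts = new ConcurrentHashMap<>();

  void onTokenCreated(String realOwner) {
    ownerCounts.merge(realOwner, 1L, Long::sum);          // O(1) per create
  }

  void onTokenRemoved(String realOwner) {
    ownerCounts.computeIfPresent(realOwner,
        (owner, count) -> count > 1 ? count - 1 : null);  // drop entry at zero
  }

  /** Top-n owners by live token count; only this call touches the whole map. */
  List<Map.Entry<String, Long>> topOwners(int n) {
    List<Map.Entry<String, Long>> entries =
        new ArrayList<>(ownerCounts.entrySet());
    entries.sort(Map.Entry.<String, Long>comparingByValue(
        Comparator.reverseOrder()));
    return entries.subList(0, Math.min(n, entries.size()));
  }
}
{code}

Whether the extra bookkeeping on every create/delete is worth it depends on how
often the metric is read, which is the trade-off being weighed in the comment
above.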








[jira] [Commented] (HADOOP-17105) S3AFS globStatus attempts to resolve symlinks

2020-06-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148986#comment-17148986
 ] 

Hadoop QA commented on HADOOP-17105:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
10s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 4 
new + 11 unchanged - 0 fixed = 15 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2113: HADOOP-17105: S3AFS - Do not attempt to resolve symlinks in globStatus

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2113:
URL: https://github.com/apache/hadoop/pull/2113#issuecomment-652080912


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 47s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 30s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 21s |  hadoop-tools/hadoop-aws: The 
patch generated 4 new + 11 unchanged - 0 fixed = 15 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 21s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m  9s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  72m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2113 |
   | JIRA Issue | HADOOP-17105 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bc2062ad8935 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/testReport/ |
   | Max. process+thread count | 459 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 |

[GitHub] [hadoop] abhishekdas99 commented on pull request #2060: HADOOP-17032. Fix getContentSummary in ViewFileSystem to handle multiple children mountpoints pointing to different filesystems

2020-06-30 Thread GitBox


abhishekdas99 commented on pull request #2060:
URL: https://github.com/apache/hadoop/pull/2060#issuecomment-652077114


   Thanks @ayushtkn  for the review. I have addressed your comments.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] abhishekdas99 commented on a change in pull request #2060: HADOOP-17032. Fix getContentSummary in ViewFileSystem to handle multiple children mountpoints pointing to different filesystems

2020-06-30 Thread GitBox


abhishekdas99 commented on a change in pull request #2060:
URL: https://github.com/apache/hadoop/pull/2060#discussion_r448011766



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
##
@@ -1369,4 +1381,61 @@ public void testDeleteOnExit() throws Exception {
 viewFs.close();
 assertFalse(fsTarget.exists(realTestPath));
   }
+
+  @Test
+  public void testGetContentSummary() throws IOException {
+ContentSummary summaryBefore =
+fsView.getContentSummary(new Path("/internalDir"));
+String expected = "GET CONTENT SUMMARY";
+Path filePath =
+new Path("/internalDir/internalDir2/linkToDir3", "foo");
+
+try (FSDataOutputStream outputStream = fsView.create(filePath)) {
+  try (OutputStreamWriter writer =
+  new OutputStreamWriter(outputStream, StandardCharsets.UTF_8)) {
+try (BufferedWriter buffer = new BufferedWriter(writer)) {
+  buffer.write(expected);
+}
+  }
+}
+
+Path newDirPath = new Path("/internalDir/linkToDir2", "bar");
+fsView.mkdirs(newDirPath);
+
+ContentSummary summaryAfter =
+fsView.getContentSummary(new Path("/internalDir"));
+Assert.assertEquals("The file count didn't match",
+summaryBefore.getFileCount() + 1,
+summaryAfter.getFileCount());
+Assert.assertEquals("The size didn't match",
+summaryBefore.getLength() + expected.length(),
+summaryAfter.getLength());
+Assert.assertEquals("The directory count didn't match",

Review comment:
   Removed `Assert.`

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
##
@@ -1369,4 +1381,61 @@ public void testDeleteOnExit() throws Exception {
 viewFs.close();
 assertFalse(fsTarget.exists(realTestPath));
   }
+
+  @Test
+  public void testGetContentSummary() throws IOException {
+ContentSummary summaryBefore =
+fsView.getContentSummary(new Path("/internalDir"));
+String expected = "GET CONTENT SUMMARY";
+Path filePath =
+new Path("/internalDir/internalDir2/linkToDir3", "foo");
+
+try (FSDataOutputStream outputStream = fsView.create(filePath)) {
+  try (OutputStreamWriter writer =
+  new OutputStreamWriter(outputStream, StandardCharsets.UTF_8)) {
+try (BufferedWriter buffer = new BufferedWriter(writer)) {
+  buffer.write(expected);
+}

Review comment:
   Changed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


goiri commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r448004743



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -34,6 +38,8 @@
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.io.Text;
+import static org.apache.hadoop.metrics2.util.Metrics2Util.NameValuePair;

Review comment:
   If the method/class is very obvious like assertTrue(), it usually makes 
sense to do a static import.
   In this case, I guess it is fine either way.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447994083



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java
##
@@ -124,6 +126,71 @@ public void testDelegationTokens() throws IOException {
 securityManager.renewDelegationToken(token);
   }
 
+  @Test
+  public void testDelgationTokenTopOwners() throws Exception {
+List<NameValuePair> topOwners;
+
+UserGroupInformation user = UserGroupInformation
+.createUserForTesting("abc", new String[]{"router_group"});
+UserGroupInformation.setLoginUser(user);
+Token dt = securityManager.getDelegationToken(new Text("abc"));
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(1, topOwners.size());
+assertEquals("abc", topOwners.get(0).getName());
+assertEquals(1, topOwners.get(0).getValue());
+
+securityManager.renewDelegationToken(dt);
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(1, topOwners.size());
+assertEquals("abc", topOwners.get(0).getName());
+assertEquals(1, topOwners.get(0).getValue());
+
+securityManager.cancelDelegationToken(dt);
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(0, topOwners.size());
+
+
+// Use proxy user - the code should use the proxy user as the real owner
+UserGroupInformation routerUser =
+UserGroupInformation.createRemoteUser("router");
+UserGroupInformation proxyUser = UserGroupInformation
+.createProxyUserForTesting("abc",
+routerUser,
+new String[]{"router_group"});
+UserGroupInformation.setLoginUser(proxyUser);
+
+Token proxyDT = securityManager.getDelegationToken(new Text("router"));
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(1, topOwners.size());
+assertEquals("router", topOwners.get(0).getName());
+assertEquals(1, topOwners.get(0).getValue());
+
+// router to renew tokens
+UserGroupInformation.setLoginUser(routerUser);
+securityManager.renewDelegationToken(proxyDT);
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(1, topOwners.size());
+assertEquals("router", topOwners.get(0).getName());
+assertEquals(1, topOwners.get(0).getValue());
+
+securityManager.cancelDelegationToken(proxyDT);
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(0, topOwners.size());
+

Review comment:
   will remove





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447993824



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +732,41 @@ public TokenIdent decodeTokenIdentifier(Token 
token) throws IOExcept
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List<NameValuePair> getTopTokenRealOwners(int n) {
+Map<String, Integer> tokenOwnerMap = new HashMap<>();
+for (TokenIdent id : currentTokens.keySet()) {
+  String realUser;
+  if (id.getRealUser() != null && !id.getRealUser().toString().isEmpty()) {
+realUser = id.getRealUser().toString();
+  } else {
+// if there is no real user -> this is a non proxy user
+// the user itself is the real owner
+realUser = id.getUser().getUserName();
+  }
+  tokenOwnerMap.put(realUser, tokenOwnerMap.getOrDefault(realUser, 0)+1);
+}
+n = Math.min(n, tokenOwnerMap.size());
+if (n == 0) {
+  return new LinkedList<>();
+}
+
+TopN topN = new TopN(n);
+for (Map.Entry<String, Integer> entry : tokenOwnerMap.entrySet()) {
+  topN.offer(new NameValuePair(
+  entry.getKey(), entry.getValue()));
+}
+
+List<NameValuePair> list = new LinkedList<>();

Review comment:
   There is a reverse op, and I think reversing a linked list is faster without 
additional space.
   Not sure how Java implements reversing an ArrayList, but I think it would 
introduce a copy and reassignment.
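   
   For reference, a minimal illustrative sketch (not part of the patch) of `Collections.reverse` applied to both list types; the JDK reverses in place, by element swaps for `RandomAccess` lists and via list iterators otherwise, so neither case allocates a new list:
   ```java
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.Collections;
   import java.util.LinkedList;
   import java.util.List;

   public class ReverseListSketch {
     public static void main(String[] args) {
       List<Integer> arrayList = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
       List<Integer> linkedList = new LinkedList<>(Arrays.asList(1, 2, 3, 4, 5));

       // Both calls reverse in place: swap-based for RandomAccess lists,
       // list-iterator-based otherwise; no new list is allocated either way.
       Collections.reverse(arrayList);
       Collections.reverse(linkedList);

       System.out.println(arrayList);   // [5, 4, 3, 2, 1]
       System.out.println(linkedList);  // [5, 4, 3, 2, 1]
     }
   }
   ```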





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447993331



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +732,41 @@ public TokenIdent decodeTokenIdentifier(Token 
token) throws IOExcept
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List getTopTokenRealOwners(int n) {

Review comment:
   I had similar thoughts but didn't come up with a better way. The namenode 
TopN metrics do the same thing, just at a lower frequency, like every 5/15/25 
minutes. We can potentially do the same here by reducing the metric reporting 
frequency.
   I also checked that looping over 1M entries on a modern CPU takes roughly 
1ms-10ms.
   Another option would be to maintain a data structure that dynamically keeps 
the users ordered and updates the ordering on every getDelegationToken and 
cancelDelegationToken call, like stream processing. I am not sure about the 
overall cost, though in reality we generally have < 1 million tokens.
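   
   As a point of comparison only (stand-in owner names, not the actual secret manager code), the same count-then-take-top-N step sketched with Java 8 streams; the full scan per metrics pull is identical either way, which is exactly the cost in question:
   ```java
   import java.util.Arrays;
   import java.util.List;
   import java.util.Map;
   import java.util.function.Function;
   import java.util.stream.Collectors;

   public class TopOwnersStreamSketch {
     public static void main(String[] args) {
       // Stand-in for the real-owner names extracted from the token identifiers.
       List<String> realOwners = Arrays.asList("abc", "router", "abc", "abc", "router");
       int n = 2;

       // Count tokens per real owner, then keep the n owners with the most tokens.
       Map<String, Long> counts = realOwners.stream()
           .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
       List<Map.Entry<String, Long>> topOwners = counts.entrySet().stream()
           .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
           .limit(n)
           .collect(Collectors.toList());

       topOwners.forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
     }
   }
   ```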
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a change in pull request #2060: HADOOP-17032. Fix getContentSummary in ViewFileSystem to handle multiple children mountpoints pointing to different filesystems

2020-06-30 Thread GitBox


ayushtkn commented on a change in pull request #2060:
URL: https://github.com/apache/hadoop/pull/2060#discussion_r447990862



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
##
@@ -1369,4 +1381,61 @@ public void testDeleteOnExit() throws Exception {
 viewFs.close();
 assertFalse(fsTarget.exists(realTestPath));
   }
+
+  @Test
+  public void testGetContentSummary() throws IOException {
+ContentSummary summaryBefore =
+fsView.getContentSummary(new Path("/internalDir"));
+String expected = "GET CONTENT SUMMARY";
+Path filePath =
+new Path("/internalDir/internalDir2/linkToDir3", "foo");
+
+try (FSDataOutputStream outputStream = fsView.create(filePath)) {
+  try (OutputStreamWriter writer =
+  new OutputStreamWriter(outputStream, StandardCharsets.UTF_8)) {
+try (BufferedWriter buffer = new BufferedWriter(writer)) {
+  buffer.write(expected);
+}

Review comment:
   Will doing just this not work?
   ```
 outputStream.write(expected.getBytes());
   ```
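   
   A self-contained sketch of that simplification (a `ByteArrayOutputStream` stands in for the `FSDataOutputStream` returned by `fsView.create(filePath)`, and the charset is made explicit):
   ```java
   import java.io.ByteArrayOutputStream;
   import java.io.IOException;
   import java.io.OutputStream;
   import java.nio.charset.StandardCharsets;

   public class DirectWriteSketch {
     public static void main(String[] args) throws IOException {
       String expected = "GET CONTENT SUMMARY";

       // One write call replaces the OutputStreamWriter/BufferedWriter stack.
       try (OutputStream outputStream = new ByteArrayOutputStream()) {
         outputStream.write(expected.getBytes(StandardCharsets.UTF_8));
       }
     }
   }
   ```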

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
##
@@ -1369,4 +1381,61 @@ public void testDeleteOnExit() throws Exception {
 viewFs.close();
 assertFalse(fsTarget.exists(realTestPath));
   }
+
+  @Test
+  public void testGetContentSummary() throws IOException {
+ContentSummary summaryBefore =
+fsView.getContentSummary(new Path("/internalDir"));
+String expected = "GET CONTENT SUMMARY";
+Path filePath =
+new Path("/internalDir/internalDir2/linkToDir3", "foo");
+
+try (FSDataOutputStream outputStream = fsView.create(filePath)) {
+  try (OutputStreamWriter writer =
+  new OutputStreamWriter(outputStream, StandardCharsets.UTF_8)) {
+try (BufferedWriter buffer = new BufferedWriter(writer)) {
+  buffer.write(expected);
+}
+  }
+}
+
+Path newDirPath = new Path("/internalDir/linkToDir2", "bar");
+fsView.mkdirs(newDirPath);
+
+ContentSummary summaryAfter =
+fsView.getContentSummary(new Path("/internalDir"));
+Assert.assertEquals("The file count didn't match",
+summaryBefore.getFileCount() + 1,
+summaryAfter.getFileCount());
+Assert.assertEquals("The size didn't match",
+summaryBefore.getLength() + expected.length(),
+summaryAfter.getLength());
+Assert.assertEquals("The directory count didn't match",

Review comment:
   nit: There is already a static import -
   ```
   import static org.junit.Assert.*;
   ```
   no need to have `Assert.`
   
   Similarly for the test below as well.
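   
   A tiny illustrative sketch of the nit (hypothetical test class, not from the patch), with the static import doing the qualification:
   ```java
   import static org.junit.Assert.assertEquals;

   import org.junit.Test;

   public class StaticImportNitSketch {
     @Test
     public void testFileCount() {
       long expectedFileCount = 2;
       long actualFileCount = 2;
       // With the static import in place there is no need to write Assert.assertEquals.
       assertEquals("The file count didn't match", expectedFileCount, actualFileCount);
     }
   }
   ```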





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447988483



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
##
@@ -79,6 +79,10 @@
   public static final Class
   DFS_ROUTER_METRICS_CLASS_DEFAULT =
   FederationRPCPerformanceMonitor.class;
+  public static final String DFS_ROUTER_METRICS_TOP_NUM_TOKEN_OWNERS_KEY =

Review comment:
   Sure. I will add it here: 
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447988563



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java
##
@@ -39,6 +39,7 @@
 import org.junit.Rule;
 import org.junit.Test;
 
+import static org.apache.hadoop.metrics2.util.Metrics2Util.*;

Review comment:
   Sure





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447986241



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -34,6 +38,8 @@
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.io.Text;
+import static org.apache.hadoop.metrics2.util.Metrics2Util.NameValuePair;

Review comment:
   Inigo told me about it once, and I think it makes the usage of member 
variables easier and cleaner.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jimmy-zuber-amzn opened a new pull request #2113: HADOOP-17105: S3AFS - Do not attempt to resolve symlinks in globStatus

2020-06-30 Thread GitBox


jimmy-zuber-amzn opened a new pull request #2113:
URL: https://github.com/apache/hadoop/pull/2113


   S3AFS does not support symlinks, so attempting to resolve
   symlinks in globStatus causes wasted S3 calls and worse
   performance. Removing it will speed up some calls to
   globStatus.
   
   JIRA link: https://issues.apache.org/jira/browse/HADOOP-17105
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


sunchao commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447965794



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
##
@@ -79,6 +79,10 @@
   public static final Class
   DFS_ROUTER_METRICS_CLASS_DEFAULT =
   FederationRPCPerformanceMonitor.class;
+  public static final String DFS_ROUTER_METRICS_TOP_NUM_TOKEN_OWNERS_KEY =

Review comment:
   not sure if there is a metrics.md page for RBF - if so, we should add it 
there too.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +732,41 @@ public TokenIdent decodeTokenIdentifier(Token 
token) throws IOExcept
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List getTopTokenRealOwners(int n) {

Review comment:
   will this get pretty expensive if there are lots of tokens stored, since 
every metrics pull needs to iterate through all tokens?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


sunchao commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447965934



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java
##
@@ -124,6 +126,71 @@ public void testDelegationTokens() throws IOException {
 securityManager.renewDelegationToken(token);
   }
 
+  @Test
+  public void testDelgationTokenTopOwners() throws Exception {
+List<NameValuePair> topOwners;
+
+UserGroupInformation user = UserGroupInformation
+.createUserForTesting("abc", new String[]{"router_group"});
+UserGroupInformation.setLoginUser(user);
+Token dt = securityManager.getDelegationToken(new Text("abc"));
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(1, topOwners.size());
+assertEquals("abc", topOwners.get(0).getName());
+assertEquals(1, topOwners.get(0).getValue());
+
+securityManager.renewDelegationToken(dt);
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(1, topOwners.size());
+assertEquals("abc", topOwners.get(0).getName());
+assertEquals(1, topOwners.get(0).getValue());
+
+securityManager.cancelDelegationToken(dt);
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(0, topOwners.size());
+
+
+// Use proxy user - the code should use the proxy user as the real owner
+UserGroupInformation routerUser =
+UserGroupInformation.createRemoteUser("router");
+UserGroupInformation proxyUser = UserGroupInformation
+.createProxyUserForTesting("abc",
+routerUser,
+new String[]{"router_group"});
+UserGroupInformation.setLoginUser(proxyUser);
+
+Token proxyDT = securityManager.getDelegationToken(new Text("router"));
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(1, topOwners.size());
+assertEquals("router", topOwners.get(0).getName());
+assertEquals(1, topOwners.get(0).getValue());
+
+// router to renew tokens
+UserGroupInformation.setLoginUser(routerUser);
+securityManager.renewDelegationToken(proxyDT);
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(1, topOwners.size());
+assertEquals("router", topOwners.get(0).getName());
+assertEquals(1, topOwners.get(0).getValue());
+
+securityManager.cancelDelegationToken(proxyDT);
+topOwners = securityManager.getSecretManager().getTopTokenRealOwners(2);
+assertEquals(0, topOwners.size());
+

Review comment:
   nit: extra blank line

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +732,41 @@ public TokenIdent decodeTokenIdentifier(Token 
token) throws IOExcept
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List<NameValuePair> getTopTokenRealOwners(int n) {
+Map<String, Integer> tokenOwnerMap = new HashMap<>();
+for (TokenIdent id : currentTokens.keySet()) {
+  String realUser;
+  if (id.getRealUser() != null && !id.getRealUser().toString().isEmpty()) {
+realUser = id.getRealUser().toString();
+  } else {
+// if there is no real user -> this is a non proxy user
+// the user itself is the real owner
+realUser = id.getUser().getUserName();
+  }
+  tokenOwnerMap.put(realUser, tokenOwnerMap.getOrDefault(realUser, 0)+1);
+}
+n = Math.min(n, tokenOwnerMap.size());
+if (n == 0) {
+  return new LinkedList<>();
+}
+
+TopN topN = new TopN(n);
+for (Map.Entry<String, Integer> entry : tokenOwnerMap.entrySet()) {
+  topN.offer(new NameValuePair(
+  entry.getKey(), entry.getValue()));
+}
+
+List<NameValuePair> list = new LinkedList<>();

Review comment:
   any reason to use `LinkedList` instead of `ArrayList`? the latter is 
usually more performant.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17084) Update Dockerfile_aarch64 to use Bionic

2020-06-30 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148944#comment-17148944
 ] 

Ayush Saxena commented on HADOOP-17084:
---

Thanx [~RenhaiZhao] for the PR. Regarding the $PATH, I don't think that would 
have any effect while building; our intention for that Docker image is to build 
Hadoop only, nothing more than that. Is there something getting affected due to 
this, or do you face any issue or problem in building?

> Update Dockerfile_aarch64 to use Bionic
> ---
>
> Key: HADOOP-17084
> URL: https://issues.apache.org/jira/browse/HADOOP-17084
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: RuiChen
>Priority: Major
>
> Dockerfile for x86 have been updated to apply Ubuntu Bionic, JDK11 and other 
> changes, we should make Dockerfile for aarch64 following these changes, keep 
> same behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17098) Reduce Guava dependency in Hadoop source code

2020-06-30 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148942#comment-17148942
 ] 

Ayush Saxena commented on HADOOP-17098:
---

Thanx [~ahussein] for initiating this; it shall be great if this can be done. 
Let me know if you need any help on this.

> Reduce Guava dependency in Hadoop source code
> -
>
> Key: HADOOP-17098
> URL: https://issues.apache.org/jira/browse/HADOOP-17098
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Relying on the Guava implementation in Hadoop has been painful due to 
> compatibility and vulnerability issues.
>  Guava updates tend to break/deprecate APIs. This made it hard to maintain 
> backward compatibility across Hadoop versions and clients/downstreams.
> Since 3.x uses Java 8+, Java 8 features should be preferred to Guava, reducing 
> the footprint and giving stability to the source code.
> This jira should serve as an umbrella toward an incremental effort to reduce 
> the usage of Guava in the source code and to create subtasks to replace Guava 
> classes with Java features.
> Furthermore, it will be good to add a rule in the pre-commit build to warn 
> against introducing a new Guava usage in certain modules.
> Anyone willing to take part in this code refactoring has to:
>  # Focus on one module at a time in order to reduce the conflicts and the 
> size of the patch. This will significantly help the reviewers.
>  # Run all the unit tests related to the module being affected by the change. 
> It is critical to verify that any change will not break the unit tests, or 
> cause a stable test case to become flaky.
>  
> A list of sub tasks replacing Guava APIs with java8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()java.util.Base64
> com.google.common.io.BaseEncoding#base64Url() java.util.Base64
> com.google.common.base.Joiner.on()
> java.lang.String#join() or 
>   
>java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()  java.util.Optional#of()
> com.google.common.base.Optional#absent()  
> java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()
> java.util.Optional#ofNullable()
> com.google.common.base.Optional   
> java.util.Optional
> com.google.common.base.Predicate  
> java.util.function.Predicate
> com.google.common.base.Function   
> java.util.function.Function
> com.google.common.base.Supplier   
> java.util.function.Supplier
> {code}
>  
> I also vote for the replacement of {{Preconditions}} with either a wrapper or 
> Apache Commons Lang.
> I believe you guys have dealt with Guava compatibilities in the past and 
> probably have better insights. Any thoughts? [~weichiu], [~gabor.bota], 
> [~ste...@apache.org], [~ayushtkn], [~busbey], [~jeagles], [~kihwal]
>  
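
A minimal before/after sketch of the replacements listed above (illustrative class and variable names only, not taken from any Hadoop module):

{code:java}
import java.util.Arrays;
import java.util.Base64;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.stream.Collectors;

public class GuavaToJdkSketch {
  public static void main(String[] args) {
    List<String> parts = Arrays.asList("a", "b", "c");

    // Joiner.on(",").join(parts)  ->  String.join(...) or Collectors.joining(...)
    String joined = String.join(",", parts);
    String joinedByStream = parts.stream().collect(Collectors.joining(","));

    // BaseEncoding.base64()  ->  java.util.Base64
    String encoded = Base64.getEncoder().encodeToString("payload".getBytes());

    // Optional.fromNullable(x) / absent()  ->  Optional.ofNullable(x) / empty()
    Optional<String> maybeHome = Optional.ofNullable(System.getenv("HADOOP_HOME"));

    // Guava Function / Predicate / Supplier  ->  java.util.function equivalents
    Function<String, Integer> length = String::length;
    Predicate<String> nonEmpty = s -> !s.isEmpty();
    Supplier<String> lazy = () -> "computed lazily";

    System.out.println(joined + " | " + joinedByStream + " | " + encoded + " | "
        + maybeHome.orElse("unset") + " | " + length.apply("abc") + " | "
        + nonEmpty.test("x") + " | " + lazy.get());
  }
}
{code}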



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17101:
---
Attachment: HADOOP-17101.001.patch
Status: Patch Available  (was: In Progress)

> Replace Guava Function with Java8+ Function
> ---
>
> Key: HADOOP-17101
> URL: https://issues.apache.org/jira/browse/HADOOP-17101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17101.001.patch
>
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Function'
> Found Occurrences  (7 usages found)
> hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff  (1 usage found)
> Apache_Hadoop_HDFS_2.6.0.xml  (1 usage found)
> 13603  type="com.google.common.base.Function"
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> HostSet.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.datanode.checker  (1 usage found)
> AbstractFuture.java  (1 usage found)
> 58 * (ListenableFuture, com.google.common.base.Function) 
> Futures.transform}
> org.apache.hadoop.hdfs.server.namenode.ha  (1 usage found)
> HATestUtil.java  (1 usage found)
> 40 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.protocol  (1 usage found)
> RemoteEditLog.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.mapreduce.lib.input  (1 usage found)
> TestFileInputFormat.java  (1 usage found)
> 58 import com.google.common.base.Function;
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb  (1 usage found)
> GetApplicationsRequestPBImpl.java  (1 usage found)
> 38 import com.google.common.base.Function;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17105) S3AFS globStatus attempts to resolve symlinks

2020-06-30 Thread Jimmy Zuber (Jira)
Jimmy Zuber created HADOOP-17105:


 Summary: S3AFS globStatus attempts to resolve symlinks
 Key: HADOOP-17105
 URL: https://issues.apache.org/jira/browse/HADOOP-17105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Jimmy Zuber


The S3AFileSystem implementation of the globStatus API has a setting configured 
to resolve symlinks. Under certain circumstances, this will cause additional 
file existence checks to be performed in order to determine if a FileStatus 
signifies a symlink. As symlinks are not supported in S3AFileSystem, these 
calls are unnecessary.

Code snapshot (permalink): 
[https://github.com/apache/hadoop/blob/2a67e2b1a0e3a5f91056f5b977ef9c4c07ba6718/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L4002]

Causes additional getFileStatus call here (permalink): 
[https://github.com/apache/hadoop/blob/1921e94292f0820985a0cfbf8922a2a1a67fe921/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L308]

Current code snippet:
{code:java}
/**
   * Override superclass so as to disable symlink resolution and so avoid
   * some calls to the FS which may have problems when the store is being
   * inconsistent.
   * {@inheritDoc}
   */
  @Override
  public FileStatus[] globStatus(
  final Path pathPattern,
  final PathFilter filter)
  throws IOException {
entryPoint(INVOCATION_GLOB_STATUS);
return Globber.createGlobber(this)
.withPathPattern(pathPattern)
.withPathFiltern(filter)
.withResolveSymlinks(true)
.build()
.glob();
  }
{code}
 

The fix should be pretty simple, just flip "withResolveSymlinks" to false.
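
For illustration, the builder chain from the snippet above with that one argument flipped; this is only a sketch of the described change (it is not compilable outside S3AFileSystem) and keeps the method names exactly as quoted, including the existing {{withPathFiltern}} spelling:

{code:java}
return Globber.createGlobber(this)
    .withPathPattern(pathPattern)
    .withPathFiltern(filter)
    .withResolveSymlinks(false)  // was true; S3AFileSystem does not support symlinks
    .build()
    .glob();
{code}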



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in YARN

2020-06-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148845#comment-17148845
 ] 

Hadoop QA commented on HADOOP-17100:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 29m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
3s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 146 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
54s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} branch/hadoop-build-tools no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} hadoop-build-tools has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 38s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
55s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
12s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Work started] (HADOOP-17104) Replace Guava Supplier with Java8+ Supplier in hdfs

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17104 started by Ahmed Hussein.
--
> Replace Guava Supplier with Java8+ Supplier in hdfs
> ---
>
> Key: HADOOP-17104
> URL: https://issues.apache.org/jira/browse/HADOOP-17104
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Replacing usages of the Guava Supplier in unit tests that call 
> {{GenericTestUtils.waitFor()}} in the hadoop-hdfs-project subdirectory.
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Supplier' in directory 
> hadoop-hdfs-project with mask '*.java'
> Found Occurrences  (99 usages found)
> org.apache.hadoop.fs  (1 usage found)
> TestEnhancedByteBufferAccess.java  (1 usage found)
> 75 import com.google.common.base.Supplier;
> org.apache.hadoop.fs.viewfs  (1 usage found)
> TestViewFileSystemWithTruncate.java  (1 usage found)
> 23 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs  (20 usages found)
> DFSTestUtil.java  (1 usage found)
> 79 import com.google.common.base.Supplier;
> MiniDFSCluster.java  (1 usage found)
> 78 import com.google.common.base.Supplier;
> TestBalancerBandwidth.java  (1 usage found)
> 29 import com.google.common.base.Supplier;
> TestClientProtocolForPipelineRecovery.java  (1 usage found)
> 30 import com.google.common.base.Supplier;
> TestDatanodeRegistration.java  (1 usage found)
> 44 import com.google.common.base.Supplier;
> TestDataTransferKeepalive.java  (1 usage found)
> 47 import com.google.common.base.Supplier;
> TestDeadNodeDetection.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestDecommission.java  (1 usage found)
> 41 import com.google.common.base.Supplier;
> TestDFSShell.java  (1 usage found)
> 37 import com.google.common.base.Supplier;
> TestEncryptedTransfer.java  (1 usage found)
> 35 import com.google.common.base.Supplier;
> TestEncryptionZonesWithKMS.java  (1 usage found)
> 22 import com.google.common.base.Supplier;
> TestFileCorruption.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestLeaseRecovery2.java  (1 usage found)
> 32 import com.google.common.base.Supplier;
> TestLeaseRecoveryStriped.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestMaintenanceState.java  (1 usage found)
> 63 import com.google.common.base.Supplier;
> TestPread.java  (1 usage found)
> 61 import com.google.common.base.Supplier;
> TestQuota.java  (1 usage found)
> 39 import com.google.common.base.Supplier;
> TestReplaceDatanodeOnFailure.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestReplication.java  (1 usage found)
> 27 import com.google.common.base.Supplier;
> TestSafeMode.java  (1 usage found)
> 62 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.client.impl  (2 usages found)
> TestBlockReaderLocalMetrics.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestLeaseRenewer.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal  (1 usage found)
> MiniJournalCluster.java  (1 usage found)
> 31 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal.client  (1 usage found)
> TestIPCLoggerChannel.java  (1 usage found)
> 43 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal.server  (1 usage found)
> TestJournalNodeSync.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.server.blockmanagement  (7 usages found)
> TestBlockManagerSafeMode.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestBlockReportRateLimiting.java  (1 usage found)
> 25 import com.google.common.base.Supplier;
> TestNameNodePrunesMissingStorages.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestPendingInvalidateBlock.java  (1 usage found)
> 43 import com.google.common.base.Supplier;
> TestPendingReconstruction.java  (1 usage found)
> 34 import com.google.common.base.Supplier;
> TestRBWBlockInvalidation.java  (1 usage found)
> 49 

[jira] [Created] (HADOOP-17104) Replace Guava Supplier with Java8+ Supplier in hdfs

2020-06-30 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17104:
--

 Summary: Replace Guava Supplier with Java8+ Supplier in hdfs
 Key: HADOOP-17104
 URL: https://issues.apache.org/jira/browse/HADOOP-17104
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Replacing usages of the Guava Supplier in unit tests that call 
{{GenericTestUtils.waitFor()}} in the hadoop-hdfs-project subdirectory.
{code:java}
Targets
Occurrences of 'com.google.common.base.Supplier' in directory 
hadoop-hdfs-project with mask '*.java'
Found Occurrences  (99 usages found)
org.apache.hadoop.fs  (1 usage found)
TestEnhancedByteBufferAccess.java  (1 usage found)
75 import com.google.common.base.Supplier;
org.apache.hadoop.fs.viewfs  (1 usage found)
TestViewFileSystemWithTruncate.java  (1 usage found)
23 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs  (20 usages found)
DFSTestUtil.java  (1 usage found)
79 import com.google.common.base.Supplier;
MiniDFSCluster.java  (1 usage found)
78 import com.google.common.base.Supplier;
TestBalancerBandwidth.java  (1 usage found)
29 import com.google.common.base.Supplier;
TestClientProtocolForPipelineRecovery.java  (1 usage found)
30 import com.google.common.base.Supplier;
TestDatanodeRegistration.java  (1 usage found)
44 import com.google.common.base.Supplier;
TestDataTransferKeepalive.java  (1 usage found)
47 import com.google.common.base.Supplier;
TestDeadNodeDetection.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestDecommission.java  (1 usage found)
41 import com.google.common.base.Supplier;
TestDFSShell.java  (1 usage found)
37 import com.google.common.base.Supplier;
TestEncryptedTransfer.java  (1 usage found)
35 import com.google.common.base.Supplier;
TestEncryptionZonesWithKMS.java  (1 usage found)
22 import com.google.common.base.Supplier;
TestFileCorruption.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestLeaseRecovery2.java  (1 usage found)
32 import com.google.common.base.Supplier;
TestLeaseRecoveryStriped.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestMaintenanceState.java  (1 usage found)
63 import com.google.common.base.Supplier;
TestPread.java  (1 usage found)
61 import com.google.common.base.Supplier;
TestQuota.java  (1 usage found)
39 import com.google.common.base.Supplier;
TestReplaceDatanodeOnFailure.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestReplication.java  (1 usage found)
27 import com.google.common.base.Supplier;
TestSafeMode.java  (1 usage found)
62 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.client.impl  (2 usages found)
TestBlockReaderLocalMetrics.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestLeaseRenewer.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal  (1 usage found)
MiniJournalCluster.java  (1 usage found)
31 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal.client  (1 usage found)
TestIPCLoggerChannel.java  (1 usage found)
43 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal.server  (1 usage found)
TestJournalNodeSync.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.server.blockmanagement  (7 usages found)
TestBlockManagerSafeMode.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestBlockReportRateLimiting.java  (1 usage found)
25 import com.google.common.base.Supplier;
TestNameNodePrunesMissingStorages.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestPendingInvalidateBlock.java  (1 usage found)
43 import com.google.common.base.Supplier;
TestPendingReconstruction.java  (1 usage found)
34 import com.google.common.base.Supplier;
TestRBWBlockInvalidation.java  (1 usage found)
49 import com.google.common.base.Supplier;
TestSlowDiskTracker.java  (1 usage found)
48 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.server.datanode  (13 usages found)
DataNodeTestUtils.java  (1 usage found)
40 import com.google.common.base.Supplier;
TestBlockRecovery.java  (1 usage found)
120 import 

[jira] [Work started] (HADOOP-17103) Replace Guava Supplier with Java8+ Supplier in MAPREDUCE

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17103 started by Ahmed Hussein.
--
> Replace Guava Supplier with Java8+ Supplier in MAPREDUCE
> 
>
> Key: HADOOP-17103
> URL: https://issues.apache.org/jira/browse/HADOOP-17103
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Replacing usages of the Guava Supplier in unit tests that call 
> {{GenericTestUtils.waitFor()}} in the hadoop-mapreduce-project subdirectory.
> {code:java}
> Targets
> hadoop-mapreduce-project with mask '*.java'
> Found Occurrences  (8 usages found)
> org.apache.hadoop.mapred  (2 usages found)
> TestTaskAttemptListenerImpl.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> UtilsForTests.java  (1 usage found)
> 64 import com.google.common.base.Supplier;
> org.apache.hadoop.mapreduce.v2.app  (4 usages found)
> TestFetchFailure.java  (1 usage found)
> 29 import com.google.common.base.Supplier;
> TestMRApp.java  (1 usage found)
> 31 import com.google.common.base.Supplier;
> TestRecovery.java  (1 usage found)
> 31 import com.google.common.base.Supplier;
> TestTaskHeartbeatHandler.java  (1 usage found)
> 28 import com.google.common.base.Supplier;
> org.apache.hadoop.mapreduce.v2.app.rm  (1 usage found)
> TestRMContainerAllocator.java  (1 usage found)
> 156 import com.google.common.base.Supplier;
> org.apache.hadoop.mapreduce.v2.hs  (1 usage found)
> TestJHSDelegationTokenSecretManager.java  (1 usage found)
> 30 import com.google.common.base.Supplier;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17103) Replace Guava Supplier with Java8+ Supplier in MAPREDUCE

2020-06-30 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17103:
--

 Summary: Replace Guava Supplier with Java8+ Supplier in MAPREDUCE
 Key: HADOOP-17103
 URL: https://issues.apache.org/jira/browse/HADOOP-17103
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Replacing usages of the Guava Supplier in unit tests that call 
{{GenericTestUtils.waitFor()}} in the hadoop-mapreduce-project subdirectory; a 
small illustrative sketch follows the listing below.
{code:java}
Targets
hadoop-mapreduce-project with mask '*.java'
Found Occurrences  (8 usages found)
org.apache.hadoop.mapred  (2 usages found)
TestTaskAttemptListenerImpl.java  (1 usage found)
20 import com.google.common.base.Supplier;
UtilsForTests.java  (1 usage found)
64 import com.google.common.base.Supplier;
org.apache.hadoop.mapreduce.v2.app  (4 usages found)
TestFetchFailure.java  (1 usage found)
29 import com.google.common.base.Supplier;
TestMRApp.java  (1 usage found)
31 import com.google.common.base.Supplier;
TestRecovery.java  (1 usage found)
31 import com.google.common.base.Supplier;
TestTaskHeartbeatHandler.java  (1 usage found)
28 import com.google.common.base.Supplier;
org.apache.hadoop.mapreduce.v2.app.rm  (1 usage found)
TestRMContainerAllocator.java  (1 usage found)
156 import com.google.common.base.Supplier;
org.apache.hadoop.mapreduce.v2.hs  (1 usage found)
TestJHSDelegationTokenSecretManager.java  (1 usage found)
30 import com.google.common.base.Supplier;

{code}
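
A hedged, self-contained sketch of what the Supplier swap typically looks like in a waitFor-style call; the {{waitFor}} helper below is a local stand-in written for this example, not the real {{GenericTestUtils}} API, and the condition is made up:

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;  // replaces com.google.common.base.Supplier

public class WaitForSupplierSketch {

  // Local stand-in for a waitFor-style helper: poll the condition until true or timeout.
  static void waitFor(Supplier<Boolean> check, long checkEveryMillis, long timeoutMillis)
      throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!check.get()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("condition not met within " + timeoutMillis + " ms");
      }
      Thread.sleep(checkEveryMillis);
    }
  }

  public static void main(String[] args) throws Exception {
    AtomicBoolean done = new AtomicBoolean(false);
    new Thread(() -> {
      try {
        Thread.sleep(50);
      } catch (InterruptedException ignored) {
      }
      done.set(true);
    }).start();

    // Before: new com.google.common.base.Supplier<Boolean>() { public Boolean get() { ... } }
    // After:  a method reference or lambda typed as java.util.function.Supplier<Boolean>.
    waitFor(done::get, 10, 1000);
    System.out.println("condition reached");
  }
}
{code}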



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in YARN

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17100:
---
Attachment: HADOOP-17100.002.patch

> Replace Guava Supplier with Java8+ Supplier in YARN
> ---
>
> Key: HADOOP-17100
> URL: https://issues.apache.org/jira/browse/HADOOP-17100
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17100.001.patch, HADOOP-17100.002.patch
>
>
> Replacing usages of the Guava Supplier in unit tests that call 
> {{GenericTestUtils.waitFor()}} in the YARN subdirectory.
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Supplier' in directory 
> hadoop-yarn-project with mask '*.java'
> Found Occurrences  (23 usages found)
> org.apache.hadoop.yarn.applications.distributedshell  (1 usage found)
> TestDistributedShell.java  (1 usage found)
> 43 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.client  (1 usage found)
> TestRMFailover.java  (1 usage found)
> 64 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.client.api.impl  (1 usage found)
> TestYarnClientWithReservation.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager  (1 usage 
> found)
> TestContainerManager.java  (1 usage found)
> 51 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher  (1 
> usage found)
> TestContainerLaunch.java  (1 usage found)
> 57 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer  (1 
> usage found)
> TestContainerLocalizer.java  (1 usage found)
> 97 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> TestLogAggregationService.java  (1 usage found)
> 150 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor  (1 
> usage found)
> TestContainersMonitor.java  (1 usage found)
> 40 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.nodemanager.logaggregation.tracker  (1 
> usage found)
> TestNMLogAggregationStatusTracker.java  (1 usage found)
> 24 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager  (6 usages found)
> TestApplicationMasterLauncher.java  (1 usage found)
> 95 import com.google.common.base.Supplier;
> TestLeaderElectorService.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRM.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRMHA.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestRMRestart.java  (1 usage found)
> 137 import com.google.common.base.Supplier;
> TestWorkPreservingRMRestart.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.recovery  (1 usage found)
> TestZKRMStateStore.java  (1 usage found)
> 75 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity  (1 
> usage found)
> TestCapacityScheduler.java  (1 usage found)
> 192 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair  (1 usage 
> found)
> TestContinuousScheduling.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.resourcemanager.security  (2 usages found)
> TestDelegationTokenRenewer.java  (1 usage found)
> 117 import com.google.common.base.Supplier;
> TestRMDelegationTokens.java  (1 usage found)
> 29 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.router.webapp  (1 usage found)
> TestRouterWebServicesREST.java  (1 usage found)
> 135 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.server.webproxy.amfilter  (1 usage found)
> TestAmFilter.java  (1 usage found)
> 53 import com.google.common.base.Supplier;
> org.apache.hadoop.yarn.service  (1 usage found)
> MockServiceAM.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> {code}
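
As an illustration of the mechanical change, here is a minimal sketch of a test written against {{GenericTestUtils.waitFor()}} after the migration. It assumes the waitFor() overload that takes a Supplier and poll/timeout intervals in milliseconds; the class name and test body are illustrative only and are not taken from the patch.

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.hadoop.test.GenericTestUtils;

// Illustrative only: once waitFor() accepts java.util.function.Supplier<Boolean>,
// the anonymous com.google.common.base.Supplier implementations in these tests
// collapse into plain lambdas.
public class WaitForSupplierExample {
  public static void main(String[] args)
      throws TimeoutException, InterruptedException {
    AtomicBoolean done = new AtomicBoolean(false);
    new Thread(() -> done.set(true)).start();

    // Poll every 100 ms, give up after 10 seconds.
    GenericTestUtils.waitFor(() -> done.get(), 100, 10_000);
  }
}
{code}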



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved HADOOP-17102.

Resolution: Abandoned

This is a moving target. It is better to merge this precommit rule into its 
relevant subtask.

> Add checkstyle rule to prevent further usage of Guava classes
> -
>
> Key: HADOOP-17102
> URL: https://issues.apache.org/jira/browse/HADOOP-17102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, precommit
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> We should have precommit rules to prevent further usage of Guava classes whose 
> equivalents are available in Java 8+.
> A list of Guava APIs and their Java 8 replacements:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or
>                                                  java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17102 started by Ahmed Hussein.
--
> Add checkstyle rule to prevent further usage of Guava classes
> -
>
> Key: HADOOP-17102
> URL: https://issues.apache.org/jira/browse/HADOOP-17102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, precommit
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> We should have precommit rules to prevent further usage of Guava classes whose 
> equivalents are available in Java 8+.
> A list of Guava APIs and their Java 8 replacements:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or
>                                                  java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2020-06-30 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17102:
--

 Summary: Add checkstyle rule to prevent further usage of Guava 
classes
 Key: HADOOP-17102
 URL: https://issues.apache.org/jira/browse/HADOOP-17102
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, precommit
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


We should have precommit rules to prevent further usage of Guava classes whose 
equivalents are available in Java 8+.


A list of Guava APIs and their Java 8 replacements:
{code:java}
com.google.common.io.BaseEncoding#base64()      java.util.Base64
com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
com.google.common.base.Joiner.on()              java.lang.String#join() or
                                                 java.util.stream.Collectors#joining()
com.google.common.base.Optional#of()            java.util.Optional#of()
com.google.common.base.Optional#absent()        java.util.Optional#empty()
com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
com.google.common.base.Optional                 java.util.Optional
com.google.common.base.Predicate                java.util.function.Predicate
com.google.common.base.Function                 java.util.function.Function
com.google.common.base.Supplier                 java.util.function.Supplier
{code}
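
For reference, a minimal, self-contained sketch of what the replacements listed above look like in code; the class name and values are illustrative and are not taken from any Hadoop source file.

{code:java}
import java.util.Arrays;
import java.util.Base64;
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;
import java.util.stream.Collectors;

public class GuavaToJava8Examples {
  public static void main(String[] args) {
    // Joiner.on(",").join(parts)  ->  String.join or Collectors.joining
    List<String> parts = Arrays.asList("a", "b", "c");
    String joined = String.join(",", parts);
    String joinedStream = parts.stream().collect(Collectors.joining(","));

    // BaseEncoding.base64().encode(bytes)  ->  java.util.Base64
    String encoded = Base64.getEncoder().encodeToString("hadoop".getBytes());

    // Optional.absent()/fromNullable()  ->  Optional.empty()/ofNullable()
    Optional<String> maybe = Optional.ofNullable(System.getProperty("no.such.key"));

    // com.google.common.base.Supplier  ->  java.util.function.Supplier (lambda-friendly)
    Supplier<String> supplier = () -> "value";

    System.out.println(joined + " " + joinedStream + " " + encoded
        + " " + maybe.orElse("absent") + " " + supplier.get());
  }
}
{code}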




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in YARN

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17100:
---
Description: 
Replace usages of the Guava Supplier in unit tests that call
{{GenericTestUtils.waitFor()}} in the YARN subdirectory.
{code:java}
Targets
Occurrences of 'com.google.common.base.Supplier' in directory 
hadoop-yarn-project with mask '*.java'
Found Occurrences  (23 usages found)
org.apache.hadoop.yarn.applications.distributedshell  (1 usage found)
TestDistributedShell.java  (1 usage found)
43 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.client  (1 usage found)
TestRMFailover.java  (1 usage found)
64 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.client.api.impl  (1 usage found)
TestYarnClientWithReservation.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.nodemanager.containermanager  (1 usage found)
TestContainerManager.java  (1 usage found)
51 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher  (1 
usage found)
TestContainerLaunch.java  (1 usage found)
57 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer  (1 
usage found)
TestContainerLocalizer.java  (1 usage found)
97 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation  
(1 usage found)
TestLogAggregationService.java  (1 usage found)
150 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor  (1 
usage found)
TestContainersMonitor.java  (1 usage found)
40 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.nodemanager.logaggregation.tracker  (1 usage 
found)
TestNMLogAggregationStatusTracker.java  (1 usage found)
24 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.resourcemanager  (6 usages found)
TestApplicationMasterLauncher.java  (1 usage found)
95 import com.google.common.base.Supplier;
TestLeaderElectorService.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestRM.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestRMHA.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestRMRestart.java  (1 usage found)
137 import com.google.common.base.Supplier;
TestWorkPreservingRMRestart.java  (1 usage found)
21 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.resourcemanager.recovery  (1 usage found)
TestZKRMStateStore.java  (1 usage found)
75 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity  (1 usage 
found)
TestCapacityScheduler.java  (1 usage found)
192 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair  (1 usage 
found)
TestContinuousScheduling.java  (1 usage found)
21 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.resourcemanager.security  (2 usages found)
TestDelegationTokenRenewer.java  (1 usage found)
117 import com.google.common.base.Supplier;
TestRMDelegationTokens.java  (1 usage found)
29 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.router.webapp  (1 usage found)
TestRouterWebServicesREST.java  (1 usage found)
135 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.server.webproxy.amfilter  (1 usage found)
TestAmFilter.java  (1 usage found)
53 import com.google.common.base.Supplier;
org.apache.hadoop.yarn.service  (1 usage found)
MockServiceAM.java  (1 usage found)
21 import com.google.common.base.Supplier;

{code}

  was:
Usages of the Guava Supplier are in unit tests.

 
{code:java}
Targets
Occurrences of 'com.google.common.base.Supplier' in project with mask 
'*.java'
Found Occurrences  (146 usages found)
org.apache.hadoop.conf  (1 usage found)
TestReconfiguration.java  (1 usage found)
21 import com.google.common.base.Supplier;
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
TestKMS.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.fs  (2 usages found)
FCStatisticsBaseTest.java  (1 usage found)
40 import com.google.common.base.Supplier;
TestEnhancedByteBufferAccess.java  (1 usage found)
75 import 

[jira] [Updated] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in YARN

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17100:
---
Summary: Replace Guava Supplier with Java8+ Supplier in YARN  (was: Replace 
Guava Supplier with Java8+ Supplier)

> Replace Guava Supplier with Java8+ Supplier in YARN
> ---
>
> Key: HADOOP-17100
> URL: https://issues.apache.org/jira/browse/HADOOP-17100
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17100.001.patch
>
>
> Usages of the Guava Supplier are in unit tests.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Supplier' in project with mask 
> '*.java'
> Found Occurrences  (146 usages found)
> org.apache.hadoop.conf  (1 usage found)
> TestReconfiguration.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> TestKMS.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.fs  (2 usages found)
> FCStatisticsBaseTest.java  (1 usage found)
> 40 import com.google.common.base.Supplier;
> TestEnhancedByteBufferAccess.java  (1 usage found)
> 75 import com.google.common.base.Supplier;
> org.apache.hadoop.fs.viewfs  (1 usage found)
> TestViewFileSystemWithTruncate.java  (1 usage found)
> 23 import com.google.common.base.Supplier;
> org.apache.hadoop.ha  (1 usage found)
> TestZKFailoverController.java  (1 usage found)
> 25 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs  (20 usages found)
> DFSTestUtil.java  (1 usage found)
> 79 import com.google.common.base.Supplier;
> MiniDFSCluster.java  (1 usage found)
> 78 import com.google.common.base.Supplier;
> TestBalancerBandwidth.java  (1 usage found)
> 29 import com.google.common.base.Supplier;
> TestClientProtocolForPipelineRecovery.java  (1 usage found)
> 30 import com.google.common.base.Supplier;
> TestDatanodeRegistration.java  (1 usage found)
> 44 import com.google.common.base.Supplier;
> TestDataTransferKeepalive.java  (1 usage found)
> 47 import com.google.common.base.Supplier;
> TestDeadNodeDetection.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestDecommission.java  (1 usage found)
> 41 import com.google.common.base.Supplier;
> TestDFSShell.java  (1 usage found)
> 37 import com.google.common.base.Supplier;
> TestEncryptedTransfer.java  (1 usage found)
> 35 import com.google.common.base.Supplier;
> TestEncryptionZonesWithKMS.java  (1 usage found)
> 22 import com.google.common.base.Supplier;
> TestFileCorruption.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestLeaseRecovery2.java  (1 usage found)
> 32 import com.google.common.base.Supplier;
> TestLeaseRecoveryStriped.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestMaintenanceState.java  (1 usage found)
> 63 import com.google.common.base.Supplier;
> TestPread.java  (1 usage found)
> 61 import com.google.common.base.Supplier;
> TestQuota.java  (1 usage found)
> 39 import com.google.common.base.Supplier;
> TestReplaceDatanodeOnFailure.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestReplication.java  (1 usage found)
> 27 import com.google.common.base.Supplier;
> TestSafeMode.java  (1 usage found)
> 62 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.client.impl  (2 usages found)
> TestBlockReaderLocalMetrics.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestLeaseRenewer.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal  (1 usage found)
> MiniJournalCluster.java  (1 usage found)
> 31 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal.client  (1 usage found)
> TestIPCLoggerChannel.java  (1 usage found)
> 43 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal.server  (1 usage found)
> TestJournalNodeSync.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.server.blockmanagement  (7 usages found)
> 

[GitHub] [hadoop] Hexiaoqiao commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


Hexiaoqiao commented on pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#issuecomment-651850748


   Please check the failed unit tests and the checkstyle report from Yetus.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


Hexiaoqiao commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r447745987



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -34,6 +38,8 @@
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.io.Text;
+import static org.apache.hadoop.metrics2.util.Metrics2Util.NameValuePair;

Review comment:
   Is it necessary to use a static import here?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
##
@@ -79,6 +79,10 @@
   public static final Class
   DFS_ROUTER_METRICS_CLASS_DEFAULT =
   FederationRPCPerformanceMonitor.class;
+  public static final String DFS_ROUTER_METRICS_TOP_NUM_TOKEN_OWNERS_KEY =

Review comment:
   We need to define this new config key in hdfs-rbf-default.xml.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java
##
@@ -39,6 +39,7 @@
 import org.junit.Rule;
 import org.junit.Test;
 
+import static org.apache.hadoop.metrics2.util.Metrics2Util.*;

Review comment:
   Just a suggestion: replace the wildcard with single-class imports.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-06-30 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148716#comment-17148716
 ] 

Ahmed Hussein commented on HADOOP-17099:


Looking at the qbt reports, the failed unit tests seem to have been flaky for quite 
some time.
I added a rule in 
{{hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml}} to fail 
whenever a patch introduces a Guava Predicate class.


{code:xml}



{code}


> Replace Guava Predicate with Java8+ Predicate
> -
>
> Key: HADOOP-17099
> URL: https://issues.apache.org/jira/browse/HADOOP-17099
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
> Attachments: HADOOP-17099.001.patch, HADOOP-17099.002.patch
>
>
> {{com.google.common.base.Predicate}} can be replaced with 
> {{java.util.function.Predicate}}. 
> The change involving 9 occurrences is straightforward:
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Predicate' in project with mask 
> '*.java'
> Found Occurrences  (9 usages found)
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> CombinedHostFileManager.java  (1 usage found)
> 43 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode  (1 usage found)
> NameNodeResourceChecker.java  (1 usage found)
> 38 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode.snapshot  (1 usage found)
> Snapshot.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.metrics2.impl  (2 usages found)
> MetricsRecords.java  (1 usage found)
> 21 import com.google.common.base.Predicate;
> TestMetricsSystemImpl.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation  (1 usage found)
> AggregatedLogFormat.java  (1 usage found)
> 77 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller  (1 usage found)
> LogAggregationFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile  (1 usage 
> found)
> LogAggregationIndexedFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> AppLogAggregatorImpl.java  (1 usage found)
> 75 import com.google.common.base.Predicate;
> {code}
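
For illustration, a minimal sketch of the mechanical change for one such occurrence; the class and data below are made up for the example and do not come from the patch.

{code:java}
import java.util.function.Predicate;
import java.util.stream.Stream;

public class PredicateMigrationExample {
  public static void main(String[] args) {
    // Before: com.google.common.base.Predicate<String> implemented apply();
    // after: java.util.function.Predicate<String> uses test() and works as a lambda.
    Predicate<String> nonEmpty = s -> !s.isEmpty();

    long count = Stream.of("a", "", "b")
        .filter(nonEmpty)   // java.util.function.Predicate plugs directly into streams
        .count();
    System.out.println(count);  // prints 2
  }
}
{code}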



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jianghuazhu opened a new pull request #2112: HDFS-15448.When starting a DataNode, call BlockPoolManager#startAll() twice.

2020-06-30 Thread GitBox


jianghuazhu opened a new pull request #2112:
URL: https://github.com/apache/hadoop/pull/2112


   … twice.
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2083: HADOOP-17077. S3A delegation token binding to support secondary binding list

2020-06-30 Thread GitBox


hadoop-yetus removed a comment on pull request #2083:
URL: https://github.com/apache/hadoop/pull/2083#issuecomment-648073271







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-06-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148542#comment-17148542
 ] 

Hadoop QA commented on HADOOP-17099:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 29m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
36s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} branch/hadoop-build-tools no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 2s{color} | {color:green} root: The patch generated 0 new + 100 unchanged - 5 
fixed = 100 total (was 105) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} hadoop-build-tools has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 44s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 28s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
49s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
45s{color} | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#issuecomment-651720321


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 19s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  9s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 23s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  16m 35s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 33s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-hdfs-rbf in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 23s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 31s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 35s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  18m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 39s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  16m 39s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 38s |  root: The patch generated 1 new 
+ 46 unchanged - 0 fixed = 47 total (was 46)  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-hdfs-rbf in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m 49s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 27s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |   8m 15s |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 171m  3s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRBFConfigFields |
   |   | hadoop.hdfs.server.federation.security.TestRouterSecurityManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2110/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2110 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8a06221d6636 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cd188ea9f0e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2110/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 

[GitHub] [hadoop] steveloughran commented on a change in pull request #2102: HADOOP-13327. Specify Output Stream and Syncable

2020-06-30 Thread GitBox


steveloughran commented on a change in pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#discussion_r447593933



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/StoreImplementationUtils.java
##
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.StreamCapabilities;
+
+import static org.apache.hadoop.fs.StreamCapabilities.HFLUSH;
+import static org.apache.hadoop.fs.StreamCapabilities.HSYNC;
+
+/**
+ * Utility classes to help implementing filesystems and streams.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public final class StoreImplementationUtils {
+
+  private StoreImplementationUtils() {
+  }
+
+  /**
+   * Check the supplied capabilities for being those required for full
+   * {@code Syncable.hsync()} and {@code Syncable.hflush()} functionality.

Review comment:
   Not AFAIK. We need to make clear that if you implement one you MUST implement the 
other. After all, if you can implement hsync then hflush could just forward to 
hsync.
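
To make that point concrete, here is a minimal sketch of an output stream whose hflush() simply forwards to hsync(); the class is hypothetical and is not the stream or utility class under review.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.fs.Syncable;

// Hypothetical stream: since hsync() gives the stronger durability guarantee,
// hflush() can always be satisfied by forwarding to it.
public class ForwardingSyncableStream extends OutputStream implements Syncable {
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

  @Override
  public void write(int b) {
    buffer.write(b);
  }

  @Override
  public void hsync() throws IOException {
    // A real store would persist the buffered data durably here.
    buffer.flush();
  }

  @Override
  public void hflush() throws IOException {
    hsync();
  }
}
{code}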





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2102: HADOOP-13327. Specify Output Stream and Syncable

2020-06-30 Thread GitBox


hadoop-yetus removed a comment on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-650391819


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 35s |  trunk passed  |
   | +1 :green_heart: |  compile  |  28m  8s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  19m 29s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 54s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  1s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 14s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 32s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-azure-datalake in trunk failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 44s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m  6s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 50s |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 16s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 13s |  hadoop-azure-datalake in the patch 
failed.  |
   | -1 :x: |  compile  |   1m 24s |  root in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javac  |   1m 24s |  root in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  compile  |   1m 11s |  root in the patch failed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | -1 :x: |  javac  |   1m 11s |  root in the patch failed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | -0 :warning: |  checkstyle  |   2m 39s |  root: The patch generated 1 new 
+ 76 unchanged - 3 fixed = 77 total (was 79)  |
   | -1 :x: |  mvnsite  |   0m 46s |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 17s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 15s |  hadoop-azure-datalake in the patch 
failed.  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  shadedclient  |   0m 52s |  patch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 16s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 15s |  hadoop-azure-datalake in the patch failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 19s |  
hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0)  |
   | -1 :x: |  javadoc  |   0m 15s |  
hadoop-tools_hadoop-azure-datalake-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09
 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  findbugs  |   0m 43s |  hadoop-common in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 16s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 15s |  hadoop-azure-datalake in the patch 

[jira] [Commented] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-06-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148490#comment-17148490
 ] 

Hudson commented on HADOOP-16798:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18392 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18392/])
HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963) (github: rev 
4249c04d454ca82aadeed152ab777e93474754ab)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitter.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/Tasks.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/TestTasks.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/PartitionedStagingCommitter.java


> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?
> *update 2020-04-22*: it's caused when a task is aborted in the AM -the 
> threadpool is disposed of, and while that is shutting down in one thread, 
> task commit is initiated using the same thread pool. When the task 
> committer's destroy operation times out, it kills all the active uploads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16202) S3A openFile() operation to support explicit length parameter

2020-06-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148486#comment-17148486
 ] 

Steve Loughran commented on HADOOP-16202:
-

not adding options to set etag/version; it complicates things way too much

> S3A openFile() operation to support explicit length parameter
> -
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The {{openFile()}} builder API lets us add new options when reading a file
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows 
> the length of the file to be declared. If set, *no check for the existence of 
> the file is issued when opening the file*
> Also: withFileStatus() to take any FileStatus implementation, rather than 
> only S3AFileStatus -and not check that the path matches the path being 
> opened. Needed to support viewFS-style wrapping and mounting.
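
For illustration, a minimal sketch of how a caller might pass the proposed option through the openFile() builder. The option name is the one proposed in this issue (it may differ in the final patch), and the path and length are made up for the example.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileWithLengthExample {
  public static void main(String[] args) throws Exception {
    Path path = new Path("s3a://example-bucket/data/part-0000");
    long knownLength = 1024L;  // length obtained from a prior listing

    FileSystem fs = path.getFileSystem(new Configuration());

    // Declare the length up front so the client can skip the existence check.
    try (FSDataInputStream in = fs.openFile(path)
        .opt("fs.s3a.open.option.length", Long.toString(knownLength))
        .build()
        .get()) {
      System.out.println(in.read());
    }
  }
}
{code}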



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2046: HADOOP-16202 Enhance S3A openFile()

2020-06-30 Thread GitBox


hadoop-yetus removed a comment on pull request #2046:
URL: https://github.com/apache/hadoop/pull/2046#issuecomment-637726275


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 47s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 33s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 46s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  4s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 26s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 53s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 53s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  7s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 11s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 31s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 122m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2046/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2046 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 2b0a1f09fe16 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / aa6d13455b9 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2046/1/testReport/ |
   | Max. process+thread count | 1619 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2046/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15492) increase performance of s3guard import command

2020-06-30 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15492:

Priority: Minor  (was: Major)

> increase performance of s3guard import command
> --
>
> Key: HADOOP-15492
> URL: https://issues.apache.org/jira/browse/HADOOP-15492
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Minor
>
> Some perf improvements which spring to mind having looked at the s3guard 
> import command
> Key points: it can handle the import of a tree with existing data better
> # if the bucket is already under s3guard, then the listing will return all 
> listed files, which will then be put() again.
> # import calls {{putParentsIfNotPresent()}}, but DDBMetaStore.put() will do 
> the parent creation anyway
> # For each entry in the store (i.e. a file), the full parent listing is 
> created, then a batch write created to put all the parents and the actual file
> As a result, it's at risk of doing many more put calls than needed, 
> especially for wide/deep directory trees.
> It would be much more efficient to put all files in a single directory as 
> part of 1+ batch request, with 1 parent tree. Better yet: a get() of that 
> parent could skip the put of parent entries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16202) S3A openFile() operation to support explicit length parameter

2020-06-30 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16202 started by Steve Loughran.
---
> S3A openFile() operation to support explicit length parameter
> -
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The {{openFile()}} builder API lets us add new options when reading a file
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows 
> the length of the file to be declared. If set, *no check for the existence of 
> the file is issued when opening the file*
> Also: withFileStatus() to take any FileStatus implementation, rather than 
> only S3AFileStatus -and not check that the path matches the path being 
> opened. Needed to support viewFS-style wrapping and mounting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-06-30 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16798:

Description: 
failure in 
{code}
ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@6e894de2 rejected from 
org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated, 
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
{code}

Stack implies thread pool rejected it, but toString says "Terminated". Race 
condition?

*update 2020-04-22*: it's caused when a task is aborted in the AM -the 
threadpool is disposed of, and while that is shutting down in one thread, task 
commit is initiated using the same thread pool. When the task committer's 
destroy operation times out, it kills all the active uploads.



  was:
failure in 
{code}
ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@6e894de2 rejected from 
org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated, 
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
{code}

Stack implies thread pool rejected it, but toString says "Terminated". Race 
condition?

*update 2020-04-22*: it's caused when a task is aborted in the AM -the 
threadpool is disposed of, and while that is shutting down in one thread, task 
commit is initiated using the same thread pool. When the task committer's 
destroy operation times out, it kills all the active uploads.

Proposed: destroyThreadPool immediately copies reference to current thread pool 
and nullifies it, so that any new operation needing a thread pool will create a 
new one
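
A minimal sketch of that proposal (field and method names here are hypothetical, not the actual committer code): the destroy path takes the current pool reference and clears the field under a lock, so any later operation that needs a pool builds a fresh one instead of racing with the shutdown.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ThreadPoolHolder {
  private ExecutorService pool;

  synchronized ExecutorService getOrCreate() {
    if (pool == null) {
      pool = Executors.newFixedThreadPool(4);
    }
    return pool;
  }

  void destroy() {
    ExecutorService old;
    synchronized (this) {
      old = pool;
      pool = null;  // later callers of getOrCreate() build a new pool
    }
    if (old != null) {
      old.shutdownNow();  // only the captured reference is shut down
    }
  }
}
{code}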


> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?
> *update 2020-04-22*: it's caused when a task is aborted in the AM -the 
> threadpool is disposed of, and while that is shutting down in one thread, 
> task commit is initiated using the same thread pool. When the task 
> committer's destroy operation times out, it kills all the active uploads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-06-30 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16798.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?
> *update 2020-04-22*: it's caused when a task is aborted in the AM -the 
> threadpool is disposed of, and while that is shutting down in one thread, 
> task commit is initiated using the same thread pool. When the task 
> committer's destroy operation times out, it kills all the active uploads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.

2020-06-30 Thread GitBox


steveloughran commented on pull request #1963:
URL: https://github.com/apache/hadoop/pull/1963#issuecomment-651690292


   thx. merged to trunk & 3.3



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.

2020-06-30 Thread GitBox


steveloughran merged pull request #1963:
URL: https://github.com/apache/hadoop/pull/1963


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2069: HADOOP-16830. IOStatistics API.

2020-06-30 Thread GitBox


hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-650308284


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 52s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
23 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 21s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 35s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 24s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 55s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 27s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 57s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  19m 57s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 1964 unchanged - 1 
fixed = 1965 total (was 1965)  |
   | +1 :green_heart: |  compile  |  17m 27s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  17m 27s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1858 unchanged - 1 
fixed = 1859 total (was 1859)  |
   | -0 :warning: |  checkstyle  |   2m 52s |  root: The patch generated 34 new 
+ 160 unchanged - 22 fixed = 194 total (was 182)  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 20 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 46s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 18s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m 30s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 169m 40s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.statistics.TestDynamicIOStatistics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2069 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux 2024cbd12f35 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2111: HADOOP-17090. Increase precommit job timeout from 5 hours to 20 hours.

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2111:
URL: https://github.com/apache/hadoop/pull/2111#issuecomment-651652087


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 18s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 16s |  There were no new shelldocs 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 48s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  31m 40s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2111/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2111 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux e9363c8be5a8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cd188ea9f0e |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2111/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148421#comment-17148421
 ] 

Hadoop QA commented on HADOOP-17079:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
56s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 19m 24s{color} 
| {color:red} root generated 81 new + 1865 unchanged - 0 fixed = 1946 total 
(was 1865) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} root: The patch generated 0 new + 898 unchanged - 8 
fixed = 898 total (was 906) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}148m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
0s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
34s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 47s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} 

[jira] [Updated] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours

2020-06-30 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17090:
---
Target Version/s: 2.9.3, 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5
  Status: Patch Available  (was: Open)

> Increase precommit job timeout from 5 hours to 20 hours
> ---
>
> Key: HADOOP-17090
> URL: https://issues.apache.org/jira/browse/HADOOP-17090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Now we frequently increase the timeout for testing and undo the change before 
> committing.
> * https://github.com/apache/hadoop/pull/2026
> * https://github.com/apache/hadoop/pull/2051
> * https://github.com/apache/hadoop/pull/2012
> * https://github.com/apache/hadoop/pull/2098
> * and more...
> I'd like to increase the timeout by default to reduce the work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka opened a new pull request #2111: HADOOP-17090. Increase precommit job timeout from 5 hours to 20 hours.

2020-06-30 Thread GitBox


aajisaka opened a new pull request #2111:
URL: https://github.com/apache/hadoop/pull/2111


   JIRA: https://issues.apache.org/jira/browse/HADOOP-17090



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours

2020-06-30 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17090:
--

Assignee: Akira Ajisaka

> Increase precommit job timeout from 5 hours to 20 hours
> ---
>
> Key: HADOOP-17090
> URL: https://issues.apache.org/jira/browse/HADOOP-17090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Now we frequently increase the timeout for testing and undo the change before 
> committing.
> * https://github.com/apache/hadoop/pull/2026
> * https://github.com/apache/hadoop/pull/2051
> * https://github.com/apache/hadoop/pull/2012
> * https://github.com/apache/hadoop/pull/2098
> * and more...
> I'd like to increase the timeout by default to reduce the work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours

2020-06-30 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148413#comment-17148413
 ] 

Akira Ajisaka commented on HADOOP-17090:


Thank you [~ayushtkn] for your comment. I'll create a PR.

> Increase precommit job timeout from 5 hours to 20 hours
> ---
>
> Key: HADOOP-17090
> URL: https://issues.apache.org/jira/browse/HADOOP-17090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> Now we frequently increase the timeout for testing and undo the change before 
> committing.
> * https://github.com/apache/hadoop/pull/2026
> * https://github.com/apache/hadoop/pull/2051
> * https://github.com/apache/hadoop/pull/2012
> * https://github.com/apache/hadoop/pull/2098
> * and more...
> I'd like to increase the timeout by default to reduce the work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli opened a new pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-06-30 Thread GitBox


fengnanli opened a new pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2106: YARN-10331. Upgrade node.js to 10.21.0.

2020-06-30 Thread GitBox


aajisaka commented on pull request #2106:
URL: https://github.com/apache/hadoop/pull/2106#issuecomment-651618208


   Thank you @iwasakims 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #2106: YARN-10331. Upgrade node.js to 10.21.0.

2020-06-30 Thread GitBox


aajisaka merged pull request #2106:
URL: https://github.com/apache/hadoop/pull/2106


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2098: HDFS-15424. Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-30 Thread GitBox


aajisaka commented on pull request #2098:
URL: https://github.com/apache/hadoop/pull/2098#issuecomment-651614836


   Hmm. The 7th build looks good.
   
https://builds.apache.org/job/hadoop-multibranch/job/PR-2098/7/artifact/out/patch-javadoc-root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-06-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148395#comment-17148395
 ] 

Hadoop QA commented on HADOOP-17099:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
48s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 34s{color} | {color:orange} root: The patch generated 10 new + 100 unchanged 
- 5 fixed = 110 total (was 105) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m  8s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
26s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}348m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestGroupsCaching |
|   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |

[jira] [Commented] (HADOOP-15338) Java 11 runtime support

2020-06-30 Thread zhenhe fu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148391#comment-17148391
 ] 

zhenhe fu commented on HADOOP-15338:


Hi [~aajisaka]

 

I was running some basic benchmark workloads, such as TeraSort, on an ARM 
server with JDK 11, but JDK 11 shows worse performance than JDK 8.

Thanks.

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2098: HDFS-15424. Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-30 Thread GitBox


hadoop-yetus commented on pull request #2098:
URL: https://github.com/apache/hadoop/pull/2098#issuecomment-651593370


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 20s |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 18s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  21m 18s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  mvnsite  |  23m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  6s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  root in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   9m 36s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  26m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  0s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  21m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 36s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  17m 56s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 14s |  There were no new shelldocs 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 26s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m 10s |  root in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   6m 59s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 594m 42s |  root in the patch failed.  |
   | -1 :x: |  asflicense  |   2m 30s |  The patch generated 1 ASF License 
warnings.  |
   |  |   | 834m 10s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.applications.distributedshell.TestDistributedShell |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
   |   | hadoop.fs.azurebfs.services.TestAbfsClientThrottlingAnalyzer |
   |   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2098/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2098 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml |
   | uname | Linux 987082bd2cb0 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0be26811f3d |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2098/7/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2098/7/artifact/out/patch-javadoc-root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | unit | 

[jira] [Commented] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2020-06-30 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148345#comment-17148345
 ] 

Zoltan Haindrich commented on HADOOP-16582:
---

[~ste...@apache.org], [~kihwal]: ping

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.3, 3.2.2
>
> Attachments: HADOOP-16582.1.patch, HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of the viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then will the same method in the target ({{LocalFileSystem}} in this 
> case) be called.  Hdfs does not suffer from the same flaw since it applies 
> umask in all cases, regardless of which version of {{mkdirs()}} was called.
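
A minimal illustrative sketch of the kind of single-argument override 
described above (not the committed patch): it forwards mkdirs(Path) to the 
wrapped filesystem so the target's own umask handling applies instead of the 
default FileSystem behaviour. The class name here is hypothetical.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Sketch only: "fs" is the protected wrapped FileSystem field inherited
 * from FilterFileSystem, so this override delegates directory creation to
 * the target filesystem (e.g. RawLocalFileSystem) rather than letting the
 * FileSystem base class apply its default permission handling.
 */
public class MkdirsForwardingFileSystem extends FilterFileSystem {

  @Override
  public boolean mkdirs(Path f) throws IOException {
    return fs.mkdirs(f);
  }
}
{code}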



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


